Garrison Lovely
@garrisonlovely.bsky.social
2.7K followers 230 following 850 posts
Writing a book on AI+economics+geopolitics for Nation Books. Covers: The Nation, Jacobin. Bylines: NYT, Nature, Bloomberg, BBC, Guardian, TIME, The Verge, Vox, Thomson Reuters Foundation, + others.
garrisonlovely.bsky.social
Would have been nice if the interviewer had asked him about these moves, but alas.
garrisonlovely.bsky.social
AND more likely than not to be developed during Trump's presidency.

OpenAI execs are teaming up with Andreessen Horowitz to spin up a massive AI super PAC, modeled after the crypto playbook of intimidating Congress into supporting its agenda. www.wsj.com/politics/si...
garrisonlovely.bsky.social
It has also openly called for the federal govt to preempt state-level AI regulations with no binding replacement — an unprecedented move and not something you'd expect from someone who thinks that superintelligence is the biggest threat to humanity.
api.omarshehata.me/substack-pr...
garrisonlovely.bsky.social
But it's also hard to reconcile this view with the increasingly intense anti-regulation position OpenAI is taking. Altman clarifies that there's a diff between regs on x-risk and regs on banking software, but OAI opposed SB 1047, which was focused on catastrophic risk.
garrisonlovely.bsky.social
Well, now I need to update my book.

To my knowledge, this is the first time since early 2023 that Sam Altman hasn't downplayed or dismissed AI existential risk.

TBC, I think it's good of Altman to say this if that's what he actually believes...
x.com/ai_ctrl/sta...
garrisonlovely.bsky.social
I wrote about this in my Current Affairs essay on McKinsey: www.currentaffairs.org/news/2019/0...
garrisonlovely.bsky.social
This reminds me of arguments that McKinsey would make to justify working for Gulf autocracies. However, academic research has found that the opposite tends to happen: companies abandon human rights to conform to their wealthy clients.
x.com/ShakeelHash...
garrisonlovely.bsky.social
Oh and Daron said he was very excited for my book, so you should be too!
garrisonlovely.bsky.social
Daron and Sandhini were a pleasure to work with, as was everyone from the Nobel Foundation, the Swedish Consulate, and the Astralis Foundation.
garrisonlovely.bsky.social
Preparing for the conversation actually helped me crack the thesis of my book, so if nothing else comes of it, I'll still be grateful!

The event was not recorded, but we were lucky to have a stacked audience (e.g. another Nobel winner, honored for the work behind mRNA vaccines, was in attendance).
garrisonlovely.bsky.social
I got a chance to push each of them a bit. Daron on his skepticism of AI capabilities+pace of progress. Sandhini on the sufficiency of self-governance as AI grows more capable+ubiquitous.

I was surprised by each of them, and I'll have more to say in my book.
garrisonlovely.bsky.social
Last week, I had the incredible privilege of moderating a conversation on AI+geopolitics w/ Daron Acemoglu & Sandhini Agarwal (OpenAI trustworthy AI lead) for the Nobel Foundation at the Swedish Consulate.

It was lively+substantive (one audience member was pleasantly surprised)
x.com/swedennewyo...
garrisonlovely.bsky.social
(GPT-5 and Gemini 2.5 were far more useful.) Anyway, here are the links if you wanna try it for yourself: openai.com/index/gdpval/
economics.mit.edu/sites/defau...
garrisonlovely.bsky.social
Funniest part of all this is that I was asking models how OpenAI's new GDPval results would affect Daron Acemoglu's GDP findings from his 2024 paper. GDPval found Opus 4.1 was the clear winner at these tasks, but it clearly gave me the worst answer to my question.
garrisonlovely.bsky.social
I covered SB 1047 full-time last year, but unfortunately haven't done much on SB 53. It's mainly a transparency bill, offering whistleblower protections to AI employees and requiring the largest AI developers to publish safety plans and report safety incidents.
garrisonlovely.bsky.social
and find that right balance so we can continue to dominate in this space, so we can continue to support this ecosystem, and at the same time address that peril and the concerns that legitimate people have"
garrisonlovely.bsky.social
We're not doing things to them, but we're not doing things necessarily for them. And we're trying to answer that question from a policy perspective...
garrisonlovely.bsky.social
and the new focus on just let it rip coming out of the White House, that David Sacks and this White House is promoting. And we have a bill -- forgive me, it's on my desk -- that we think strikes the right balance and we worked with industry, but we didn't submit to industry...