Garrison Lovely
@garrisonlovely.bsky.social
2.7K followers 230 following 850 posts
Writing a book on AI+economics+geopolitics for Nation Books. Covers: The Nation, Jacobin. Bylines: NYT, Nature, Bloomberg, BBC, Guardian, TIME, The Verge, Vox, Thomson Reuters Foundation, + others.
garrisonlovely.bsky.social
If you want to talk, I'm on Signal at: Garrison.06

Props to Josh for speaking out and for taking seriously the gravity of what OpenAI is trying to do.
garrisonlovely.bsky.social
For instance, believing that OAI really does want regulation, but just wants it to happen at the federal level — despite the fact that OAI publicly called for preemption of state-level bills with no binding replacement, something that has literally never happened before.
garrisonlovely.bsky.social
Wow, OpenAI's head of mission alignment just spoke out against the way the company has been using subpoenas to intimidate and disrupt political opponents.

A surprising number of OAI rank & file have no idea what their leadership is doing to kill regulation.
x.com/jachiam0/st...
garrisonlovely.bsky.social
Would have been nice if the interviewer asked him about these moves, but alas.
garrisonlovely.bsky.social
AND more likely than not to be developed during Trump's presidency.

OpenAI execs are teaming up with Andreessen Horowitz to spin up a massive AI super PAC, modeled after the crypto playbook of intimidating Congress into supporting its agenda. www.wsj.com/politics/si...
garrisonlovely.bsky.social
It's also openly called for the federal govt to preempt state-level AI regulations with no binding replacement — an unprecedented move and not something you'd expect from someone who thinks that superintelligence is the biggest threat to humanity
api.omarshehata.me/substack-pr...
garrisonlovely.bsky.social
But it's also hard to reconcile this view with the increasingly intense anti-regulation position OpenAI is taking. Altman clarifies that there's a diff between regs on x-risk and regs on banking software, but OAI opposed SB 1047, which was focused on catastrophic risk.
garrisonlovely.bsky.social
Well, now I need to update my book.

To my knowledge, this is the first time Sam Altman hasn't downplayed or dismissed AI existential risk since early 2023.

TBC, I think it's good of Altman to say this if that's what he actually believes...
x.com/ai_ctrl/sta...
garrisonlovely.bsky.social
I wrote about this in my Current Affairs essay on McKinsey: www.currentaffairs.org/news/2019/0...
garrisonlovely.bsky.social
This reminds me of arguments that McKinsey would make to justify working for Gulf autocracies. However, academic research has found that the opposite tends to happen: companies abandon their human rights commitments to conform to their wealthy clients.
x.com/ShakeelHash...
garrisonlovely.bsky.social
Oh and Daron said he was very excited for my book, so you should be too!
garrisonlovely.bsky.social
Daron and Sandhini were a pleasure to work with, as was everyone from the Nobel Foundation, the Swedish Consulate, and the Astralis Foundation.
garrisonlovely.bsky.social
Preparing for the conversation actually helped me crack the thesis of my book, so if nothing else comes of it, I'll still be grateful!

The event was not recorded, but we were lucky to have a stacked audience (e.g. another Nobel laureate, recognized for mRNA vaccine research, was in attendance).
garrisonlovely.bsky.social
I got a chance to push each of them a bit. Daron on his skepticism of AI capabilities+pace of progress. Sandhini on the sufficiency of self-governance as AI grows more capable+ubiquitous.

I was surprised by each of them, and I'll have more to say in my book.
garrisonlovely.bsky.social
Last week, I had the incredible privilege of moderating a conversation on AI+geopolitics w/ Daron Acemoglu & Sandhini Agarwal (OpenAI trustworthy AI lead) for the Nobel Foundation at the Swedish Consulate.

It was lively+substantive (one audience member was pleasantly surprised)
x.com/swedennewyo...
garrisonlovely.bsky.social
(GPT-5 and Gemini 2.5 were far more useful.) Anyway, here are the links if you wanna try it for yourself: openai.com/index/gdpval/
economics.mit.edu/sites/defau...
garrisonlovely.bsky.social
Funniest part of all this is that I was asking models how OpenAI's new GDPval results would affect Daron Acemoglu's GDP findings from his 2024 paper. GDPval found Opus 4.1 was the clear winner at these tasks, but it gave me clearly the worst answer to my question.