Seth Lazar
@sethlazar.org
Philosopher working on normative dimensions of computing and sociotechnical AI safety.

Lab: https://mintresearch.org
Self: https://sethlazar.org
Newsletter: https://philosophyofcomputing.substack.com
Pinned
News and opportunities for philosophers working on normative questions raised by computing:

philosophyofcomputing.substack.com
Normative Philosophy of Computing Newsletter | Seth Lazar | Substack
News and opportunities for anyone interested in analytic philosophy on normative questions raised by computing, from aesthetics to AI ethics. Click to read Normative Philosophy of Computing Newsletter...
What do you think of the definition in the paper?
September 11, 2025 at 6:16 AM
Reposted by Seth Lazar
In a new paper in our AI & Democratic Freedoms series, Rachel M. Kim, Blaine Kuehnert, @sethlazar.org, Ranjit Singh, & Hoda Heidari propose creating an AI Power Disparity Index, designed to measure and signal the changing distribution of power in the AI ecosystem. knightcolumbia.org/content/the-...
The AI Power Disparity Index: Toward a Compound Measure of AI Actors’ Power to Shape the AI Ecosystem
knightcolumbia.org
September 8, 2025 at 2:44 PM
And there's a great paper by @_FelixSimon_ and Sacha Altay on the actual impact of generative AI on elections. And more that I'll be writing about later (incl. great art from Seb Krier).
Read them all at buff.ly/zTrRirO. Thanks so much to @knightcolumbia (and especially Katy) for making it happen.
September 5, 2025 at 4:21 PM
My symposium on AI & Democratic Freedoms (edited with @katygb.bsky.social) is shaping up amazingly. @random_walker and @sayashk's already influential 'AI as Normal Technology' is there. So is @danielsusskind's insightful investigation of what will remain for humans to do in our automated future.
September 5, 2025 at 4:21 PM
Building democratic resilience for the era of AI agents, and for AGI beyond, is an urgent challenge. If you're building civic agents, I'd love to talk.

The paper—written with the visionary Mariano-Florentino Cuéllar—is published here: buff.ly/dMM0r7K
September 5, 2025 at 4:21 PM
More important still, if we want to preserve democratic values in this radical period of transition, is to make democratic institutions more resilient to the changes ahead. This can't just be about going back to how things were. Their stressors are not all exogenous.
September 5, 2025 at 4:21 PM
But computing has always been Janus-faced for democracy, and this time won't be different. We could build civic agents that advance democratic values and disrupt concentrated power. Some features of modern AI could help. But civic agents must be built; they're not the default.
September 5, 2025 at 4:21 PM
And there will be novel threats—the erosion of cognitive autonomy, accelerated cyberwarfare, the ability for executives to wear the administrative state like a mech suit that implements without question their every whim.
September 5, 2025 at 4:21 PM
Soaring inequality? Check.
Concentration of corporate power? Also check.
Stuffed-up information ecosystem? Yep, that too.
Backsliding as the 'autocratic legalism' playbook gets rolled out in one nation after the next? Agents could be a helpful software Stasi.
September 5, 2025 at 4:21 PM
Capable LLM-based agents are already here. For any domain where you can build a good RL verifier, current knowledge will get us to human-level performance and better. Further advances are on the horizon. Agents are bound to exacerbate the trends already stressing democracies.
September 5, 2025 at 4:21 PM
How will AI agents impact democratic values? Democracies are—for independent reasons—already under acute pressure. Since WWII, Moore's Law and democratisation went up and to the right in lockstep. Not any more.
September 5, 2025 at 4:21 PM
Reposted by Seth Lazar
In the latest essay in our AI & Democratic Freedoms series, @sethlazar.org and Tino Cuéllar (@carnegieendowment.org) discuss how AI agents might affect the realization of democratic values. knightcolumbia.org/content/ai-a...
AI Agents and Democratic Resilience
knightcolumbia.org
September 4, 2025 at 7:35 PM
Reposted by Seth Lazar
"Democracies are weaker than they have been for decades," write Carnegie president Mariano-Florentino Cuéllar and @sethlazar.org for @knightcolumbia.org. "A great wave is coming, and they are ill-prepared."

AI agents could help or hurt. And they won't protect democratic values on their own.
September 4, 2025 at 8:14 PM
@caseynewton.bsky.social, in re an old discussion about AI denialists, hope you've caught knightcolumbia.org/events/artif...
Artificial Intelligence and Democratic Freedoms
knightcolumbia.org
April 11, 2025 at 3:59 PM
Reposted by Seth Lazar
🚨 UPCOMING EVENT: Artificial Intelligence and Democratic Freedoms, April 10-11 at @columbiauniversity.bsky.social & online. In collaboration with Senior AI Advisor @sethlazar.org & co-sponsored by the Knight Institute and @columbiaseas.bsky.social. RSVP: knightcolumbia.org/events/artif...
February 28, 2025 at 4:38 PM
New Philosophy of Computing newsletter: share with your philosophy friends. Lots of CFPs, events, opportunities, new papers.

philosophyofcomputing.substack.com/p/normative-...
Normative Philosophy of Computing Newsletter
Welcome to February!
philosophyofcomputing.substack.com
February 25, 2025 at 5:06 AM
Reposted by Seth Lazar
I am a bit bashful about sharing this profile www.thetimes.com/uk/technolog... of me in @thetimes.com, but will do so because it kindly refers to my new book, which is coming out in early March. www.penguin.co.uk/books/460891.... The tech titans pictured seem to be decoration (and not my co-authors).
These Strange New Minds
Stunning advances in digital technology have given us a new wave of disarmingly human-like AI systems. The march of this new technology is set to upturn our economies, challenge our democracies, and r...
www.penguin.co.uk
February 22, 2025 at 2:41 PM
Reposted by Seth Lazar
I spent a few hours with OpenAI's Operator automating expense reports. Most corporate jobs require filing expenses, so Operator could save *millions* of person-hours every year if it gets this right.

Some insights on what worked, what broke, and why this matters for the future of agents 🧵
February 3, 2025 at 6:04 PM
Since agents are now on everyone's minds, do check out this tutorial on the ethics of Language Model Agents, from June last year.

Looks at what 'agent' means, how LM agents work, what kinds of impacts we should expect, and what norms (and regulations) should govern them.
LM Agents: Prospects and Impacts (FAccT tutorial)
YouTube video by Seth Lazar
www.youtube.com
January 24, 2025 at 7:29 AM
Reposted by Seth Lazar
We're excited to announce that our upcoming symposium on #AI and democracy w/ @sethlazar.org (4/10-4/11, at @columbiauniversity.bsky.social & online) will feature papers by a highly accomplished group of authors from a wide range of disciplines. Check them out: knightcolumbia.org/blog/knight-...
Knight Institute Symposium on AI and Democratic Freedoms to Feature Leading Scholars and Technologists
knightcolumbia.org
January 23, 2025 at 3:03 PM
January update from the normative philosophy of computing newsletter: new CFPs, papers, workshops, and resources for philosophers working on normative questions raised by AI and computing.
Normative Philosophy of Computing - January
Happy New Year!
mintresearch.org
January 16, 2025 at 6:48 AM
Reposted by Seth Lazar
EVENT: Artificial Intelligence and Democratic Freedoms, 4/10-11, at @columbiauniversity.bsky.social & online. We're hosting a symposium w/ @sethlazar.org exploring the risks advanced #AI systems pose to democratic freedoms and interventions to mitigate them. RSVP: knightcolumbia.org/events/artif...
Artificial Intelligence and Democratic Freedoms
knightcolumbia.org
January 9, 2025 at 9:08 PM
Reposted by Seth Lazar
📢 Excited to share: I'm again leading the efforts for the Responsible AI chapter for Stanford's 2025 AI Index, curated by @stanfordhai.bsky.social. As last year, we're asking you to submit your favorite papers on the topic for consideration (including your own!) 🧵 1/
January 5, 2025 at 5:42 PM
Reposted by Seth Lazar
Turns out we weren't done for major LLM releases in 2024 after all... Alibaba's Qwen just released QvQ, a "visual reasoning model" - the same chain-of-thought trick as OpenAI's o1 applied to running a prompt against an image

Trying it out is a lot of fun: simonwillison.net/2024/Dec/24/...
Trying out QvQ—Qwen’s new visual reasoning model
I thought we were done for major model releases in 2024, but apparently not: Alibaba's Qwen team just dropped the Apache 2 licensed QvQ-72B-Preview, "an experimental research model focusing on …
simonwillison.net
December 24, 2024 at 8:52 PM