David Duvenaud
@davidduvenaud.bsky.social
930 followers 160 following 56 posts
Machine learning prof at U Toronto. Working on evals and AGI governance.
davidduvenaud.bsky.social
More generally, we worry that liberalism itself is under threat - that the positive-sum-ness of laissez-faire governance won’t hold when citizens are mostly fighting over UBI. We hope we’re wrong!
davidduvenaud.bsky.social
“So far, we humans have been steering our civilisation on easy mode—wherever people went, they were indispensable. Now we have to hit a dauntingly narrow target: to create a civilisation that will care for us indefinitely—even when it doesn’t need us.”
davidduvenaud.bsky.social
“The average North Korean farmer has almost no power over the state, but they are still useful. The state can’t function unless it feeds its citizens. In an era of general automation, even this minimal duty of care will go.”
davidduvenaud.bsky.social
“The right to vote is the most visible sign of human influence over the state. But consider all the other levers of influence that come from economic power, such as lobbying, protesting and striking, which would also be eroded by mass automation.”
davidduvenaud.bsky.social
Some highlights:

“Democracies are still quite young, and were made possible only by technologies that made liberal, pluralistic societies globally competitive. We’re fortunate to have lived through this great confluence of human flourishing and state power, but we can’t take it for granted.”
davidduvenaud.bsky.social
Raymond Douglas and I on how AI job loss could hurt democracy. "No taxation without representation" captures how, historically, democratic rights have flowed from economic power. But this might work in reverse once we're all on UBI: no representation without taxation!

bsky.app/profile/econ...
davidduvenaud.bsky.social
It's fair to say that people have predicted massive permanent unemployment before and been wrong. But our piece is asking what happens when everyone actually does become permanently unemployable.
davidduvenaud.bsky.social
I agree. I was just reading a LessWrong comment making a similar point:

"Liberalism's goal is to avoid the value alignment question, and to mostly avoid the question of who should control society, but AGI/ASI makes the question unavoidable for your basic life."

www.lesswrong.com/posts/onsZ4J...
davidduvenaud.bsky.social
It’ll be co-located with ICML. Our workshop is a separate event, so no need to register for ICML to attend ours! Ours is free but invite-only, please apply on our site:

www.post-agi.org

Co-organized with Raymond Douglas, Nora Ammann,
@kulveit.bsky.social, and @davidskrueger.bsky.social
davidduvenaud.bsky.social
- Are there multiple, qualitatively different basins of attraction of future civilizations?

- Do Malthusian conditions necessarily make it hard to preserve uncompetitive, idiosyncratic values?

- What empirical evidence could help us tell which trajectory we’re on?
davidduvenaud.bsky.social
Some empirical questions we hope to discuss:

- Could alignment of single AIs to single humans be sufficient to solve global coordination problems?

- Will agency tend to operate at ever-larger scales, multiple scales, or something else?
davidduvenaud.bsky.social
Some concrete topics we hope to address:

- What future trajectories are plausible?
- What mechanisms could support long-term legacies?
- New theories of agency, power, and social dynamics.
- AI representatives and new coordination mechanisms.
- How will AI alter cultural evolution?
davidduvenaud.bsky.social
And Anna Yelizarov, @fbarez.bsky.social, @scasper.bsky.social, Beatrice Erkers, among others.

We'll draw from political theory, cooperative AI, economics, mechanism design, history, and hierarchical agency.
davidduvenaud.bsky.social
It's hard to plan for AGI without knowing what outcomes are even possible, let alone good. So we’re hosting a workshop!

Post-AGI Civilizational Equilibria: Are there any good ones?

Vancouver, July 14th
www.post-agi.org

Featuring: Joe Carlsmith, @richardngo.bsky.social‬, Emmett Shear ... 🧵
davidduvenaud.bsky.social
Thanks for explaining, but I'm still confused. LLMs succeed regularly at following complex natural-language instructions without examples - it's their bread and butter. I agree they sometimes have problems executing algorithms consistently (unless fine-tuned to do so), but so do untrained humans.
davidduvenaud.bsky.social
"only those individuals who explicitly understood a task (via a natural language explanation) reached a correct solution whereas implicit trial and error reinforcement failed to converge. This ... has yet to be demonstrated in an LLM."

Is this claiming LLMs haven't been shown to benefit from hints?
davidduvenaud.bsky.social
Thanks for clarifying. I agree that singulatarian scenarios can be naive, breathless, and simplistic. But this piece seems to me to overstate its case, given that it's plausible AI will make most humans unemployable. I'd love to hear your thoughts about life after most work is automated, if you have time.
davidduvenaud.bsky.social
At that point, redistribution would be a life-or-death matter, and also would be disincentivized by competition within and between states. In your story, a Deus Ex Machina saved the protagonist, but I don't have a clear picture of what a realistic equilibrium would look like. Do you?
davidduvenaud.bsky.social
I'm confused why you're confident the downsides will be manageable. As you depicted in The Discrete Charm of the Turing Machine, even just being able to copy the best machine performers (near the level of the best humans) would make almost every human unable to compete, permanently.
davidduvenaud.bsky.social
We realize lots of people have worked on these before, or are already working on them now! We just wanted to list the main directions we're excited about that still seem wide open. We're all ears for suggestions!
davidduvenaud.bsky.social
10. Understand AI Agency. What does the world look like when there are 100,000 exact copies of yourself? When you can design bespoke sub-agents or formally commit to following a policy? It’s not even clear what the natural unit of identity is for an AI.
davidduvenaud.bsky.social
9. Simulate entire civilizations! Using LLMs, we can run tests on entire (simplified) civilizations. This can be a proxy for emergent human phenomena like cultural development, and could help characterize possible AI civilizations.
davidduvenaud.bsky.social
8. AI Complementarity. Most work is aimed at AI agents that fully replace humans, partly because it’s easier to get fast feedback. Can we build benchmarks or evaluations that reward supporting humans? Are there other ways to nudge things more towards augmentation?
davidduvenaud.bsky.social
7. Civilizational alignment and hierarchical agency. We might be able to formally model parts of civilizational dynamics, in the spirit of game theory or information theory, but capable of explaining phenomena like the rise of some religions or the historical instability of even the most powerful regimes.
davidduvenaud.bsky.social
6. Differential Progress. Some technologies might extend human agency:
- Superhuman mediation, bargaining, and arbitration
- Privacy-preserving disclosure
- Collective decision-making mechanisms
- Provable neutrality
Developing these public goods might delay gradual disempowerment.