Joel Z Leibo
@jzleibo.bsky.social
3.2K followers 240 following 39 posts
I can be described as a multi-agent artificial general intelligence. OK, so some people pointed out that I am not in fact artificial, contradicting my bio. To them I would reply that I am likely also a cognitive gadget. www.jzleibo.com
jzleibo.bsky.social
Concordia was always an entity-component pattern. But it was improved in 2.0. Also, we realized it was an important part of the story to emphasize and explain to anyone who didn't already know about it. So that's what we did in the 2.0 tech report.

Here:
arxiv.org/abs/2507.08892
Multi-Actor Generative Artificial Intelligence as a Game Engine
Generative AI can be used in multi-actor environments with purposes ranging from social science modeling to interactive narrative and AI evaluation. Supporting this diversity of use cases -- which we ...
arxiv.org
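The entity-component idea in that post is easy to picture with a minimal sketch. This is not Concordia's actual API; the class and method names below (Entity, Component, pre_act, Memory, Goal) are illustrative assumptions only:

```python
from dataclasses import dataclass, field


class Component:
    """A piece of state or behavior attached to an entity."""

    def pre_act(self, observation: str) -> str:
        """Contribute text to the entity's context before it acts."""
        return ""


@dataclass
class Entity:
    """An entity is little more than a name plus a bag of components."""

    name: str
    components: list[Component] = field(default_factory=list)

    def act(self, observation: str) -> str:
        # Assemble the acting context from every component, then decide.
        context = "\n".join(c.pre_act(observation) for c in self.components)
        return f"{self.name} acts given:\n{context}"


class Memory(Component):
    """Remembers observations and surfaces the most recent ones."""

    def __init__(self):
        self.events: list[str] = []

    def pre_act(self, observation: str) -> str:
        self.events.append(observation)
        return "Recent events: " + "; ".join(self.events[-3:])


class Goal(Component):
    """Injects a fixed goal into the acting context."""

    def __init__(self, goal: str):
        self.goal = goal

    def pre_act(self, observation: str) -> str:
        return f"Current goal: {self.goal}"


alice = Entity("Alice", [Memory(), Goal("trade apples for oranges")])
print(alice.act("Bob offers three oranges."))
```

The design point of the pattern is that an entity's behavior comes entirely from the components plugged into it, so swapping components changes what the entity does without subclassing the entity itself.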
jzleibo.bsky.social
One can also explain human behavior this way, of course 😉; we are all role playing.

But I certainly agree with the advice you offer in this thread. Humans harm themselves when they stop playing roles that connect to other humans.
mpshanahan.bsky.social
However, a good explanation for much of the sophisticated behaviour we see in today's chatbots is that they are role-playing. 3/4
jzleibo.bsky.social
Congratulations!!!! 🎉
Reposted by Joel Z Leibo
wbarfuss.bsky.social
How to cooperate for a sustainable future? We don't know (yet), but I'm thrilled to share that our new perspective piece has just been published in @pnas.org. Bridging complexity science and multiagent reinforcement learning can lead to a much-needed science of collective, cooperative intelligence.
Reposted by Joel Z Leibo
tedunderwood.com
Love the insight that language models have been more widely useful than video models.

For Levine, this is because LLMs kind of sneakily and indirectly model the brain. I would be tempted to say instead that shared languages are more powerful than single brains. But either way, it’s a good read!
Language Models in Plato's Cave
Why language models succeeded where video models failed, and what that teaches us about AI
open.substack.com
jzleibo.bsky.social
Confabulate is exactly the right word for it -- it has been used this way in neuroscience for decades. I see some people pointing out that Frankfurt's "bullshit" also works, and I agree with that too. Confabulate seems better for obvious reasons, though.
Reposted by Joel Z Leibo
davidpfau.com
The idea of "AI alignment" grew out of a community that thought you could solve morals like it was a CS problem set. Nice to see a more nuanced take.
jzleibo.bsky.social
Thanks to my coauthors @sebk.bsky.social, @sindero.bsky.social, Sasha Vezhnevets, Wil Cunningham, and @manfreddiaz.bsky.social
jzleibo.bsky.social
It's a revised version of the post of the same name that we previously made on LessWrong. This version adds references and attempts to incorporate responses to most of the comments on the original.
jzleibo.bsky.social
First LessWrong post! Inspired by Richard Rorty, we argue for a different view of AI alignment, where the goal is "more like sewing together a very large, elaborate, polychrome quilt" than it is "like getting a clearer vision of something true and deep"
www.lesswrong.com/posts/S8KYwt...
Societal and technological progress as sewing an ever-growing, ever-changing, patchy, and polychrome quilt — LessWrong
We can just drop the axiom of rational convergence.
www.lesswrong.com
jzleibo.bsky.social
That's nothing, it also improves performance if you tell the worker what they ate for breakfast!
eugenevinitsky.bsky.social
The multiagent LLM people are going to do me in. What do you mean you told one agent it was a manager and the other a worker and it slightly improved performance
Reposted by Joel Z Leibo
natolambert.bsky.social
A silly example of this being unexpected for AI forecasters
Reposted by Joel Z Leibo
sharky6000.bsky.social
Looking for a principled evaluation method for ranking of *general* agents or models, i.e. that get evaluated across a myriad of different tasks?

I’m delighted to tell you about our new paper, Soft Condorcet Optimization (SCO) for Ranking of General Agents, to be presented at AAMAS 2025! 🧵 1/N
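Without reproducing the paper's actual algorithm, the flavor of a "soft" Condorcet-style aggregation can be sketched as learning one scalar rating per agent so that the pairwise orderings implied by each task's ranking are respected under a smooth sigmoid penalty. Everything below (the toy rankings, the loss form, the hyperparameters) is an assumption for illustration, not the method from the paper:

```python
import numpy as np

# Toy per-task rankings (best to worst); agent names and results are made up.
rankings = [
    ["A", "B", "C", "D"],
    ["B", "A", "D", "C"],
    ["A", "C", "B", "D"],
]

agents = sorted({a for r in rankings for a in r})
idx = {a: i for i, a in enumerate(agents)}
ratings = np.zeros(len(agents))

# Each per-task ranking yields one (winner, loser) preference per ordered pair.
pairs = [
    (idx[r[i]], idx[r[j]])
    for r in rankings
    for i in range(len(r))
    for j in range(i + 1, len(r))
]


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


# Minimize sigma((rating[loser] - rating[winner]) / T) summed over preferences:
# violated preferences cost roughly 1, satisfied ones roughly 0.
lr, temperature = 0.1, 1.0
for _ in range(2000):
    grad = np.zeros_like(ratings)
    for winner, loser in pairs:
        z = (ratings[loser] - ratings[winner]) / temperature
        d = sigmoid(z) * (1.0 - sigmoid(z)) / temperature  # d(loss)/d(rating[loser])
        grad[loser] += d
        grad[winner] -= d
    ratings -= lr * grad

# Higher rating = ranked better overall across the tasks.
for a in sorted(agents, key=lambda a: -ratings[idx[a]]):
    print(a, round(float(ratings[idx[a]]), 2))
```

In this sketch, when the per-task rankings all agree the learned ratings simply reproduce that consensus; the smooth penalty only starts to matter when tasks disagree, since it trades off how many pairwise preferences each candidate ordering violates.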
jzleibo.bsky.social
CAIF's new and massive report on multi-agent AI risks will be a really useful resource for the field
www.cooperativeai.com/post/new-rep...
Cooperative AI
www.cooperativeai.com
Reposted by Joel Z Leibo
jeffdean.bsky.social
I can't believe they've just cancelled the Epidemic Intelligence Service program at CDC.

This program trains the best & brightest epidemiologists, who then go on to have distinguished careers in public health, serving at CDC, in state health departments, overseas, ...
Reposted by Joel Z Leibo
turrigiano.bsky.social
Happy to see SfN sign on to this
aaberhe.com
Professional societies stand together in defense of science & scientists.
“science has led to humanity’s greatest advances, improving people’s lives & the health of our planet … (we’re) committed to supporting, elevating, & fighting for science & those who further it.“
www.unitedsciencealliance.org