Scholar

Joel Z. Leibo

H-index: 41
Fields: Computer science 38%, Neuroscience 20%

jzleibo.bsky.social
Concordia was always built on an entity-component pattern, but it was improved in 2.0. Also, we realized that it was an important part of the story to emphasize and explain to anyone who didn't already know about it. So that's what we did in the 2.0 tech report.

Here:
arxiv.org/abs/2507.08892
Multi-Actor Generative Artificial Intelligence as a Game Engine
Generative AI can be used in multi-actor environments with purposes ranging from social science modeling to interactive narrative and AI evaluation. Supporting this diversity of use cases -- which we ...
arxiv.org
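
For anyone who hasn't met the pattern: in an entity-component design, an actor (an entity) is assembled from swappable components rather than defined by a fixed class hierarchy. Below is a minimal Python sketch of the idea; the names here (Entity, Component, pre_act) are illustrative assumptions, not Concordia's actual API. See the tech report for the real design.

class Component:
    """One pluggable piece of an actor's state or behavior."""
    def pre_act(self) -> str:
        return ""

class MemoryComponent(Component):
    """Contributes recent memories to the actor's context."""
    def __init__(self) -> None:
        self.memories: list[str] = []
    def pre_act(self) -> str:
        return "Memories: " + "; ".join(self.memories[-3:])

class Entity:
    """An actor assembled from components instead of a class hierarchy."""
    def __init__(self, name: str, components: list[Component]) -> None:
        self.name = name
        self.components = components
    def act(self) -> str:
        # Each component contributes context; a real system would pass the
        # assembled context to an LLM to decide the entity's next action.
        context = "\n".join(c.pre_act() for c in self.components)
        return f"{self.name} acts given:\n{context}"

agent = Entity("Alice", [MemoryComponent()])
agent.components[0].memories.append("met Bob at the market")
print(agent.act())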

Reposted by: Joel Z. Leibo

handle.invalid
Excited to reveal Genie 2, our most capable foundation world model that, given a single prompt image, can generate an endless variety of action-controllable, playable 3D worlds. Fantastic cross-team effort by the Open-Endedness Team and many other teams at Google DeepMind! 🧞
jparkerholder.bsky.social
Introducing 🧞Genie 2 🧞 - our most capable large-scale foundation world model, which can generate a diverse array of consistent worlds, playable for up to a minute. We believe Genie 2 could unlock the next wave of capabilities for embodied agents 🧠.

Reposted by: Joel Z. Leibo

tyrellturing.bsky.social
💯

Hallucination is totally the wrong word, implying it is perceiving the world incorrectly.

But it's generating false, plausible-sounding statements. Confabulation is literally the perfect word.

So, let's all please start referring to any junk that an LLM makes up as "confabulations".
cianodonnell.bsky.social
petition to change the word describing ChatGPT's mistakes from 'hallucinations' to 'confabulations'

A hallucination is a false subjective sensory experience. ChatGPT doesn't have experiences!

It's just making up plausible-sounding bs, covering knowledge gaps. That's confabulation

Reposted by: Joel Z. Leibo

tedunderwood.com
How did we end up in a world where 1) in Jupyter notebooks, "return" means add a line but "shift-return" means execute cell,

whereas 2) in prompt fields for LLMs, "shift-return" means add a line and "return" means execute this prompt.

trivial annoyance, but it's a Top 5 Trivial Annoyance

Reposted by: Joel Z. Leibo

eugenevinitsky.bsky.social
The multiagent LLM people are going to do me in. What do you mean you told one agent it was a manager and the other a worker and it slightly improved performance
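
For context, the setup being teased is plain role prompting: give each agent a persona via its system message and chain their outputs. A minimal sketch, assuming a generic chat-message format; call_llm and run_manager_worker are hypothetical names, not from any real library or paper.

def call_llm(messages: list[dict]) -> str:
    # Hypothetical stand-in: send chat messages to an LLM, return its reply.
    return "(model reply)"

def run_manager_worker(task: str) -> str:
    # The "manager" agent plans; the "worker" agent executes the plan.
    plan = call_llm([
        {"role": "system", "content": "You are a manager. Break the task into steps."},
        {"role": "user", "content": task},
    ])
    return call_llm([
        {"role": "system", "content": "You are a worker. Carry out the plan exactly."},
        {"role": "user", "content": plan},
    ])

print(run_manager_worker("Summarize this quarter's experiment results."))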

Reposted by: Joel Z. Leibo

mpshanahan.bsky.social
However, a good explanation for much of the sophisticated behaviour we see in today's chatbots is that they are role-playing. 3/4

jzleibo.bsky.social
One can also explain human behavior this way, of course 😉; we are all role-playing.

But I certainly agree with the advice you offer in this thread. Humans harm themselves when they stop playing roles that connect to other humans.
mpshanahan.bsky.social
However, a good explanation for much of the sophisticated behaviour we see in today's chatbots is that they are role-playing. 3/4
