Dileep George @dileeplearning
dileeplearning.bsky.social
AGI research @DeepMind.
Ex-cofounder & CTO of Vicarious AI (acquired by Alphabet),
Cofounder Numenta
Triply EE (BTech IIT-Mumbai, MS&PhD Stanford). #AGIComics
blog.dileeplearning.com
Reposted by Dileep George @dileeplearning
My latest on Substack -- a write-up of the talk I gave at NeurIPS in December.

aiguide.substack.com/p/on-evaluat...
On Evaluating Cognitive Capabilities in Machines (and Other "Alien" Intelligences)
(Apologies for the length of this post, which means it gets cut off in the email version.)
aiguide.substack.com
January 14, 2026 at 6:43 PM
Reposted by Dileep George @dileeplearning
4/4 “Language is a mechanism to control other people’s mental simulations.” Dileep George, @dileeplearning, of @GoogleDeepMind at the Simons Institute workshop on The Future of Language Models and Transformers. Video: simons.berkeley.edu/talks/dileep...
On cognitive maps, LLMs, world models, and understanding
No abstract available.
simons.berkeley.edu
December 27, 2025 at 1:53 PM
Reposted by Dileep George @dileeplearning
1/4 Do LLMs understand? "They understand in a way that’s very different from how humans understand," Dileep George, @dileeplearning.bsky.social, of Google DeepMind at the Simons Institute workshop on The Future of Language Models and Transformers. Video: simons.berkeley.edu/talks/dileep...
December 27, 2025 at 1:53 PM
the one where #AGIComics figures out what academic training is all about …
www.agicomics.net/c/artificial...
AGI Comics — #35: Artificial General Scholarship
In academia, it's not just the citation counts that counts
www.agicomics.net
December 25, 2025 at 12:44 PM
Check out my blog on the risks of *not* building AGI!

blog.dileeplearning.com/p/if-no-one-...
If no one builds it, you're never born.
Not building AGI is a risky thing....
blog.dileeplearning.com
December 8, 2025 at 11:56 PM
Can AIs be conscious? Should we consider them as persons? Here are my current thoughts.....

blog.dileeplearning.com/p/ai-conscio...
AI consciousness, qualia, and personhood.
A note on my current positions, in a FAQ format.
blog.dileeplearning.com
November 7, 2025 at 4:35 PM
blog.dileeplearning.com/p/quick-note...

TLDR: It was fun and the process felt 'magical' at times. If you have lots of small project ideas you want to prototype, vibe-coding is a fun way to do that as long as you are willing to settle for 'good enough'.
Quick notes from vibe-coding a comic website
Hate less, vibe more!
blog.dileeplearning.com
October 20, 2025 at 4:19 PM
Those who think there's an AI bubble are unaware of a recent breakthrough....
www.agicomics.net/c/ag-breakth...
AGI Comics — #2: Artificial General Breakthrough
Self-congratulatory learning would be a cool name for a real learning algorithm.
www.agicomics.net
October 20, 2025 at 12:41 AM
New and improved and 10000% vibe-coded! Check out www.agicomics.net
AGI Comics — #1: Artificial General Productivity
A comic series.
www.agicomics.net
October 20, 2025 at 12:25 AM
Reposted by Dileep George @dileeplearning
1/4) I’m excited to announce that I have joined the Paradigms of Intelligence team at Google (github.com/paradigms-of...)! Our team, led by @blaiseaguera.bsky.social, is bringing forward the next stage of AI by pushing on some of the assumptions that underpin current ML.

#MLSky #AI #neuroscience
Paradigms of Intelligence Team
Advance our understanding of how intelligence evolves to develop new technologies for the benefit of humanity and other sentient life - Paradigms of Intelligence Team
github.com
September 23, 2025 at 3:06 PM
Reposted by Dileep George @dileeplearning
Jesus Christ.
September 17, 2025 at 5:46 PM
Reposted by Dileep George @dileeplearning
1/
🚨 New preprint! 🚨

Excited and proud (& a little nervous 😅) to share our latest work on the importance of #theta-timescale spiking during #locomotion in #learning. If you care about how organisms learn, buckle up. 🧵👇

📄 www.biorxiv.org/content/10.1...
💻 code + data 🔗 below 🤩

#neuroskyence
September 17, 2025 at 7:33 PM
#AGIComics now has a website! And it is 100% vibe coded!

Check out agicomics.net
AGI Comics — #23: Artificial General Productivity
A comic series.
agicomics.net
September 17, 2025 at 10:01 AM
Reposted by Dileep George @dileeplearning
12 leading neuroscientists tackle a big question: Will we ever understand the brain?

Their reflections span philosophy, complexity, and the limits of scientific explanation.

www.sainsburywellcome.org/web/blog/wil...

Illustration by @gilcosta.bsky.social & @joanagcc.bsky.social
August 6, 2025 at 8:41 AM
🎯
June 5, 2025 at 9:53 PM
Hmm…I don’t think it’s impossible.

Evolution could create structures in the brain that are in correspondence with structure in the world.
Dear neuroscientists,

The brain cannot generate information about the world de novo, it's impossible.

All the brain can do is:

1. Selectively remove info that is irrelevant.
2. Re-emit info previously absorbed via evolution or memory.

Our brain never "creates" information. Never.

🧠📈 🧪
May 15, 2025 at 6:02 PM
This paper turned up on a feed, I was intrigued by it and started reading...

..but then I was quite baffled because our CSCG work seems to have tackled many of these problems in a more general setting, and it's not even mentioned!

So I asked ChatGPT... ...I'm impressed by the answer. 1/🧵
May 15, 2025 at 1:22 AM
Wow, very cool to see this work from Alla Karpova's lab. She had shown me the results when I visited @hhmijanelia.bsky.social and I was blown away.

www.biorxiv.org/content/10.1...

1/
April 29, 2025 at 12:05 AM
Reposted by Dileep George @dileeplearning
𝗛𝗼𝘄 𝘀𝗵𝗼𝘂𝗹𝗱 𝘄𝗲 𝗱𝗲𝗳𝗶𝗻𝗲 𝗮𝗻𝗱 𝗱𝗲𝘁𝗲𝗿𝗺𝗶𝗻𝗲 𝗮 𝗯𝗿𝗮𝗶𝗻 𝗿𝗲𝗴𝗶𝗼𝗻'𝘀 𝗶𝗺𝗽𝗼𝗿𝘁𝗮𝗻𝗰𝗲?
We introduce the idea of "importance" in terms of the extent to which a region's signals steer/contribute to brain dynamics as a function of brain state.
Work by @codejoydo.bsky.social
elifesciences.org/reviewed-pre...
Brain dynamics and spatiotemporal trajectories during threat processing
elifesciences.org
April 27, 2025 at 5:17 PM
It's kinda obvious. #AGIComics has already figured out which brain region is the most important. 😇
April 27, 2025 at 8:56 PM
ohh...yes...this is exactly what I think after reading some of the "deep research" reports. ....written by a committee
Worthwhile reading.

Some of the features that John rightly criticizes in AI writing are shared by the sort of committee reports and consensus papers that emerge from workshops and symposia because someone felt that there had to be a “product” associated with the meeting.
I wrote a very long blog post about AI writing. I hope you'll read it.

meresophistry.substack.com/p/the-mental...
March 30, 2025 at 1:30 AM
Reposted by Dileep George @dileeplearning
jumping on the Gemini 2.5 bandwagon... it's an incredible model. really feels like an(other) inflection point. talking to Claude 3.7 feels like talking to a competent colleague who knows about everything, but makes mistakes. Gemini 2.5 feels like talking to a world-class expert with A+ intuitions
March 28, 2025 at 5:16 PM
Give me 10 billion dollars and I’ll do it. 1 billion for developing the hardware and 9 billion to pay for my opportunity cost 😇
Okay, random example:

Can you give me hardware for creating 100B parameter Boltzmann machines?
March 26, 2025 at 10:15 PM
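For context on the exchange above: a fully connected Boltzmann machine has a weight between every pair of binary units, so parameters and per-sweep sampling cost grow quadratically with unit count, which is why scaling one to 100B parameters is the crux of the thread. Below is a minimal illustrative sketch (my own example, not from the thread) of the model with Gibbs sampling; all names here are made up for illustration.

```python
import numpy as np

# Energy of a fully connected Boltzmann machine over binary states s in {0,1}^n:
#   E(s) = -0.5 * s^T W s - b^T s, with W symmetric and zero on the diagonal.
rng = np.random.default_rng(0)

def make_bm(n):
    """Random symmetric weights and zero biases (toy initialization)."""
    W = rng.normal(scale=0.1, size=(n, n))
    W = (W + W.T) / 2.0
    np.fill_diagonal(W, 0.0)
    b = np.zeros(n)
    return W, b

def gibbs_sweep(s, W, b, rng):
    """One full Gibbs sweep: resample each unit given all the others."""
    for i in range(len(s)):
        p_on = 1.0 / (1.0 + np.exp(-(W[i] @ s + b[i])))
        s[i] = rng.random() < p_on  # stored as 0.0 / 1.0 in the float array
    return s

n = 32
W, b = make_bm(n)
s = (rng.random(n) < 0.5).astype(float)
for _ in range(10):
    s = gibbs_sweep(s, W, b, rng)

# Quadratic scaling: n*(n-1)/2 pairwise weights plus n biases.
params = n * (n - 1) // 2 + n
```

At n = 32 this is 528 parameters; reaching 100B parameters needs roughly n ≈ 450,000 densely coupled units, and each Gibbs sweep touches every weight, which is the hardware-scaling problem the thread is arguing about.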
Nope. It is an engineering problem. Give me an algorithm you think is not being scaled because of a hardware mismatch, I can make the hardware (chip + interconnect + datacenter) given enough money. Purely an engineering problem.
No you can't - that is super non-trivial.

Building hardware to run a specific algorithm is hard enough.

Building hardware that can run a specific algorithm *and* scale up to billions of parameters is super duper hard.

It's not just a matter of money... It's a scientific problem!
March 26, 2025 at 10:00 PM
ok...in that case which other existing model would you make a bet on scaling up? Pick one. I'd be happy to raise money for it.
Well, that's the kind of thing one would have to demonstrate.

As to SOTA - transformers are likely SOTA because they scale so well, which is largely because they won the "hardware lottery".

I am willing to bet other models would be SOTA if we could train them at similar scales.
March 26, 2025 at 7:29 PM