Aran Nayebi
@anayebi.bsky.social
Assistant Professor of Machine Learning, Carnegie Mellon University (CMU) Building a Natural Science of Intelligence 🧠🤖
Prev: ICoN Postdoctoral Fellow @MIT; PhD @Stanford NeuroAILab. Personal website: https://cs.cmu.edu/~anayebi
anayebi.bsky.social
A nice application of our NeuroAI Turing Test! Check out
@ithobani.bsky.social's thread for more details on comparing brains to machines!
ithobani.bsky.social
1/X Our new method, the Inter-Animal Transform Class (IATC), is a principled way to compare neural network models to the brain. It's the first to ensure both accurate brain activity predictions and specific identification of neural mechanisms.

Preprint: arxiv.org/abs/2510.02523
anayebi.bsky.social
Academic paper: bsky.app/profile/anay...
anayebi.bsky.social
Can a Universal Basic Income (UBI) become feasible—even if AI fully automates existing jobs and creates no new ones?

We derive a closed-form UBI threshold tied to AI capabilities that suggests it's potentially achievable by mid-century even under moderate AI growth assumptions:
anayebi.bsky.social
Next time we discuss how to optimize these reward models via DPO/policy gradients!

Slides: www.cs.cmu.edu/~mgormley/co...

Full course info: bsky.app/profile/anay...
anayebi.bsky.social
Specifically, we cover everything from methods that don't involve parameter updating, e.g. In-Context Learning, Prompt Engineering, and Chain-of-Thought Prompting, to methods that do, such as Instruction Fine-Tuning (IFT) and building on IFT to perform full-fledged Reinforcement Learning from Human Feedback (RLHF).
anayebi.bsky.social
In today's Generative AI lecture, we talk about all the different ways to take a giant auto-complete engine like an LLM and turn it into a useful chat assistant.
anayebi.bsky.social
In today's Generative AI lecture, we discuss the 4 primary approaches to Parameter-Efficient Fine-Tuning (PEFT): subset fine-tuning, adapters, Prefix/Prompt Tuning, and Low-Rank Adaptation (LoRA).

We show that each of these amounts to fine-tuning a different aspect of the Transformer.
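To make the LoRA idea concrete, here's a minimal numpy sketch (illustrative only, not the lecture's code; sizes and names are my own): the pretrained weight W stays frozen, and the update is factored through two small matrices A and B of rank r, with B zero-initialized so training starts exactly at the pretrained model.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 8, 8, 2, 4       # toy sizes; rank r << d

W = rng.normal(size=(d_out, d_in))       # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + (alpha/r) * B (A x): base path frozen, low-rank path trainable
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=(d_in,))
# With B zero-initialized, the LoRA path contributes nothing at the start,
# so the output matches the frozen base model exactly.
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters: r*(d_in + d_out) instead of d_in*d_out.
print(A.size + B.size, "vs", W.size)
```

The parameter savings here (32 vs 64) are trivial at toy scale, but the ratio r*(d_in + d_out) / (d_in * d_out) shrinks rapidly as the layer widths grow.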
anayebi.bsky.social
6/6 I close with reflections on AI safety and alignment, and the Q&A explores open questions: from building physically accurate (not just photorealistic) world models to the role of autoregression and scale.

🎥Watch here: www.youtube.com/watch?v=5deM...

Slides: anayebi.github.io/files/slides...
RI Seminar: Aran Nayebi: "Using Embodied Agents to Reverse-Engineer Natural Intelligence" (YouTube video by CMU Robotics Institute)
anayebi.bsky.social
5/6 I also touch on the Contravariance Principle/Platonic Representation Hypothesis, our proposed NeuroAI Turing Test, and why embodied agents are essential for building not just more capable, but also more reliable, autonomous systems.
anayebi.bsky.social
4/6 This journey culminates in our first task-optimized “NeuroAgent”, integrating advances in visual and tactile perception (including our NeurIPS ’25 oral), mental simulation, memory, and intrinsic curiosity.
anayebi.bsky.social
3/6 By grounding agents in perception, prediction, planning, memory, and intrinsic motivation — and validating them against large-scale neural data from rodents, primates, and zebrafish — we show how neuroscience and machine learning can form a unified *science of intelligence*.
anayebi.bsky.social
2/6 I present a cohesive framework that develops these notions further, grounded in both machine learning and experimental neuroscience.

In it, I outline our efforts over the past 4 years to set the capabilities of humans & animals as concrete engineering targets for AI.
anayebi.bsky.social
1/6 Recent discussions (e.g. Rich Sutton on @dwarkesh.bsky.social’s podcast) have highlighted why animals are a better target for intelligence research, and why scaling alone isn’t enough.
In my recent @cmurobotics.bsky.social seminar talk, “Using Embodied Agents to Reverse-Engineer Natural Intelligence”,
anayebi.bsky.social
Check out our accompanying open-source library!
bsky.app/profile/anay...
anayebi.bsky.social
🚀 New Open-Source Release! PyTorchTNN 🚀
A PyTorch library for biologically-inspired temporal neural nets: unrolling computation through time. Integrates with our recent Encoder-Attender-Decoder, which flexibly combines models (Transformer, SSM, RNN) since no single one fits all sequence tasks.
🧵👇
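The core idea of unrolling computation through time can be sketched in a few lines of numpy (this is a conceptual illustration under my own assumptions, not PyTorchTNN's actual API; see the library for that): each connection carries a one-time-step delay, so a two-layer stack naturally produces layer-wise response latencies, as in cortex.

```python
import numpy as np

rng = np.random.default_rng(1)
d, T = 4, 6                          # toy layer width and number of time steps
W1, W2 = rng.normal(scale=0.5, size=(2, d, d))

x = rng.normal(size=(T, d))          # input at each time step
h1 = np.zeros((T + 1, d))            # layer-1 activity over time
h2 = np.zeros((T + 1, d))            # layer-2 activity over time

for t in range(T):
    # Each connection imposes a one-step delay: layer 2 at time t+1
    # sees layer 1's activity from time t, not the current input.
    h1[t + 1] = np.tanh(W1 @ x[t])
    h2[t + 1] = np.tanh(W2 @ h1[t])

# Information takes two steps to reach layer 2: it is silent at t=1
# but responds to x[0] by t=2.
assert np.allclose(h2[1], 0.0)
assert not np.allclose(h2[2], 0.0)
```

This delay structure is what distinguishes a temporally unrolled net from a standard feedforward pass, where all layers update instantaneously within a single step.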
anayebi.bsky.social
Excited to have this work accepted as an *oral* to NeurIPS 2025!
trinityjchung.com
1/ What if we make robots that process touch the way our brains do?
We found that Convolutional Recurrent Neural Networks (ConvRNNs) pass the NeuroAI Turing Test in currently available mouse somatosensory cortex data.
New paper by @Yuchen @Nathan @anayebi.bsky.social and me!
Task-Optimized Convolutional Recurrent Networks Align with Tactile Processing in the Rodent Brain
anayebi.bsky.social
Excited to have this work accepted to NeurIPS 2025! See you all in San Diego!
reecedkeller.bsky.social
1/ I'm excited to share recent results from my first collaboration with the amazing @anayebi.bsky.social
and @leokoz8.bsky.social !

We show how autonomous behavior and whole-brain dynamics emerge in embodied agents with intrinsic motivation driven by world models.
anayebi.bsky.social
In today's Generative AI lecture, we discuss how to implement Diffusion Models and go through their derivation. Next time, we discuss their deeper relationships with variational inference :)

Slides: www.cs.cmu.edu/~mgormley/co...

Full course info: bsky.app/profile/anay...
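As a companion to the derivation, here is a minimal numpy sketch of the DDPM-style forward (noising) process, assuming a linear noise schedule (my choice for illustration, not necessarily the lecture's): the closed-form marginal lets you noise a sample to any timestep t in one shot.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # \bar{alpha}_t, the cumulative signal weight

def q_sample(x0, t, eps):
    # Closed-form forward marginal:
    # x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps, with eps ~ N(0, I)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps

x0 = rng.normal(size=(8,))
eps = rng.normal(size=(8,))

x_early = q_sample(x0, 0, eps)       # nearly the clean sample
x_late = q_sample(x0, T - 1, eps)    # nearly pure Gaussian noise

# The signal coefficient decays from ~1 toward ~0 as t -> T.
assert alpha_bar[0] > 0.99
assert alpha_bar[-1] < 1e-4
```

Training then amounts to sampling a random t, noising x0 with q_sample, and regressing a network's output onto the injected eps.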
anayebi.bsky.social
In today's Generative AI lecture, we discuss Generative Adversarial Networks (GANs) & review probabilistic graphical models (PGMs) as a prelude to Diffusion models and VAEs, which we will discuss next time!

Slides: www.cs.cmu.edu/~mgormley/co...

Full course info: bsky.app/profile/anay...
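As a quick illustration of the GAN objective (a hedged sketch in numpy, not taken from the slides): with a logistic discriminator, the discriminator loss and the standard non-saturating generator loss look like this, and at the optimum where D(x) = 1/2 everywhere, the discriminator's loss equals 2 log 2 = log 4.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def d_loss(d_real_logits, d_fake_logits):
    # Discriminator: push real samples toward label 1, fakes toward label 0.
    return -(np.log(sigmoid(d_real_logits))
             + np.log(1.0 - sigmoid(d_fake_logits))).mean()

def g_loss(d_fake_logits):
    # Non-saturating generator loss: maximize log D(G(z))
    # instead of minimizing log(1 - D(G(z))), for stronger early gradients.
    return -np.log(sigmoid(d_fake_logits)).mean()

# At equilibrium, D outputs 1/2 everywhere (logits = 0):
assert np.isclose(d_loss(np.zeros(4), np.zeros(4)), 2 * np.log(2))
assert np.isclose(g_loss(np.zeros(4)), np.log(2))
```

The generator's loss drops as it fools the discriminator more (larger fake logits), which is exactly the adversarial pressure that drives training.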
anayebi.bsky.social
In today's Generative AI lecture, we cover Vision Transformers (as well as the broader notion of Encoder-Only Transformers).

We also explain the historical throughline to some of these ideas, inspired by Nobel-prize-winning observations in neuroscience!
anayebi.bsky.social
Actually, the point of the present work isn't the % that's automated (though that's certainly a factor that can affect the UBI threshold). It's more about the pressure part: an AI of increasing capability lowers the societal "barrier to entry," because you don't have to increase the automation % to reach the threshold.
anayebi.bsky.social
Agreed, but AI might finally create both the surplus and the pressure to make it happen, even if I’m cautious about how human nature and politics play out.
anayebi.bsky.social
Totally agree. UBI here isn't meant to solve the meaning/purpose problem, but just to identify what societal levers there are to minimally cover the basics.