Anima Anandkumar
@anima-anandkumar.bsky.social
AI Pioneer, AI+Science, Professor at Caltech, Former Senior Director of AI at NVIDIA, Former Principal Scientist at AWS AI.
An exciting collaboration with @francesarnold.bsky.social on AI+enzymes. It combines generative protein models with carefully tuned filters, resulting in functional and versatile enzymes that beat natural and previously engineered enzymes.
Generated PLP-dependent Trp synthases are functional, stable, and exhibit usefully broad substrate scopes! Fun collaboration with @anima-anandkumar.bsky.social @ramanathanlab.bsky.social Amin Takavoli. I love AI + #enzymes!
Finetune a codon-level language model on 30k tryptophan synthases, then generate diverse, functional enzymes with broad substrate scopes.

Théophile Lambert @jsunn-y.bsky.social @francesarnold.bsky.social

www.biorxiv.org/content/10.1...
December 16, 2025 at 3:23 AM
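For readers curious what the generate-and-filter pipeline described above might look like in code, here is a minimal illustrative sketch, not the actual GenSLM implementation: a toy codon-level causal language model is sampled for coding sequences, which are then screened by simple placeholder filters. The model size, sampling settings, and filters here are all hypothetical.

```python
# Illustrative sketch only: a toy codon-level causal LM sampled for coding sequences,
# followed by placeholder filters. This is NOT the GenSLM pipeline from the paper.
import torch
from transformers import GPT2Config, GPT2LMHeadModel

# Codon vocabulary: the 64 codons plus start/end tokens.
BASES = "ACGT"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
vocab = {"<bos>": 0, "<eos>": 1, **{c: i + 2 for i, c in enumerate(CODONS)}}
inv_vocab = {i: t for t, i in vocab.items()}

# A small causal LM over codon tokens (stand-in for a fine-tuned codon language model;
# fine-tuning on ~30k tryptophan synthase coding sequences is omitted here).
config = GPT2Config(vocab_size=len(vocab), n_positions=512, n_embd=128, n_layer=4, n_head=4)
model = GPT2LMHeadModel(config)

@torch.no_grad()
def sample_sequence(max_codons: int = 400, temperature: float = 0.9) -> str:
    """Autoregressively sample one DNA coding sequence, codon by codon."""
    ids = torch.tensor([[vocab["<bos>"]]])
    for _ in range(max_codons):
        logits = model(ids).logits[0, -1] / temperature
        next_id = torch.multinomial(torch.softmax(logits, dim=-1), 1)
        if next_id.item() == vocab["<eos>"]:
            break
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)
    return "".join(inv_vocab[i] for i in ids[0, 1:].tolist())

def passes_filters(dna: str) -> bool:
    """Toy filters: starts with ATG and has no premature in-frame stop codon."""
    codons = [dna[i:i + 3] for i in range(0, len(dna) - 2, 3)]
    stops = {"TAA", "TAG", "TGA"}
    return bool(codons) and codons[0] == "ATG" and not any(c in stops for c in codons[:-1])

candidates = [s for s in (sample_sequence() for _ in range(16)) if passes_filters(s)]
print(f"{len(candidates)} candidate sequences passed the toy filters")
```

In the actual work, the filters are carefully tuned (the "carefully tuned filters" mentioned in the post above), which is where much of the practical gain comes from before any wet-lab testing.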
Reposted by Anima Anandkumar
Finetune a codon-level language model on 30k tryptophan synthases, then generate diverse, functional enzymes with broad substrate scopes.

Théophile Lambert @jsunn-y.bsky.social @francesarnold.bsky.social

www.biorxiv.org/content/10.1...
December 16, 2025 at 12:25 AM
Reposted by Anima Anandkumar
Generated PLP-dependent Trp synthases are functional, stable, and exhibit usefully broad substrate scopes! Fun collaboration with @anima-anandkumar.bsky.social @ramanathanlab.bsky.social Amin Takavoli. I love AI + #enzymes!
Finetune a codon-level language model on 30k tryptophan synthases, then generate diverse, functional enzymes with broad substrate scopes.

Théophile Lambert @jsunn-y.bsky.social @francesarnold.bsky.social

www.biorxiv.org/content/10.1...
December 16, 2025 at 12:46 AM
Reposted by Anima Anandkumar
Join our happy hour meetup today at NeurIPS to chat about AI+Science and AI+Math with me and my team from @caltechedu at:

Achilles Coffee Roasters Gaslamp, San Diego.
4:45pm - 6:30pm

I will be announcing one more meetup later this week if you can't make it to this one. Stay tuned!
December 2, 2025 at 6:56 PM
Reposted by Anima Anandkumar
Join @aratip.bsky.social, Vivek Vishwanathan & @anima-anandkumar.bsky.social for UC Berkeley’s #TechPolicyWeek!

“Bigger, Better Ambitions for AI” — exploring how #AI can drive positive impact.

Oct 20 | 3–4:15pm | 2400 Ridge Rd

@BerkeleyISchool.bsky.social @GoldmanSchool.bsky.social
Bigger, Better Ambitions for AI
How can we advance efforts to harness AI to deliver positive impacts to people's lives?
www.eventbrite.com
October 11, 2025 at 10:50 PM
I am thrilled to see Omar Yaghi win the Nobel Prize in Chemistry today. I have had the privilege of interacting and collaborating with him. This is a paper from a couple of years ago using generative models for MOFs with the @ucberkeleyofficial.bsky.social group. pubs.acs.org/doi/10.1021/...
Shaping the Water-Harvesting Behavior of Metal–Organic Frameworks Aided by Fine-Tuned GPT Models
We construct a data set of metal–organic framework (MOF) linkers and employ a fine-tuned GPT assistant to propose MOF linker designs by mutating and modifying the existing linker structures. This strategy allows the GPT model to learn the intricate language of chemistry in molecular representations, thereby achieving an enhanced accuracy in generating linker structures compared with its base models. Aiming to highlight the significance of linker design strategies in advancing the discovery of water-harvesting MOFs, we conducted a systematic MOF variant expansion upon state-of-the-art MOF-303 utilizing a multidimensional approach that integrates linker extension with multivariate tuning strategies. We synthesized a series of isoreticular aluminum MOFs, termed Long-Arm MOFs (LAMOF-1 to LAMOF-10), featuring linkers that bear various combinations of heteroatoms in their five-membered ring moiety, replacing pyrazole with either thiophene, furan, or thiazole rings or a combination of two. Beyond their consistent and robust architecture, as demonstrated by permanent porosity and thermal stability, the LAMOF series offers a generalizable synthesis strategy. Importantly, these 10 LAMOFs establish new benchmarks for water uptake (up to 0.64 g g–1) and operational humidity ranges (between 13 and 53%), thereby expanding the diversity of water-harvesting MOFs.
pubs.acs.org
October 8, 2025 at 7:15 PM
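As a rough illustration of the linker-design idea in the abstract above, here is a minimal sketch: fine-tune a causal language model on parent-to-variant linker pairs written as SMILES, then prompt it with a new parent linker and sample proposed modifications. The base model (GPT-2), the prompt format, and the toy SMILES pairs below are stand-ins, not the paper's fine-tuned GPT setup or dataset.

```python
# Minimal sketch of the linker-mutation idea: fine-tune a causal LM on
# "parent linker -> variant linker" pairs (as SMILES), then prompt with a new parent.
# Model choice, prompt format, and data are illustrative placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained("gpt2")

# Toy fine-tuning pairs (parent SMILES -> hypothetical variant SMILES).
pairs = [
    ("OC(=O)c1ccc(cc1)C(=O)O", "OC(=O)c1ccc(cc1)CC(=O)O"),  # arm extension
    ("OC(=O)c1cccs1", "OC(=O)c1ccco1"),                      # thiophene -> furan swap
]
texts = [f"Parent: {p}\nVariant: {v}{tokenizer.eos_token}" for p, v in pairs]

optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
model.train()
for epoch in range(3):  # a real run would use thousands of pairs and more epochs
    for text in texts:
        batch = tokenizer(text, return_tensors="pt")
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Prompt the fine-tuned model with a parent linker and sample a proposed variant.
model.eval()
prompt = "Parent: OC(=O)c1cccs1\nVariant:"
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(**inputs, do_sample=True, top_p=0.9, max_new_tokens=40,
                     pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

A real workflow would also validate the generated SMILES (e.g., with a cheminformatics toolkit) before passing promising linkers on to synthesis, as the abstract describes for the LAMOF series.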
Reposted by Anima Anandkumar
Our new paper on AI-generated TrpBs with @anima-anandkumar.bsky.social. GenSLM generated very useful promiscuous TrpB #enzymes, bypassing a lot of #directedevolution! Great work by the whole team, especially Theophile Lambert. www.biorxiv.org/content/10.1...
www.biorxiv.org
September 4, 2025 at 2:58 PM
Very pleased to see our AI model GenSLM designing novel and versatile enzymes in a challenging setting in @francesarnold.bsky.social's lab: the tryptophan synthase (TrpB) family. www.biorxiv.org/content/10.1...
AI can create novel enzymes that outperform both natural and laboratory-optimized TrpBs.
www.biorxiv.org
September 3, 2025 at 7:31 PM
End-to-end learning can use both approximate and accurate training data, if the model can learn how to mix them correctly. It turns out that Neural Operators offer a perfect solution when such multi-fidelity and multi-resolution data is available, and can learn with high data efficiency.
September 2, 2025 at 12:41 AM
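A minimal sketch of the kind of multi-fidelity training described above (the rest of the thread follows below), assuming the neuraloperator package's FNO class: many cheap, approximate solver outputs at coarse resolution are mixed with a few expensive, accurate ones at fine resolution in a single weighted loss. The data, resolutions, and weights are synthetic placeholders, not the paper's setup.

```python
# Sketch: end-to-end training of a neural operator on mixed-fidelity data.
# FNO is from the neuraloperator package; data and weights are synthetic placeholders.
import torch
from neuralop.models import FNO

model = FNO(n_modes=(16, 16), hidden_channels=32, in_channels=1, out_channels=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

# Stand-ins: many coarse/approximate pairs, few fine/accurate pairs. A neural operator
# accepts either resolution because it learns a map between functions, not fixed grids.
lofi_x, lofi_y = torch.randn(256, 1, 32, 32), torch.randn(256, 1, 32, 32)    # cheap solver
hifi_x, hifi_y = torch.randn(16, 1, 128, 128), torch.randn(16, 1, 128, 128)  # accurate solver

w_lofi, w_hifi = 0.3, 1.0  # trust the accurate data more (hypothetical weights)

for step in range(100):
    i = torch.randint(0, lofi_x.shape[0], (8,))
    j = torch.randint(0, hifi_x.shape[0], (4,))
    loss = (w_lofi * loss_fn(model(lofi_x[i]), lofi_y[i])
            + w_hifi * loss_fn(model(hifi_x[j]), hifi_y[j]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```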
Our latest paper surprisingly shows that this is not the case! End-to-end learning also requires less training data than methods that keep existing numerical solvers and augment them with AI. Where do the savings come from? The augmentation approach relies only on fully accurate, expensive training data.
September 2, 2025 at 12:40 AM
We have seen the end-to-end approach win in areas like weather forecasting. It is significantly faster: 1,000x to 1,000,000x speedups over numerical simulations in many areas such as fluid dynamics, plasma physics, etc. But a big argument against it is the need for expensive training data.
September 2, 2025 at 12:38 AM
A popular prescription is to augment existing workflows with AI rather than replace them, e.g., keep the approximate numerical solver for simulations and use AI only to correct its errors at every time step. The other extreme is to discard the existing workflow entirely and replace it fully with AI.
September 2, 2025 at 12:37 AM
How do we build AI for science? Augment with AI or replace with AI? Augmenting with AI means keeping existing numerical simulations. In our latest paper, we show that end-to-end learning is significantly faster and, counterintuitively, also wins in data efficiency. arxiv.org/pdf/2408.05177 #ai
September 2, 2025 at 12:37 AM
Thank you @cvprconference.bsky.social for hosting the presentation of my IEEE Kiyo Tomiyasu Award for bringing AI to scientific domains with Neural Operators and physics-informed learning. The future of science is AI+Science!
corporate-awards.ieee.org/award/ieee-k...
June 16, 2025 at 3:03 AM
Reposted by Anima Anandkumar
🚨We propose EquiReg, a generalized regularization framework that uses symmetry in generative diffusion models to improve solutions to inverse problems. arxiv.org/abs/2505.22973

@aditijc.bsky.social, Rayhan Zirvi, Abbas Mammadov, @jiacheny.bsky.social, Chuwei Wang, @anima-anandkumar.bsky.social 1/
June 12, 2025 at 3:47 PM
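The core idea, penalizing a reconstruction that breaks a known symmetry, can be illustrated with a generic toy example. This is not the EquiReg algorithm itself (which works with generative diffusion models); here a hypothetical network f is regularized to commute with 90-degree rotations while fitting toy downsampled measurements.

```python
# Toy symmetry-based regularization for an inverse problem: penalize how far the
# reconstruction network f is from commuting with a group action T (90-degree rotation).
# Illustrative stand-in only, not the EquiReg method.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 1, 3, padding=1))

def rotate(x: torch.Tensor) -> torch.Tensor:
    return torch.rot90(x, k=1, dims=(-2, -1))  # group action T: 90-degree rotation

def forward_op(x: torch.Tensor) -> torch.Tensor:
    return x[..., ::2, ::2]  # toy measurement operator A: 2x downsampling

x_true = torch.randn(1, 1, 64, 64)
y = forward_op(x_true)  # observed measurements

z = torch.nn.functional.interpolate(y, size=(64, 64))  # crude upsampled initialization
optimizer = torch.optim.Adam(f.parameters(), lr=1e-3)
lam = 0.1  # regularization weight (hypothetical)

for step in range(200):
    x_hat = f(z)                                              # candidate reconstruction
    data_fidelity = ((forward_op(x_hat) - y) ** 2).mean()
    # Equivariance residual: f(T(z)) should match T(f(z)) if f respects the symmetry.
    equiv_residual = ((f(rotate(z)) - rotate(x_hat)) ** 2).mean()
    loss = data_fidelity + lam * equiv_residual
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```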
Thank you @caltech.edu for including me in the history of AI. It starts with Carver Mead, John Hopfield and Richard Feynman teaching a course on the physics of computation. Not many are aware that the main AI conference, NeurIPS, started at @caltech.edu.

magazine.caltech.edu/post/ai-mach...
The Roots of Neural Network: How Caltech Research Paved the Way to Modern AI — Caltech Magazine
Tracing the roots of neural networks, the building blocks of modern AI, at Caltech. By Whitney Clavin
magazine.caltech.edu
June 10, 2025 at 5:34 PM
Reposted by Anima Anandkumar
Check out our new preprint 𝐓𝐞𝐧𝐬𝐨𝐫𝐆𝐑𝐚𝐃.
We use a robust decomposition of the gradient tensors into low-rank + sparse parts to reduce optimizer memory for Neural Operators by up to 𝟕𝟓%, while matching the performance of Adam, even on turbulent Navier–Stokes (Re 10^5).
June 3, 2025 at 3:17 AM
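To make the "low-rank + sparse" idea concrete, here is a toy decomposition of a single gradient tensor: keep the largest-magnitude entries as a sparse part and a truncated SVD of the remainder as a low-rank part, then store only those compact pieces. This is only an illustration of the general technique, not the TensorGRaD algorithm; the actual method and code are linked in the next post.

```python
# Toy low-rank + sparse split of a gradient, to show where the memory savings come from.
# Not the TensorGRaD algorithm (see the paper and code linked below).
import torch

def lowrank_plus_sparse(grad: torch.Tensor, rank: int = 8, sparsity: float = 0.01):
    """Return (U, S, Vh, sparse_values, sparse_indices) approximating a gradient tensor."""
    g = grad.reshape(grad.shape[0], -1)  # flatten trailing dims into a matrix
    # Sparse part: keep the top-k largest-magnitude entries.
    k = max(1, int(sparsity * g.numel()))
    flat = g.flatten()
    idx = flat.abs().topk(k).indices
    sparse_vals = flat[idx]
    residual = flat.clone()
    residual[idx] = 0.0
    residual = residual.reshape_as(g)
    # Low-rank part: truncated SVD of what remains.
    U, S, Vh = torch.linalg.svd(residual, full_matrices=False)
    return U[:, :rank], S[:rank], Vh[:rank], sparse_vals, idx

def reconstruct(U, S, Vh, sparse_vals, idx, shape):
    approx = (U * S) @ Vh
    flat = approx.flatten()
    flat[idx] += sparse_vals
    return flat.reshape(shape)

grad = torch.randn(256, 256)
U, S, Vh, vals, idx = lowrank_plus_sparse(grad)
approx = reconstruct(U, S, Vh, vals, idx, grad.shape)
stored = U.numel() + S.numel() + Vh.numel() + vals.numel() + idx.numel()
# (A random matrix is not low-rank, so the error here is large; the method's premise is
# that real gradient tensors have much more exploitable structure.)
err = (torch.norm(grad - approx) / torch.norm(grad)).item()
print(f"stored {stored} numbers vs {grad.numel()} in the full gradient; relative error {err:.3f}")
```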
Reposted by Anima Anandkumar
Thanks to my co-authors David Pitt, Robert Joseph George, Jiawei Zhao, Cheng Luo, Yuandong Tian, Jean Kossaifi, @anima-anandkumar.bsky.social, and @caltech.edu for hosting me this spring!
Paper: arxiv.org/abs/2501.02379
Code: github.com/neuraloperat...
TensorGRaD: Tensor Gradient Robust Decomposition for Memory-Efficient Neural Operator Training
Scientific problems require resolving multi-scale phenomena across different resolutions and learning solution operators in infinite-dimensional function spaces. Neural operators provide a powerful fr...
arxiv.org
June 3, 2025 at 3:17 AM
It was an honor to be on the Google I/O Dialogues stage and talk about AI+Science.

AI needs to understand the physical world to make new scientific discoveries.

LLMs come up with new ideas, but the bottleneck is testing them in the real world.

Physics-informed learning is needed.

youtu.be/NYtQuneZMXc?...
Science in the age of AI
YouTube video by Google for Developers
youtu.be
June 1, 2025 at 6:21 PM
In a recent interview I talk about what it takes for AI to make new scientific discoveries. tldr: it won’t be just LLMs. www.newindiaabroad.com/english/tech...
Indian American professor Anima Anandkumar on developing AI for new scientific discoveries
Learn how Indian American professor Anima Anandkumar is revolutionizing the world of artificial intelligence to drive new scientific discoveries. Explore her cutting-edge research and innovative appro...
www.newindiaabroad.com
May 25, 2025 at 11:26 PM
Thank you EO for coming to @caltech.edu and interviewing me on #ai. I talk about the need to keep being curious and to use AI as a tool, rather than being afraid of AI. I talk about AI for scientific modeling and discovery, and about training the first high-resolution AI-based weather model. youtu.be/FIxLJVthW6I
Caltech AI Professor: The One Skill AI Can't Replace | Anima Anandkumar
YouTube video by EO
youtu.be
May 4, 2025 at 8:41 PM
Reposted by Anima Anandkumar
We have released VARS-fUSI: Variable sampling for fast and efficient functional ultrasound imaging (fUSI) using neural operators.

The first deep learning fUSI method to allow for different sampling durations and rates during training and inference. biorxiv.org/content/10.1... 1/
April 28, 2025 at 5:55 PM
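To illustrate the "variable sampling" property that makes neural operators a natural fit here, the sketch below applies one model to signals sampled at different rates and durations without retraining, because the operator learns a mapping between functions rather than fixed-size arrays. It assumes the neuraloperator package's FNO class; the shapes and configuration are illustrative, not the VARS-fUSI architecture.

```python
# Sketch: the same neural operator applied to inputs with different sampling.
# Uses the neuraloperator package's FNO; configuration here is illustrative only.
import torch
from neuralop.models import FNO

# A 1D operator over the time axis (the single channel is a placeholder signal).
model = FNO(n_modes=(16,), hidden_channels=32, in_channels=1, out_channels=1)

fast = torch.randn(2, 1, 500)   # e.g. a recording sampled at a high rate
slow = torch.randn(2, 1, 125)   # the same kind of signal sampled 4x more coarsely

with torch.no_grad():
    out_fast = model(fast)   # shape (2, 1, 500)
    out_slow = model(slow)   # shape (2, 1, 125)
print(out_fast.shape, out_slow.shape)
```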