Chris Olah
@colah.bsky.social
6.4K followers 9 following 41 posts
Reverse engineering neural networks at Anthropic. Previously Distill, OpenAI, Google Brain. Personal account.
Reposted by Chris Olah
nicholasgrossman.bsky.social
Political violence is bad. It usually begets more political violence.

Celebrating political violence is bad. It usually encourages more political violence, against various targets.

Campus shootings are bad. They make everyone on campus less safe.

It's bad that what I wrote here is controversial.
colah.bsky.social
The interpretability team will be mentoring more fellows this cycle, so if you're interested in interpretability, it might be worth applying!

Some of our fellows last cycle did this: arxiv.org/pdf/2507.21509
colah.bsky.social
But more importantly, I hope it will just help clarify what we mean by interference weights!
colah.bsky.social
Our new note demonstrates that interference weights in toy models can exhibit phenomenology strikingly similar to that of Towards Monosemanticity...
colah.bsky.social
The keen reader may recall all these plots referencing "interference weights??" in Towards Monosemanticity (transformer-circuits.pub/2023/monosem...).
colah.bsky.social
I've been talking about interference weights as a challenge for mechanistic interpretability for a while.

A short note discussing them - transformer-circuits.pub/2025/interfe...
A Toy Model of Interference Weights
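To make "interference" concrete, here's a minimal sketch (my own illustration under simple assumptions, not code from the note): when a toy model embeds more features than it has dimensions, the feature directions can't all be orthogonal, and the off-diagonal entries of W^T W measure how much features interfere with one another.

```python
# Minimal sketch of interference in a toy superposition setup.
# All names and sizes here are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)
n_features, n_dims = 6, 3  # more features than dimensions forces non-orthogonality

# One random unit-norm embedding direction per feature.
W = rng.normal(size=(n_dims, n_features))
W /= np.linalg.norm(W, axis=0, keepdims=True)

# In the Gram matrix W.T @ W, diagonal entries are each feature's
# self-recovery (1.0 here); off-diagonal entries are pairwise interference.
gram = W.T @ W
interference = gram - np.diag(np.diag(gram))
print(np.round(gram, 2))
print("max |interference|:", round(float(np.abs(interference).max()), 3))
```

Nonzero off-diagonal terms are unavoidable with 6 features in 3 dimensions; roughly speaking, that kind of cross-talk, propagated through a model's weights, is what the note's "interference weights" are about.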
colah.bsky.social
I should also mention that I wrote a blog post listing a bunch of specific analogies between deep learning and biology several years back. (It's probably of much narrower interest!)

colah.github.io/notes/bio-an...
Analogies between Biology and Deep Learning [rough note]
A list of advantages that make understanding artificial neural networks much easier than biological ones.
colah.bsky.social
Of course, I'd be remiss not to mention that many others have made analogies between work in machine learning and biology; most notable for us is the "BERTology" work, which framed itself as studying the biology of the BERT models.
colah.bsky.social
But we also think it's important for such "biology" results (which are more foreign in style to machine learning) to be treated as worthy of publication independent of methods work (which looks more similar to normal machine learning).
colah.bsky.social
This was partly a convenient way to handle the length (jointly, the two papers are ~150 pages!).
colah.bsky.social
But why did the language come up in our paper title? There was actually a further reason: we wanted to separate our "methods" work from what we called our "biology" work (i.e. the empirical research we did using our method).
colah.bsky.social
Finally, you need to believe that a worthy mode of investigation is empirical (rather than theoretical), and to embrace a style of empirical research that's more open to the qualitative than the purely quantitative.

This evokes biology more than physics.
colah.bsky.social
One further needs to believe that individual neural networks, and in fact sub-components of those networks, warrant investigation. That's more idiosyncratic!
colah.bsky.social
At a basic level, one needs to believe deep learning warrants scientific investigation. This doesn't seem very controversial these days, but note that it's already kind of radical. See e.g. Herbert Simon's The Sciences of the Artificial.
colah.bsky.social
I've written multiple papers characterizing (small sets of) individual neurons. Historically, this hasn't seemed like a worthy topic of a paper in ML – I've had to justify it!
colah.bsky.social
One way in which this is important is that the *types of questions* we're interested in are quite bizarre from a traditional machine learning perspective, but natural under the biological frame.
colah.bsky.social
I think there's a deep way in which the scientific aesthetic of biology is very relevant to deep learning and especially interpretability.

Biology is to evolution as interpretability is to gradient descent.

bsky.app/profile/cola...
colah.bsky.social
The elegance of ML is the elegance of biology, not the elegance of math or physics.

Simple gradient descent creates mind-boggling structure and behavior, just as evolution creates the awe-inspiring complexity of nature.

x.com/banburismus_...
colah.bsky.social
Stepping back, "physics of neural networks" is a whole area of research. Of course, it isn't physics in a classical sense. It's bringing the methods and style of physics to deep learning.

We refer to the "biology" of neural networks in a similar spirit!
colah.bsky.social
My colleagues and I have actually been using "biology" quite heavily as a metaphor and handle for several years now, beyond the title of this paper. There are a lot of reasons I think it's useful!
colah.bsky.social
A number of people have asked me why we titled our recent paper "On the Biology of a Large Language Model".

Why call it "biology"?
colah.bsky.social
Every model is its own entire world of beautiful structure waiting to be discovered, if only we care to look.
colah.bsky.social
I wish people would spend more time looking at the models we create though. It's like we're launching expeditions with complex equipment to reach more and more remote islands and tall mountains... and the biology stops at measuring the size and weight of the animals we find.