Millicent Li
@millicentli.bsky.social
14 followers 13 following 8 posts
CS PhD Student @ Northeastern, former ugrad @ UW, UWNLP -- https://millicentli.github.io/
Reposted by Millicent Li
amuuueller.bsky.social
What's the right unit of analysis for understanding LLM internals? We explore this in our mech interp survey (a major update of our 2024 manuscript).

We’ve added more recent work and more immediately actionable directions for future work. Now published in Computational Linguistics!
millicentli.bsky.social
What about the information a model ADDS to the embedding? Unfortunately, our experiments with synthetic fact datasets revealed that the verbalizer LM can only provide facts it already knows—it can’t describe facts only the target knows.

7/8
millicentli.bsky.social
On our evaluation datasets, many LMs are in fact capable of largely reconstructing the target’s inputs from those internal representations! If we aim to know what information has been REMOVED by processing text into an embedding, inversion is more direct than verbalization.

6/8
millicentli.bsky.social
Fine, but the verbalizer only has access to the target model’s internal representations, not to its inputs—or does it? Prior work in vision and language has shown model embeddings can be inverted to reconstruct inputs. Let’s see if these representations are invertible!

5/8
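A minimal sketch of the inversion idea in the post above, under assumed settings: train a small decoder to recover which tokens appeared in the input, given only the target model's embedding. The dimensions, architecture, and the bag-of-tokens objective below are illustrative stand-ins, not the paper's setup; full inversion methods reconstruct the whole input sequence rather than a token indicator.

import torch
import torch.nn as nn

EMB_DIM, VOCAB = 768, 32000   # illustrative sizes, not from the paper

class BagOfTokensInverter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(EMB_DIM, 1024), nn.GELU(), nn.Linear(1024, VOCAB)
        )

    def forward(self, emb):      # emb: (batch, EMB_DIM) frozen target embeddings
        return self.net(emb)     # logits over the vocabulary

def train_step(model, opt, emb, token_multi_hot):
    # token_multi_hot: (batch, VOCAB) 0/1 indicator of which tokens appeared
    loss = nn.functional.binary_cross_entropy_with_logits(
        model(emb), token_multi_hot
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random tensors, just to show the shapes involved.
model = BagOfTokensInverter()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
emb = torch.randn(8, EMB_DIM)
targets = (torch.rand(8, VOCAB) < 0.01).float()
print(train_step(model, opt, emb, targets))

Replacing the multi-hot objective with an autoregressive decoder conditioned on the embedding would give full text reconstruction, which is closer to the inversion experiments the thread describes.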
millicentli.bsky.social
On the contrary, we find that all the verbalizer needs is the target model's inputs! If the verbalizer can simply reconstruct the original inputs from the activations, its LM can beat its own “interpretive” verbalization on most tasks just by seeing that input.

4/8
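To make the comparison above concrete, here is a hedged sketch with hypothetical answer_from_activations and answer_from_text stand-ins: the same factual questions are answered once via a verbalizer that sees the target's activations and once by an LM that sees only the raw input text, and the two accuracies are compared. This illustrates the experimental contrast only, not the paper's actual code.

def answer_from_activations(activations, question):
    """Hypothetical stand-in: a verbalizer LM conditioned on the target
    model's internal representations."""
    raise NotImplementedError

def answer_from_text(input_text, question):
    """Hypothetical stand-in: the same LM prompted with only the target
    model's raw input text (no activations)."""
    raise NotImplementedError

def compare_conditions(examples):
    """examples: iterable of (input_text, activations, question, gold_answer).
    Returns QA accuracy per condition so the two can be compared."""
    hits = {"verbalizer": 0, "input_only": 0}
    n = 0
    for text, acts, question, gold in examples:
        hits["verbalizer"] += int(answer_from_activations(acts, question) == gold)
        hits["input_only"] += int(answer_from_text(text, question) == gold)
        n += 1
    return {name: count / max(n, 1) for name, count in hits.items()}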
millicentli.bsky.social
First, a step back: How do we evaluate natural language interpretations of a target model’s representations? Often, by the accuracy of a verbalizer’s answers to simple factual questions. But does a verbalizer even need privileged information from the target model to succeed?

3/8
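A minimal sketch of that accuracy-based evaluation, assuming a hypothetical verbalizer_answer function: the verbalizer is scored by exact-match accuracy of its answers to simple factual questions about a target representation. Illustrative only, not the paper's implementation.

def verbalizer_answer(representation, question):
    """Hypothetical stand-in for any verbalization method: returns a
    natural-language answer about the target model's representation."""
    raise NotImplementedError

def evaluate_verbalizer(examples):
    """examples: iterable of (representation, question, gold_answer).
    Scores the verbalizer by exact-match accuracy on simple factual questions."""
    correct, total = 0, 0
    for representation, question, gold in examples:
        prediction = verbalizer_answer(representation, question)
        correct += int(prediction.strip().lower() == gold.strip().lower())
        total += 1
    return correct / max(total, 1)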
millicentli.bsky.social
Wouldn’t it be great to have questions about LM internals answered in plain English? That’s the promise of verbalization interpretability. Unfortunately, our new paper shows that evaluating these methods is nuanced—and verbalizers might not tell us what we hope they do. 🧵👇1/8