Somin W
@sominw.bsky.social
31 followers 71 following 10 posts
cs phd @ northeastern. opinions on new england & beyond..
sominw.bsky.social
📋 Why it matters: interpretable, controllable compression, with students that learn how the teacher thinks. We also see faster, cleaner training dynamics compared to baselines. Preprint + details: arxiv.org/abs/2509.25002 (4/4)
Circuit Distillation
Model distillation typically focuses on behavioral mimicry, where a student model is trained to replicate a teacher's output while treating its internal computations as a black box. In this work we pr...
arxiv.org
sominw.bsky.social
📘 How it works (high level): identify the teacher’s task circuit --> find functionally analogous student components via ablation --> align their internals during training. Outcome: the student learns the same computation, not just the outputs. (3/4)
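A toy, runnable sketch of those three steps, using generic "components" in place of attention heads. The ablation scoring, circuit matching, and MSE alignment loss below are illustrative assumptions for a minimal setup, not the paper's exact objective.

```python
# Minimal sketch: (1) find the teacher's circuit by ablation, (2) find
# analogous student components the same way, (3) align their activations
# during distillation. All details here are simplified stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class ToyModel(nn.Module):
    """A bank of parallel components whose outputs are summed
    (a stand-in for attention heads writing to a residual stream)."""
    def __init__(self, d, n_components):
        super().__init__()
        self.components = nn.ModuleList(nn.Linear(d, d) for _ in range(n_components))
        self.head = nn.Linear(d, 2)

    def forward(self, x, ablate=()):
        acts = [c(x) if i not in ablate else torch.zeros_like(x)
                for i, c in enumerate(self.components)]
        self._acts = acts                       # cache activations for alignment
        return self.head(sum(acts))

def task_loss(model, x, y, ablate=()):
    return F.cross_entropy(model(x, ablate=ablate), y)

# toy task data
x = torch.randn(64, 16)
y = (x.sum(dim=1) > 0).long()

teacher, student = ToyModel(16, 8), ToyModel(16, 4)

# Step 1: the teacher's "circuit" = components whose ablation hurts most.
with torch.no_grad():
    base = task_loss(teacher, x, y)
    t_scores = {i: (task_loss(teacher, x, y, ablate={i}) - base).item() for i in range(8)}
teacher_circuit = sorted(t_scores, key=t_scores.get, reverse=True)[:2]

# Step 2: find functionally analogous student components the same way.
with torch.no_grad():
    base = task_loss(student, x, y)
    s_scores = {i: (task_loss(student, x, y, ablate={i}) - base).item() for i in range(4)}
student_circuit = sorted(s_scores, key=s_scores.get, reverse=True)[:2]

# Step 3: train the student to match teacher outputs AND the activations
# of the matched circuit components.
opt = torch.optim.Adam(student.parameters(), lr=1e-2)
for step in range(200):
    with torch.no_grad():
        t_logits = teacher(x)
        t_acts = [teacher._acts[i] for i in teacher_circuit]
    s_logits = student(x)
    s_acts = [student._acts[i] for i in student_circuit]
    kd = F.kl_div(F.log_softmax(s_logits, -1), F.softmax(t_logits, -1),
                  reduction="batchmean")
    align = sum(F.mse_loss(s, t) for s, t in zip(s_acts, t_acts))
    loss = kd + align
    opt.zero_grad(); loss.backward(); opt.step()
```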
sominw.bsky.social
🍎 On Entity Tracking and Theory of Mind, a student that updates only ~11–15% of attention heads inherits the teacher’s capability and closes much of the gap: targeted transfer beats brute-force fine-tuning. (2/4)
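One way to update only a chosen slice of attention heads is to freeze everything and mask gradients outside the selected heads' projection columns. A rough sketch below, using GPT-2 from Hugging Face transformers for brevity (the thread's experiments use larger models); the randomly sampled ~12% of heads stands in for the ablation-matched selection.

```python
# Hypothetical sketch: fine-tune only ~12% of attention heads by masking
# gradients on the per-head column slices of each packed QKV projection.
import random
import torch
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("gpt2")
cfg = model.config
head_dim = cfg.n_embd // cfg.n_head

# Stand-in for heads chosen by circuit matching: a random ~12% of all heads.
random.seed(0)
all_heads = [(l, h) for l in range(cfg.n_layer) for h in range(cfg.n_head)]
selected = set(random.sample(all_heads, k=int(0.12 * len(all_heads))))

for p in model.parameters():
    p.requires_grad = False

for layer_idx, block in enumerate(model.transformer.h):
    heads = [h for (l, h) in selected if l == layer_idx]
    if not heads:
        continue
    proj = block.attn.c_attn.weight            # GPT-2 packs Q, K, V columns here
    proj.requires_grad = True
    keep = torch.zeros_like(proj)
    for h in heads:
        for offset in (0, cfg.n_embd, 2 * cfg.n_embd):   # Q, K, V column blocks
            keep[:, offset + h * head_dim : offset + (h + 1) * head_dim] = 1.0
    # Zero the gradient everywhere except the selected heads' columns.
    proj.register_hook(lambda g, m=keep: g * m)
```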
sominw.bsky.social
🔊 New work w/ @silvioamir.bsky.social & @byron.bsky.social! We show you can distill a model’s mechanism, not just its answers -- teaching a small LM to run the same circuit as its larger teacher model. We call it Circuit Distillation. (1/4)
sominw.bsky.social
4️⃣ Our analysis spans summarization, question answering, and instruction following, using models like Llama, Mistral, and Gemma as teachers. Across tasks, PoS templates consistently outperformed n-grams in distinguishing teachers 📊 [5/6]
sominw.bsky.social
3️⃣ But here’s the twist: Syntactic patterns (like Part-of-Speech templates) do retain strong teacher signals! Students unconsciously mimic structural patterns from their teacher, leaving behind an identifiable trace 🧩 [4/6]
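A rough sketch of what a PoS-template fingerprint might look like: tag each generation, keep only the tag sequence, and compare tag-n-gram distributions between a student's outputs and each candidate teacher's. The helper names and the cosine comparison are illustrative, not the paper's exact featurization; it assumes spaCy with the en_core_web_sm model installed.

```python
# Compare PoS tag n-gram ("template") distributions between a student's
# outputs and candidate teachers' outputs. Illustrative only.
import spacy
from collections import Counter

nlp = spacy.load("en_core_web_sm")   # requires: python -m spacy download en_core_web_sm

def pos_template_counts(texts, n=3):
    """Count PoS tag n-grams across a set of generations."""
    counts = Counter()
    for doc in nlp.pipe(texts):
        tags = [tok.pos_ for tok in doc]
        counts.update(tuple(tags[i:i + n]) for i in range(len(tags) - n + 1))
    return counts

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
    norm = lambda c: sum(v * v for v in c.values()) ** 0.5
    return dot / (norm(a) * norm(b) + 1e-9)

# toy stand-ins for student and candidate-teacher generations
student_out = ["The model summarizes the article in two short sentences."]
teacher_a   = ["The report covers the main findings in three pages."]
teacher_b   = ["Honestly? Wild results. Huge if true, frankly."]

s = pos_template_counts(student_out)
print("similarity to teacher A:", cosine(s, pos_template_counts(teacher_a)))
print("similarity to teacher B:", cosine(s, pos_template_counts(teacher_b)))
```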
sominw.bsky.social
2️⃣ Simple similarity metrics like BERTScore fail to attribute a student to its teacher. Even perplexity under the teacher model isn’t enough to reliably identify the original teacher. Shallow lexical overlap is just not a strong fingerprint 🔍 [3/6]
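The perplexity baseline mentioned above can be sketched as: score the student's outputs under each candidate teacher and pick the lowest-perplexity one. Model names are placeholders; the post's point is that this often fails to recover the true teacher.

```python
# Sketch of the "perplexity under the teacher" attribution baseline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def perplexity(model_name, text):
    tok = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss      # mean token negative log-likelihood
    return torch.exp(loss).item()

student_output = "Photosynthesis converts light energy into chemical energy."
for candidate in ["gpt2", "distilgpt2"]:        # stand-ins for real candidate teachers
    print(candidate, perplexity(candidate, student_output))
```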
sominw.bsky.social
1️⃣ Model distillation transfers knowledge from a large teacher model to a smaller student model. But does the fine-tuned student reveal clues in its outputs about its origins? [2/6]
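For reference, the classic soft-label distillation objective looks roughly like the sketch below; in the setting this thread studies, distillation can also simply mean fine-tuning the student on text the teacher generated.

```python
# Minimal sketch of the standard soft-label knowledge-distillation loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student distributions."""
    t = temperature
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

# toy check with random logits
print(distillation_loss(torch.randn(4, 10), torch.randn(4, 10)).item())
```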