Denis Sutter
@denissutter.bsky.social
MSc at @eth, interested in ML interpretability
9/9 Paper title: The Non-Linear Representation Dilemma: Is Causal Abstraction Enough for Mechanistic Interpretability?
Authored by Denis Sutter, @jkminder, Thomas Hoffmann, and @tpimentelms.
8/9 In summary, causal abstraction remains a valuable framework, but without explicit assumptions about how mechanisms are represented, it risks producing interpretability results that are not robust or meaningful.
7/9 To demonstrate generality, we replicate these findings on simpler architectures (MLPs), across multiple random seeds and on two additional tasks. The issue is therefore not confined to LLMs but applies more broadly.
6/9 We further show that small LLMs that fail at the Indirect Object Identification task can nevertheless be interpreted as implementing an algorithm for it.
5/9 Beyond the theoretical argument, we present a broad set of experiments supporting our claim. Most notably, we show that a randomly initialised LLM can be interpreted as implementing an algorithm for Indirect Object Identification.
4/9 This occurs because the existing theoretical framework makes no structural assumptions about how mechanisms are encoded in distributed representations. It parallels the accuracy-complexity trade-off in probing.
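To see the probing trade-off concretely, here is a minimal sketch (ours, not the paper's experiments; the dataset sizes and probe architectures are made up for illustration): given enough capacity, a probe can "decode" labels even from purely random representations, so high probe accuracy alone is not evidence of structure.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n, d = 500, 32
reps = rng.normal(size=(n, d))       # random "activations": no structure by construction
labels = rng.integers(0, 2, size=n)  # labels drawn independently of the representations

probes = {
    "linear probe": LogisticRegression(max_iter=1000),
    "deep probe": MLPClassifier(hidden_layer_sizes=(256, 256), max_iter=3000),
}
for name, probe in probes.items():
    probe.fit(reps, labels)
    # the deep probe can memorise the noise; the linear probe cannot
    print(name, probe.score(reps, labels))
```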
3/9 We do not critique causal abstraction as a framework in itself. Rather, we show that combining it with the current understanding that modern models store information in a distributed way introduces a fundamental problem.
2/9 We demonstrate, both theoretically (under reasonable assumptions) and empirically on real-world models, that with arbitrarily complex representations any algorithm can be mapped to any model.
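A toy construction hints at why unconstrained maps trivialise alignment (this is our illustration, not the paper's proof; the helper name and data are hypothetical): if the map from activations to algorithm variables may be arbitrarily complex, one can simply memorise the desired pairing, and a perfect "alignment" exists whenever distinct inputs yield distinct activations.

```python
import numpy as np

def degenerate_alignment(model_acts, algo_values):
    """Build an 'alignment map' by pure memorisation (toy illustration).

    model_acts: model activations, one row per input.
    algo_values: the algorithm variable's value on each input.
    Works whenever distinct inputs produce distinct activations.
    """
    table = {acts.tobytes(): val for acts, val in zip(model_acts, algo_values)}
    return lambda acts: table[acts.tobytes()]

# Any model's activations can be "aligned" with any algorithm this way:
acts = np.random.default_rng(1).normal(size=(10, 8))   # arbitrary activations
values = np.arange(10)                                 # arbitrary algorithm states
phi = degenerate_alignment(acts, values)
assert all(phi(a) == v for a, v in zip(acts, values))  # vacuously perfect "alignment"
```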
1/9 In our new interpretability paper, we analyse causal abstraction—the framework behind Distributed Alignment Search—and show it breaks when we remove linearity constraints on feature representations. We refer to this problem as the Non-Linear Representation Dilemma.
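For context, here is a minimal sketch of the kind of interchange intervention DAS performs, under the linear constraint (our simplification; the function name and arguments are ours): DAS, roughly, learns an orthogonal rotation R and swaps the first k aligned coordinates between a base run and a source run.

```python
import torch

def interchange_intervention(h_base, h_src, R, k):
    """DAS-style interchange under a linear alignment map (sketch).

    h_base, h_src: hidden states from the base and source forward passes.
    R: orthogonal matrix defining the aligned basis (so R^{-1} = R^T).
    k: number of aligned coordinates treated as the target feature.
    """
    z_base = h_base @ R.T             # rotate into the aligned basis
    z_src = h_src @ R.T
    z_base[..., :k] = z_src[..., :k]  # swap the aligned feature coordinates
    return z_base @ R                 # rotate back into model space
```

Replacing the rotation R with an arbitrarily expressive invertible network removes the linearity constraint, and that is exactly the regime in which the Non-Linear Representation Dilemma appears.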