Sasha Boguraev
@sashaboguraev.bsky.social
140 followers 250 following 24 posts
Compling PhD student @UT_Linguistics | prev. CS, Math, Comp. Cognitive Sci @cornell
sashaboguraev.bsky.social
(Very much inspired by discussions at the COLM Interplay workshop yesterday)
sashaboguraev.bsky.social
I guess in the case of teaching chess to grandmasters, the superhuman performance was humanly intelligible (once broken down). On the other hand, do we have any idea what move 37 was doing? (Half rhetorical question and half I’m genuinely curious if there’s been convincing interp work here.)
sashaboguraev.bsky.social
From an interp perspective, I think the question is: will we still be able to find human-recognizable features which faithfully describe the model’s activity? Or will it just be completely unrecognizable?

(In reality it’s probably something in between)
sashaboguraev.bsky.social
Curious what people think: if (when?) ‘superhuman AI’ arrives, will the building blocks of its performance be human-recognizable concepts which have been applied and combined in novel ways to achieve ‘superhuman’ performance? Or will it be completely uninterpretable?
Reposted by Sasha Boguraev
kmahowald.bsky.social
UT Austin Linguistics is hiring in computational linguistics!

Asst or Assoc.

We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🤘
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language
sites.utexas.edu
sashaboguraev.bsky.social
I will be giving a short talk on this work at the COLM Interplay workshop on Friday (also to appear at EMNLP)!

Will be in Montreal all week and excited to chat about LM interpretability + its interaction with human cognition and ling theory.
sashaboguraev.bsky.social
A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models.

New work with @kmahowald.bsky.social and @cgpotts.bsky.social!

🧵👇!
Reposted by Sasha Boguraev
kanishka.bsky.social
The compling group at UT Austin (sites.utexas.edu/compling/) is looking for PhD students!

Come join me, @kmahowald.bsky.social, and @jessyjli.bsky.social as we tackle interesting research questions at the intersection of ling, cogsci, and ai!

Some topics I am particularly interested in:
Picture of the UT Tower with "UT Austin Computational Linguistics" written in bigger font, and "Humans processing computers processing humans processing language" in smaller font
sashaboguraev.bsky.social
No worries! Was just in NYC and figured it worth an ask. Thanks for the pointer.

Separately, would be great to catch up next time I’m around!
sashaboguraev.bsky.social
Open to non-NYU affiliates?
sashaboguraev.bsky.social
Wholeheartedly pledging my allegiance to any and all other airlines
sashaboguraev.bsky.social
Breaking my years-long vow to never fly American Airlines just to be met with a 6 hr delay and 5am arrival back home 🫠
sashaboguraev.bsky.social
But surely there is important novelty in answering both of those questions? Building a novel system/entity and generating a novel proof — inherent to that must be some new ideas, by virtue of the questions not having been answered before.

I’m not sure I buy the idea that novelty has to be technical.
sashaboguraev.bsky.social
We believe this work shows how mechanistic analyses can provide novel insights into syntactic structures — making good on the promise that studying LLMs can help us better understand linguistics by developing linguistically interesting hypotheses!

📄: arxiv.org/abs/2505.16002
Causal Interventions Reveal Shared Structure Across English Filler-Gap Constructions
Large Language Models (LLMs) have emerged as powerful sources of evidence for linguists seeking to develop theories of syntax. In this paper, we argue that causal interpretability methods, applied to ...
arxiv.org
sashaboguraev.bsky.social
In our last experiment, we probe whether the mechanisms used to process single-clause variants of these constructions generalize to the matrix and embedded clauses of our multi-clause variants. However, we find little evidence of this transfer across our constructions.
sashaboguraev.bsky.social
This raises the question: what drives constructions to take on these roles? We find that a combination of frequency and linguistic similarity is responsible. Namely, less frequent constructions utilize the mechanisms LMs have developed to deal with more frequent, linguistically similar constructions!
sashaboguraev.bsky.social
We then dive deeper, training interventions on individual constructions and evaluating them across all others, allowing us to build generalization networks. Network analysis reveals clear roles — some constructions act as sources, others as sinks.
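(For the curious, a rough sketch of what this kind of network analysis can look like, assuming a dict `transfer` of pairwise train-to-eval scores and a cutoff `THRESHOLD`; this is illustrative only, not our actual pipeline or numbers.)

```python
import networkx as nx

# Build a directed generalization network: an edge i -> j means an intervention
# trained on construction i transfers to construction j above some cutoff.
# `transfer` maps (trained_on, evaluated_on) -> score; it and THRESHOLD are
# assumed inputs here, not values from the paper.
G = nx.DiGraph()
for (train_c, eval_c), score in transfer.items():
    if train_c != eval_c and score > THRESHOLD:
        G.add_edge(train_c, eval_c, weight=score)

# Constructions whose mechanisms are widely reused look like sources (high out-degree);
# constructions that mostly borrow mechanisms look like sinks (high in-degree).
sources = sorted(G.nodes, key=G.out_degree, reverse=True)
sinks = sorted(G.nodes, key=G.in_degree, reverse=True)
```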
sashaboguraev.bsky.social
We first train interventions on n-1 constructions and test on all, including the held-out one.

Across all positions, we find above-chance transfer of mechanisms, with significant positive transfer when the evaluated construction is in the train set and when the train and eval animacy match.
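(Schematically, the held-out setup is something like the sketch below; `train_intervention` and `evaluate_transfer` are hypothetical helper names standing in for the actual training and evaluation code, and the construction labels are shorthands.)

```python
# Leave-one-out design: train a DAS intervention on n-1 constructions, then
# evaluate it on every construction, including the held-out one.
CONSTRUCTIONS = ["embedded_wh_A", "embedded_wh_B", "matrix_wh", "relative_clause",
                 "cleft", "pseudocleft", "topicalization"]

transfer_scores = {}
for held_out in CONSTRUCTIONS:
    train_set = [c for c in CONSTRUCTIONS if c != held_out]
    intervention = train_intervention(train_set)         # hypothetical helper
    transfer_scores[held_out] = {
        c: evaluate_transfer(intervention, c)             # hypothetical helper
        for c in CONSTRUCTIONS
    }
```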
sashaboguraev.bsky.social
We use DAS to train interventions, localizing the processing mechanisms specific to given sets of filler-gaps. We then take these interventions and evaluate them on other filler-gaps. Any observed causal effect thus suggests shared mechanisms across the constructions.
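(For readers unfamiliar with DAS: at its core it learns an orthogonal rotation of a hidden representation and swaps a low-dimensional subspace of the rotated activation from a source input into a base input, training only the rotation so that the frozen LM's output changes in the counterfactual way. Below is a minimal PyTorch sketch of that rotate-and-swap step, simplified and not the code from the paper.)

```python
import torch
import torch.nn as nn

class RotatedSubspaceSwap(nn.Module):
    """Simplified DAS-style intervention: learn an orthogonal rotation and swap
    the first k rotated dimensions of a base activation with a source activation."""
    def __init__(self, d: int, k: int):
        super().__init__()
        self.k = k
        # Orthogonal parametrization keeps the learned map a rotation.
        self.rotate = nn.utils.parametrizations.orthogonal(nn.Linear(d, d, bias=False))

    def forward(self, base_h: torch.Tensor, source_h: torch.Tensor) -> torch.Tensor:
        R = self.rotate.weight                 # (d, d) orthogonal matrix
        base_r = base_h @ R.T                  # rotate into the learned basis
        src_r = source_h @ R.T
        # Swap the candidate "shared mechanism" subspace from source into base.
        mixed = torch.cat([src_r[..., :self.k], base_r[..., self.k:]], dim=-1)
        return mixed @ R                       # rotate back to the model's basis
```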
sashaboguraev.bsky.social
Our investigation focuses on 7 filler–gap constructions: 2 classes of embedded wh-questions, matrix-level wh-questions, restrictive relative clauses, clefts, pseudoclefts, & topicalization. For each construction, we make 4 templates split by animacy of the extraction and number of embedded clauses.
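(Concretely, that's a 7 x 2 x 2 grid of conditions; the shorthand labels below are illustrative, not our exact template names.)

```python
from itertools import product

# Cross the 7 constructions with animacy of the extracted element and
# single- vs. multi-clause variants: 4 templates per construction, 28 total.
constructions = ["embedded_wh_A", "embedded_wh_B", "matrix_wh", "relative_clause",
                 "cleft", "pseudocleft", "topicalization"]
animacy = ["animate", "inanimate"]
clauses = ["single_clause", "multi_clause"]

templates = list(product(constructions, animacy, clauses))
assert len(templates) == 7 * 2 * 2
```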
sashaboguraev.bsky.social
A key hypothesis in the history of linguistics is that different constructions share underlying structure. We take advantage of recent advances in mechanistic interpretability to test this hypothesis in Language Models.

New work with @kmahowald.bsky.social and @cgpotts.bsky.social!

🧵👇!
Reposted by Sasha Boguraev
qyao.bsky.social
LMs learn argument-based preferences for dative constructions (preferring recipient first when it’s shorter), consistent with humans. Is this from memorizing preferences in training? New paper w/ @kanishka.bsky.social , @weissweiler.bsky.social , @kmahowald.bsky.social

arxiv.org/abs/2503.20850
examples from direct and prepositional object datives with short-first and long-first word orders: 
DO (long first): She gave the boy who signed up for class and was excited it.
PO (short first): She gave it to the boy who signed up for class and was excited.
DO (short first): She gave him the book that everyone was excited to read.
PO (long first): She gave the book that everyone was excited to read to him.
Reposted by Sasha Boguraev
siyuansong.bsky.social
New preprint w/ @jennhu.bsky.social @kmahowald.bsky.social : Can LLMs introspect about their knowledge of language?
Across models and domains, we did not find evidence that LLMs have privileged access to their own predictions. 🧵(1/8)
sashaboguraev.bsky.social
Do you have any thoughts on whether these a) emerged naturally during the RL phase of training (rather than being specifically engineered to encourage more generation, or being an artifact of some other post-training phase) and, if so, b) actually represent backtracking in the search?
sashaboguraev.bsky.social
I'm curious what you think of the explicit backtracking in the reasoning model's chains of thought? I agree that much of the CoT feels odd and unfaithful, but there's also something that feels very easily anthropomorphizable in the various “oh wait”s and “now I see”s.
sashaboguraev.bsky.social
Been spending some time over break making my way through the Bayesian Models of Cognition book — great read.