Gabriele Sarti
Gabriele Sarti
@gsarti.com
Postdoc @ Northeastern, @ndif-team.bsky.social with @davidbau.bsky.social. Interpretability ∩ HCI ∩ #NLProc. Creator of @inseq.org. Prev: PhD @gronlp.bsky.social, ML @awscloud.bsky.social & Aindo

gsarti.com
Pinned
I've decided to start a book thread for 2025 to share cool books and stay focused on my reading goals. Here we go! 📚
Had a bit of fun with @anthropic.com Claude artifacts tonight and ended up with two demos that feel pretty useful for language learners: an assistive reader that lets you export/practice new words, and a translation helper that explains your mistakes.

Find them here: gsarti.com/langlearn
February 1, 2026 at 2:25 AM
Reposted by Gabriele Sarti
Does it matter how you prompt an LLM with a persona? Do LLMs respond differently to natural conversation history compared to names and explicit mentions? Go check out our new preprint! 👀
Even with identical sociodemographic info, the way it is given to an LLM changes downstream bias results. Our new preprint (w/ @veraneplenbroek.bsky.social, Jan Batzner & Sebastian Padó) tests cues with varying external validity across 10 personas, 4 tasks & 7 LLMs: arxiv.org/abs/2601.18572
January 28, 2026 at 4:24 PM
Reposted by Gabriele Sarti
The Art of Wanting.

About the question I see as central in AI ethics, interpretability, and safety. Can an AI take responsibility? I do not think so, but *not* because it's not smart enough.

davidbau.com/archives/20...
January 27, 2026 at 3:32 PM
Reposted by Gabriele Sarti
Can models understand each other's reasoning? 🤔

When Model A explains its Chain-of-Thought (CoT), do Models B, C, and D interpret it the same way?

Our new preprint with @davidbau.bsky.social and @csinva.bsky.social explores CoT generalizability 🧵👇

(1/7)
January 22, 2026 at 9:59 PM
Reposted by Gabriele Sarti
Can you solve this algebra puzzle? 🧩

cb=c, ac=b, ab=?

A small transformer can learn to solve problems like this!

And since the letters don't have inherent meaning, this lets us study how context alone imparts meaning. Here's what we found: 🧵⬇️
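For the curious, here is one possible worked reading of the puzzle, assuming the letters denote elements of a group so the usual identity and inverse axioms apply (the thread itself may set up the operation differently):

```latex
% Hedged worked solution, assuming group semantics for the letters.
% cb = c  =>  cancel c on the left  =>  b = e (the identity)
% ac = b = e  =>  c = a^{-1}
% ab = a e = a
\begin{align*}
cb &= c \;\Rightarrow\; b = e \\
ac &= e \;\Rightarrow\; c = a^{-1} \\
ab &= a\,e = a
\end{align*}
```

Under that reading the answer is simply a; the interesting part, per the thread, is that the transformer has to infer this structure from context alone.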
January 22, 2026 at 4:09 PM
It was an honor to be part of this awesome project! Interpreto is a great up-and-coming tool for concept-based interpretability analyses of NLP models, check it out!
🔥I am super excited for the official release of an open-source library we've been working on for about a year!

🪄interpreto is an interpretability toolbox for HF language models🤗. In both generation and classification!

Why do you need it, and for what?

1/8 (links at the end)
January 21, 2026 at 4:20 AM
Reposted by Gabriele Sarti
New year, new YouTube videos! We are resuming our regular interpretability seminar posts, with a fantastic talk by Deepti Ghadiyaram on interpreting diffusion models.

Watch the video: youtu.be/4eqvABPX5rA
Interpreting and Leveraging Diffusion Representations with Deepti Ghadiyaram
Deepti Ghadiyaram is an Assistant Professor at Boston University in the Department of Computer Science, with affiliated appointments in Electrical and Comput...
www.youtube.com
January 15, 2026 at 9:20 PM
Reposted by Gabriele Sarti
All interpretability research is either philosophy (affectionate) or stamp collecting (derogatory)
January 11, 2026 at 8:47 PM
The NDIF ecosystem is growing! 🚀 nnterp will bridge the gap between fine-grained fiddling with model internals (nnsight) and low-code access to bespoke viz (workbench). Excited to work with @butanium.bsky.social and the @ndif-team.bsky.social to make it a standard in interp research!
nnterp by @butanium.bsky.social is now part of the NDIF ecosystem! nnterp standardizes transformer naming conventions, includes built-in best practices for common interventions, and is perfectly compatible with original HF model implementations.

Learn more: ndif-team.github.io/nnterp/
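For context on the gap being bridged, here is a minimal sketch of the kind of raw, model-specific access one writes with nnsight today, which a wrapper like nnterp then standardizes across architectures. It assumes nnsight's LanguageModel/trace API and GPT-2's module naming; see the nnterp docs linked above for the actual standardized interface.

```python
# Minimal nnsight-style sketch (assumptions: nnsight's LanguageModel/trace API,
# GPT-2's HF module names). nnterp's role is to replace model-specific paths
# like `transformer.h[5]` with standardized names that work across architectures.
from nnsight import LanguageModel

model = LanguageModel("gpt2", device_map="auto")

with model.trace("The Eiffel Tower is located in"):
    # Residual-stream output of block 5 (GPT-2-specific path; blocks return tuples).
    hidden = model.transformer.h[5].output[0].save()
    # Final logits from the language-modeling head.
    logits = model.lm_head.output.save()

print(hidden.shape)  # (batch, seq_len, hidden_size); older nnsight versions need .value
print(logits.shape)  # (batch, seq_len, vocab_size)
```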
January 9, 2026 at 11:14 PM
Happy to announce I will be mentoring a SPAR project this Spring! ✨ Check out the programme and apply by Jan 14th to work with me on understanding and mitigating implicit personalization in LLMs, i.e. how models form hidden beliefs about users that shape their responses.
January 9, 2026 at 2:09 PM
This reads like a modern-day satirical adaptation of "The Lifecycle of Software Objects" by Ted Chiang!
My vibe-coded Mandelbrot viewer is 40x faster now! New GPU synchronization tricks go outside the design intent of WebGPU specs. But the real story: Claude tells me what happens in the AGI break room.

What superhuman AGIs say when the boss is not around:
davidbau.com/archives/202...
January 6, 2026 at 1:14 AM
📣 I'm starting a postdoc at Northeastern University, where I will work on open-source NN interpretability with @davidbau.bsky.social and the @ndif-team.bsky.social.

In 2026, we'll grow the NDIF ecosystem and democratize access to interpretability methods for academics and domain experts! 🚀
January 4, 2026 at 6:42 PM
Our work on contrastive SAE steering for personalizing literary machine translation was accepted to EACL main! 🎉 Check it out! ⬇️
📢 New paper: Applied interpretability 🤝 MT personalization!

We steer LLM generations to mimic human translator styles on literary novels in 7 languages. 📚

SAE steering can beat few-shot prompting, leading to better personalization while maintaining quality.

🧵1/
January 4, 2026 at 3:18 PM
Reposted by Gabriele Sarti
Here's my enormous round-up of everything we learned about LLMs in 2025 - the third in my annual series of reviews of the past twelve months
simonwillison.net/2025/Dec/31/...
This year it's divided into 26 sections! This is the table of contents:
December 31, 2025 at 11:54 PM
Reposted by Gabriele Sarti
Happy Holidays from NDIF! Our new NNsight version improves performance and enhances vLLM integration, including support for tensor parallelism.
December 19, 2025 at 10:51 PM
Reposted by Gabriele Sarti
I have been teaching myself to vibe code.

Watch Claude Code grow my 780 lines to 13,600 - mandelbrot.page/coverage/ca...

Two fundamental rules for staying in control:
davidbau.com/archives/20...
December 18, 2025 at 8:01 PM
Big news! 🗞️ I defended my PhD thesis "From Insights to Impact: Actionable Interpretability for Neural Machine Translation" @rug.nl @gronlp.bsky.social

I'm grateful to my advisors @arianna-bis.bsky.social @malvinanissim.bsky.social and to everyone who played a role in this journey! 🎉 #PhDone
December 16, 2025 at 12:21 PM
Reposted by Gabriele Sarti
The CALAMITA (Challenging the Abilities of LAnguage Models in ITAlian) paper is now available on arXiv:
arxiv.org/abs/2512.04759
We warmly thank all the individuals involved for their extraordinary work, dedication, and collaborative spirit that made this project possible!
December 9, 2025 at 6:19 PM
Kinda crazy the improvement from Nano Banana (left) to NB Pro (right): "Create an infographic explaining how model components contribute to the prediction process of a decoder-only Transformer LLM. Use the residual stream view of the Transformer by Elhage et al. (2021) in your presentation."
November 21, 2025 at 8:14 AM
impactrank.org is an interesting take on how to rethink uni rankings to upweight quality rather than quantity. They use LLMs to extract "high impact" dependencies from papers and identify foundational work, tracing them back to PIs/unis by matching their DBLP entries. Have a look!
Research Impact Rankings
impactrank.org
November 16, 2025 at 9:30 AM
Reposted by Gabriele Sarti
Humans and LLMs think fast and slow. Do SAEs recover slow concepts in LLMs? Not really.

Our Temporal Feature Analyzer discovers contextual features in LLMs that detect event boundaries, parse complex grammar, and represent ICL patterns.
November 13, 2025 at 10:32 PM
New promising model for interpretability research just dropped!
Through this release, we aim to support the emerging ecosystem for pretraining research (NanoGPT, NanoChat), explainability (you can literally look at Monad under a microscope), and the tooling orchestration around frontier models.
November 10, 2025 at 9:09 PM
Check out our awesome live-skeeted panel!
Our panel "Evaluating Interpretability Methods: Challenges and Future Directions", moderated by @danaarad.bsky.social, just started! 🎉 Come learn more about the MIB benchmark and hear the takes of @michaelwhanna.bsky.social, Michal Golovanevsky, Nicolò Brunello and Mingyang Wang!
November 9, 2025 at 7:18 AM
Follow @blackboxnlp.bsky.social for a live skeeting of the event!
BlackboxNLP is up and running! Here are the topics covered by this year's edition at a glance. Excited to see so many interesting topics, and the growing interest in reasoning!
November 9, 2025 at 2:20 AM
Wrapping up my oral presentations today with our TACL paper "QE4PE: Quality Estimation for Human Post-editing" at the Interpretability morning session #EMNLP2025 (Room A104, 11:45 China time)!

Paper: arxiv.org/abs/2503.03044
Slides/video/poster: underline.io/lecture/1315...
November 7, 2025 at 2:50 AM