NDIF Team
@ndif-team.bsky.social
The National Deep Inference Fabric, an NSF-funded computational infrastructure to enable research on large-scale Artificial Intelligence.

πŸ”— NDIF: https://ndif.us
🧰 NNsight API: https://nnsight.net
😸 GitHub: https://github.com/ndif-team/nnsight
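For a concrete sense of what the NNsight API enables, here is a minimal sketch of tracing a model and saving an internal activation. The model name and module path are illustrative; see nnsight.net for the authoritative interface.

```python
# Minimal NNsight sketch: run a model and save an internal activation.
# Model choice and module path are illustrative (GPT-2-style layout).
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The National Deep Inference Fabric"):
    # Save the residual-stream output of layer 5 for later inspection.
    hidden = model.transformer.h[5].output[0].save()

print(hidden.shape)  # (batch, sequence, hidden_size)
```

Passing remote=True to model.trace(...) runs the same experiment on NDIF-hosted models instead of local hardware.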
Reposted by NDIF Team
What does an LLM do when it translates from Italian "amore" to Spanish "amor" or French "amour"?

That's easy, you might think: surely it knows that amore, amor, and amour all come from the same Latin word. It can just drop the "e" or add a "u".
October 11, 2025 at 12:02 PM
Reposted by NDIF Team
Making model internals accessible to domain experts in low-code interfaces will unlock the next step in making interpretability useful across a variety of domains. Very excited about the NDIF Workbench! πŸ’‘
Ever wished you could explore what's happening inside a 405B parameter model without writing any code? Workbench, our AI interpretability interface, is now live for public beta at workbench.ndif.us!
October 10, 2025 at 5:53 PM
πŸ‘€ More advanced interpretability tools coming soon. What techniques would you like to see? Reach out or drop suggestions in the form.
October 10, 2025 at 5:36 PM
This is a public beta, so we expect bugs and actively want your feedback: forms.gle/WsxmZikeLNw3...
NDIF Workbench Feedback
Thank you for taking the time to submit your feedback! Every little bit helps.
forms.gle
October 10, 2025 at 5:36 PM
Study any NDIF-hosted model (including Llama 405B) directly in your browser. Our first tool, Logit Lens, lets you peer inside LLM computations layer-by-layer. Watch the full demo on YouTube (www.youtube.com/watch?v=BK-q...) or try it yourself: workbench.ndif.us
Workbench Logit Lens Demo
YouTube video by NDIF Team
www.youtube.com
October 10, 2025 at 5:36 PM
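For background, the logit lens technique reads off what the model "predicts so far" at every layer by projecting each intermediate hidden state through the final layer norm and unembedding. Here is a hedged sketch in NNsight, assuming GPT-2-style module names (other architectures lay out their modules differently):

```python
# Logit-lens sketch with nnsight: decode each layer's running prediction.
# Assumes GPT-2-style module names (transformer.h, ln_f, lm_head).
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

with model.trace("The Eiffel Tower is in the city of"):
    layer_preds = []
    for layer in model.transformer.h:
        hidden = layer.output[0]  # residual stream after this layer
        # Project the intermediate state through ln_f and the unembedding.
        logits = model.lm_head(model.transformer.ln_f(hidden))
        layer_preds.append(logits.argmax(dim=-1).save())

# Top prediction at the final token position, layer by layer.
for i, preds in enumerate(layer_preds):
    print(f"layer {i}: {model.tokenizer.decode(preds[0, -1])}")
```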
Ever wished you could explore what's happening inside a 405B parameter model without writing any code? Workbench, our AI interpretability interface, is now live for public beta at workbench.ndif.us!
October 10, 2025 at 5:35 PM
Read the paper or play around with some demos on the project website!

arXiv: arxiv.org/abs/2410.22366
Project Website: sdxl-unbox.epfl.ch/
October 3, 2025 at 6:45 PM
New YouTube video posted! @wendlerc.bsky.social presents his work on using SAEs for diffusion text-to-image models. The authors find interpretable SAE features and demonstrate how these features can alter generated images.

Watch here: youtu.be/43NnaqGjArA
Interpreting SDXL Turbo Using Sparse Autoencoders with Chris Wendler
In this talk, Chris Wendler presents his recent work on using sparse autoencoders for diffusion models. In this work, they train SAEs on SDXL Turbo, finding ...
www.youtube.com
October 3, 2025 at 6:45 PM
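For readers new to the method: a sparse autoencoder (SAE) decomposes model activations into a larger dictionary of sparsely active, often interpretable features. A generic PyTorch sketch is below; the dimensions, loss weights, and architectural details are placeholders and may differ from the paper's SAEs.

```python
# Generic sparse autoencoder sketch; hyperparameters are placeholders.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_model)

    def forward(self, x):
        # ReLU encoding yields sparse, non-negative feature activations.
        f = torch.relu(self.encoder(x))
        x_hat = self.decoder(f)
        return x_hat, f

sae = SparseAutoencoder(d_model=1280, d_hidden=5120)
x = torch.randn(8, 1280)  # stand-in for diffusion-model activations
x_hat, f = sae(x)
# Training objective: reconstruction error plus an L1 sparsity penalty.
loss = ((x - x_hat) ** 2).mean() + 1e-3 * f.abs().mean()
```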
Reminder: today is the deadline to apply for our hot-swapping program! Be among the first to test new models remotely on NDIF and submit your application today!

More details: ndif.us/hotswap.html
Application link: forms.gle/KHVkYxybmK12...
NSF National Deep Inference Fabric
NDIF is a research computing project that enables researchers and students to crack open the mysteries inside large-scale AI systems.
ndif.us
October 1, 2025 at 6:10 PM
Want increased remote model availability on NDIF? Interested in studying model checkpoints?

Sign up for the NDIF hot-swapping pilot by October 1st: forms.gle/Cf4WF3xiNzud...
September 26, 2025 at 6:57 PM
Participants will:

1. Be in the first cohort of users to access models beyond our whitelist
2. Directly control which models are hosted on the NDIF backend
3. Receive guided support on their project from the NDIF team
4. Give feedback that guides the future user experience
September 4, 2025 at 12:41 AM
This fall, we are running a program to test our model hot-swapping on real research projects. Projects should require internal access to multiple models, which could include model checkpoints, different model sizes, unique model architectures, or other creative approaches.
September 4, 2025 at 12:41 AM
Do you wish you could run experiments on any model remotely from your laptop? In a future release, NDIF users will be able to dynamically deploy any model from HuggingFace on NDIF for remote experimentation. But before this, we need your help!
September 4, 2025 at 12:41 AM
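From the user's side, the workflow might look like the hedged sketch below, which uses NNsight's existing remote mode; the deploy-any-HuggingFace-model behavior is the part being piloted, and the final interface may differ.

```python
# Hedged sketch of remote experimentation with nnsight. Today this works
# for NDIF's whitelisted models; hot-swapping would extend it to other
# HuggingFace model ids. Interface details may change before release.
from nnsight import LanguageModel

model = LanguageModel("meta-llama/Meta-Llama-3.1-8B")

with model.trace("The capital of France is", remote=True):
    # The forward pass runs on NDIF hardware; saved values ship back.
    hidden = model.model.layers[16].output[0].save()

print(hidden.shape)
```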
We are presenting our NNsight / NDIF demos at NEMI now!

Tune in:
youtube.com/live/q8Su4C...
New England Mechanistic Interpretability Workshop
09:30 AM - 09:40 AM: Opening Remarks (David Bau) 09:40 AM - 10:00 AM: Keynote 1: Lee Sharkey: "Mech Interp: Where should we go from here?" 10:00 AM - 10:10 A...
www.youtube.com
August 22, 2025 at 7:28 PM
The NEMI conference is live!

Watch our livestream here: youtube.com/live/q8Su4C...
NEMI 2025
www.youtube.com
August 22, 2025 at 1:40 PM
Reposted by NDIF Team
This Friday, NEMI 2025 is at Northeastern in Boston: 8 talks, 24 roundtables, 90 posters, and 200+ attendees. Thanks to goodfire.ai/ for sponsoring! nemiconf.github.io/summer25/

If you can't make it in person, the livestream will be here:
www.youtube.com/live/4BJBis...
New England Mechanistic Interpretability Workshop
About:The New England Mechanistic Interpretability (NEMI) workshop aims to bring together academic and industry researchers from the New England and surround...
www.youtube.com
August 18, 2025 at 6:06 PM
Reposted by NDIF Team
Announcing a deep net interpretability talk series!

Every week you will find new talks on recent research in the science of neural networks. The first few are posted: jackmerullo.bsky.social, Roy Rinberg, and me.

At the @ndif-team.bsky.social Youtube Channel: www.youtube.com/@NDIFTeam
NDIF Team
We're a research computing project cracking open the mysteries inside large-scale AI systems. The NSF National Deep Inference Fabric consists of a unique combination of hardware and software that provides a remotely-accessible computing resource for scientists and students to perform detailed and reproducible experiments on large pretrained AI models, such as open large language models. We aim to make AI interpretability research more accessible through this channel by publishing lectures and educational content covering real interpretability research.
www.youtube.com
August 18, 2025 at 6:02 PM
We will use this channel to post lectures on AI interpretability research, educational information, NDIF and NNsight updates, and more. If you're interested in collaborating on a video or would like to suggest a topic, please reach out!
August 7, 2025 at 5:36 PM
Our YouTube channel is live! Our first video features @davidbau.bsky.social presenting the ROME project:
www.youtube.com/watch?v=eKd...
ROME: Locating and Editing Factual Associations in GPT with David Bau
David Bau is an Assistant Professor of Computer Science at Northeastern University's Khoury College. His lab studies the structure and interpretation of deep...
www.youtube.com
August 7, 2025 at 5:36 PM
Want to try it for yourself? Check out our new mini-paper tutorial in NNsight to see how intervening on concept induction heads can reveal language-invariant concepts and cause a model to paraphrase text!

πŸ”— nnsight.net/notebooks/m...
August 5, 2025 at 4:31 PM
Using causal mediation analysis on words that span multiple tokens, @sfeucht.bsky.social et al. found concept induction heads that are separate from token induction heads.

πŸ”— dualroute.baulab.info/
August 5, 2025 at 4:31 PM
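For context, causal mediation analysis here means patching activations from one run into another and measuring the behavioral effect. Below is a hedged NNsight sketch of head-level activation patching; the layer, head index, and prompts are hypothetical, the module names assume GPT-2, and the paper's actual procedure is richer than this.

```python
# Activation-patching sketch with nnsight: copy one attention head's
# output from a clean run into a corrupted run. Layer/head choices are
# hypothetical; assumes GPT-2 (12 heads of dimension 64) and prompts
# that tokenize to the same length.
from nnsight import LanguageModel

model = LanguageModel("openai-community/gpt2", device_map="auto")

clean, corrupted = "The city of Paris", "The city of Rome"
LAYER, HEAD, D_HEAD = 5, 9, 64

with model.trace() as tracer:
    with tracer.invoke(clean):
        # Record this head's slice of the attention output on the clean run.
        attn_out = model.transformer.h[LAYER].attn.output[0]
        clean_head = attn_out[:, :, HEAD * D_HEAD:(HEAD + 1) * D_HEAD].save()
    with tracer.invoke(corrupted):
        # Patch the clean head activation into the corrupted run, then
        # measure how much of the model's output behavior it mediates.
        attn_out = model.transformer.h[LAYER].attn.output[0]
        attn_out[:, :, HEAD * D_HEAD:(HEAD + 1) * D_HEAD] = clean_head
        patched_logits = model.lm_head.output.save()

print(patched_logits.shape)
```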