Donatella Genovese
@donatellag.bsky.social
60 followers · 170 following · 7 posts
PhD Student | Works on Explainable AI | https://donatellagenovese.github.io/
Reposted by Donatella Genovese
neuralnoise.com
Please share it within your circles! edin.ac/3DDQK1o
donatellag.bsky.social
Really cool paper by @kayoyin.bsky.social on the interpretability of in-context learning: they find that Function Vector (FV) heads are crucial for few-shot ICL.
www.arxiv.org/abs/2502.14010
donatellag.bsky.social
A really nice resource for understanding how to parallelize LLM training.
thomwolf.bsky.social
After 6+ months in the making and over a year of GPU compute, we're excited to release the "Ultra-Scale Playbook": hf.co/spaces/nanot...

A book to learn all about 5D parallelism, ZeRO, CUDA kernels, how/why to overlap compute & comms, with theory, motivation, interactive plots and 4000+ experiments!
The Ultra-Scale Playbook - a Hugging Face Space by nanotron
The ultimate guide to training LLMs on large GPU clusters
hf.co
donatellag.bsky.social
3/ Interleaving Concepts with Token Embeddings

🔹 Predicted concepts are compressed into a continuous vector 🎯
🔹 They are then inserted into hidden states alongside token embeddings 🧩
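A minimal sketch of this mixing step, assuming the predicted concepts are compressed by a linear map and interleaved along the sequence dimension; the class name and insertion scheme are assumptions for illustration, not the paper's exact design:

```python
import torch
import torch.nn as nn

class ConceptMixer(nn.Module):
    def __init__(self, n_concepts: int, d_model: int):
        super().__init__()
        # compress sparse concept predictions into one continuous vector per position
        self.compress = nn.Linear(n_concepts, d_model)

    def forward(self, token_hidden, concept_pred):
        # token_hidden: (batch, seq, d_model); concept_pred: (batch, seq, n_concepts)
        concept_vec = self.compress(concept_pred)
        # interleave along the sequence: token state, concept vector, token state, ...
        batch, seq, d = token_hidden.shape
        mixed = torch.stack([token_hidden, concept_vec], dim=2)  # (batch, seq, 2, d)
        return mixed.reshape(batch, seq * 2, d)
```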
donatellag.bsky.social
2/ Training the Model with Dual Objectives

🔹 Next-token prediction – the standard LLM training objective.
🔹 Concept prediction – the model learns to reproduce extracted concepts from its hidden state.
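A minimal sketch of the dual objective, assuming next-token cross-entropy plus a concept-prediction term; the MSE term, tensor names, and the weighting are assumptions rather than the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def dual_loss(logits, next_tokens, concept_pred, concept_target, concept_weight=1.0):
    # logits: (batch, seq, vocab); next_tokens: (batch, seq)
    # concept_pred / concept_target: (batch, seq, n_concepts)
    lm_loss = F.cross_entropy(logits.flatten(0, 1), next_tokens.flatten())  # standard LM objective
    concept_loss = F.mse_loss(concept_pred, concept_target)                 # reproduce SAE concepts
    return lm_loss + concept_weight * concept_loss
```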
donatellag.bsky.social
1/ Concept Extraction with SAE

🔹 A Sparse Autoencoder (SAE) extracts high-level concepts from the hidden states of a pretrained LLM.
🔹 Only the most important concepts are selected based on their attribution score (impact on model output).
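A minimal sketch of this extraction step, assuming a frozen pretrained LLM whose hidden states we encode; the class, dimensions, and the top-k activation proxy for the attribution score are illustrative, not the paper's code:

```python
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    """Toy SAE over LLM hidden states (illustrative, not the paper's implementation)."""
    def __init__(self, d_model: int, n_concepts: int):
        super().__init__()
        self.encoder = nn.Linear(d_model, n_concepts)
        self.decoder = nn.Linear(n_concepts, d_model)

    def forward(self, hidden):
        concepts = torch.relu(self.encoder(hidden))  # sparse, non-negative concept activations
        recon = self.decoder(concepts)               # reconstruction of the hidden state
        return concepts, recon

# hidden states from a frozen pretrained LLM (random here for the sketch)
hidden = torch.randn(2, 128, 768)                    # (batch, seq, d_model)
sae = SparseAutoencoder(d_model=768, n_concepts=4096)
concepts, recon = sae(hidden)

# crude stand-in for the attribution score: keep the k most active concepts;
# the paper instead scores concepts by their impact on the model's output
top_concepts = concepts.mean(dim=(0, 1)).topk(k=32).indices
```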
donatellag.bsky.social
🚀 Meta’s new LLM pretraining framework predicts concepts and integrates them into its hidden state to enhance next-token prediction. 🚀

It achieves the same performance with 21.5% fewer tokens and better generalization! 🎯

📝: arxiv.org/abs/2502.08524
donatellag.bsky.social
A very interesting work exploring the possibility of a unified interpretation across multiple models.
hthasarathan.bsky.social
🌌🛰️🔭Wanna know which features are universal vs unique in your models and how to find them? Excited to share our preprint: "Universal Sparse Autoencoders: Interpretable Cross-Model Concept Alignment"!

arxiv.org/abs/2502.03714

(1/9)
Reposted by Donatella Genovese
sscardapane.bsky.social
*MoE Graph Transformers for Interpretable Particle Collision Detection*
by @alessiodevoto.bsky.social @sgiagu.bsky.social et al.

We propose a MoE graph transformer for particle collision analysis, with many nice interpretability insights (e.g., expert specialization).

arxiv.org/abs/2501.03432
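A minimal sketch of the MoE routing idea behind "expert specialization": a generic top-1 gating layer, not the authors' architecture, with routing decisions exposed so one can inspect which expert handles which inputs:

```python
import torch
import torch.nn as nn

class MoELayer(nn.Module):
    """Generic top-1 MoE layer; per-input routing is what makes specialization inspectable."""
    def __init__(self, d_model: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))

    def forward(self, x):
        # x: (n_nodes, d_model) node features, e.g. from a graph transformer block
        scores = self.gate(x).softmax(dim=-1)
        expert_idx = scores.argmax(dim=-1)               # routing decision per node
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            mask = expert_idx == i
            if mask.any():
                out[mask] = expert(x[mask])
        # expert_idx can be logged to study which experts specialize on which inputs
        return out, expert_idx
```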