Andrei Mircea
@mirandrom.bsky.social
PhD student at University of Montreal // Mila ··· mechanistic understanding of LLMs + Human-AI collaboration for science ··· http://mirandrom.github.io
Pinned
mirandrom.bsky.social
Step 1: Understand how scaling improves LLMs.
Step 2: Directly target underlying mechanism.
Step 3: Improve LLMs independent of scale. Profit.

In our ACL 2025 paper we look at Step 1 in terms of training dynamics.

Project: mirandrom.github.io/zsl
Paper: arxiv.org/pdf/2506.05447
mirandrom.bsky.social
not really sure what that implies with respect to their results, but it's a surprising contrast with no obvious explanation
mirandrom.bsky.social
We also found interesting differences between optimizers with respect to loss deceleration (www.arxiv.org/abs/2506.05447). Surprisingly, Muon had worse post-deceleration convergence, suggesting it exacerbates rather than reduces interference in language modeling despite being a second-order optimizer.
mirandrom.bsky.social
Thanks to my collaborators and mentors @katelobacheva.bsky.social, Irina Rish, Supriyo Chakraborty, and Nima Chitsazan.

Also Ashwinee Panda for coining "zero-sum learning", which is honestly a pretty great name.
mirandrom.bsky.social
All of our code and artefacts are also open, which hopefully will help.

Code: github.com/mirandrom/zsl
Checkpoints: huggingface.co/mirandrom/zs...
Wandb logs: wandb.ai/amr-amr/zsl/...
mirandrom.bsky.social
TL;DR We identify two new phenomena (loss deceleration + zero-sum learning) and quantify how scaling improves LLMs by mitigating them.

What’s cool is that these could potentially be mitigated independent of scaling (Step 2).
Exactly how to do this remains an open question.
mirandrom.bsky.social
Mechanistic understanding of systematic failures in language models is something more research should strive for IMO. This is really interesting work in that vein by @ziling-cheng.bsky.social, highly recommend you check it out.
ziling-cheng.bsky.social
Do LLMs hallucinate randomly? Not quite.

Our #ACL2025 (Main) paper shows that hallucinations under irrelevant contexts follow a systematic failure mode — revealing how LLMs generalize using abstract classes + context cues, albeit unreliably.

📎 Paper: arxiv.org/abs/2505.22630 1/n
mirandrom.bsky.social
Special thanks to @katelobacheva.bsky.social and Irina Rish from @mila-quebec.bsky.social for their supervision; and to Nima Chitsazan and Supriyo Chakraborty from CapitalOne for their support on this project during my summer internship there!
mirandrom.bsky.social
🧵 (12/12) If you’re still reading, here are some neat plots to express my gratitude. These are per-token loss landscape cross-sections, taken along weight update directions at different train steps. Also equivalent cross-sections of overall losses extruded in 3D, because why not.
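(For the curious: these plots boil down to evaluating loss along a ray in parameter space. A minimal sketch, assuming a `loss_fn` closure over a fixed batch; names here are illustrative, and the actual plotting code is in the repo linked upthread.)

```python
import torch

def loss_cross_section(model, loss_fn, direction, alphas):
    """Loss along a 1D ray in parameter space: L(theta + alpha * d).

    `direction` holds one tensor per parameter, e.g. the weight update
    between two checkpoints (theta_{t+1} - theta_t).
    """
    theta = [p.detach().clone() for p in model.parameters()]
    losses = []
    with torch.no_grad():
        for alpha in alphas:
            # Move the model to theta + alpha * d and evaluate.
            for p, p0, d in zip(model.parameters(), theta, direction):
                p.copy_(p0 + alpha * d)
            losses.append(loss_fn(model).item())
        # Restore the original weights.
        for p, p0 in zip(model.parameters(), theta):
            p.copy_(p0)
    return losses
```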
mirandrom.bsky.social
🧵 (11/12) While our hypothesis and results confirm that there exist mechanisms underlying scaling improvements that can be targeted directly and independently of scale, they do not fully account for the effect of scaling on loss deceleration. This is something we’re working on!
mirandrom.bsky.social
🧵 (10/12) We also observe that scaling decreases gradient opposition before deceleration, contributing to greater loss improvements in that phase. While SGO converges to ~1 across scales, its relative effect on ZSL appears to be mitigated by scale.
mirandrom.bsky.social
🧵 (9/12) Explaining ZSL with systematic gradient opposition (SGO)

In our paper, we show how SGO (destructive interference in per-example gradients approaching 1) fundamentally results in ZSL, and confirm that it co-occurs with and explains deceleration.
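(Roughly what SGO looks like in code: a toy sketch with a hypothetical linear classifier, using a coordinate-wise cancellation ratio as a stand-in for the paper's exact metric.)

```python
import torch

# Toy stand-in: per-example gradients of a tiny classifier.
# (The paper measures this on per-example LM losses at scale.)
model = torch.nn.Linear(8, 2)
x, y = torch.randn(16, 8), torch.randint(0, 2, (16,))

grads = []
for i in range(len(x)):
    model.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x[i:i+1]), y[i:i+1])
    loss.backward()
    grads.append(torch.cat([p.grad.flatten() for p in model.parameters()]))
G = torch.stack(grads)  # (num_examples, num_params)

# Coordinate-wise opposition: 0 if per-example gradients agree in sign,
# -> 1 as they fully cancel (systematic gradient opposition).
opposition = 1 - G.sum(0).abs() / G.abs().sum(0).clamp_min(1e-12)
print(f"mean gradient opposition: {opposition.mean():.3f}")
```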
mirandrom.bsky.social
🧵 (8/12) To go beyond co-occurrence, we disentangle the relative contribution of ZSL to slowing loss improvements and show that it is indeed the principal contributor to loss deceleration across scales.
mirandrom.bsky.social
🧵 (7/12) To quantify ZSL, we define destructive interference as the rate at which elements in a sum cancel out, and measure it for per-example loss improvements throughout training. Consistent with our hypothesis, ZSL occurs with deceleration and is decreased by scale.
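(Concretely, a cancellation-rate metric can be computed like this; a minimal numpy sketch where `deltas` stands for per-example loss improvements between two checkpoints. The exact definition is in the paper.)

```python
import numpy as np

def destructive_interference(deltas: np.ndarray) -> float:
    """Rate at which terms in a sum cancel out: 0 when all
    per-example loss improvements share a sign, ~1 when gains on
    some examples are offset by losses on others (zero-sum)."""
    magnitude = np.abs(deltas).sum()
    return 1 - np.abs(deltas.sum()) / magnitude if magnitude > 0 else 0.0

rng = np.random.default_rng(0)
print(destructive_interference(rng.normal(-0.1, 0.02, 1024)))  # ~0: early training
print(destructive_interference(rng.normal(0.0, 0.02, 1024)))   # ~1: zero-sum regime
```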
mirandrom.bsky.social
🧵 (6/12) In ZSL, systematic gradient opposition between tokens leads to degenerate training dynamics where improvements in one set of tokens are offset by degradation in another, bottlenecking the overall rate of improvement and leading to deceleration.
mirandrom.bsky.social
🧵 (5/12) Explaining loss deceleration with zero-sum learning
In other words, by explaining loss deceleration (and the mitigating effect of scale) we can explain scaling improvements. We propose the zero-sum learning (ZSL) hypothesis as an explanation for deceleration.
mirandrom.bsky.social
🧵 (4/12) Specifically, scaling improves final loss by improving 1) the loss at which deceleration occurs; and 2) the log-log rate of loss improvement after deceleration. Using BNSL (broken neural scaling law) fits, we can measure these quantities and tie them to final loss (i.e. to scaling improvements).
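(To make 1) and 2) concrete: a simplified sketch fitting a sharp two-segment log-log curve to a synthetic loss curve. The paper uses smooth BNSL fits; everything below is illustrative.)

```python
import numpy as np
from scipy.optimize import curve_fit

def piecewise_loglog(log_t, log_Ld, log_td, s0, s1):
    """Two linear segments in log-log space, meeting at the
    deceleration point (log_td, log_Ld) with slopes s0 / s1."""
    return log_Ld + np.where(log_t < log_td, s0, s1) * (log_t - log_td)

# Synthetic loss curve decelerating at step 1e3, with mild noise.
steps = np.logspace(1, 5, 200)
loss = np.exp(piecewise_loglog(np.log(steps), np.log(3.5), np.log(1e3), -0.6, -0.05))
loss *= np.exp(np.random.default_rng(0).normal(0, 0.01, steps.size))

p, _ = curve_fit(piecewise_loglog, np.log(steps), np.log(loss),
                 p0=[1.0, np.log(1e2), -1.0, -0.1])
print(f"deceleration loss Ld={np.exp(p[0]):.2f} at step td={np.exp(p[1]):.0f}, "
      f"post-deceleration rate s1={p[3]:.3f}")
```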
mirandrom.bsky.social
🧵 (3/12) Explaining scaling improvements with loss deceleration
Scaling improvements can be expressed in terms of mitigating “loss deceleration”: an abrupt slowdown in the rate of loss improvement, characterized by piecewise-linear log-log loss curves.
mirandrom.bsky.social
🧵 (2/12) Motivation
LLM scaling laws predict but do not explain *how* scaling model size improves loss.
By identifying a mechanism underlying scaling improvements, we could target it directly and potentially improve LLMs independent of scale.
mirandrom.bsky.social
📢 New paper “Language model scaling laws and zero-sum learning” at Sci4DL #neurips2024.

ℹ️ openreview.net/forum?id=yBq2g832Go TL;DR: scaling improves LMs by mitigating zero-sum learning, a mechanism that could be targeted directly and independent of scale.

West 205-207 4:30-5:30 PM

🧵 (1/12)
Reposted by Andrei Mircea
muawizc.bsky.social
My collaborators (Vivian White, @kamdh.bsky.social) will be presenting our work at the #Sci4DL workshop at #NeurIPS2024 today.

Location: West Meeting Room 205-207
Time: 4:30-5:30 PM

We present a principled probability distribution model of pre-trained deep neural networks. Check it out!
Reposted by Andrei Mircea
dippedrusk.com
For those of you attending #NeurIPS2024 in person: I'm from Vancouver and I made an extensive list of restaurants, bars, bookstores, etc., that I used to frequent when I still lived there. Enjoy!
dippedrusk.com/posts/2024-0...
Vagrant's Vancouver | Vagrant Gautam
A non-comprehensive list of places to go and things to do in the Greater Vancouver Area as curated by yours truly over 6 years. Might be outdated so please double-check!
dippedrusk.com
Reposted by Andrei Mircea
lasha.bsky.social
✨I am on the faculty job market in the 2024-2025 cycle!✨

My research centers on advancing Responsible AI, specifically enhancing factuality, robustness, and transparency in AI systems.

If you have relevant positions, let me know! lasharavichander.github.io Please share/RT!
Abhilasha Ravichander - Home
lasharavichander.github.io
Reposted by Andrei Mircea
benn9.bsky.social
✨EMNLP Paper! ✨
Have you ever constructed a table to organize your literature review process? Can we use LMs to generate these automatically?

We are excited to present ArxivDIGESTables 🍽️ a study of collecting, generating, and evaluating 🎓 scientific literature review tables 📃!
[Image: first page of the paper. Figure 1 shows three cartoon papers with text highlighted in three colors, with an arrow to a cartoon table whose columns correspond to the colors and rows to the papers.]