Jonas
@jonasgeiping.bsky.social
ML research, safety & efficiency
jonasgeiping.bsky.social
Finally, this project was made possible by the INCITE program of the DoE, which sponsored our compute on the OLCF Frontier supercomputer. Without them, we could not have done open research at this scale!
jonasgeiping.bsky.social
Thank you to all of my collaborators, @sean-mcleish.bsky.social, Neel Jain, @jwkirchenbauer.bsky.social, Siddharth Singh, Brian Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, and especially Tom Goldstein, for making this happen.

This really was a long project for us, with initial starts in Summer '23!
jonasgeiping.bsky.social
What is it doing when it thinks longer?

We find evidence for pretty advanced structures in latent space, such as a tendency to trace orbits (see picture) when computing arithmetic and reasoning about sentence structure.

So, this model really is rotating shapes in a high-dimensional space?
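One simple way to look for such orbits, sketched below with toy data and no claim to match the analysis in the report, is to project a token's latent trajectory across recurrence steps onto its top two principal components and see whether it traces a circle:

```python
import torch

def top2_pca_trajectory(latents):
    """Project a latent trajectory (num_steps, d_model) onto its top-2 principal
    components; an 'orbit' shows up as roughly circular motion in this plane."""
    x = latents - latents.mean(dim=0, keepdim=True)
    # torch.pca_lowrank returns (U, S, V); columns of V are principal directions
    _, _, v = torch.pca_lowrank(x, q=2)
    return x @ v  # (num_steps, 2) coordinates to plot

# toy trajectory that literally rotates in a 2D plane embedded in 64 dimensions
steps = torch.arange(0, 32, dtype=torch.float32) * 0.3
traj = torch.zeros(32, 64)
traj[:, 0], traj[:, 1] = torch.cos(steps), torch.sin(steps)
coords = top2_pca_trajectory(traj)  # plotting these coords traces a circle
```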
jonasgeiping.bsky.social
What is pretty exciting is that simply by training with our arch and objective, a separation emerges with scale: the model's latents converge faster for some tokens in a sentence than for others.

In this figure, the model takes more time to think about the key parts of the text:
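This per-token convergence naturally suggests an adaptive exit rule at inference time. The sketch below is purely illustrative (the function name, tolerance, and stopping criterion are assumptions, not the report's exact method): iterate the recurrent core and stop once the latent state stops changing.

```python
import torch

def iterate_with_adaptive_exit(core_step, s0, max_steps=64, tol=1e-3):
    """Run a recurrent core until the latent state stops changing.

    core_step: callable mapping latent state -> next latent state (placeholder).
    Exits early for "easy" tokens whose latents converge quickly; hard tokens
    use up to max_steps. The criterion and tolerance here are illustrative only.
    """
    s = s0
    for step in range(1, max_steps + 1):
        s_next = core_step(s)
        # relative change of the latent state between consecutive steps
        delta = (s_next - s).norm() / (s.norm() + 1e-8)
        s = s_next
        if delta < tol:
            break
    return s, step

# toy usage: a contraction toward a fixed target converges in a few steps
target = torch.randn(16)
s_final, steps_used = iterate_with_adaptive_exit(lambda s: 0.5 * (s + target), torch.zeros(16))
print(steps_used)  # small number, since this toy map converges quickly
```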
jonasgeiping.bsky.social
We had enough compute for only a single shot to train at scale (and that is the model we've published).

On reasoning tasks like GSM8K, the model is pretty competitive with other pretrained open-source models, even though we have done no mid- or post-training...
jonasgeiping.bsky.social
First, the model (with 3.5B params), even though it was trained semi-optimally and for only 800B tokens, is competitive with 7B open-source models trained for 2-3T tokens (OLMo-v1) - but we can't beat the new OLMo data recipe (yet)

This is pretty exciting for our first large-scale run.
jonasgeiping.bsky.social
has something for everyone: a new model architecture, optimizer details, AMD training (we trained on 4096 AMD GPUs), our data pipeline, and lots of analysis!

Here are a few of my highlights:
jonasgeiping.bsky.social
Ok, so I can finally talk about this!

We spent the last year (actually a bit longer) training an LLM with recurrent depth at scale.

The model has an internal latent space in which it can adaptively spend more compute to think longer.

I think the tech report ...🐦‍⬛
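For a rough mental model of "recurrent depth", here is a minimal PyTorch sketch; the module names, sizes, and adapter wiring are placeholders, not the published Huginn code. A prelude embeds the tokens, a shared core block is iterated a variable number of times on a latent state, and a coda decodes it, so test-time compute scales with the number of iterations rather than with extra tokens.

```python
import torch
import torch.nn as nn

class DepthRecurrentLM(nn.Module):
    """Toy sketch of a depth-recurrent LM: a prelude embeds tokens, a shared
    core block is iterated on a latent state, and a coda decodes logits."""

    def __init__(self, vocab_size=32000, d_model=512, n_head=8):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)           # prelude (placeholder)
        self.core = nn.TransformerEncoderLayer(d_model, n_head,  # shared recurrent core
                                               batch_first=True)
        self.adapter = nn.Linear(2 * d_model, d_model)           # mixes latent state with input embedding
        self.coda = nn.Linear(d_model, vocab_size)               # unembedding / coda

    def forward(self, tokens, num_steps: int = 8):
        e = self.embed(tokens)                  # (batch, seq, d_model)
        s = torch.randn_like(e)                 # random initial latent state
        for _ in range(num_steps):              # more steps = more "thinking" per token
            s = self.core(self.adapter(torch.cat([s, e], dim=-1)))
        return self.coda(s)                     # logits

model = DepthRecurrentLM()
logits = model(torch.randint(0, 32000, (1, 16)), num_steps=32)  # spend more test-time compute
```

Calling the same model with a larger num_steps is what "thinking longer" means here; no chain-of-thought tokens are emitted.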
Reposted by Jonas
tomgoldstein.bsky.social
New open source reasoning model!

Huginn-3.5B reasons implicitly in latent space 🧠

Unlike O1 and R1, latent reasoning doesn’t need special chain-of-thought training data, and doesn't produce extra CoT tokens at test time.

We trained on 800B tokens 👇
Reposted by Jonas
knutjaegersberg.bsky.social
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach 🚀🚀🚀

arxiv.org/abs/2502.05171
jonasgeiping.bsky.social
I'm at NeurIPS in Vancouver right now! Feel free to reach out to talk about anything in LLM safety or efficiency research.

Also, our new ELLIS Institute Tübingen is hiring new faculty; the deadline is next week - reach out to us in person and at our booth for more info 🇪🇺🇪🇺🇪🇺
Principal Investigators (m/f/d) as Hector Endowed Fellows of the ELLIS Institute Tübingen
institute-tue.ellis.eu