Nathan Lambert
natolambert.bsky.social
@natolambert.bsky.social
An LLN - large language Nathan - (RL, RLHF, society, robotics), athlete, yogi, chef
Writes http://interconnects.ai
At Ai2 via HuggingFace, Berkeley, and normal places
Really, 7B is the scale where a lot of meaningful research emerges, and it's quite often good for large-scale routine tasks.
November 20, 2025 at 3:27 PM
allenai.org
November 20, 2025 at 2:32 PM
As with OLMo 2 32B at its release, OLMo 3 32B is the best open-source language model ever released. It’s an awesome privilege to get to provide these models to the broader community researching and understanding what is happening in AI today.
November 20, 2025 at 2:32 PM
As always, all of our models come with full training data, code, intermediate checkpoints, training logs, and a detailed technical report. All are available today, with some more additions coming before the end of the year.
November 20, 2025 at 2:32 PM
This is a big milestone for Ai2 and the Olmo project. These aren’t huge models (more on that later), but it’s crucial for the viability of fully open-source models that they are competitive on performance – not just replications of models that came out 6 to 12 months ago.
November 20, 2025 at 2:32 PM
Yes, will be fixed in print; these are mock-ups
November 15, 2025 at 12:34 AM
Those only work on cats; my dog would love that
November 14, 2025 at 9:34 PM
And if you missed it :) rlhfbook.com
RLHF Book by Nathan Lambert
The Reinforcement Learning from Human Feedback Book
rlhfbook.com
November 14, 2025 at 9:04 PM
I described this as "Calibration" in my taxonomy of next-generation reasoning models. My taxonomy is here: www.interconnects.ai/p/next-gen-r...
A taxonomy for next-generation reasoning models
Where we've been and where we're going with RLVR.
www.interconnects.ai
November 13, 2025 at 7:18 PM