kylelwiggers.bsky.social
@kylelwiggers.bsky.social
Ai2 Comms Lead | [email protected] | Pronouns: he/him
Reposted
Data mixing – determining how much web text, code, math, etc., you need for LM development – is a first-order lever on model quality. Introducing Olmix: a framework for configuring mixing methods at the start of dev & efficiently updating as data changes throughout. 🧵
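For readers new to the idea, here is a minimal sketch of what a data mixture means in practice: training documents are drawn from each domain in proportion to a weight. The domains, weights, and helper names below are illustrative only and are not Olmix's actual API.

```python
import random

# Illustrative mixture weights (made up): fraction of training examples per domain.
MIXTURE = {"web": 0.60, "code": 0.25, "math": 0.15}

def sample_domain(weights: dict[str, float]) -> str:
    """Pick a domain with probability proportional to its mixture weight."""
    domains = list(weights)
    return random.choices(domains, weights=[weights[d] for d in domains], k=1)[0]

# Each training example comes from whichever domain the mixture selects.
counts = {d: 0 for d in MIXTURE}
for _ in range(10_000):
    counts[sample_domain(MIXTURE)] += 1
print(counts)  # roughly 6000 web / 2500 code / 1500 math
```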
February 13, 2026 at 4:34 PM
Reposted
Knowing which questions to ask is often the hardest part of science. Today we're releasing AutoDiscovery in AstaLabs, an AI system that starts with your data and generates its own hypotheses. 🧪
February 12, 2026 at 4:06 PM
Reposted
Introducing MolmoSpaces, a large-scale, fully open platform + benchmark for embodied AI research. 🤖

230k+ indoor scenes, 130k+ object models, & 42M annotated robotic grasps—all in one ecosystem.
February 11, 2026 at 7:47 PM
Reposted
LLMs often generate step-by-step instructions, from real-world tasks (how do I file taxes?) to plans for AI agents. Improving this is hard: outputs can sound fluent even when the steps don't actually work, and current datasets cover only a few domains.

How2Everything evals/trains for this at scale. 🧵
February 10, 2026 at 4:53 PM
Reposted
Since launching Open Coding Agents, it's been exciting to see how quickly the community has adopted them. Today we're releasing SERA-14B – a new 14B-parameter coding model – plus a major refresh of our open training datasets. 🧵
February 3, 2026 at 5:39 PM
Reposted
Introducing Theorizer: Turning thousands of papers into scientific laws 📚➡️📜

Most automated discovery systems focus on experimentation. Theorizer tackles the other half of science: theory building—compressing scattered findings into structured, testable claims. 🧵
January 28, 2026 at 6:37 PM
Here's just one of the cool apps you can vibe-code with SERA, our new agentic coding model! I was lucky enough to get my hands on it early and it's quite capable via Claude Code. Give it a go today!
January 27, 2026 at 8:29 PM
Reposted
Introducing Ai2 Open Coding Agents—starting with SERA, our first-ever coding models. Fast, accessible agents (8B–32B) that adapt to any repo, including private codebases. Train a powerful specialized agent for as little as ~$400, & it works with Claude Code out of the box. 🧵
January 27, 2026 at 4:13 PM
Reposted
Introducing HiRO-ACE: an AI framework that makes highly detailed climate simulations dramatically more accessible. It generates decades of high-resolution precipitation data for any region in a day on a single GPU—no supercomputing cluster required. 🧵
January 21, 2026 at 7:34 PM
Reposted
Last year Molmo set SOTA on image benchmarks + pioneered image pointing. Millions of downloads later, Molmo 2 brings Molmo’s grounded multimodal capabilities to video 🎥—and leads many open models on challenging industry video benchmarks. 🧵
December 16, 2025 at 4:52 PM
Reposted
Introducing Bolmo, a new family of byte-level language models built by "byteifying" our open Olmo 3—and to our knowledge, the first fully open byte-level LM to match or surpass SOTA subword models across a wide range of tasks. 🧵
December 15, 2025 at 5:19 PM
Reposted
Olmo 3.1 is here. We extended our strongest RL run and scaled our instruct recipe to 32B—releasing Olmo 3.1 Think 32B & Olmo 3.1 Instruct 32B, our most capable models yet. 🧵
December 12, 2025 at 5:14 PM
Reposted
Update: DataVoyager, which we launched in Preview early this fall, is now available in Asta. 🎉
You can upload real datasets, ask complex research questions in natural language, & get back reproducible answers + visualizations. 🔍📊
December 8, 2025 at 8:47 PM
Reposted
Olmo 3 is now available through @hf.co Inference Providers, thanks to Public AI! 🎉
This means you can run our fully open 7B and 32B models — including Think and Instruct variants — via serverless API with no infrastructure to manage.
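As a rough sketch of what the serverless route looks like, the huggingface_hub InferenceClient can send a chat request through an Inference Provider. The model repo ID below is an assumption and may differ from the published name; an HF token with inference access is required.

```python
from huggingface_hub import InferenceClient

# Hypothetical repo ID for illustration; check the Hugging Face Hub for the exact name.
MODEL_ID = "allenai/Olmo-3-7B-Instruct"

client = InferenceClient()  # reads HF_TOKEN from the environment
response = client.chat_completion(
    model=MODEL_ID,
    messages=[{"role": "user", "content": "Summarize what a fully open LM release includes."}],
    max_tokens=256,
)
print(response.choices[0].message.content)
```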
November 28, 2025 at 4:50 PM
Reposted
Our Olmo 3 models are now available via API on @openrouter.bsky.social. Try Olmo 3 Instruct (7B) for chat & tool use, and our reasoning models Olmo 3 Think (7B & 32B) for more complex problems.
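Since OpenRouter exposes an OpenAI-compatible endpoint, calling the models can look like the minimal sketch below; the model slug is an assumption and should be verified against OpenRouter's listing.

```python
from openai import OpenAI

# OpenRouter's OpenAI-compatible endpoint; the model slug is assumed, verify on openrouter.ai.
client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",
)
completion = client.chat.completions.create(
    model="allenai/olmo-3-7b-instruct",
    messages=[{"role": "user", "content": "Outline the steps to reproduce a benchmark result."}],
)
print(completion.choices[0].message.content)
```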
November 22, 2025 at 1:58 AM
Reposted
Announcing Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use, and an open model flow—not just the final weights, but the entire training journey.
Best fully open 32B reasoning model & best 32B base model. 🧵
November 20, 2025 at 2:37 PM
Reposted
Today we’re releasing Deep Research Tulu (DR Tulu)—the first fully open, end-to-end recipe for long-form deep research, plus an 8B agent you can use right away. Train agents that plan, search, synthesize, & cite across sources, making expert research more accessible. 🧭📚
November 18, 2025 at 3:31 PM
Reposted
Introducing OlmoEarth 🌍, state-of-the-art AI foundation models paired with ready-to-use open infrastructure to turn Earth data into clear, up-to-date insights within hours—not years.
November 4, 2025 at 2:52 PM
Reposted
Our fully open Olmo models enable rigorous, reproducible science—from unlearning to clinical NLP, math learning, & fresher knowledge. Here’s how the research community has leveraged Olmo to make the entire AI ecosystem better + more transparent for all. 🧵
October 24, 2025 at 6:36 PM
Reposted
We’re updating olmOCR, our model for turning PDFs & scans into clean text with support for tables, equations, handwriting, & more. olmOCR 2 uses synthetic data + unit tests as verifiable rewards to reach state-of-the-art performance on challenging documents. 🧵
October 22, 2025 at 4:09 PM
Reposted
📊 Today we're releasing data showing which scientific papers our AI research tool Asta cites most frequently. Think of it as creating citation counts for the AI era—tracking which research is actually powering AI answers across thousands of queries. 🧵
October 8, 2025 at 6:26 PM
Reposted
Introducing Asta DataVoyager—our new AI capability in Asta that turns structured data into transparent, reproducible insights. Built for scientists, grounded in open, inspectable workflows. 🧵
October 1, 2025 at 1:02 PM
Reposted
"We check in more open-source [AI] in the world than just anybody, its just one other company, Ai2"

Jensen Huang on Nvidia's open models/datasets
September 28, 2025 at 1:18 AM