kylelwiggers.bsky.social
@kylelwiggers.bsky.social
Ai2 Comms Lead | [email protected] | Pronouns: he/him
Reposted
Since launching Open Coding Agents, it's been exciting to see how quickly the community has adopted them. Today we're releasing SERA-14B – a new 14B-parameter coding model – plus a major refresh of our open training datasets. 🧵
February 3, 2026 at 5:39 PM
Reposted
Introducing Theorizer: Turning thousands of papers into scientific laws 📚➡️📜

Most automated discovery systems focus on experimentation. Theorizer tackles the other half of science: theory building—compressing scattered findings into structured, testable claims. 🧵
January 28, 2026 at 6:37 PM
Here's just one of the cool apps you can vibe-code with SERA, our new agentic coding model! I was lucky enough to get my hands on it early and it's quite capable via Claude Code. Give it a go today!
January 27, 2026 at 8:29 PM
Reposted
Introducing Ai2 Open Coding Agents—starting with SERA, our first-ever coding models. Fast, accessible agents (8B–32B) that adapt to any repo, including private codebases. Train a powerful specialized agent for as little as ~$400, & it works with Claude Code out of the box. 🧵
January 27, 2026 at 4:13 PM
Reposted
Introducing HiRO-ACE: an AI framework that makes highly detailed climate simulations dramatically more accessible. It generates decades of high-resolution precipitation data for any region in a day on a single GPU—no supercomputing cluster required. 🧵
January 21, 2026 at 7:34 PM
Reposted
Last year Molmo set SOTA on image benchmarks + pioneered image pointing. Millions of downloads later, Molmo 2 brings Molmo’s grounded multimodal capabilities to video 🎥—and leads many open models on challenging industry video benchmarks. 🧵
December 16, 2025 at 4:52 PM
Reposted
Introducing Bolmo, a new family of byte-level language models built by "byteifying" our open Olmo 3—and to our knowledge, the first fully open byte-level LM to match or surpass SOTA subword models across a wide range of tasks. 🧵
December 15, 2025 at 5:19 PM
Reposted
Olmo 3.1 is here. We extended our strongest RL run and scaled our instruct recipe to 32B—releasing Olmo 3.1 Think 32B & Olmo 3.1 Instruct 32B, our most capable models yet. 🧵
December 12, 2025 at 5:14 PM
Reposted
Update: DataVoyager, which we launched in Preview early this fall, is now available in Asta. 🎉
You can upload real datasets, ask complex research questions in natural language, & get back reproducible answers + visualizations. 🔍📊
December 8, 2025 at 8:47 PM
Reposted
Olmo 3 is now available through @hf.co Inference Providers, thanks to Public AI! 🎉
This means you can run our fully open 7B and 32B models — including Think and Instruct variants — via serverless API with no infrastructure to manage.
November 28, 2025 at 4:50 PM
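For context, a minimal sketch of what that serverless call might look like with the huggingface_hub client. The model ID is an assumption for illustration, not taken from the post; check the allenai org on hf.co for the exact repo name.

```python
# Hedged sketch: calling Olmo 3 through Hugging Face Inference Providers.
from huggingface_hub import InferenceClient

client = InferenceClient()  # picks up HF_TOKEN from the environment

response = client.chat_completion(
    messages=[{"role": "user", "content": "What makes Olmo 3 fully open?"}],
    model="allenai/Olmo-3-7B-Instruct",  # hypothetical model ID; verify on hf.co
    max_tokens=256,
)
print(response.choices[0].message.content)
```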
Reposted
Our Olmo 3 models are now available via API on @openrouter.bsky.social. Try Olmo 3-Instruct (7B) for chat & tool use, and our reasoning models Olmo 3-Think (7B & 32B) for more complex problems.
November 22, 2025 at 1:58 AM
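A hedged sketch of querying one of these models through OpenRouter's OpenAI-compatible endpoint. The model slug is an assumption, not confirmed by the post; check openrouter.ai/models for the exact ID.

```python
# Hedged sketch: OpenRouter exposes an OpenAI-compatible chat completions API.
from openai import OpenAI

client = OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="YOUR_OPENROUTER_API_KEY",  # replace with your key
)

completion = client.chat.completions.create(
    model="allenai/olmo-3-7b-instruct",  # hypothetical slug; verify on openrouter.ai
    messages=[{"role": "user", "content": "Briefly explain chain-of-thought reasoning."}],
)
print(completion.choices[0].message.content)
```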
Reposted
Announcing Olmo 3, a leading fully open LM suite built for reasoning, chat, & tool use, and an open model flow—not just the final weights, but the entire training journey.
Best fully open 32B reasoning model & best 32B base model. 🧵
November 20, 2025 at 2:37 PM
Reposted
Today we’re releasing Deep Research Tulu (DR Tulu)—the first fully open, end-to-end recipe for long-form deep research, plus an 8B agent you can use right away. Train agents that plan, search, synthesize, & cite across sources, making expert research more accessible. 🧭📚
November 18, 2025 at 3:31 PM
Reposted
Introducing OlmoEarth 🌍, state-of-the-art AI foundation models paired with ready-to-use open infrastructure to turn Earth data into clear, up-to-date insights within hours—not years.
November 4, 2025 at 2:52 PM
Reposted
Our fully open Olmo models enable rigorous, reproducible science—from unlearning to clinical NLP, math learning, & fresher knowledge. Here’s how the research community has leveraged Olmo to make the entire AI ecosystem better + more transparent for all. 🧵
October 24, 2025 at 6:36 PM
Reposted
We’re updating olmOCR, our model for turning PDFs & scans into clean text with support for tables, equations, handwriting, & more. olmOCR 2 uses synthetic data + unit tests as verifiable rewards to reach state-of-the-art performance on challenging documents. 🧵
October 22, 2025 at 4:09 PM
Reposted
📊 Today we're releasing data showing which scientific papers our AI research tool Asta cites most frequently. Think of it as creating citation counts for the AI era—tracking which research is actually powering AI answers across thousands of queries. 🧵
October 8, 2025 at 6:26 PM
Reposted
Introducing Asta DataVoyager—our new AI capability in Asta that turns structured data into transparent, reproducible insights. Built for scientists, grounded in open, inspectable workflows. 🧵
October 1, 2025 at 1:02 PM
Reposted
"We check in more open-source [AI] in the world than just anybody, its just one other company, Ai2"

Jensen Huang on Nvidia's open models/datasets
September 28, 2025 at 1:18 AM
Reposted
🎙️ Say hello to OLMoASR—our fully open, from-scratch speech-to-text (STT) model. Trained on a curated audio-text set, it boosts zero-shot ASR and now powers STT in the Ai2 Playground. 👇
August 28, 2025 at 4:13 PM
Reposted
Today we’re releasing agent-baselines, a suite of 22 classes of AI agents for science—including 9 open-source research-tuned agents like our state-of-the-art, benchmark-leading Asta v0. 🚀🔬
Part of our Asta ecosystem to advance scientific AI. 👇
August 26, 2025 at 7:45 PM
Reposted
As part of Asta, our initiative to accelerate science with trustworthy AI agents, we built AstaBench—the first comprehensive benchmark to compare them. ⚖️
August 26, 2025 at 3:02 PM
Reposted
Introducing Asta—our bold initiative to accelerate science with trustworthy, capable agents, benchmarks, & developer resources that bring clarity to the landscape of scientific AI + agents. 🧵
August 26, 2025 at 1:05 PM
Reposted
LLMs power research, decision‑making, and exploration—but most benchmarks don't test how well they stitch together evidence across dozens (or hundreds) of sources. Meet MoNaCo, our new question-answering eval for cross‑source reasoning. 👇
August 18, 2025 at 4:05 PM