Ameya P.
@bayesiankitten.bsky.social
570 followers 130 following 15 posts
Postdoctoral Researcher @ Bethgelab, University of Tübingen Benchmarking | LLM Agents | Data-Centric ML | Continual Learning | Unlearning drimpossible.github.io
Reposted by Ameya P.
elliot-eu.bsky.social
🚀 A new era in European #AIresearch begins!

ELLIOT is a €25M #HorizonEurope project launching July 2025 to build open, trustworthy Multimodal Generalist Foundation Models.
30 partners, 12 countries, EU values.

🔗 Press release: apigateway.agilitypr.com/distribution...
Reposted by Ameya P.
andreasgeiger.bsky.social
🚀 Never miss a beat in science again!

📬 Scholar Inbox is your personal assistant for staying up to date with your literature. It includes: visual summaries, collections, search and a conference planner.

Check out our white paper: arxiv.org/abs/2504.08385
#OpenScience #AI #RecommenderSystems
Reposted by Ameya P.
ahochlehnert.bsky.social
🧵1/ 🚨 New paper: A Sober Look at Progress in Language Model Reasoning
We re-evaluate recent SFT and RL models for mathematical reasoning and find most gains vanish under rigorous, multi-seed, standardized evaluation.

📊 bethgelab.github.io/sober-reason...
📄 arxiv.org/abs/2504.07086
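The core of multi-seed evaluation can be sketched in a few lines: report a mean and interval across seeds instead of a single run, and treat a "gain" inside the noise band as no gain. The numbers and the interval formula below are illustrative assumptions, not the paper's protocol.

```python
import statistics

def seed_summary(accs):
    """Mean and a rough 95% interval across evaluation seeds
    (a simple normal-approximation interval; the paper's protocol
    is more involved)."""
    mean = statistics.mean(accs)
    half = 1.96 * statistics.stdev(accs) / len(accs) ** 0.5
    return mean, (mean - half, mean + half)

# Hypothetical per-seed accuracies: the "improved" model's gain is
# smaller than the seed-to-seed variance, so the intervals overlap.
base_mean, base_ci = seed_summary([52.1, 49.8, 51.0, 48.9, 50.2])
new_mean, new_ci = seed_summary([52.9, 50.1, 51.5, 49.4, 50.6])
```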
Reposted by Ameya P.
cslg-bot.bsky.social
Hochlehnert, Bhatnagar, Udandarao, Albanie, Prabhu, Bethge: A Sober Look at Progress in Language Model Reasoning: Pitfalls and Paths to Reproducibility https://arxiv.org/abs/2504.07086 https://arxiv.org/pdf/2504.07086 https://arxiv.org/html/2504.07086
bayesiankitten.bsky.social
Great work! A much-needed upgrade for continual learning datasets—excited to see progress on long-timespan tasks beyond classification. Deets below👇
lukasthede.bsky.social
🧠 Keeping LLMs factually up to date is a common motivation for knowledge editing.

But what would it actually take to support this in practice at the scale and speed the real world demands?

We explore this question and really push the limits of lifelong knowledge editing in the wild.
👇
bayesiankitten.bsky.social
Deadline extended to March 19 for the EVAL-FoMo workshop @cvprconference.bsky.social! We welcome submissions (incl. published papers) analyzing emerging capabilities & limits in visual foundation models.

Details: sites.google.com/view/eval-fo...
#CVPR2025
bayesiankitten.bsky.social
LMs excel at solving problems (~48% success) but falter at debunking them (<9% counterexample rate)!

This could form an AI Brandolini's law: "The capability needed to refute bullshit is far larger than that needed to generate it."
shiven-s.bsky.social
AI can generate correct-seeming hypotheses (and papers!). Brandolini's law states BS is harder to refute than generate. Can LMs falsify incorrect solutions? o3-mini (high) scores just 9% on our new benchmark REFUTE. Verification is not necessarily easier than generation 🧵
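The falsification setup can be sketched in miniature: given a candidate solution and a trusted brute-force reference, search small random inputs for a disagreement. Everything here (the max-subarray task, the buggy candidate, the search routine) is a hypothetical illustration, not part of the REFUTE benchmark itself.

```python
import random

def brute_force_max_subarray(xs):
    # Trusted O(n^2) reference: maximum sum over all nonempty contiguous subarrays.
    return max(sum(xs[i:j]) for i in range(len(xs)) for j in range(i + 1, len(xs) + 1))

def buggy_max_subarray(xs):
    # Candidate solution that wrongly assumes the answer is never negative,
    # so it fails on all-negative inputs.
    best, cur = 0, 0
    for x in xs:
        cur = max(0, cur + x)
        best = max(best, cur)
    return best

def find_counterexample(candidate, reference, trials=1000, seed=0):
    # Random search over small inputs for a case where the two disagree.
    rng = random.Random(seed)
    for _ in range(trials):
        xs = [rng.randint(-5, 5) for _ in range(rng.randint(1, 6))]
        if candidate(xs) != reference(xs):
            return xs
    return None

cex = find_counterexample(buggy_max_subarray, brute_force_max_subarray)
```

Refutation here only requires exhibiting one such input; the benchmark asks whether LMs can do this reliably.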
Reposted by Ameya P.
prasannamayil.bsky.social
New preprint out! 🎉

How does LLM training loss translate to downstream performance?

We show that pretraining data and tokenizer shape loss-to-loss scaling, while architecture and other factors play a surprisingly minor role!
brendel-group.github.io/llm-line/ 🧵1/8
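A minimal sketch of what a loss-to-loss fit can look like: regress log downstream loss linearly on log pretraining loss, i.e. a power law L_down ≈ c · L_train^κ. The data points and exponent below are fabricated for illustration; see the paper for the actual functional forms.

```python
import numpy as np

# Hypothetical (train_loss, downstream_loss) pairs from models of increasing scale,
# generated here to follow L_down = 0.5 * L_train^1.4 exactly.
train_loss = np.array([3.2, 2.9, 2.6, 2.4, 2.2])
downstream_loss = np.array([2.548, 2.220, 1.905, 1.703, 1.508])

# Fit in log space: log(L_down) ≈ kappa * log(L_train) + log(c).
X = np.vstack([np.log(train_loss), np.ones_like(train_loss)]).T
kappa, log_c = np.linalg.lstsq(X, np.log(downstream_loss), rcond=None)[0]

# Recovered power law lets us translate a pretraining loss into a
# predicted downstream loss.
predicted = np.exp(log_c) * train_loss ** kappa
```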
bayesiankitten.bsky.social
CuratedThoughts: Data curation focus for RL post-training! (Update 1) 🚀

25% of OpenThoughts-114k-math filtered out — issues included proofs, missing figures, and multiple questions with a single answer.

Check out work by
@ahochlehnert.bsky.social & @hrdkbhatnagar.bsky.social
below 👇
ahochlehnert.bsky.social
CuratedThoughts: Data Curation for RL Datasets 🚀

Since DeepSeek-R1 introduced reasoning-based RL, datasets like Open-R1 & OpenThoughts emerged for fine-tuning & GRPO. Our deep dive found major flaws — 25% of OpenThoughts needed to be removed through data curation.

Here's why 👇🧵
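As an illustration of the kind of rule-based filtering described — the actual pipeline is the authors'; these heuristics are my guesses — here is a sketch flagging the three failure modes mentioned: proof-style problems, references to missing figures, and multiple questions with one final answer.

```python
import re

def should_filter(problem: str) -> bool:
    """Heuristic curation filters in the spirit of the issues described
    above (illustrative rules, not the authors' actual pipeline)."""
    text = problem.lower()
    # 1) Proof-style problems lack a single verifiable final answer for RL reward.
    if re.search(r"\b(prove|show that|justify)\b", text):
        return True
    # 2) Problems referencing figures that are not included in the text.
    if re.search(r"\b(figure|diagram|as shown)\b", text):
        return True
    # 3) Several sub-questions, but only one final answer to check against.
    if text.count("?") > 1:
        return True
    return False
```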
Reposted by Ameya P.
askoepke.bsky.social
Our 2nd Workshop on Emergent Visual Abilities and Limits of Foundation Models (EVAL-FoMo) is accepting submissions. We are looking forward to talks by our amazing speakers that include @saining.bsky.social, @aidanematzadeh.bsky.social, @lisadunlap.bsky.social, and @yukimasano.bsky.social. #CVPR2025
bayesiankitten.bsky.social
🔥 #CVPR2025 Submit your cool papers to Workshop on
Emergent Visual Abilities and Limits of Foundation Models 📷📷🧠🚀✨

sites.google.com/view/eval-fo...

Submission Deadline: March 12th!
EVAL-FoMo 2
A Vision workshop on Evaluations and Analysis
sites.google.com
bayesiankitten.bsky.social
LMs are used for annotation, evaluation and distillation! We identify critical issues!

LMs of a similar capability class (not model family tho!) behave similarly and this skews oversight far more than I expected.

Check the 4-in-1 mega paper below to 👀 how 👇
joschkastrueber.bsky.social
🚨Great Models Think Alike and this Undermines AI Oversight🚨
New paper quantifies LM similarity
(1) LLM-as-a-judge favor more similar models🤥
(2) Complementary knowledge benefits Weak-to-Strong Generalization☯️
(3) More capable models have more correlated failures 📈🙀
🧵👇
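One simple way to quantify "models think alike" is chance-corrected agreement between per-sample correctness vectors. The sketch below uses Cohen's kappa as a stand-in; the paper proposes its own accuracy-adjusted similarity metric, so treat this as an approximation of the idea, not their measure.

```python
def error_agreement_kappa(correct_a, correct_b):
    """Cohen's kappa between two models' per-sample correctness vectors
    (1 = right, 0 = wrong): observed agreement corrected for the agreement
    expected by chance given each model's accuracy. Assumes the vectors
    are non-degenerate (expected agreement < 1)."""
    n = len(correct_a)
    observed = sum(a == b for a, b in zip(correct_a, correct_b)) / n
    p_a = sum(correct_a) / n
    p_b = sum(correct_b) / n
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)
    return (observed - expected) / (1 - expected)
```

High kappa between a judge and the model it evaluates is exactly the failure mode the thread describes: similarity inflates apparent quality.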
bayesiankitten.bsky.social
Can better representation learning help? No!

RanDumb recovers 70-90% of the joint performance.

Forgetting isn't the main issue—the benchmarks are too toy!

Key Point: Current OCL benchmarks are too constrained for any effective learning of online continual representations!
bayesiankitten.bsky.social
Across a wide range of online continual learning benchmarks, RanDumb consistently surpasses prior methods (even the latest contrastive & meta strategies), often by surprisingly large margins!
bayesiankitten.bsky.social
Continual learning assumes the deep representations learned outperform old-school kernel classifiers (as they do in supervised DL). But this assumption isn't validated!!

Why might it not hold? Updates are limited and networks may not converge.

We find: OCL representations are severely undertrained!
bayesiankitten.bsky.social
How RanDumb works: Fix a random embedder to transform raw pixels. Train a linear classifier on top—single pass, one sample at a time, no stored exemplars. Order-invariant, worst-case ready🚀

Looks familiar? This is streaming (approx.) Kernel LDA!!
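A toy version of that recipe, assuming random Fourier features for the fixed embedder and swapping the paper's streaming kernel LDA for the closely related streaming ridge regression onto one-hot labels (all names and hyperparameters here are illustrative, not the authors' code):

```python
import numpy as np

class RanDumbSketch:
    """Fixed random embedding of raw inputs + a linear classifier fit in a
    single online pass, one sample at a time, with no stored exemplars.
    The accumulated sums make the updates order-invariant."""

    def __init__(self, in_dim, embed_dim, n_classes, seed=0, ridge=1e-2):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(in_dim, embed_dim))      # random, never trained
        self.b = rng.uniform(0, 2 * np.pi, size=embed_dim)
        self.A = np.eye(embed_dim) * ridge                 # running Z^T Z + ridge*I
        self.B = np.zeros((embed_dim, n_classes))          # running Z^T Y (one-hot Y)

    def embed(self, x):
        return np.cos(x @ self.W + self.b)                 # random Fourier features

    def partial_fit(self, x, y):
        z = self.embed(x)
        self.A += np.outer(z, z)
        self.B[:, y] += z

    def predict(self, x):
        weights = np.linalg.solve(self.A, self.B)          # closed-form ridge fit
        return int(np.argmax(self.embed(x) @ weights))
```

Streaming it over two well-separated toy classes, one sample at a time, already yields a working classifier with zero representation learning.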
bayesiankitten.bsky.social
New Work: RanDumb!🚀

Poster @NeurIPS, East Hall #1910, come say hi👋

Core claim: Random representations Outperform Online Continual Learning Methods!

How: We replace the deep network with a *random projection* and a linear classifier, yet outperform all OCL methods by huge margins [1/n]
Reposted by Ameya P.
bayesiankitten.bsky.social
Breaking the 8-model merge limit was tough, but we scaled to merging 200+ models! The secret? Iterative finetuning + merging *over time*.

The time axis unlocks scalable mergeability. Merging has surprising scaling gains across size & compute budgets.

All the gory details ⬇️
dziadzio.bsky.social
📄 New Paper: "How to Merge Your Multimodal Models Over Time?"

arxiv.org/abs/2412.06712

Model merging assumes all finetuned models are available at once. But what if they need to be created over time?

We study Temporal Model Merging through the TIME framework to find out!

🧵
How to Merge Your Multimodal Models Over Time?
Model merging combines multiple expert models - finetuned from a base foundation model on diverse tasks and domains - into a single, more capable model. However, most existing model merging approaches...
arxiv.org
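One minimal instantiation of temporal merging, assuming the simplest protocol: fold each newly finetuned expert into a running average as it arrives. The TIME framework compares many such choices of initialization and merging technique, so this is only one point in that design space, not the paper's method.

```python
def merge_over_time(base, experts, alpha=0.5):
    """Sequentially merge experts into a running model via an exponential
    moving average over parameters. `base` and each expert map parameter
    names to floats (or arrays); in the full setting, each finetuning round
    would start from the current merged model."""
    merged = dict(base)
    for expert in experts:  # experts arrive one at a time over training rounds
        for name, value in expert.items():
            merged[name] = (1 - alpha) * merged[name] + alpha * value
    return merged
```

Because each step only touches the current merged model and the newest expert, the number of experts merged this way is unbounded — which is what makes the time axis a route past fixed-size merges.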
bayesiankitten.bsky.social
How do we benchmark the vast capabilities of foundation models? Introducing ONEBench – a unifying benchmark to test them all, led by
@adhirajghosh.bsky.social and
@dziadzio.bsky.social!⬇️

Sample-level benchmarks could be the next generation: reusable, recombinable & able to evaluate many capabilities!
adhirajghosh.bsky.social
🚨Looking to test your foundation model on an arbitrary and open-ended set of capabilities, not explicitly captured by static benchmarks? 🚨

Check out ✨ONEBench✨, where we show how sample-level evaluation is the solution.

🔎 arxiv.org/abs/2412.06745
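The sample-level idea can be sketched as pooling heterogeneous per-sample results and ranking models over their union. The plain mean used below is a placeholder of my own; ONEBench's actual aggregation handles sparse, incomparable coverage with proper rank aggregation.

```python
def rank_models(results):
    """Rank models from sample-level results.
    `results` maps model -> {sample_id: score in [0, 1]}; the sample sets
    may differ per model (sparse coverage), since samples can be drawn
    from and recombined across many benchmarks."""
    means = {
        model: sum(scores.values()) / len(scores)
        for model, scores in results.items() if scores
    }
    return sorted(means, key=means.get, reverse=True)
```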
bayesiankitten.bsky.social
Come chat with us @ NeurIPS for hot takes on the future of continual learning with foundation models!
confusezius.bsky.social
😵‍💫 Continually pretraining large multimodal models to keep them up to date all the time is tough, covering everything from adapters, merging, and meta-scheduling to data design and more!

So I'm really happy to present our large-scale study at #NeurIPS2024!

Come drop by to talk about all that and more!