Ibrahim Alabdulmohsin
@ibomohsin.bsky.social
110 followers 98 following 14 posts
AI research scientist at Google DeepMind, Zürich
Reposted by Ibrahim Alabdulmohsin
neuripsconf.bsky.social
The NeurIPS Call for Papers is now live. Abstracts are due May 11th AoE, with full papers due May 15th AoE. neurips.cc/Conferences/...

Please read about key changes to Dataset and Benchmarks submissions this year in our blog post: blog.neurips.cc/2025/03/10/n...
NeurIPS 2025 Call for Papers
Submit at: https://openreview.net/group?id=NeurIPS.cc/2025/Conference
ibomohsin.bsky.social
Good, but how many recursion rounds do I need? The optimal number of recursion rounds depends on the model size and the training compute budget. Smaller models benefit more from RINS, and RINS also helps more with longer training durations.
ibomohsin.bsky.social
In addition, we introduce *stochastic* RINS, where the number of recursion rounds is sampled from a binomial distribution. This *improves* performance in SigLIP (while also *saving* training FLOPs). In LMs, however, there is a tradeoff between flexibility and the maximum performance gain.
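A minimal sketch of how the stochastic variant could look in Python (the binomial parameters, the per-step sampling granularity, and the clamp to at least one round are my assumptions for illustration, not the authors' exact recipe):

```python
import numpy as np

def sample_rounds(n_max: int = 4, p: float = 0.5, rng=np.random) -> int:
    """Stochastic RINS sketch: draw the number of recursion rounds for this
    training step from a binomial distribution. The expected number of rounds
    is n_max * p, so average training FLOPs are lower than always recursing
    n_max times."""
    # Clamp to >= 1 so the recursed block always runs at least once
    # (an assumption; the post does not specify how zero draws are handled).
    return max(1, int(rng.binomial(n_max, p)))

rounds_this_step = sample_rounds()  # e.g. 2
```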
ibomohsin.bsky.social
Question: what if we use infinite compute? Will the gap vanish? Our scaling analysis shows that RINS improves both the asymptotic performance limit (so the gap actually increases rather than vanishing) and the convergence speed (scaling exponent).
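For readers who want to see what "improves the asymptotic limit and the scaling exponent" means concretely, here is a rough sketch of a scaling-law fit; the saturating power-law form below is a common choice I am assuming, not necessarily the paper's exact parameterization:

```python
import numpy as np
from scipy.optimize import curve_fit

def loss_curve(x, L_inf, a, b):
    """Saturating power law: loss -> L_inf as data/compute x grows;
    b is the scaling exponent (convergence speed)."""
    return L_inf + a * np.power(x, -b)

def fit_scaling_law(x, losses):
    """Fit (L_inf, a, b) to observed losses. A lower L_inf means a better
    asymptotic limit; a larger b means faster convergence."""
    params, _ = curve_fit(loss_curve, x, losses,
                          p0=[1.0, 1.0, 0.5], maxfev=10_000)
    return params  # (L_inf, a, b)
```

The post's claim amounts to the RINS curve fitting with both a lower asymptote and a larger exponent than the baseline, so the gap widens with compute rather than closing.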
ibomohsin.bsky.social
Our inspiration came from the study of self-similarity in language. If patterns are shared across scales, could scale-invariant decoding serve as a good inductive bias for processing language? It turns out that it does!
ibomohsin.bsky.social
To be clear, RINS is trained on *less* data to match the same training FLOPs, which makes this a stronger result than mere "sample efficiency", and one should not simply expect it to work. For example, it does NOT help in image classification, yet RINS works in language and multimodal tasks. Why? (3/n)🤔
ibomohsin.bsky.social
RINS is trivial to implement. After you pick your favorite model & fix your training budget: (1) partition the model into two equally sized blocks, (2) apply recursion to the first block and train for the same amount of compute you had planned, meaning on *fewer* examples. That's it!
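A minimal sketch of steps (1) and (2) in PyTorch-style code (the equal split into two sequential blocks and the `rounds` argument are illustrative assumptions, not the authors' exact implementation):

```python
import torch.nn as nn

class RINSModel(nn.Module):
    """RINS sketch: reuse the first half of the network `rounds` times
    before the second half. Parameter count is unchanged; only per-example
    compute grows with `rounds`."""

    def __init__(self, layers: nn.ModuleList, rounds: int = 2):
        super().__init__()
        half = len(layers) // 2
        self.block_a = nn.Sequential(*layers[:half])   # recursed block
        self.block_b = nn.Sequential(*layers[half:])   # applied once
        self.rounds = rounds

    def forward(self, x):
        for _ in range(self.rounds):  # recursion on the early layers
            x = self.block_a(x)
        return self.block_b(x)
```

Training then runs for the same FLOP budget as the planned baseline, which, as the post notes, means fewer examples are seen.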
ibomohsin.bsky.social
Recursion is trending (e.g. MobileLLM). But recursion adds compute per example, so to show that it helps, one must match training FLOPs; otherwise we could have just trained the baseline longer. Under this matched-compute setup, RINS beats 60+ other recursive methods. (2/n)
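To make the FLOP-matching concrete, here is a back-of-the-envelope sketch (it assumes the two blocks cost the same per example, which is an illustrative simplification):

```python
def flop_matched_examples(baseline_examples: int, rounds: int) -> int:
    """Baseline cost per example ~ 2 block applications (block A + block B).
    With `rounds` recursive applications of block A, cost ~ rounds + 1.
    Holding total training FLOPs fixed therefore shrinks the example count."""
    cost_ratio = (rounds + 1) / 2
    return int(baseline_examples / cost_ratio)

# 2 recursion rounds -> 1.5x the per-example cost -> 2/3 of the examples.
print(flop_matched_examples(900_000_000, rounds=2))  # 600000000
```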
ibomohsin.bsky.social
🔥Excited to introduce RINS, a technique that boosts model performance by recursively applying early layers during inference, without increasing model size or training FLOPs! Not only does it significantly improve LMs, but also multimodal systems like SigLIP.
(1/N)
Reposted by Ibrahim Alabdulmohsin
coreyryung.bsky.social
Pushing live production code cooked up by some young coders over a week of sleepless nights, in place of a legacy system that is fundamental to the operation of the US government, is against every programming best practice.
joshtpm.bsky.social
New Exclusive building on Wired's reporting: Musk operatives have already pushed live to production extensive code changes to the Treasury Department payment system, which makes 95% of the federal government's payments. talkingpointsmemo.com/edblog/musk-...
Musk Cronies Dive Into Treasury Dept Payments Code Base
Overnight Wired reported that contrary to published reports that DOGE operatives at...
ibomohsin.bsky.social
If you are interested in developing large-scale, multimodal datasets & benchmarks, and advancing AI through data-centric research, check out this great opportunity. Our team is hiring!
boards.greenhouse.io/deepmind/job...
Research Scientist, Zurich
Zurich, Switzerland
ibomohsin.bsky.social
Have you wondered why next-token prediction can be such a powerful training objective? Come visit our poster to talk about language and fractals and how to predict downstream performance in LLMs better.

Poster #3105, Fri 13 Dec 4:30-7:30pm
x.com/ibomohsin/st...

See you there!
ibomohsin.bsky.social
The language interface is truly powerful! In LocCa, we show how simple image-captioning pretraining tasks improve localization without specialized vocabulary, while preserving holistic performance → SoTA on RefCOCO!

Poster #3602, Thu 12 Dec 4:30-7:30pm
arxiv.org/abs/2403.19596
LocCa: Visual Pretraining with Location-aware Captioners
Image captioning has been shown as an effective pretraining method similar to contrastive pretraining. However, the incorporation of location-aware information into visual pretraining remains an area ...
ibomohsin.bsky.social
1st, we present recipes for evaluating and improving cultural diversity in contrastive models, with practical, actionable insights.

Poster #3810, Wed 11 Dec 11am-2pm (2/4)
x.com/ibomohsin/st...
ibomohsin.bsky.social
Attending #NeurIPS2024? If you're interested in multimodal systems, building inclusive & culturally aware models, and how fractals relate to LLMs, we have 3 posters for you. I look forward to presenting them on behalf of our GDM team @ Zurich & collaborators. Details below (1/4)
Reposted by Ibrahim Alabdulmohsin
andreaspsteiner.bsky.social
🚀🚀PaliGemma 2 is our updated and improved PaliGemma release using the Gemma 2 models and providing new pre-trained checkpoints for the full cross product of {224px,448px,896px} resolutions and {3B,10B,28B} model sizes.

1/7