Skander Moalla
@skandermoalla.bsky.social
PhD @Caglar Gulcehre Lab for AI Research (CLAIRE) @EPFL. Deep Reinforcement Learning, RLHF, foundation models.
ML Research Template (https://github.com/CLAIRE-Labo/python-ml-research-template)
Reposted by Skander Moalla
The next generation of open LLMs should be inclusive, compliant, and multilingual by design. That’s why we (@icepfl.bsky.social @ethz.ch @cscsch.bsky.social) built Apertus.
EPFL, ETH Zurich & CSCS just released Apertus, Switzerland’s first fully open-source large language model.
Trained on 15T tokens in 1,000+ languages, it’s built for transparency, responsibility & the public good.

Read more: actu.epfl.ch/news/apertus...
September 3, 2025 at 9:26 AM
I’m really proud of this work! It’s been an amazing collaboration with @simonmatrenok.bsky.social and @caglarai.bsky.social

📰 Paper: arxiv.org/abs/2507.08068
Hidden gems and open questions in the 30+ page appendix💎
🧑‍💻 Code: github.com/CLAIRE-Labo/...
🌐 Blog: claire-labo.github.io/quantile-rewar
Quantile Reward Policy Optimization: Alignment with Pointwise Regression and Exact Partition Functions
Aligning large language models with pointwise absolute rewards has so far required online, on-policy algorithms such as PPO and GRPO. In contrast, simpler methods that can leverage offline or off-poli...
arxiv.org
July 15, 2025 at 6:45 PM
What do these optimal policies look like? 👀
We show the equivalence of a family of transformations, allowing us to qualitatively interpret the quantile-reward optimal policy as a Best-of-N policy 🎯
Empirically, each transformation brings different dynamics, and it's an open question to compare all of them! 🕵️
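One standard identity that helps make the Best-of-N reading concrete (added here for context, assuming continuous rewards without ties; notation assumed, not quoted from the paper): the Best-of-N policy reweights the reference policy by a power of the reward quantile,

```latex
\pi_{\mathrm{BoN}}(y \mid x)
  = N\,\pi_{\mathrm{ref}}(y \mid x)\,q(x, y)^{\,N-1},
\qquad
q(x, y) = F_{R_x}\!\big(r(x, y)\big),
```

so, like the quantile-reward optimum \pi^*(y \mid x) \propto \pi_{\mathrm{ref}}(y \mid x)\exp(q(x, y)/\beta), it depends on y only through its reward quantile.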
July 15, 2025 at 6:45 PM
QRPO is a framework. You can shape the optimal policy! 🎛️
We derive a framework around QRPO for using transformations on top of the quantile reward.
Each transformation reshapes the reward distribution and affects the properties of the optimal policy, while having a tractable partition function.
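A minimal way to see why the partition function stays tractable (a sketch under the assumption that a transformation φ is applied to the quantile reward q, which is uniform on [0, 1] under the reference policy; notation assumed here):

```latex
Z_{\varphi}(x)
  = \mathbb{E}_{y \sim \pi_{\mathrm{ref}}(\cdot \mid x)}\!\big[\exp\!\big(\varphi(q(x, y))/\beta\big)\big]
  = \int_0^1 \exp\!\big(\varphi(u)/\beta\big)\, du,
```

a one-dimensional integral that can be evaluated in closed form or numerically for any well-behaved φ, whatever shape it gives to the transformed reward distribution.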
July 15, 2025 at 6:45 PM
And we show that for relatively high beta, with good data, the probabilities increase as predicted 💯
July 15, 2025 at 6:45 PM
For QRPO, this is not a mystery anymore; we know exactly where the probabilities should move, and we explain how it's normal for them to decrease when the regularization (beta) is very low.
This is simply because the target policy is much further away from the training support 🎯
July 15, 2025 at 6:45 PM
Is QRPO still subject to the "chosen probabilities decreasing" problem?
Our understanding of the KL-regularized closed-form solution gives insights into the "DPO chosen probabilities decreasing" problem! 🤔
July 15, 2025 at 6:45 PM
💬 The reward model we use has been trained to be robust to length bias, and we see that this is preserved in QRPO and REBEL, which use rewards.
But when the rewards are compressed to preferences for DPO and SimPO, the typical length-bias trend appears, despite the reduction in mean length.
July 15, 2025 at 6:45 PM
🥇 QRPO achieves top performance in chat and coding compared to DPO, REBEL, and SimPO, each capturing a different way to learn from the reward signal (preference, reward difference, length normalization).
July 15, 2025 at 6:45 PM
Obviously, nothing comes for free, but we give you a great deal! 🤝

* QRPO does not need many reference rewards to estimate quantiles: for high-quality offline datasets, 1-3 are enough!

* And you can scale this number for off-policy data generated from the reference model! 📈
July 15, 2025 at 6:45 PM
3️⃣ We can transform the reward distribution to make it known. It's uniform for reward quantiles! 🔑

🚀 The result: Quantile Reward Policy Optimization!

QRPO transforms rewards to quantile rewards for which we derive Z, and can then fit the closed-form optimal RL solution with a simple regression! 📉
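As a concrete illustration, here is a minimal sketch of the pieces described above (under stated assumptions, not the official implementation; the exact loss form and variable names are assumptions): quantile rewards are uniform on [0, 1] under the reference policy, so Z has a closed form and the policy can be fit by regressing the scaled log-ratio toward a quantile-reward target.

```python
# Hedged sketch of the idea above, not the official QRPO code:
# quantile rewards are uniform on [0, 1] under the reference policy, so
# Z = E[exp(q / beta)] = beta * (exp(1 / beta) - 1) is exact, and the
# KL-regularized optimum can be fit with a pointwise regression.
import math
import torch


def quantile_reward(reward, ref_rewards):
    """Empirical quantile: fraction of reference rewards <= this completion's reward."""
    return (ref_rewards <= reward).float().mean()


def qrpo_regression_loss(logp_policy, logp_ref, q, beta):
    """Regress beta * (log pi - log pi_ref) onto q - beta * log Z.

    From pi*(y|x) = pi_ref(y|x) * exp(q / beta) / Z, taking logs gives
    beta * (log pi* - log pi_ref) = q - beta * log Z, with
    Z = beta * (exp(1 / beta) - 1) computed exactly below.
    """
    log_Z = math.log(beta) + math.log(math.expm1(1.0 / beta))
    target = q - beta * log_Z
    pred = beta * (logp_policy - logp_ref)
    return (pred - target).pow(2).mean()
```

Here `q` would come from `quantile_reward` applied with the handful of reference-completion rewards per prompt mentioned elsewhere in the thread.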
July 15, 2025 at 6:45 PM
1️⃣ The “infinite sum over all possible LLM generations” argument is a myth. We rewrite the partition function Z in terms of rewards, revealing that Z is given by the moment generating function (MGF) of the reward distribution!

2️⃣ Knowing the reward distribution => knowing the MGF => knowing Z 🔐
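For concreteness, here is the rewriting being referred to, in standard KL-regularized-RL notation (the notation is assumed here, not quoted from the paper):

```latex
\pi^*(y \mid x)
  = \frac{\pi_{\mathrm{ref}}(y \mid x)\,\exp\!\big(r(x, y)/\beta\big)}{Z(x)},
\qquad
Z(x)
  = \mathbb{E}_{y \sim \pi_{\mathrm{ref}}(\cdot \mid x)}\!\big[\exp\!\big(r(x, y)/\beta\big)\big]
  = M_{R_x}\!\big(1/\beta\big),
```

where R_x is the reward of a response drawn from the reference policy and M_{R_x} is its moment generating function: the sum over all generations collapses into an expectation over a one-dimensional reward distribution.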
July 15, 2025 at 6:45 PM
We tackle the infamous “... partition function is known to be intractable...” problem 🧐
This is the problem that limits DPO-like methods to pairwise data. We solve it thanks to 3 insights! 💡
July 15, 2025 at 6:45 PM
🚀 Big time! We can finally do simple LLM RL fine-tuning with rewards and leverage offline/off-policy data!

❌ You want rewards, but GRPO only works online?
❌ You want offline, but DPO is limited to preferences?
✅ QRPO can do both!

🧵Here's how we do it:
July 15, 2025 at 6:45 PM
Reposted by Skander Moalla
⚡️🧠 Excited to share our recent work on long-context efficiency! We propose a new layer called RAT—fast and lightweight like RNNs, yet powerful like Attention. 🐭✨ This is the joint effort with Anunay Yadav, @razvan-pascanu.bsky.social @caglarai.bsky.social !
July 12, 2025 at 9:59 AM
Reposted by Skander Moalla
Excited to share our latest work on EvoTune, a novel method integrating LLM-guided evolutionary search and reinforcement learning to accelerate the discovery of algorithms! 1/12🧵
April 26, 2025 at 4:56 PM
Reposted by Skander Moalla
Anastasia @koloskova.bsky.social recently won the European @ellis.eu PhD award, for her amazing work on AI and optimization.

She will be joining University of Zurich as a professor this summer, and hiring PhD students and postdocs. You should apply to her group!

Her website: koloskova.github.io
Anastasia Koloskova
Anastasia Koloskova, PhD student in Machine Learning at EPFL.
koloskova.github.io
March 8, 2025 at 1:53 PM
A dream come true! I presented "No Representation, No Trust" on my favorite RL podcast, TalkRL!
Make sure to check it out to learn why training with PPO for too long makes your agent collapse!
E63: NeurIPS 2024 - Posters and Hallways 1

Jiaheng Hu of UTexas on Unsupervised Skill Discovery for HRL
@skandermoalla.bsky.social of EPFL: Representation and Trust in PPO
Adil Zouitine of IRT Saint Exupery/Hugging Face: Time-Constrained Robust MDPs
March 3, 2025 at 9:36 PM
Reposted by Skander Moalla
Excited to share that the first paper of my PhD has been accepted for publication at the ISPRS Geospatial Week 2025! This dataset paper introduces a globally representative, high-resolution (10m) benchmark dataset for Above Ground Biomass estimation.
January 27, 2025 at 1:21 PM
Reposted by Skander Moalla
For my first post on Bluesky .. I'll start by announcing our 2025 edition of EEML which will be in Sarajevo :) ! I'm really excited about it and hope to see many of you there. Please follow the website (and Bluesky account) for more details which are coming soon ..
Hello Bluesky! 🦋

This will be the official account of the Eastern European Machine Learning (EEML) community.

Follow us for news regarding our summer schools, workshops, education/community initiatives, and more!
December 15, 2024 at 6:39 PM
Also, check out our ML project template—it’s a game-changer!🚀🚀
@caglarai.bsky.social
🧑‍💻 github.com/CLAIRE-Labo/...
December 10, 2024 at 7:39 PM
Ever been puzzled by your PPO agent collapsing out of nowhere? 📈🤯📉 Come check out our poster tomorrow!
Wed 11 Dec 11 am - 2 pm PST
West Ballroom A-D #6403
@caglarai.bsky.social @andreamiele.bsky.social @razvan-pascanu.bsky.social
December 10, 2024 at 6:33 PM