Oussama Zekri
@ozekri.bsky.social
61 followers 120 following 20 posts
ENS Saclay maths dpt + UW Research Intern. Website : https://oussamazekri.fr Blog : https://logb-research.github.io/
Pinned
ozekri.bsky.social
🚀 Did you know you can use the in-context learning abilities of an LLM to estimate the transition probabilities of a Markov chain?

The results are pretty exciting! 😄
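A minimal sketch of the idea, assuming each state maps to a single token and using a generic HuggingFace causal LM ("gpt2" is just an illustrative choice): renormalizing the next-token probabilities over the state tokens gives an in-context estimate of a transition row.

```python
# Minimal sketch: read a transition row off an LLM's next-token logits.
# Assumptions: each state is a single token; "gpt2" is an illustrative model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

states = [" A", " B", " C"]                    # one token per state in GPT-2
state_ids = [tok.encode(s)[0] for s in states]

history = " A B A C"                           # an observed trajectory of the chain
inputs = tok(history, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]     # logits for the next token

# Renormalize over the state tokens only: this is the in-context
# estimate of P(next state | history ending in the current state).
probs = torch.softmax(logits[state_ids], dim=0)
print(dict(zip(states, probs.tolist())))
```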
Reposted by Oussama Zekri
qberthet.bsky.social
🚨 New paper on regression and classification!

Adding to the discussion on using least-squares or cross-entropy, regression or classification formulations of supervised problems!

A thread on how to bridge these problems:
ozekri.bsky.social
You mean, we don’t stop at the frontier of the convex set but just a bit further?

Wow, does this trick have a name?
ozekri.bsky.social
Looks nice!! Will stop by your notebooks
ozekri.bsky.social
Working with him these past months has been both fun and inspiring. He’s an incredibly talented researcher! 🚀

If you haven’t heard of him, check out his work: he’s one of the pioneers of operator learning and is pushing this field to new heights!
Nicolas Boullé
About me
nboulle.github.io
ozekri.bsky.social
Thanks for reading!

❤️ Work done during my 3-month internship at Imperial College!

A huge thanks to Nicolas Boullé (nboulle.github.io) for letting me work on a topic that interested me a lot during the internship.
Nicolas Boullé
About me
nboulle.github.io
ozekri.bsky.social
We fine-tuned a discrete diffusion model to respond to user prompts. In just 7k iterations (GPU poverty is real, haha), it outperforms the vanilla model ~75% of the time! 🚀
ozekri.bsky.social
Building on this, we can correct the gradient direction to better **follow the flow**, using the implicit function theorem (cf. @mblondel.bsky.social et al., arxiv.org/abs/2105.15183) ✨

The cool part? We only need to solve a linear system, and its inverse is known in closed form! 🔥
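For intuition, a toy scalar sketch of the implicit function theorem step (F(x, θ) = x² − θ is purely illustrative, not the paper's actual system):

```python
# Toy sketch of the implicit function theorem: if x*(θ) solves F(x, θ) = 0,
# then dx*/dθ = -(∂F/∂x)^{-1} ∂F/∂θ, i.e. one linear solve, no backprop
# through the solver. Here F(x, θ) = x² - θ (so x* = √θ); illustrative only.
import numpy as np

theta = 2.0
x_star = np.sqrt(theta)          # pretend this came from a black-box solver

dF_dx = 2.0 * x_star             # ∂F/∂x evaluated at the solution
dF_dtheta = -1.0                 # ∂F/∂θ
grad = -dF_dtheta / dF_dx        # the (scalar) linear system of the IFT

print(grad, 1.0 / (2.0 * np.sqrt(theta)))   # matches the analytic d√θ/dθ
```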
ozekri.bsky.social
Inspired by Implicit Diffusion (@pierremarion.bsky.social @akorba.bsky.social @qberthet.bsky.social🤓, arxiv.org/abs/2402.05468), we sample using a specific CTMC, reaching the limiting distribution in an infinite time horizon. This effectively implements a gradient flow w.r.t. a Wasserstein metric!🔥
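A minimal Gillespie-style simulation, assuming an arbitrary illustrative rate matrix Q: run the CTMC long enough and the time-averaged occupancy approaches its stationary distribution.

```python
# Minimal Gillespie-style CTMC simulation on a finite state space.
# The rate matrix Q (rows sum to zero) is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
Q = np.array([[-1.0,  0.7,  0.3],
              [ 0.4, -1.0,  0.6],
              [ 0.5,  0.5, -1.0]])

state, t, horizon = 0, 0.0, 10_000.0
occupancy = np.zeros(3)
while t < horizon:
    rate = -Q[state, state]
    dt = rng.exponential(1.0 / rate)           # holding time in the current state
    occupancy[state] += dt
    t += dt
    jump = Q[state].clip(min=0.0) / rate       # jump kernel (off-diagonal rates)
    state = rng.choice(3, p=jump)

print(occupancy / occupancy.sum())             # ≈ the stationary distribution
```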
ozekri.bsky.social
SEPO, like most policy optimization algorithms, alternates between sampling and optimization. But what if sampling itself was seen as an optimization procedure in distribution space? 🚀
ozekri.bsky.social
If you have a discrete diffusion model (naturally designed for discrete data, e.g. language or DNA sequence modeling), you can finetune it with non-differentiable reward functions! 🎯

For example, this enables RLHF for discrete diffusion models, making alignment more flexible and powerful. ✅
ozekri.bsky.social
The main gradient takes the form of a weighted log concrete score, echoing DeepSeek’s unified paradigm with the weighted log policy!🔥

From this, we can recover any policy gradient method for discrete diffusion models (e.g. PPO, GRPO, etc.). 🚀
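Schematically, the estimator has the REINFORCE shape, with the log policy swapped for a log concrete score. A toy PyTorch sketch (shapes, weights, and names are illustrative, not SEPO's exact formula):

```python
# Schematic sketch of a reward-weighted log-score gradient, the
# REINFORCE-style shape behind the estimator (illustrative only).
import torch

batch, vocab = 4, 16
log_score = torch.randn(batch, vocab, requires_grad=True)  # log concrete score s_θ
moves = torch.randint(vocab, (batch,))                     # sampled transitions
rewards = torch.randn(batch)                               # possibly non-differentiable

chosen = log_score[torch.arange(batch), moves]             # log s_θ at the samples
loss = -(rewards * chosen).mean()                          # ∇loss = -E[r · ∇ log s_θ]
loss.backward()
print(log_score.grad.shape)                                # gradient w.r.t. the score
```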
ozekri.bsky.social
The main bottleneck of Energy-Based Models is computing the normalizing constant Z.

Instead, recent discrete diffusion models skip Z by learning ratios of probabilities. This forms the concrete score, which a neural network models efficiently!⚡

The challenge? Using this score network as a policy.
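A toy example of why ratios sidestep Z: for an EBM p(x) = exp(−E(x))/Z, the ratio p(y)/p(x) = exp(E(x) − E(y)), so Z cancels (the energy below is purely illustrative):

```python
# Why ratios need no Z: for an EBM p(x) = exp(-E(x)) / Z,
# p(y)/p(x) = exp(E(x) - E(y)), and Z cancels out entirely.
import numpy as np

def energy(x):
    return 0.5 * np.sum(x ** 2)         # a toy quadratic energy

x = np.array([1.0, 0.0])
y = np.array([0.0, 0.0])                # a neighboring sequence/state

ratio = np.exp(energy(x) - energy(y))   # = p(y)/p(x), Z never computed
print(ratio)                            # e^{0.5} here
```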
ozekri.bsky.social
🚀 Policy gradient methods like DeepSeek’s GRPO are great for finetuning LLMs via RLHF.

But what happens when we swap autoregressive generation for discrete diffusion, a rising architecture promising faster & more controllable LLMs?

Introducing SEPO!

📑 arxiv.org/pdf/2502.01384

🧵👇
ozekri.bsky.social
Beautiful work!!
ambroiseodt.bsky.social
🚀Proud to share our work on the training dynamics in Transformers with Wassim Bouaziz & @viviencabannes.bsky.social @Inria @MetaAI

📝Easing Optimization Paths arxiv.org/pdf/2501.02362 (accepted @ICASSP 2025 🥳)

📝Clustering Heads 🔥https://arxiv.org/pdf/2410.24050

🖥️ github.com/facebookrese...

1/🧵
Reposted by Oussama Zekri
lebellig.bsky.social
For the French-speaking audience, S. Mallat's courses at the Collège de France on data generation in AI by transport and denoising have just started. I highly recommend them, as I've learned a lot from the overall vision of his courses.

Recordings are also available: www.youtube.com/watch?v=5zFh...
Génération de données en IA par transport et débruitage (1) - Stéphane Mallat (2024-2025)
YouTube video by Mathématiques et informatique - Collège de France
www.youtube.com
Reposted by Oussama Zekri
arnauddoucet.bsky.social
Speculative sampling accelerates inference in LLMs by drafting future tokens which are verified in parallel. With @vdebortoli.bsky.social, A. Galashov & @arthurgretton.bsky.social, we extend this approach to (continuous-space) diffusion models: arxiv.org/abs/2501.05370
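For context, a minimal sketch of the token-level draft-and-verify step that the paper generalizes to continuous diffusion (the distributions p and q are made up):

```python
# Token-level draft-and-verify, the core of speculative sampling
# (p = target distribution, q = cheap draft; illustrative numbers).
import numpy as np

rng = np.random.default_rng(0)
p = np.array([0.5, 0.3, 0.2])           # target model's next-token probabilities
q = np.array([0.2, 0.5, 0.3])           # draft model's next-token probabilities

x = rng.choice(3, p=q)                  # the draft proposes a token
if rng.random() < min(1.0, p[x] / q[x]):
    token = x                           # accept: output is exactly p-distributed
else:
    residual = np.maximum(p - q, 0.0)   # reject: resample from the residual
    token = rng.choice(3, p=residual / residual.sum())
print(token)
```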
ozekri.bsky.social
I couldn’t have said it better myself!
Reposted by Oussama Zekri
konstmish.bsky.social
The idea that one needs to know a lot of advanced math to start doing research in ML seems so wrong to me. Instead of reading books for weeks and forgetting most of them a year later, I think it's much better to try to do things, see what knowledge gaps prevent you from doing them, and only then read.
ozekri.bsky.social
This equivalence between LLMs and Markov chains may seem useless, but it isn't! Among the contributions, the paper establishes bounds thanks to this equivalence and verifies the influence of the bound terms on recent LLMs!

I invite you to take a look at the other contributions of the paper 🙂
ozekri.bsky.social
This number is huge, but **finite**! Working with Markov chains in a finite state space really gives non-trivial mathematical insights (existence and uniqueness of a stationary distribution, for example...).
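A tiny sketch of that insight: for an irreducible, aperiodic finite chain, power iteration from any initial distribution converges to the unique stationary one (the transition matrix P is an arbitrary illustrative choice):

```python
# With a finite state space, an irreducible aperiodic chain has a unique
# stationary distribution; power iteration finds it.
import numpy as np

P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])          # rows sum to 1

pi = np.ones(3) / 3                      # any initial distribution works
for _ in range(1_000):
    pi = pi @ P                          # push the distribution forward

print(pi, pi @ P)                        # pi ≈ pi P: it is stationary
```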
Reposted by Oussama Zekri
ambroiseodt.bsky.social
🚨So, you want to predict your model's performance at test time?🚨

💡Our NeurIPS 2024 paper proposes 𝐌𝐚𝐍𝐨, a training-free and SOTA approach!

📑 arxiv.org/pdf/2405.18979
🖥️ https://github.com/Renchunzi-Xie/MaNo

1/🧵(A surprise at the end!)
Reposted by Oussama Zekri
gabrielpeyre.bsky.social
I wrote a summary of the main ingredients of the neat proof by Hugo Lavenant that diffusion models do not generally define optimal transport. github.com/mathematical...