Quentin Gallouédec
@qgallouedec.hf.co
910 followers 170 following 17 posts
PhD - Research @hf.co 🤗 TRL maintainer
qgallouedec.hf.co
It started as a modest project to offer a free, open-source alternative to MuJoCo environments. Today, panda-gym has been downloaded over 100k times and is cited in over 100 papers. 🦾
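For context, a minimal usage sketch (assuming a recent panda-gym release that registers Gymnasium environments; the env id below is just one example task):

```python
# Minimal panda-gym sketch (assumes panda-gym v3+, which registers Gymnasium envs;
# "PandaReach-v3" is one example task).
import gymnasium as gym
import panda_gym  # noqa: F401  # importing registers the Panda environments

env = gym.make("PandaReach-v3")
observation, info = env.reset(seed=0)

for _ in range(100):
    action = env.action_space.sample()  # random policy, just to step the env
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        observation, info = env.reset()

env.close()
```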
qgallouedec.hf.co
🚀 TRL 0.14 – Featuring GRPO! 🚀

TRL 0.14 brings *GRPO*, the RL algorithm behind 🐳 DeepSeek-R1.

⚡ Blazing fast generation with vLLM integration.
📉 Optimized training with DeepSpeed ZeRO 1/2/3.
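A minimal training sketch (assuming TRL ≥ 0.14; the model, dataset, and toy length-based reward below are illustrative placeholders, not the DeepSeek-R1 recipe):

```python
# Minimal GRPO sketch with TRL's GRPOTrainer (TRL >= 0.14 assumed).
# Model, dataset, and reward function are illustrative placeholders.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

# Toy reward: prefer completions of roughly 100 characters.
def reward_len(completions, **kwargs):
    return [-abs(100 - len(completion)) for completion in completions]

training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```

If your TRL version exposes it, setting `use_vllm=True` in `GRPOConfig` enables the vLLM-backed generation mentioned above (vLLM must be installed).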
qgallouedec.hf.co
Last moments of closed-source AI 🪦:
Hugging Face is openly reproducing the pipeline of 🐳 DeepSeek-R1. Open data, open training, open models, open collaboration.

🫵 Let's go!
github.com/huggingface/...
GitHub - huggingface/open-r1: Fully open reproduction of DeepSeek-R1
github.com
qgallouedec.hf.co
The algorithm behind DeepSeek's R1 model (aka GRPO) now lives on TRL's main branch! Go and test it!
qgallouedec.hf.co
[Stonks] TRL is a Python library for training language models.

It has seen impressive growth this year: lots of new features and an improved codebase, which has translated into increased usage. You can count on us to do even more in 2025.
qgallouedec.hf.co
🎅 Santa Claus has delivered the ultimate guide to understanding OOM errors (link in comment)
qgallouedec.hf.co
Top 1 Python dev today. Third time since September 🫨
qgallouedec.hf.co
🚨 TRL 0.13 is out! 🤗

Featuring a Process-supervised Reward Model (PRM) Trainer 🏋️

PRMs empower LLMs to "think before answering"—a key feature behind OpenAI's o1 launch just two weeks ago. 🚀
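A minimal training sketch (assuming TRL ≥ 0.13; the base model and the Math Shepherd stepwise dataset below are illustrative choices, not an official recipe):

```python
# Minimal PRM training sketch with TRL's PRMTrainer (TRL >= 0.13 assumed).
# Base model and dataset are illustrative placeholders.
from datasets import load_dataset
from transformers import AutoModelForTokenClassification, AutoTokenizer
from trl import PRMConfig, PRMTrainer

# A PRM scores each intermediate reasoning step, so it is framed as token classification.
model = AutoModelForTokenClassification.from_pretrained("Qwen/Qwen2-0.5B", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2-0.5B")

train_dataset = load_dataset("trl-lib/math_shepherd", split="train")

training_args = PRMConfig(output_dir="Qwen2-0.5B-PRM")
trainer = PRMTrainer(
    model=model,
    args=training_args,
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```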
Reposted by Quentin Gallouédec
lewtun.bsky.social
We outperform Llama 70B with Llama 3B on hard math by scaling test-time compute 🔥

How? By combining step-wise reward models with tree search algorithms :)

We're open sourcing the full recipe and sharing a detailed blog post 👇
qgallouedec.hf.co
The number of TRL models on the 🤗 Hub has grown 60x this year! 📈
How about doing the same next year?
Reposted by Quentin Gallouédec
benburtenshaw.bsky.social
We took those TRL notebooks from last week and made a page from them. So if you're upskilling on finetuning or aligning LLMs, and want examples from the community (like Maxime Labonne, Philipp Schmid, and Sergio Paniego Blanco), check it out!

bsky.app/profile/benb...

>> huggingface.co/docs/trl/mai...
qgallouedec.hf.co
Join us at Hugging Face as an intern if you want to contribute to amazing open-source projects and help develop the best LLM fine-tuning library, aka TRL.

🧑‍💻 Full remote
🤯 Exciting subjects
🌍 Anywhere in the world
🤸🏻 Flexible working hours

Link to apply in comment 👇
qgallouedec.hf.co
I'd love to! We have a lot of room for improvement here!
benburtenshaw.bsky.social
These tutorials provide a comprehensive but concise roadmap through TRL's main fine-tuning and alignment classes.

🤔 Let me know if you would like a dedicated course on TRL basics.
Reposted by Quentin Gallouédec
thomwolf.bsky.social
It's Sunday morning, so I'm taking a minute for a nerdy thread (on math, tokenizers, and LLMs) about the work of our intern Garreth

By adding a few lines of code to the base Llama 3 tokenizer, he got a free boost in arithmetic performance 😮

[thread]
qgallouedec.hf.co
Finetune SmolLM2 with TRL!
benburtenshaw.bsky.social
Here's a notebook where I run SFT on SmolLM2 with the synthetic dataset: colab.research.google.com/drive/1lioed...

thanks @philschmid.bsky.social for the finetuning code
thanks @huggingface.bsky.social for the smol model
thanks @qgallouedec.bsky.social and friends for TRL
Google Colab
colab.research.google.com
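For reference, a minimal SFT sketch with TRL's SFTTrainer (the model and dataset below are public placeholders, not the notebook's synthetic dataset):

```python
# Minimal SFT sketch with TRL's SFTTrainer. SmolLM2-135M-Instruct and
# "trl-lib/Capybara" are placeholder choices, not the notebook's setup.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

dataset = load_dataset("trl-lib/Capybara", split="train")

training_args = SFTConfig(output_dir="SmolLM2-135M-SFT")
trainer = SFTTrainer(
    model="HuggingFaceTB/SmolLM2-135M-Instruct",
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```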
Reposted by Quentin Gallouédec
jsulz.com · Nov 20
When XetHub joined Hugging Face, we brainstormed how to share our tech with the community.

The magic? Versioning chunks, not files, giving rise to:

🧠 Smarter storage
⏩ Faster uploads
🚀 Efficient downloads

Curious? Read the blog and let us know how it could help your workflows!
From Files to Chunks: Improving HF Storage Efficiency
huggingface.co
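To illustrate the idea, a toy sketch of chunk-level versioning (not Xet's actual implementation: fixed-size chunks are used here for simplicity, whereas the real system uses content-defined chunking so boundaries survive insertions):

```python
# Toy illustration of "versioning chunks, not files": store each file version as a
# list of chunk hashes so versions that share content reuse chunks.
# Fixed-size chunks for simplicity; NOT Xet's implementation.
import hashlib

CHUNK_SIZE = 1024  # 1 KiB chunks, purely illustrative

def store_version(data: bytes, cas: dict[str, bytes]) -> list[str]:
    """Split data into chunks, add only unseen chunks to the store, return chunk ids."""
    chunk_ids = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        cid = hashlib.sha256(chunk).hexdigest()
        cas.setdefault(cid, chunk)  # deduplicated: stored once, referenced many times
        chunk_ids.append(cid)
    return chunk_ids

cas: dict[str, bytes] = {}
original = b"".join(bytes([i]) * CHUNK_SIZE for i in range(8))                 # 8 distinct chunks
edited = original[:4 * CHUNK_SIZE] + b"".join(bytes([i]) * CHUNK_SIZE for i in range(8, 12))

v1 = store_version(original, cas)
v2 = store_version(edited, cas)   # only the second half changed

print(f"{len(cas)} unique chunks stored for {len(v1) + len(v2)} chunk references")
print(f"chunks shared between the two versions: {len(set(v1) & set(v2))}")
```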