Carlo Sferrazza
@carlosferrazza.bsky.social
82 followers 60 following 33 posts
Postdoc at Berkeley AI Research. PhD from ETH Zurich. Robotics, Artificial Intelligence, Humanoids, Tactile Sensing. https://sferrazza.cc
Pinned
carlosferrazza.bsky.social
Ever wondered what robots 🤖 could achieve if they could not just see – but also feel and hear?

Introducing FuSe: a recipe for finetuning large vision-language-action (VLA) models with heterogeneous sensory data, such as vision, touch, sound, and more.

Details in the thread 👇
carlosferrazza.bsky.social
FastTD3 is open-source, and compatible with most sim-to-real robotics frameworks, e.g., MuJoCo Playground and Isaac Lab. All the advances in scaling off-policy RL are now readily available to the robotics community 🤖
carlosferrazza.bsky.social
A very cool thing: FastTD3 achieves state-of-the-art performance on most HumanoidBench tasks, even outperforming model-based algorithms. All it takes: 128 parallel environments and 1-3 hours of training 🤯
carlosferrazza.bsky.social
Off-policy methods have pushed RL sample efficiency, but robotics still leans on parallel on-policy RL (PPO) for wall-time gains. FastTD3 gets the best of both worlds!
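A rough sketch of that combination, as a toy stand-in rather than FastTD3's actual implementation: many simulated environments are stepped in parallel to fill a shared replay buffer, and TD3-style twin-critic updates then run off-policy on that data. The environment, network sizes, and hyperparameters below are placeholders.

```python
# Toy sketch (NOT FastTD3's code): parallel envs feed one replay buffer,
# TD3-style twin-critic updates run off-policy on the shared data.
import gymnasium as gym
import numpy as np
import torch
import torch.nn as nn
from gymnasium.vector import SyncVectorEnv

NUM_ENVS = 128  # order of magnitude mentioned in the thread
envs = SyncVectorEnv([lambda: gym.make("Pendulum-v1") for _ in range(NUM_ENVS)])
obs_dim = envs.single_observation_space.shape[0]
act_dim = envs.single_action_space.shape[0]

def mlp(inp, out):
    return nn.Sequential(nn.Linear(inp, 256), nn.ReLU(), nn.Linear(256, out))

actor = mlp(obs_dim, act_dim)
critics = [mlp(obs_dim + act_dim, 1) for _ in range(2)]  # twin critics, as in TD3
critic_opt = torch.optim.Adam([p for c in critics for p in c.parameters()], lr=3e-4)

buffer = []  # toy replay buffer (use a preallocated ring buffer in practice)
obs, _ = envs.reset(seed=0)

for step in range(200):
    with torch.no_grad():
        act = torch.tanh(actor(torch.as_tensor(obs, dtype=torch.float32))).numpy()
    next_obs, rew, term, trunc, _ = envs.step(act * envs.single_action_space.high)
    buffer.extend(zip(obs, act, rew, next_obs, term))  # one step = NUM_ENVS transitions
    obs = next_obs

    # Off-policy TD3-style critic update on a random minibatch from the buffer.
    idx = np.random.randint(len(buffer), size=256)
    o, a, r, o2, d = (np.asarray(x, dtype=np.float32)
                      for x in zip(*[buffer[i] for i in idx]))
    o, a, o2 = (torch.as_tensor(x) for x in (o, a, o2))
    r, d = (torch.as_tensor(x).unsqueeze(-1) for x in (r, d))
    with torch.no_grad():
        noise = (0.2 * torch.randn_like(a)).clamp(-0.5, 0.5)  # target policy smoothing
        a2 = (torch.tanh(actor(o2)) + noise).clamp(-1.0, 1.0)
        q_targ = torch.min(*[c(torch.cat([o2, a2], -1)) for c in critics])
        y = r + 0.99 * (1.0 - d) * q_targ
    q_loss = sum(((c(torch.cat([o, a], -1)) - y) ** 2).mean() for c in critics)
    critic_opt.zero_grad()
    q_loss.backward()
    critic_opt.step()
    # Actor updates, target networks, and delayed policy updates omitted for brevity.
```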
carlosferrazza.bsky.social
We just released FastTD3: a simple, fast, off-policy RL algorithm to train humanoid policies that transfer seamlessly from simulation to the real world.

younggyo.me/fast_td3
carlosferrazza.bsky.social
Heading to @ieeeras.bsky.social RoboSoft today! I'll be giving a short Rising Star talk Thu at 2:30pm: "Towards Multi-sensory, Tactile-Enabled Generalist Robot Learning"

Excited for my first in-person RoboSoft after the 2020 edition went virtual mid-pandemic.

Reach out if you'd like to chat!
carlosferrazza.bsky.social
And co-organizers @sukhijab.bsky.social, @amyxlu.bsky.social, Lenart Treven, Parnian Kassraie, Andrew Wagenmaker, Olivier Bachem, @kjamieson.bsky.social, @arkrause.bsky.social, Pieter Abbeel
carlosferrazza.bsky.social
With amazing speakers Sergey Levine, Dorsa Sadigh, @djfoster.bsky.social, @ji-won-park.bsky.social, Ben Van Roy, Rishabh Agarwal, @alisongopnik.bsky.social, Masatoshi Uehara
carlosferrazza.bsky.social
What is the place of exploration in today's AI landscape and in which settings can exploration algorithms address current open challenges?

Join us to discuss this at our exciting workshop at @icmlconf.bsky.social 2025: EXAIT!

exait-workshop.github.io

#ICML2025
carlosferrazza.bsky.social
Installation is very easy – it can even run in a single Python notebook: colab.research.google.com/github/googl...

Check out @mujoco.bsky.social’s thread above for all the details.

Can't wait to see the robotics community build on this pipeline and keep pushing the field forward!
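A hedged quickstart sketch of what that looks like in practice: the registry-style API below is recalled from the Playground quickstart and may differ between releases, so treat the linked Colab as the canonical reference.

```python
# Hedged quickstart sketch; module, function, and task names are assumptions.
# Install per the announcement: pip install playground
import jax
import jax.numpy as jnp
from mujoco_playground import registry  # assumed entry point

env = registry.load("CartpoleBalance")   # assumed task name from the suite
jit_reset = jax.jit(env.reset)           # MJX envs run JIT-compiled on accelerators
jit_step = jax.jit(env.step)

state = jit_reset(jax.random.PRNGKey(0))
for _ in range(10):
    action = jnp.zeros(env.action_size)  # zero-action rollout, just a smoke test
    state = jit_step(state, action)
    print(float(state.reward))
```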
carlosferrazza.bsky.social
It was really amazing to work on this and see the whole project come together.

Sim-to-real is often an iterative process – Playground makes it seamless.

An open-source ecosystem is essential for integrating new features – check out Madrona-MJX for distillation-free visual RL!
carlosferrazza.bsky.social
Big news for open-source robot learning! We are very excited to announce MuJoCo Playground.

The Playground is a reproducible sim-to-real pipeline that leverages the MuJoCo ecosystem and GPU acceleration to learn robot locomotion and manipulation in minutes.

playground.mujoco.org
mujoco.bsky.social
Introducing playground.mujoco.org
Combining MuJoCo’s rich and thriving ecosystem, massively parallel GPU-accelerated simulation, and real-world results across a diverse range of robot platforms: quadrupeds, humanoids, dexterous hands, and arms.
Get started today: pip install playground
MuJoCo Playground
An open-source framework for GPU-accelerated robot learning and sim-to-real transfer
playground.mujoco.org
carlosferrazza.bsky.social
We open source the code and the models, as well as the dataset, which comprises 27k (!) action-labeled robot trajectories with visual, inertial, tactile, and auditory observations.

Code: github.com/fuse-model/F...
Models and dataset: huggingface.co/oier-mees/FuSe
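A minimal sketch for pulling the released assets with the Hugging Face Hub client: the repo id comes from the post, while the repo layout (and whether models and data live in one repo) is an assumption – the FuSe README is the authoritative guide.

```python
# Download the released FuSe assets; repo id is from the post, layout is assumed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="oier-mees/FuSe")
print(local_dir)  # then point the FuSe training/eval scripts at the downloaded files
```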
carlosferrazza.bsky.social
We find that the same general recipe is applicable to generalist policies with diverse architectures, including a large 3B VLA with a PaliGemma vision-language-model backbone.
carlosferrazza.bsky.social
FuSe policies reason jointly over vision, touch, and sound, enabling tasks such as multimodal disambiguation, generation of object descriptions upon interaction, and compositional cross-modal prompting (e.g., “press the button with the same color as the soft object”).
carlosferrazza.bsky.social
Pretrained generalist robot policies finetuned on multimodal data consistently outperform baselines finetuned only on vision data. This is particularly evident in tasks with partial visual observability, such as grabbing objects from a shopping bag.
carlosferrazza.bsky.social
We ground all sensing modalities in language instructions via two auxiliary losses. This is crucial: we find that naively finetuning on a small-scale multimodal dataset results in the VLA over-relying on vision and ignoring the much sparser tactile and auditory signals.
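One plausible shape for such a grounding objective is an InfoNCE-style contrastive loss between per-modality embeddings and the embedding of the language instruction; this is a hedged illustration, not necessarily the exact pair of losses used in FuSe, and the encoders are stand-ins.

```python
# Sketch of one plausible auxiliary grounding loss (CLIP-style contrastive
# objective between modality embeddings and instruction embeddings).
import torch
import torch.nn.functional as F

def contrastive_grounding_loss(modality_emb, language_emb, temperature=0.07):
    """Pull each modality embedding toward the embedding of its instruction."""
    z_m = F.normalize(modality_emb, dim=-1)   # (B, D) e.g. tactile features
    z_l = F.normalize(language_emb, dim=-1)   # (B, D) instruction features
    logits = z_m @ z_l.T / temperature        # (B, B) similarity matrix
    targets = torch.arange(z_m.shape[0])      # matching pairs lie on the diagonal
    # Symmetric cross-entropy, as in CLIP-style training.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage: a batch of 8 tactile and language embeddings of dimension 128.
loss = contrastive_grounding_loss(torch.randn(8, 128), torch.randn(8, 128))
```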
Reposted by Carlo Sferrazza
sukhijab.bsky.social
Excited to share MaxInfoRL, a family of powerful off-policy RL algorithms! The core focus of this work was to develop simple, flexible, and scalable methods for principled exploration. Check out the thread below to see how MaxInfoRL meets these criteria while also achieving SOTA empirical results.
carlosferrazza.bsky.social
🚨 New reinforcement learning algorithms 🚨

Excited to announce MaxInfoRL, a class of model-free RL algorithms that solves complex continuous control tasks (including vision-based!) by steering exploration towards informative transitions.

Details in the thread 👇
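As a hedged illustration of the general idea (not the paper's exact estimator): information gain about the dynamics is often approximated by the disagreement of a learned model ensemble, and the resulting bonus is added to the task reward before the usual off-policy update.

```python
# Sketch of an information-gain-style exploration bonus via ensemble disagreement;
# the estimator and networks here are illustrative stand-ins, not MaxInfoRL's code.
import torch
import torch.nn as nn

class DynamicsEnsemble(nn.Module):
    def __init__(self, obs_dim, act_dim, n_models=5, hidden=256):
        super().__init__()
        self.models = nn.ModuleList(
            nn.Sequential(nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, obs_dim))
            for _ in range(n_models)
        )

    def intrinsic_reward(self, obs, act):
        x = torch.cat([obs, act], dim=-1)
        preds = torch.stack([m(x) for m in self.models])  # (n_models, B, obs_dim)
        # Disagreement (variance across models) as a proxy for information gain.
        return preds.var(dim=0).mean(dim=-1)               # (B,)

ensemble = DynamicsEnsemble(obs_dim=17, act_dim=6)
obs, act, task_reward = torch.randn(32, 17), torch.randn(32, 6), torch.randn(32)
beta = 0.1  # trade-off coefficient (hyperparameter, value chosen arbitrarily here)
total_reward = task_reward + beta * ensemble.intrinsic_reward(obs, act)
# total_reward then replaces the raw reward in the backbone algorithm's critic target,
# which is what makes this an add-on to SAC, REDQ, DrQv2, DrM, and similar methods.
```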
carlosferrazza.bsky.social
We are also excited to share both JAX and PyTorch implementations, making it simple for RL researchers to integrate MaxInfoRL into their training pipelines.

JAX (built on jaxrl): github.com/sukhijab/max...
PyTorch (based on @araffin.bsky.social‘s SB3): github.com/sukhijab/max...
carlosferrazza.bsky.social
Combining MaxInfoRL with DrQv2 and DrM achieves state-of-the-art model-free performance on hard visual control tasks such as the DMControl humanoid and dog tasks, improving both sample efficiency and steady-state performance.
carlosferrazza.bsky.social
MaxInfoRL is a simple, flexible, and scalable add-on to most RL advancements. We combine it with various algorithms, such as SAC, REDQ, DrQv2, DrM, and more – consistently showing improved performance over the respective backbones.