Snehal Jauhri
@snehaljauhri.bsky.social
710 followers · 120 following · 11 posts
ML for Robotics | PhD candidate @ TU Darmstadt with Georgia Chalvatzaki | Research Intern @AllenAI | Building AI for the Home Robot | https://pearl-lab.com/people/snehal-jauhri
snehaljauhri.bsky.social
We can then use our high-quality dataset to train or fine-tune a VLM that takes the activity/task text prompt as input and predicts bimanual affordance masks (one for the left and one for the right robot hand).

4/5
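A rough, illustrative sketch of what such a prompt-conditioned bimanual affordance predictor could look like. This is not the paper's code: the FusedEncoder stub stands in for the fine-tuned VLM backbone, and all names, shapes, and the loss are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: fuse an egocentric frame with a task prompt and decode
# one affordance mask per robot hand. Every component here is a stand-in.

class FusedEncoder(nn.Module):
    """Stub for a VLM backbone that fuses an image and a task prompt into one embedding."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(3 * 64 * 64, feat_dim)   # toy image encoder
        self.txt_proj = nn.Embedding(1000, feat_dim)        # toy prompt encoder

    def forward(self, image, prompt_ids):
        img = self.img_proj(image.flatten(1))
        txt = self.txt_proj(prompt_ids).mean(dim=1)
        return img + txt

class BimanualAffordanceHead(nn.Module):
    """Two decoding heads: one affordance mask for each robot hand."""
    def __init__(self, feat_dim=256, mask_res=64):
        super().__init__()
        self.mask_res = mask_res
        self.left = nn.Linear(feat_dim, mask_res * mask_res)
        self.right = nn.Linear(feat_dim, mask_res * mask_res)

    def forward(self, feats):
        shape = (feats.shape[0], self.mask_res, self.mask_res)
        return self.left(feats).view(shape).sigmoid(), self.right(feats).view(shape).sigmoid()

# One training step: binary cross-entropy against the masks extracted from human videos.
encoder, head = FusedEncoder(), BimanualAffordanceHead()
opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)

image = torch.rand(2, 3, 64, 64)              # batch of egocentric frames
prompt_ids = torch.randint(0, 1000, (2, 6))   # tokenized task prompt, e.g. "pour milk into bowl"
left_gt = torch.rand(2, 64, 64).round()       # extracted left-hand affordance masks
right_gt = torch.rand(2, 64, 64).round()      # extracted right-hand affordance masks

left_pred, right_pred = head(encoder(image, prompt_ids))
loss = nn.functional.binary_cross_entropy(left_pred, left_gt) + \
       nn.functional.binary_cross_entropy(right_pred, right_gt)
opt.zero_grad(); loss.backward(); opt.step()
print(f"loss: {loss.item():.4f}")
```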
snehaljauhri.bsky.social
We extract bimanual affordance masks from egocentric RGB video datasets using video-based hand inpainting and object reconstruction.

No manual labeling is required. The narrations from egocentric datasets also provide free-form text supervision! (e.g., "pour milk into bowl")

3/5
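A toy sketch of that labeling idea, assuming stubbed-out hand detection, inpainting, and contact segmentation. The real pipeline uses dedicated video-inpainting and object-reconstruction models; every helper below is a hypothetical placeholder, not the released code.

```python
import numpy as np

def detect_hands(frame):
    """Stub: per-hand bounding boxes, e.g. from an off-the-shelf hand detector."""
    return {"left": (10, 10, 60, 60), "right": (100, 10, 150, 60)}

def inpaint_hands(frame, hands):
    """Stub: remove the hands from the frame (video-based inpainting in the real pipeline)."""
    return frame.copy()

def contact_region_mask(clean_frame, hand_box):
    """Stub: binary mask of the object region the hand was interacting with."""
    mask = np.zeros(clean_frame.shape[:2], dtype=bool)
    x0, y0, x1, y1 = hand_box
    mask[y0:y1, x0:x1] = True
    return mask

def extract_affordance_labels(clip_frames, narration):
    """Turn a narrated egocentric clip into (image, prompt, left/right mask) samples."""
    samples = []
    for frame in clip_frames:
        hands = detect_hands(frame)
        clean = inpaint_hands(frame, hands)          # hand-free view of the scene
        samples.append({
            "image": clean,
            "prompt": narration,                     # free-form narration as text supervision
            "left_mask": contact_region_mask(clean, hands["left"]),
            "right_mask": contact_region_mask(clean, hands["right"]),
        })
    return samples

# Example: one fake 8-frame RGB clip paired with its narration.
clip = [np.zeros((240, 320, 3), dtype=np.uint8) for _ in range(8)]
labels = extract_affordance_labels(clip, "pour milk into bowl")
print(len(labels), labels[0]["left_mask"].shape)
```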
snehaljauhri.bsky.social
The Problem:
Most affordance detection methods just segment object parts & do not predict actionable regions for robots!

Our solution?
Use egocentric bimanual human videos to extract precise affordance regions considering object relationships, context, & hand coordination!

2/5
snehaljauhri.bsky.social
📢 PSA for the robotics community:
Stop labeling affordances or distilling them from VLMs.
Extract affordances from bimanual human videos instead!

Excited to share 2HandedAfforder: Learning Precise Actionable Bimanual Affordances from Human Videos, accepted at #ICCV2025! 🎉

🧵1/5
snehaljauhri.bsky.social
Thank you to all the speakers & attendees for making the EgoAct workshop a great success!

Congratulations to the winners of the Best Paper Awards: EgoDex & DexWild!

The full recording is available at: youtu.be/64yLApbBZ7I

Some highlights:
snehaljauhri.bsky.social
Call for Contributions:
We’re inviting contributions in the form of:
📝 Full papers OR
📝 4-page extended abstracts
🗓️ Submission Deadline: April 30, 2025
🏆 Best Paper Award, sponsored by Meta!
snehaljauhri.bsky.social
Core workshop topics include:
🥽 Egocentric interfaces for robot learning
🧠 High-level action & scene understanding
🤝 Human-to-robot transfer
🧱 Foundation models from human activity datasets
🛠️ Egocentric world models for high-level planning & low-level manipulation
snehaljauhri.bsky.social
📢 Excited to announce EgoAct 🥽🤖: the 1st Workshop on Egocentric Perception and Action for Robot Learning at #RSS2025 in LA!

We’re bringing together researchers exploring how egocentric perception can drive next-gen robot learning!

🔗 Full info: egoact.github.io/rss2025

@roboticsscisys.bsky.social
snehaljauhri.bsky.social
I'm working on robot learning and perception : )