Bart Duisterhof
@bardienus.bsky.social
2.2K followers 690 following 28 posts
PhD Student @cmurobotics.bsky.social with @jeff-ichnowski.bsky.social || DUSt3R Research Intern @naverlabseurope || 4D Vision for Robot Manipulation 📷 He/Him - https://bart-ai.com
Pinned
bardienus.bsky.social
Imagine if robots could fill in the blanks in cluttered scenes.

✨ Enter RaySt3R: a single masked RGB-D image in, complete 3D out.
It infers depth, object masks, and confidence for novel views, and merges the predictions into a single point cloud. rayst3r.github.io
bardienus.bsky.social
RaySt3R was accepted to NeurIPS! Check out the Hugging Face demo for image-to-3D in cluttered scenes huggingface.co/spaces/bartd...
bardienus.bsky.social
In "hearing the slide"👂 (led by @yuemin-mao.bsky.social ) we estimate *loss* of contact with a contact microphone, and use it to learn dynamic constraints.⚡ This enables efficiently moving multiple delicate objects🍷, even ones that would otherwise be hard to grasp. fast-non-prehensile.github.io
bardienus.bsky.social
Big thanks to the awesome contributors to this project!👏 Jan Oberst, @bowenwen_me, @BirchfieldStan, @RamananDeva and @jeff_ichnowski. Also thanks to OctMAE author @s1wase, @nvidia for sponsoring compute 🖥️, and the scientists at @naverlabseurope for the inspiration! 🧗‍♂️
bardienus.bsky.social
We also study the impact of the confidence threshold on reconstruction quality. Our ablations suggest that raising the confidence threshold improves accuracy and suppresses edge bleeding, at the cost of completeness. Users can tune the threshold for application-specific requirements 🎛️.
bardienus.bsky.social
We evaluate RaySt3R against the baselines on synthetic and real-world datasets. The results suggest RaySt3R achieves zero-shot generalization to the real world, and outperforms all baselines by up to 44% in 3D chamfer distance 🚀.
bardienus.bsky.social
We train RaySt3R on a newly curated dataset of 12 million views 📷 rendered from Objaverse and GSO objects. The ablations 🔍 suggest that both more data and more diverse data improve RaySt3R's performance. RaySt3R does not require GT meshes, paving the way for training on real-world data.
bardienus.bsky.social
💡 Our key insight is that 3D object shape completion can be recast as a novel-view synthesis problem. RaySt3R takes a masked RGB-D image as input, and predicts depth maps and object masks for novel views. We query multiple views and merge the predictions into a consistent point cloud.
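The merging step described above can be sketched in a few lines: unproject each predicted depth map into world space, keep only the pixels that are on-object and above a confidence threshold, and concatenate the surviving points. This is a minimal NumPy illustration, not the RaySt3R implementation; the function names, the `(depth, mask, conf, pose)` tuple format, and the pinhole-intrinsics assumption are all my own.

```python
import numpy as np

def backproject(depth, K, cam_to_world):
    """Unproject a depth map to world-space points via pinhole intrinsics K."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - K[0, 2]) * z / K[0, 0]
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Apply the 4x4 camera-to-world pose: rotate, then translate.
    R, t = cam_to_world[:3, :3], cam_to_world[:3, 3]
    return pts_cam @ R.T + t

def merge_views(preds, K, conf_thresh=0.5):
    """Fuse per-view (depth, mask, confidence, pose) predictions into one
    point cloud, keeping only on-object points above the confidence threshold."""
    clouds = []
    for depth, mask, conf, pose in preds:
        pts = backproject(depth, K, pose)
        keep = (mask & (conf > conf_thresh)).reshape(-1)
        clouds.append(pts[keep])
    return np.concatenate(clouds, axis=0)
```

Thresholding before concatenation is what exposes the accuracy/completeness trade-off mentioned later in the thread: a higher `conf_thresh` discards uncertain points (fewer outliers, sparser cloud), a lower one keeps more of the surface at the risk of noise.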
bardienus.bsky.social
We focus on multi-object 3D shape completion for robotics. Robots are commonly equipped with an RGB-D camera 📷, but their measurements are noisy and incomplete.

Using only DINOv2 features 🦖 as pretraining, we train a new model (RaySt3R) to produce accurate geometry.
bardienus.bsky.social
Do you think Europe will take the opportunity? The Netherlands is even cutting research funds under the new administration... It feels like there are still significantly more opportunities in the US.
bardienus.bsky.social
Thanks Chris! This was a push with the entire dust3r team @naverlabseurope.bsky.social, congrats everyone!
Reposted by Bart Duisterhof
3dvconf.bsky.social
The Best Student Paper Award goes to MASt3R-SfM! #3DV2025
Reposted by Bart Duisterhof
uksang.bsky.social
🎉Excited to share that our paper was a finalist for best paper at #HRI2025! We introduce MOE-Hair, a soft robot system for hair care 💇🏻💆🏼 that uses mechanical compliance and visual force sensing for safe, comfortable interaction. Check our work: moehair.github.io @cmurobotics.bsky.social 🧵1/7
Reposted by Bart Duisterhof
ericzzj.bsky.social
MUSt3R: Multi-view Network for Stereo 3D Reconstruction

Yohann Cabon, Lucas Stoffl, Leonid Antsfeld, Gabriela Csurka, Boris Chidlovskii, Jerome Revaud, @vincentleroy.bsky.social

tl;dr: make DUSt3R symmetric and iterative, add a multi-layer memory mechanism → multi-view DUSt3R

arxiv.org/abs/2503.01661
bardienus.bsky.social
Great news, CMU's Center for Machine Learning and Health (CMLH) decided to fund another year of our research! If you're a PhD student at CMU, consider applying for the next iterations of the fellowship - the funding is generous and relatively unconstrained :)
bardienus.bsky.social
Is the book just as good as/better than the show for "The Three-Body Problem"?
Reposted by Bart Duisterhof
cmurobotics.bsky.social
Watch Professor Jeff Ichnowski's RI seminar talk: "Learning for Dynamic Robot Manipulation of Deformable and Transparent Objects" 🦾🤖

@jeff-ichnowski.bsky.social closed out our Fall seminar series. Keep an eye out for the Spring schedule in the new year!

www.youtube.com/watch?v=DvvF...
RI Seminar : Jeffrey Ichnowski : Learning for Dynamic Robot Manipulation of Deformable...
Reposted by Bart Duisterhof
nagababa.bsky.social
Intro Post
Hello World!
I'm a 2nd year Robotics PhD student at CMU, working on distributed dexterous manipulation, accessible soft robots and sensors, sample efficient robot learning, and causal inference.

Here are my cute robots:
PS: Videos are old and sped up. They move slower in the real world :3
Reposted by Bart Duisterhof
csprofkgd.bsky.social
My growing list of #computervision researchers on Bsky.

Missed you? Let me know.

go.bsky.app/M7HGC3Y
bardienus.bsky.social
For international students: renewing your visa asap might be a good idea.