Michael J. Black
@michael-j-black.bsky.social
3K followers · 98 following · 53 posts
Director, Max Planck Institute for Intelligent Systems; Chief Scientist Meshcapade; Speaker, Cyber Valley. Building 3D humans. https://ps.is.mpg.de/person/black https://meshcapade.com/ https://scholar.google.com/citations?user=6NjbexEAAAAJ&hl=en&oi=ao
michael-j-black.bsky.social
InteractVLM: 3D Interaction Reasoning from 2D Foundational Models
interactvlm.is.tue.mpg.de

InterDyn: Controllable Interactive Dynamics with Video Diffusion Models
interdyn.is.tue.mpg.de

Reconstructing Animals and the Wild
raw.is.tue.mpg.de

Workshop paper:

Generative Zoo
genzoo.is.tue.mpg.de
michael-j-black.bsky.social
ChatGarment: Garment Estimation, Generation and Editing via Large Language Models
chatgarment.github.io

ChatHuman: Chatting about 3D Humans with Tools
chathuman.github.io

PICO: Reconstructing 3D People In Contact with Objects
pico.is.tue.mpg.de
michael-j-black.bsky.social
Here are all the CVPR projects that I’m part of in one thread.

Conference papers:

PromptHMR: Promptable Human Mesh Recovery
yufu-wang.github.io/phmr-page/

DiffLocks: Generating 3D Hair from a Single Image using Diffusion Models
radualexandru.github.io/difflocks/
Reposted by Michael J. Black
meshcapade.com
Final drop in our #CVPR2025 video series: PICO 🤝📦

Watch how we reconstruct realistic human-object interaction from just one image—with dense contact and mesh fitting!

👋 Visit our booth 1333 at CVPR.

🔗 Paper link in the thread.

#3DBody #AI #SMPL
michael-j-black.bsky.social
My dream has been to take a photo of a person, extract the 2D sewing pattern of their clothing, and then turn it into a 3D garment. ChatGarment does exactly this, plus it lets you edit the garment, or create a completely new one, using text prompts. chatgarment.github.io At CVPR2025!
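For intuition, here is a minimal Python sketch of that photo-to-pattern-to-garment flow; every function name and data structure below is a placeholder assumption, not ChatGarment's actual API (see chatgarment.github.io).

```python
# Hypothetical sketch of a ChatGarment-style pipeline. Names are illustrative
# assumptions only; the real project's interface may differ.
from dataclasses import dataclass, field
from typing import List


@dataclass
class SewingPattern:
    """A 2D sewing pattern as a list of named panels (placeholder structure)."""
    panels: List[str] = field(default_factory=list)


def estimate_pattern(image_path: str) -> SewingPattern:
    """Stub: an image-conditioned model would infer the garment's sewing pattern."""
    return SewingPattern(panels=["front", "back", "sleeve_l", "sleeve_r"])


def edit_pattern(pattern: SewingPattern, prompt: str) -> SewingPattern:
    """Stub: a text prompt would modify the pattern (e.g. 'make the sleeves short')."""
    return pattern


def drape_to_3d(pattern: SewingPattern) -> str:
    """Stub: the 2D pattern would be stitched and draped into a 3D garment mesh."""
    return "garment.obj"


if __name__ == "__main__":
    pattern = estimate_pattern("person_photo.jpg")
    pattern = edit_pattern(pattern, "turn it into a sleeveless dress")
    print(drape_to_3d(pattern))
```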
michael-j-black.bsky.social
To estimate 3D humans from video in world coordinates, we add side information to prompt the process. Prompts include bounding boxes, face detections, segmentation masks, and text descriptions. It's currently the most accurate video-based method out there. See us at CVPR2025.
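As a rough illustration, bundling that per-frame side information as prompts might look like the sketch below; the class and field names are my assumptions, not the actual PromptHMR interface (see yufu-wang.github.io/phmr-page/).

```python
# Illustrative only: packaging per-frame side information (boxes, faces, masks,
# text) as prompts for a promptable human mesh recovery model.
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class FramePrompt:
    person_id: int
    bbox: Optional[Tuple[float, float, float, float]] = None       # x1, y1, x2, y2 in pixels
    face_bbox: Optional[Tuple[float, float, float, float]] = None  # optional face detection
    mask: Optional[List[List[int]]] = None                         # binary segmentation mask
    text: Optional[str] = None                                     # e.g. "person in a red coat"


def build_prompts(detections) -> List[FramePrompt]:
    """Convert raw detections for one video frame into prompt objects."""
    return [
        FramePrompt(person_id=i, bbox=d["bbox"], text=d.get("description"))
        for i, d in enumerate(detections)
    ]


if __name__ == "__main__":
    detections = [{"bbox": (100.0, 50.0, 220.0, 400.0), "description": "person on the left"}]
    print(build_prompts(detections))
```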
meshcapade.com
Next in our #CVPR2025 lineup: PromptHMR 👀💫

Turn any video into precise 3D people: occlusions, crowds, world coords solved. State of the art accuracy. 💯

Artists, devs, researchers get instant digital bodies.

Visit our booth 1333 @ CVPR to learn more about PromptHMR!

#AI #3D #DigitalHumans
michael-j-black.bsky.social
Check out DiffLocks, appearing at #CVPR2025. From a single image, we estimate about 100K hair strands that you can then physically simulate. We use a dataset of 40K synthetic hair images with ground truth strands. It's all available for research purposes.
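For a sense of scale, here is a back-of-the-envelope sketch of a strand-based output of that size; the array shapes (notably the points per strand) are assumptions, not the project's actual data format (see radualexandru.github.io/difflocks/).

```python
# Toy strand representation: each strand is a polyline of 3D points.
import numpy as np

NUM_STRANDS = 100_000      # roughly the number of strands estimated per image
POINTS_PER_STRAND = 32     # assumed polyline resolution per strand

# Shape: (num_strands, points_per_strand, 3) float32 coordinates.
strands = np.zeros((NUM_STRANDS, POINTS_PER_STRAND, 3), dtype=np.float32)

# Placeholder strand roots scattered near the origin (a real model would place
# them on the scalp of the reconstructed head).
strands[:, 0, :] = np.random.default_rng(0).normal(scale=0.01, size=(NUM_STRANDS, 3))

print(strands.shape, strands.nbytes / 1e6, "MB")
```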
meshcapade.com
Meet DiffLocks, one of our CVPR papers: 3D hair from one image or video, instant mesh, AND a public dataset of 40,000 3D hairstyles!

Learn more about it at our booth 1333 @ #CVPR2025!

Paper link + author names in thread.

#3DBody #SMPL #MachineLearning #HairTech #GenerativeAI @cvprconference.bsky.social
Reposted by Michael J. Black
meshcapade.com
Yay, @cvprconference.bsky.social, we’re in! 🎉 #CVPR2025

5 papers accepted, 5 going live 🚀

Catch PromptHMR, DiffLocks, ChatHuman, ChatGarment & PICO at Booth 1333, June 11–15.

Details about the papers in the thread! 👇

#3DBody #SMPL #GenerativeAI #MachineLearning
Reposted by Michael J. Black
meshcapade.com
🔥 Heading to #ICRA2025?

Join us on May 23, 2pm (Room 316) for MoCapade: markerless motion capture from any video!

Powered by PromptHMR (CVPR 2025). No suits, no markers—just motion. 🕺💻

#AI #3DMotion #SMPL #Robotics #ICRA #Meshcapade
michael-j-black.bsky.social
Improved foot-ground contact is coming soon to MoCapade 3.0.
alerender.bsky.social
@meshcapade.com will be adding an accuracy improvement with FootLock. Here I ran a test comparing it with the capture from my suit.
#mocapsuit #mocapvideo #3danimation
Reposted by Michael J. Black
meshcapade.com
🎬 Join Meshcapade at FMX!

See how to turn video or text into ready-to-use 3D motion—no suits or markers needed.

Workshop: May 8, 10:00 AM.

Perfect for animation, VFX & game dev!

📍 Info: fmx.de/en/program/p...

#FMX2025 #3DMotion
Reposted by Michael J. Black
ericzzj.bsky.social
St4RTrack: Simultaneous 4D Reconstruction and Tracking in the World

Haiwen Feng, @junyi42.bsky.social, @qianqianwang.bsky.social, Yufei Ye, Pengcheng Yu, @michael-j-black.bsky.social, Trevor Darrell, @akanazawa.bsky.social

DUSt3R-like framework

arxiv.org/abs/2504.13152
Reposted by Michael J. Black
meshcapade.com
Missed us at GDC?
Watch Part 1 of our talk here 👉 youtu.be/0jCTiQMutow

🚶 Motion capture with MoCapade
🎮 Import directly into Unreal Engine
🎭 Retarget to any character
👀 Bonus: sneak peek at 3D hair & realtime single-cam mocap

🕴️ No suits. 📍 No markers. 🤳 Just one camera.
Meshcapade: From zero to game-ready assets in seconds (GDC 2025 Presentation)
YouTube video by Meshcapade
youtu.be
michael-j-black.bsky.social
Will there be an AI tariff? TL;DR: The societies that will "win" the AI race will not be those that develop the technology first. It will be those that are best able to manage the long-term social disruption AI will cause.
perceiving-systems.blog/en/news/what...
Also on Medium: medium.com/@black_51980...
The AI tariff?
perceiving-systems.blog
Reposted by Michael J. Black
meshcapade.com
One shot. One town. Endless dancing. 🕺💃🎶

This one-take Unreal animation uses Meshcapade to bring every character to life. No mocap suits, just seamless motion from start to finish.

Stylized, cinematic, and full of vibes; exactly how digital animation should feel 📹✨

#3DAnimation #MarkerlessMocap
michael-j-black.bsky.social
The arXiv paper for PRIMAL is now online. It's a data-driven, interactive avatar that can be controlled by varied commands, runs in a game engine, and is responsive to perturbations (without physics simulation). arxiv.org/abs/2503.17544
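A minimal, hypothetical control-loop sketch of how such a learned motor model could be driven inside a game engine follows; the MotorModel class and its step() signature are illustrative only, not the paper's implementation (arxiv.org/abs/2503.17544).

```python
# Toy game-loop around a learned, autoregressive motion model that reacts to
# commands and perturbations each tick, without a physics simulator.
import numpy as np


class MotorModel:
    """Placeholder for a learned motor model."""

    def step(self, state: np.ndarray, command: str, perturbation: np.ndarray) -> np.ndarray:
        # A real model would predict the next-frame pose from the current state,
        # the user command, and any external perturbation; here we just add noise.
        noise = np.random.default_rng().normal(scale=1e-3, size=state.shape)
        return state + 0.01 * perturbation + noise


def game_loop(model: MotorModel, n_frames: int = 60) -> np.ndarray:
    state = np.zeros(75)          # e.g. SMPL-like body pose plus root translation
    push = np.zeros(75)
    for frame in range(n_frames):
        if frame == 30:
            push = np.ones(75)    # simulate a perturbation mid-sequence
        state = model.step(state, command="walk forward", perturbation=push)
        push = np.zeros(75)
    return state


if __name__ == "__main__":
    print(game_loop(MotorModel())[:5])
```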
Reposted by Michael J. Black
meshcapade.com
👨‍🎤 Ever seen an Interactive Generative avatar running inside @unrealengine.bsky.social?

Check out our latest work, PRIMAL, in collaboration with Max Planck Institute for Intelligent Systems & Stanford University - live demo at @officialgdc.bsky.social! 🎮

www.youtube.com/watch?v=-Gcp...
PRIMAL: Physically Reactive and Interactive Motor Model for Avatar Learning
YouTube video by Yan Zhang
www.youtube.com
Reposted by Michael J. Black
arturgrigorev.bsky.social
🎉🎉🎉 Happy to announce that the code for our paper Gaussian Garments is now public!

Link: github.com/eth-ait/Gaus...

Gaussian Garments uses a combination of 3D meshes and Gaussian splatting to reconstruct photorealistic simulation-ready digital garments from multi-view videos. 🧵
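One plausible way to combine a mesh with Gaussian splatting is to anchor Gaussians to the mesh faces so they follow the garment as it deforms; the toy sketch below only illustrates that general idea and is not the Gaussian Garments implementation (code linked above).

```python
# Conceptual sketch: one Gaussian per garment-mesh face, with its mean at the
# face centroid so the splats track mesh deformation and simulation.
import numpy as np

rng = np.random.default_rng(0)

# A toy "garment mesh": random vertices and triangle faces.
vertices = rng.normal(size=(100, 3)).astype(np.float32)
faces = rng.integers(0, 100, size=(150, 3))

means = vertices[faces].mean(axis=1)                     # (150, 3) Gaussian centers
scales = np.full((150, 3), 0.01, dtype=np.float32)       # assumed isotropic size
colors = rng.uniform(size=(150, 3)).astype(np.float32)   # per-Gaussian RGB

print(means.shape, scales.shape, colors.shape)
```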
michael-j-black.bsky.social
The CameraHMR video is now on YouTube. This is currently the most accurate single-image method for estimating 3D human shape and pose. The paper will be presented at 3DV. The code and data are all online here: camerahmr.is.tue.mpg.de
youtu.be/v3WzpjXpknc
CameraHMR: Aligning People with Perspective (3DV 2025)
YouTube video by Michael Black
youtu.be
Reposted by Michael J. Black
meshcapade.com
🎮 Motion Generation in Unreal, made SMPL.

Experience real-time reactive behavior—characters adapt instantly to your input. With controllable generative 3D motion, every move is unique. Real-time motion blending keeps it smooth.

🚀 See it at Booth C1821!

#3DAnimation #MotionGeneration #GameDev #GDC
michael-j-black.bsky.social
And it just keeps coming from @meshcapade.com -- next up, real-time markerless motion capture from a single camera. Check it out at #GDC2025. Yes, the founders will be there to dance for you (as in this video) but it's more fun to try it yourself!
meshcapade.com
🎥✨ One camera. Live motion. Zero hassle.

Stream real-time motion to 3D characters in Unreal—no suits, no markers, just seamless animation. See it LIVE at Booth C1821!

#GDC2025 #MotionCapture #Realtime
Reposted by Michael J. Black
meshcapade.com
Faces, Expressions, Hair—Brought to Life with Meshcapade.

From facial animation to 3D hair, creating digital humans has never been easier.

See it at #GDC2025! Visit Booth C1821 and experience motion, detail, expression.

#UnrealEngine #3DAnimation #MotionCapture #FacialAnimation #genAI #Meshcapade
michael-j-black.bsky.social
I'll be at #GDC2025 so if you are there and want to meet, you can probably find me at the @meshcapade.com booth.
meshcapade.com
We're excited to announce that we are going to #GDC2025!

See our demos & Unreal Engine plugin in action—live!
We’re bringing next-gen markerless motion capture to game dev.
Come experience it for yourself—can't wait to see you all at GDC 2025!

#GameDevelopment #UnrealEngine #MotionCapture #GDC
Reposted by Michael J. Black
meshcapade.com
Meshcapade was just featured again in DER SPIEGEL and their Spiegel-Shortcut podcast! 🎙️✨

The podcast highlights the diversity of ideas shaping the future - something we’re very proud to be part of!

▶️ Watch on YouTube: www.youtube.com/watch?v=gN0X...
AI innovation: Does Germany have answers to ChatGPT, DeepSeek & Co.? – Shortcut | DER SPIEGEL
YouTube video by DER SPIEGEL
www.youtube.com