📍Paris 🔗 https://davidpicard.github.io/
📅 Mon, Feb 9th
🕓 16:00-17:00 CET
🔗 Registration: https://bit.ly/4rxevLX
ℹ️ More info: https://bit.ly/47Qclim
Led by Ti Wang & w/ Xiaohang Yu #FMPose3D is SOTA on human & animal 3D benchmarks, & will be integrated into @deeplabcut.bsky.social ⬇️
📝 arxiv.org/abs/2602.05755
➡️ xiu-cs.github.io/FMPose3D/
I think this is not going to age well. Here is a prediction I feel confident about: in 10 years, most big-title video games will rely on generative world models.
(with uint8 overflow artifacts, which will be fixed in the next iteration)
doi.org/10.1016/j.jp...
Details at www.ens-lyon.fr/LIP/images/P...
1/3
complexityzoo.net/Petting_Zoo
Simple but very effective idea building on top of JiT: because you're predicting x directly, you can add perceptual losses on top of flow matching. In the paper, they use a "DINO perceptual loss", and I'm going to argue...
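A minimal sketch of that idea, assuming a PyTorch x-prediction model: since the network outputs a clean-image estimate x̂ rather than a velocity, a feature-space loss can be added directly on x̂. The frozen conv net below is a self-contained stand-in for a DINO encoder (names, weighting `lam`, and the interpolation path are illustrative assumptions, not the paper's exact recipe):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Frozen feature extractor standing in for DINO (assumption: any frozen
# pretrained encoder plays this role; a tiny random conv net keeps the
# sketch self-contained and runnable).
feature_net = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.GELU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),
).eval()
for p in feature_net.parameters():
    p.requires_grad_(False)

def fm_loss_with_perceptual(model, x0, lam=0.5):
    """Flow-matching training loss for an x-prediction model,
    plus a perceptual term computed on the predicted clean image."""
    b = x0.shape[0]
    t = torch.rand(b, 1, 1, 1)          # random time in [0, 1]
    noise = torch.randn_like(x0)
    xt = (1 - t) * noise + t * x0       # linear interpolation path
    x_hat = model(xt, t)                # model predicts x0 directly
    # x-prediction MSE (equivalent to the velocity MSE up to a
    # t-dependent weighting)
    loss_fm = F.mse_loss(x_hat, x0)
    # perceptual loss: compare frozen features of x_hat and x0
    loss_perc = F.mse_loss(feature_net(x_hat), feature_net(x0))
    return loss_fm + lam * loss_perc
```

The key point is that a velocity- or noise-predicting parameterization has no clean image to feed the feature extractor at train time; x-prediction does, which is what makes the perceptual term cheap to bolt on.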
Simple but seemingly effective idea: just randomly masking your diffusion supervision appears to reduce overfitting (unsurprisingly?). Not to be confused with masked diffusion; the masking here applies only to the training loss.
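The distinction is easiest to see in code. In this hedged sketch (function name and `keep_prob` are illustrative, not from the paper), the model still sees the full noisy input; only the per-pixel reconstruction loss is randomly subsampled each training step:

```python
import torch

def masked_diffusion_loss(pred, target, keep_prob=0.5):
    """Standard denoising MSE, supervised on a random subset of
    elements each step. The masking is applied to the *loss*, not
    to the model input, so this is not masked diffusion."""
    mask = (torch.rand_like(target) < keep_prob).float()
    per_elem = (pred - target) ** 2
    # average only over the kept elements; clamp guards the
    # (unlikely) all-masked case
    return (per_elem * mask).sum() / mask.sum().clamp(min=1.0)
```

At inference nothing changes: the mask exists only inside the training loss, which is why this acts like a regularizer rather than a different generative model class.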