Reposted by Ko Nishino
💻We've released the code for our #CVPR2025 paper MAtCha!
🍵MAtCha reconstructs sharp, accurate, and scalable meshes of both foreground AND background from just a few unposed images (e.g., 3 to 10 images)...
...While also working with dense-view datasets (hundreds of images)!
by Ko Nishino
Multistable Shape from Shading Emerges from Patch Diffusion #NeurIPS2024 Spotlight
X. Nicole Han, T. Zickler and K. Nishino (Harvard+Kyoto)
Diffusion-based SFS lets you sample multistable shape perception!
Nicole at poster on Th 12/12 11am East A-C 1308
vision.ist.i.kyoto-u.ac.jp/research/mss...
by Ko Nishino
PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars
Shota Sasaki, Jane Wu, Ko Nishino
Human avatar with movement- (not pose-)dependent clothing as 3D GS simulated with PBD attached to SMPL, all recovered from multiview video.
vision.ist.i.kyoto-u.ac.jp/research/pbdyg/
by Ko Nishino
HeatFormer: A Neural Optimizer for Multiview Human Mesh Recovery
Yuto Matsubara and Ko Nishino (Kyoto University)
Occlusion-aware, view-flexible multiview human shape and pose recovery as learned optimization.
vision.ist.i.kyoto-u.ac.jp/research/hea...
by Ko Nishino
Correspondences of the Third Kind: Camera Pose Estimation from Object Reflection (ECCV24 Oral)
Correspondences in the reflections let us disambiguate camera poses. No need for an overlapping background; camera pose and 3D recovered just from the shiny object surface.
vision.ist.i.kyoto-u.ac.jp/research/3rd...