The full-body model really excels in exo views and is worth using if you can get a good view of the upper body, while the hands-only model works great given a good bounding box from projecting 3D exo keypoints into the egocentric views.
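Concretely, that bbox step is simple: take the 3D keypoints triangulated from the exo views, push them through the ego camera, and pad the min/max of the resulting pixels. A rough numpy sketch (function and argument names are mine, and it assumes an undistorted pinhole ego camera):

```python
import numpy as np

def ego_hand_bbox(
    keypoints_world: np.ndarray,   # (21, 3) hand keypoints triangulated from exo views
    ego_from_world: np.ndarray,    # (4, 4) rigid transform: world frame -> ego camera frame
    intrinsics: np.ndarray,        # (3, 3) ego camera matrix K
    pad: float = 0.15,             # grow the box by 15% so fingertips don't get clipped
) -> tuple[int, int, int, int]:
    """Project 3D keypoints into the ego view, return a padded (x0, y0, x1, y1) box."""
    # Homogenize and move the world points into the ego camera frame.
    pts_h = np.hstack([keypoints_world, np.ones((len(keypoints_world), 1))])
    pts_cam = (ego_from_world @ pts_h.T).T[:, :3]
    # Pinhole projection (assumes all points are in front of the camera, z > 0).
    uv = (intrinsics @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    # Padded min/max box around the projected keypoints.
    x0, y0 = uv.min(axis=0)
    x1, y1 = uv.max(axis=0)
    w, h = x1 - x0, y1 - y0
    return int(x0 - pad * w), int(y0 - pad * h), int(x1 + pad * w), int(y1 + pad * h)
```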
I've made lots of improvements to the calibration code and ended up merging the full-body estimator with the hands-only one. Also FINALLY got the ego view synced and working in the full pipeline.
From 8 -> 5 -> 4 exocentric cameras, all visualized with @rerundotio. I'm dropping the number of cameras used and collecting my own data to make sure I'm not overfitting to open-source datasets.
Still, I'm quite happy with how it's going so far. Currently, I have a reasonable set of datasets to validate against, a performant baseline, and an annotation app to correct inaccurate predictions.
From here, the focus will be more on the egocentric side!
Really happy with how it looks so far, but this is far from ideal:
1. Not even close to real time: this 30-second, 8-view sequence took nearly 5 minutes to process on my 5090 GPU.
2. 8 views is WAY too many and doesn't scale; I'm convinced this can be done with far fewer (2 exo + 1 stereo ego). See the triangulation sketch below for why so few views are enough.
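For intuition on why so few views can work: each calibrated view adds two linear constraints on a 3D point, so two views already pin it down, and standard DLT triangulation solves it in closed form. A minimal sketch (not my pipeline's actual code):

```python
import numpy as np

def triangulate_dlt(projections: list[np.ndarray], pixels: list[np.ndarray]) -> np.ndarray:
    """Triangulate one 3D point from >= 2 views with the Direct Linear Transform.

    projections: per-view 3x4 camera matrices P = K [R | t]
    pixels:      per-view observed (u, v) of the same keypoint
    """
    rows = []
    for P, (u, v) in zip(projections, pixels):
        # Each view contributes two linear equations on the homogeneous point X:
        #   u * (P[2] @ X) - (P[0] @ X) = 0
        #   v * (P[2] @ X) - (P[1] @ X) = 0
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # Least-squares solution: the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize
```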
At the end of it all, I have a pipeline where you input synchronized videos and get out fully tracked per-view 2D keypoints, bounding boxes, 3D keypoints, and MANO joint angles + hand shape (sketched as a data structure below)!
I want to emphasize that these are not the ground-truth values provided by the wonderful HOCap dataset, but rather the output of my own pipeline, written from the ground up!
For context, it consists of 4 parts:
1. Exo/ego camera estimation
2. Hand shape calibration
3. Per-view 2D keypoint estimation
4. Hand pose optimization
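In code, the per-frame output bundle from those four stages looks roughly like this (field names and shapes are my guesses, though the 45 pose parameters and 10 shape betas are MANO's standard parameterization):

```python
from dataclasses import dataclass

import numpy as np

@dataclass
class FrameHandResult:
    """What the pipeline emits per frame, per hand (illustrative, not the real schema)."""
    keypoints_2d: dict[str, np.ndarray]  # view name -> (21, 2) tracked pixel keypoints
    bboxes: dict[str, np.ndarray]        # view name -> (4,) xyxy hand box in that view
    keypoints_3d: np.ndarray             # (21, 3) triangulated world-space keypoints
    mano_pose: np.ndarray                # (45,) axis-angle joint rotations (15 joints x 3)
    mano_shape: np.ndarray               # (10,) shape betas, fixed once by stage 2
```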
It's finally done: I've finished ripping out my full-body pipeline and replacing it with a hands-only version. This was critical to make it work in a lot more scenarios! I've visualized the final predictions with @rerundotio!
The next step involves leveraging Rerun's recent updates, particularly the multi-sink support. Changes are saved directly to a file in .rrd format and are easy to extract, since the underlying representation is PyArrow, which converts cleanly to Pandas, Polars, or DuckDB.
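Reading the saved annotations back out is only a few lines with Rerun's dataframe API (a sketch from my reading of the current API; the "frame_nr" timeline and file name are placeholders):

```python
import rerun as rr

# Load the saved annotation recording and view every entity on the frame timeline.
recording = rr.dataframe.load_recording("annotations.rrd")
view = recording.view(index="frame_nr", contents="/**")

# select() yields PyArrow record batches; read_all() collects them into one Table.
table = view.select().read_all()

df = table.to_pandas()  # or polars.from_arrow(table), or query it straight from DuckDB
print(df.head())
```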
Networks will occasionally make mistakes, so having the ability to correct them manually is crucial. This is a significant step towards robust and powerful hand tracking, which will provide excellent training data for dexterous robot manipulation.
The only input required is a zip file containing two or more multiview MP4 files. I handle everything else automatically. This application works with both egocentric (first-person) and exocentric (third-person) videos.
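The intake step itself is nothing fancy: unpack the zip, collect the MP4s, and refuse anything with fewer than two views. Something like this (illustrative, not the app's actual code):

```python
import zipfile
from pathlib import Path

def extract_views(upload_path: str, workdir: str = "uploads") -> list[Path]:
    """Unpack an uploaded zip and return the multiview MP4s inside it."""
    out = Path(workdir)
    out.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(upload_path) as zf:
        zf.extractall(out)
    videos = sorted(out.rglob("*.mp4"))
    if len(videos) < 2:
        raise ValueError(f"Need at least two synchronized views, got {len(videos)}")
    return videos
```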
The combination of Rerun's callback system and Gradio integration enables a highly customizable and powerful labeling app. It supports multiple views, 2D and 3D, and maintains time synchronization!
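The skeleton of that integration is tiny, since gradio_rerun ships the viewer as a regular Gradio component. A minimal sketch from memory (verify the exact component arguments and callback names against the gradio_rerun docs):

```python
import gradio as gr
from gradio_rerun import Rerun  # pip install gradio_rerun

with gr.Blocks() as demo:
    # The embedded Rerun viewer handles multiple 2D/3D views and
    # timeline synchronization on its own.
    viewer = Rerun(height=700)

    # Point the viewer at a recording when the page loads (path is illustrative).
    demo.load(lambda: ["recordings/session.rrd"], outputs=[viewer])

demo.launch()
```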
If you're not labeling your own data, you're NGMI. I take this seriously, so I finished building the first version of my hand-tracking annotation app using rerun.io and gradio.app.
The complexity of this is really starting to stack up, and I hope in the longer term to have the compute + data to build a fully end-to-end network! x.com/pablovelago...
Every off-the-shelf annotation solution I've tried doesn't provide nearly enough flexibility, so it was a no-brainer to build my own with rerun and gradio.