http://dimadamen.github.io
HD-EPIC: A Highly-Detailed Egocentric Video Dataset
hd-epic.github.io
arxiv.org/abs/2502.04144
New collected videos
263 annotations/min: recipe, nutrition, actions, sounds, 3D object movement & fixture associations, masks.
26K VQA benchmark to challenge current VLMs
1/N
pics w @simonmcs.bsky.social and Sadaf Alam
Abstract: Jan 23, 2026 AoE
Paper: Jan 28, 2026 AoE
Location: Seoul, South Korea 🇰🇷
icml.cc/Conferences/...
ICML'26 (abs): 71 days.
ICML'26 (paper): 76 days.
ECCV'26: 112 days.
But how is this skill learned, and can we model its progression?
We present CleverBirds, accepted at #NeurIPS2025, a large-scale benchmark for visual knowledge tracing.
📄 arxiv.org/abs/2511.08512
1/5
More than bad reviews, I am mostly frustrated by the unkind messages reviewers give... If you don't agree with how we did something, that's your right, but stating it's unreasonable or ridiculous is not within your rights!
Please help us share this post among students you know with an interest in Machine Learning and Biodiversity! 🤖🪲🌱
By Gianluca Monaci, @weinzaepfelp.bsky.social and myself.
@naverlabseurope.bsky.social
arxiv.org/abs/2507.01667
🧵1/5
Apologies if that's incorrect but this is what I was told.
Form: support.conferences.computer.org/cvpr/help-desk
We match Co-Tracker3 on RoboTAP and outperform it on EgoPoints and RGB-S, using correspondences alone
Code and Models are out
Work led by Rhodri Guerrier w Adam W Harley
3/3
We use *no* temporal knowledge (no windows) - only pairwise matching!
rhodriguerrier.github.io/PointSt3R/
2/3
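Roughly what "pairwise only" means in practice, as a minimal sketch (feature shapes and function names are my assumptions for illustration, not the released code): every target frame is matched against the query frame on its own, so there is no sliding window or temporal state.

```python
# Minimal sketch of tracking-as-pairwise-matching; names/shapes are assumptions,
# not the PointSt3R release.
import torch

def track_point(feats_query, feats_targets, query_xy):
    """feats_query: (C, H, W) descriptors of the query frame.
    feats_targets: list of (C, H, W) descriptors, one per target frame.
    query_xy: (x, y) pixel location of the tracked point in the query frame."""
    C, H, W = feats_query.shape
    x, y = int(query_xy[0]), int(query_xy[1])
    q = feats_query[:, y, x]                      # descriptor of the query point, (C,)
    track = []
    for f_t in feats_targets:                     # each frame handled independently
        sim = torch.einsum("c,chw->hw", q, f_t)   # similarity of q to every target pixel
        idx = int(sim.flatten().argmax())
        track.append((idx % W, idx // W))         # argmax correspondence = predicted point
    return track
```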
PointSt3R: Point Tracking through 3D Grounded Correspondence
arxiv.org/abs/2510.26443
Can point tracking be reformulated purely as pairwise frame correspondence?
We fine-tune MASt3R with dynamic correspondences and a visibility loss, achieving competitive point tracking results
1/3
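As I read the post, the fine-tuning objective combines a correspondence term over ground-truth matches (now including points on moving objects) with a per-point visibility term. Below is a hedged sketch of such a training step; the model interface, batch fields and temperature are placeholders, not the paper's actual code.

```python
# Hypothetical training step: correspondence loss + visibility loss.
import torch
import torch.nn.functional as F

def training_step(model, batch):
    # Placeholder interface: the model returns a descriptor per annotated query point in
    # frame A, a descriptor per candidate point in frame B, and a visibility logit per query.
    desc_a, desc_b, vis_logits = model(
        batch["img_a"], batch["img_b"], batch["pts_a"], batch["pts_b"]
    )
    # Correspondence loss: each query descriptor should be most similar to its true match
    # in frame B, including matches on dynamic (moving) objects.
    sim = torch.einsum("nc,mc->nm", desc_a, desc_b) / 0.07   # temperature is a guess
    corr_loss = F.cross_entropy(sim, batch["match_idx"])
    # Visibility loss: flag query points that are occluded or out of view in frame B,
    # so they can be reported as not visible at tracking time.
    vis_loss = F.binary_cross_entropy_with_logits(vis_logits, batch["visible"].float())
    return corr_loss + vis_loss
```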
R. Guerrier, @adamharley.bsky.social, @dimadamen.bsky.social
Bristol/Meta
rhodriguerrier.github.io/PointSt3R/
Chandan Yeshwanth and Yueh-Cheng Liu have added pano captures for 956 ScanNet++ scenes, fully aligned with the 3D meshes, DSLR, and iPhone data - multiple panos per scene
Check it out:
Docs kaldir.vc.in.tum.de/scannetpp/do...
Code github.com/scannetpp/sc...
@bristoluni.bsky.social to give a #MaVi seminar: From Pixels to 3D Motion
We enjoyed your visit! Thanks for staying on for all the 1-1s with the researchers.
Read the full article: ellis.eu/news/ellis-s...
Kinaema: A recurrent sequence model for memory and pose in motion
arxiv.org/abs/2510.20261
By @mbsariyildiz.bsky.social, @weinzaepfelp.bsky.social, G. Bono, G. Monaci and myself
@naverlabseurope.bsky.social
1/9
A EuroHPC Success Story | Clear Vision for Self-Driving Cars
www.eurohpc-ju.europa.eu/eurohpc-succ...
Congrats to Dr Kevin and first advisor Michael Wray on a career achievement. Coincidentally, Kevin also received the Outstanding Reviewer Award @neuripsconf.bsky.social #NeurIPS2025 on the day of his viva!
#ProudAdvisor
2/2
Great presentation incl. #CVPR2025 and #ICCV2025 papers from Francois's group, sharing insights and future directions
1/2
Her advice: When your research goals are much harder than anticipated, take a step back to see the big picture.
#WomenInELLIS
@naverlabseurope.bsky.social
This is a collaboration with Sorbonne University/ISIR (Nicolas Thome)
You can apply online:
careers.werecruit.io/en/naver-lab...
To catch up on #ICCV2025, the slides for both talks are available:
* Creating a CV Dataset in 2025
* Video Understanding Out of the frame
dimadamen.github.io/talks.html