Matteo Dunnhofer
@mdunnhofer.bsky.social
680 followers 240 following 26 posts
MSCA Postdoctoral Fellow at University of Udine 🇮🇹 and York University 🇨🇦 - interested in computer vision 👁️🤖 https://matteo-dunnhofer.github.io
Pinned
mdunnhofer.bsky.social
Is Tracking really more challenging in First Person Egocentric Vision?

Our new #ICCV2025 paper follows up our IJCV 2023 study (bit.ly/4nVRJw9).

We further investigate the causes of performance drops in object and segmentation tracking under egocentric FPV settings.

🧵 (1/7)
mdunnhofer.bsky.social
Our #ICCV2025 paper will be presented as a Highlight ✨
mdunnhofer.bsky.social
Joint work with @zairamanigrasso.bsky.social and Christian Micheloni

Funded by PRIN 2022 PNRR, MSCA Actions

(7/7)
mdunnhofer.bsky.social
These FPV-specific challenges include:
- Frequent object disappearances
- Continuous camera motion altering object appearance
- Object distractors
- Wide field-of-view distortions near frame edges

(5/7)
mdunnhofer.bsky.social
- Trackers learn viewpoint biases and perform best on the viewpoint used during training.
- FPV tracking presents its own specific challenges.

(4/7)
mdunnhofer.bsky.social
Key takeaways from our study:

- FPV is challenging for state-of-the-art generalist trackers.
- Tracking objects in human-object interaction videos is difficult across both first- and third-person viewpoints.

(3/7)
mdunnhofer.bsky.social
We specifically examined whether these drops are due to FPV itself or to the complexity of human-object interaction scenarios.

To do this, we designed VISTA, a benchmark using synchronized first- and third-person recordings of the same activities.

(2/7)
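The paired-viewpoint idea above can be illustrated with a minimal sketch: score the same tracker separately on synchronized FPV and TPV streams of one activity, so any gap is attributable to the viewpoint. The box format, data, and function names below are illustrative assumptions, not the VISTA API.

```python
# Hypothetical sketch of a paired-viewpoint evaluation. Boxes are (x, y, w, h);
# the toy sequences stand in for real synchronized FPV/TPV annotations.

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, thr=0.5):
    """Fraction of frames where the predicted box overlaps ground truth above thr."""
    return sum(iou(p, g) >= thr for p, g in zip(preds, gts)) / len(gts)

# Toy synchronized sequences: same activity, two viewpoints.
fpv_gt   = [(10, 10, 40, 40), (12, 11, 40, 40), (60, 50, 40, 40)]
fpv_pred = [(10, 10, 40, 40), (30, 30, 40, 40), (0, 0, 40, 40)]   # tracker drifts
tpv_gt   = [(100, 80, 30, 30), (101, 80, 30, 30), (103, 81, 30, 30)]
tpv_pred = [(100, 80, 30, 30), (102, 81, 30, 30), (104, 82, 30, 30)]

gap = success_rate(tpv_pred, tpv_gt) - success_rate(fpv_pred, fpv_gt)
print(f"TPV-vs-FPV success gap on this activity: {gap:.2f}")
```

Because both streams show the same interaction, a consistent gap across many activities points at the viewpoint itself rather than at the scenario's difficulty.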
mdunnhofer.bsky.social
In the past weeks we have been at NETI at @utaustin.bsky.social, @vssmtg.bsky.social 2025, and the CVR-CIAN conference at @yorkuniversity.bsky.social, discussing early findings on modeling object motion in the macaque visual system with deep neural networks. Details to appear soon! @kohitij.bsky.social
mdunnhofer.bsky.social
The teams behind the workshops on Computer Vision for Winter Sports at @wacvconference.bsky.social and on Computer Vision in Sports at @cvprconference.bsky.social have joined forces to organise a special issue of CVIU on computer vision applications in sports.

1/2
mdunnhofer.bsky.social
Honored to be on the list this year!
cvprconference.bsky.social
Behind every great conference is a team of dedicated reviewers. Congratulations to this year’s #CVPR2025 Outstanding Reviewers!

cvpr.thecvf.com/Conferences/...
mdunnhofer.bsky.social
We are now live with the 3rd Workshop on Computer Vision for Winter Sports @wacvconference.bsky.social #WACV2025.

Make sure to attend if you are around!
mdunnhofer.bsky.social
The #SkiTB Visual Tracking Challenge at #WACV2025 is open for submissions!

The goal is to track a skier in a video capturing their full run across multiple cameras, and it is based on our recently released SkiTB dataset.

🎥 youtu.be/Aos5iKrYM5o

1/3
[WACV 2024] Tracking Skiers from the Top to the Bottom - qualitative examples
YouTube video by Matteo Dunnhofer
youtu.be
mdunnhofer.bsky.social
This paper contributes to our projects PRIN 2022 EXTRA EYE and PRIN 2022 PNRR TEAM, funded by the European Union - NextGenerationEU.

6/6
mdunnhofer.bsky.social
This work was led by Moritz Nottebaum (stop by his poster!) at the Machine Learning and Perception Lab of the University of Udine.

5/6
mdunnhofer.bsky.social
LowFormer achieves significantly higher image throughput and lower latency on various hardware platforms, while matching or surpassing the accuracy of current state-of-the-art models across image recognition, object detection, and semantic segmentation.

4/6
mdunnhofer.bsky.social
We used insights from this analysis to enhance the hardware efficiency of backbones at the macro level, and introduced a slimmed-down version of multi-head self-attention to improve efficiency at the micro level.

3/6
mdunnhofer.bsky.social
We empirically found that MAC counts alone do not accurately predict inference speed.

2/6
mdunnhofer.bsky.social
Is attendance open to YorkU researchers (e.g. postdocs)? I would love to learn from your teaching style!
mdunnhofer.bsky.social
Did the same a few weeks ago in Toronto. I think this is the best pizza flavor you can get in Canada 😂
mdunnhofer.bsky.social
The top-performing teams will be invited to present their solution at the 3rd Workshop on Computer Vision for Winter Sports at #WACV2025!

📄 sites.google.com/unitn.it/cv4...

3/3
CV4WS@WACV2025
UPDATE (11/19/24): DEADLINE EXTENDED
sites.google.com
mdunnhofer.bsky.social
The challenge platform is hosted on CodaLab, where you can find all the submission instructions.

The deadline for submission is January 31st, 2025.

🏆 codalab.lisn.upsaclay.fr/competitions...

2/3
CodaLab - Competition
codalab.lisn.upsaclay.fr
mdunnhofer.bsky.social
Would be happy to be added as well :) I am working on visual tracking, currently at YorkU ;)