Robin Hesse
@robinhesse.bsky.social
55 followers 120 following 3 posts
PhD student in explainable AI for computer vision @visinf.bsky.social @tuda.bsky.social - Prev. intern at AWS and @maxplanck.de
Reposted by Robin Hesse
swetamahajan.bsky.social
🚨 Call for Questions! 🚨

We invite the community and stakeholders to submit questions, which will be discussed with our experts at the workshop! 🎤💡

👉 Submit your questions: forms.gle/8cYb4Ce3dGHi...

Workshop: excv-workshop.github.io

@iccv.bsky.social
#ICCV2025 #eXCV
Reposted by Robin Hesse
visinf.bsky.social
Some impressions from our VISINF summer retreat at Lizumer Hütte in the Tyrolean Alps — including a hike up Geier Mountain and new research ideas at 2,857 m! 🇦🇹🏔️
Reposted by Robin Hesse
christophreich.bsky.social
Check out our blog post about SceneDINO 🦖
For more details, check out our project page, 🤗 demo, and the #ICCV2025 paper 🚀

🌍Project page: visinf.github.io/scenedino/
🤗Demo: visinf.github.io/scenedino/
📄Paper: arxiv.org/abs/2507.06230
@jev-aleks.bsky.social
Reposted by Robin Hesse
swetamahajan.bsky.social
🚨Deadline Extension Alert!

Our Non-proceedings track for the eXCV workshop at ICCV is open until August 15th.

Our Nectar track accepts published papers as-is.

More info at: excv-workshop.github.io

@iccv.bsky.social #ICCV2025
Reposted by Robin Hesse
swetamahajan.bsky.social
Introducing the speakers for the eXCV workshop at ICCV in Hawaii. Get ready for many stimulating and insightful talks and discussions.

Our Non-proceedings track is still open!

Paper submission deadline: July 18, 2025

More info at: excv-workshop.github.io

@iccv.bsky.social #ICCV2025
robinhesse.bsky.social
Got a strong XAI paper rejected from ICCV? Submit it to our ICCV eXCV Workshop today—we welcome high-quality work!
🗓️ Submissions open until June 26 AoE.
📄 Got accepted to ICCV? Congrats! Consider our non-proceedings track.
#ICCV2025 @iccv.bsky.social
Reposted by Robin Hesse
visinf.bsky.social
We had a great time at #CVPR2025 in Nashville!
Reposted by Robin Hesse
qbouniot.bsky.social
I will be presenting our paper on measuring the non-linearity of deep neural networks @cvprconference.bsky.social!

🔗 Project page: qbouniot.github.io/affscore_web...

Come join me on Sunday, June 15th, from 10:30 to 12:30, ExHall D, Poster #402. #CVPR2025
robinhesse.bsky.social
Submissions for the proceedings track (regular+position papers) of our second workshop on explainable computer vision at @iccv.bsky.social in Hawaii are open until June 20, 2025.
sukrutrao.bsky.social
Join us in taking stock of the state of explainability in computer vision at our Workshop on Explainable Computer Vision: Quo Vadis? at #ICCV2025!

@iccv.bsky.social
Call for papers at the eXCV workshop at ICCV 2025.
Reposted by Robin Hesse
visinf.bsky.social
We are presenting 3 papers at #CVPR2025!
robinhesse.bsky.social
I'm looking forward to giving a talk at the MIV Workshop tomorrow at #CVPR2025!

We show how to improve the interpretability of a CNN by disentangling a polysemantic channel into multiple monosemantic ones, without changing the function of the CNN.
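To make the "without changing the function" part concrete, here is a minimal PyTorch sketch; it is only an illustration under simplifying assumptions, not the paper's actual method. It splits one conv channel's filter into two filters that sum to the original and duplicates the next layer's corresponding input weights. This is exact only when no activation function sits between the two layers; handling the intermediate non-linearity is what the paper addresses.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical toy model: two stacked conv layers. We split channel 0
# of conv1 into two channels while preserving the overall function.
conv1 = nn.Conv2d(3, 4, kernel_size=3, padding=1)
conv2 = nn.Conv2d(4, 8, kernel_size=3, padding=1)

conv1_split = nn.Conv2d(3, 5, kernel_size=3, padding=1)
conv2_split = nn.Conv2d(5, 8, kernel_size=3, padding=1)

with torch.no_grad():
    w, b = conv1.weight, conv1.bias              # (4, 3, 3, 3), (4,)
    mask = torch.rand_like(w[0]) < 0.5           # arbitrary split of the filter
    w_a, w_b = w[0] * mask, w[0] * ~mask         # w_a + w_b == w[0]
    conv1_split.weight.copy_(torch.cat([w_a[None], w_b[None], w[1:]]))
    conv1_split.bias.copy_(torch.cat([b[:1], torch.zeros(1), b[1:]]))

    # The next layer reads the sum of the two new channels, so its
    # weights for the old channel are duplicated.
    v = conv2.weight                             # (8, 4, 3, 3)
    conv2_split.weight.copy_(torch.cat([v[:, :1], v[:, :1], v[:, 1:]], dim=1))
    conv2_split.bias.copy_(conv2.bias)

x = torch.randn(1, 3, 16, 16)
# Exact only without a non-linearity between the layers.
print(torch.allclose(conv2(conv1(x)), conv2_split(conv1_split(x)), atol=1e-5))
```

With a ReLU between the layers, relu(y_a) + relu(y_b) generally differs from relu(y_a + y_b), which is exactly why a naive weight split is not enough; see the paper for how the function is preserved in that case.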
visinf.bsky.social
Disentangling Polysemantic Channels in Convolutional Neural Networks

by @robinhesse.bsky.social, Jonas Fischer, @simoneschaub.bsky.social, and @stefanroth.bsky.social

Paper: arxiv.org/abs/2504.12939

Talk: Thursday, 11:40 AM, Grand Ballroom C1
Poster: Thursday, 12:30 PM, ExHall D, Posters 31-60
Reposted by Robin Hesse
sukrutrao.bsky.social
We are thrilled to welcome an incredible lineup of invited speakers to the 4th Explainable AI for Computer Vision (XAI4CV) Workshop, held as part of #CVPR2025, which kicks off next week and runs from Wednesday, June 11th to Sunday, June 15th in Nashville, TN!
Invited Speakers for the XAI4CV workshop at CVPR 2025: Mihaela van der Schaar, Tsui-Wei (Lily) Weng, Klaus-Robert Müller, Junfeng He, Chinasa T. Okolo
Reposted by Robin Hesse
visinf.bsky.social
📢 #CVPR2025 Highlight: Scene-Centric Unsupervised Panoptic Segmentation 🔥

We present CUPS, the first unsupervised panoptic segmentation method trained directly on scene-centric imagery.
Using self-supervised features, depth, and motion, we achieve state-of-the-art results!

🌎 visinf.github.io/cups
Reposted by Robin Hesse
visinf.bsky.social
Want to learn about how model design choices affect the attribution quality of vision models? Visit our #NeurIPS2024 poster on Friday afternoon (East Exhibition Hall A-C #2910)!

Paper: arxiv.org/abs/2407.11910
Code: github.com/visinf/idsds
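For readers unfamiliar with how attribution quality can be measured, below is a generic single-deletion faithfulness check in PyTorch — a hedged sketch, not the exact protocol from the paper or the idsds repository. It occludes one input patch at a time and correlates the model's output drop with the attribution mass assigned to that patch; `model`, `x`, and `attribution` are hypothetical placeholders.

```python
import torch

# Hypothetical helper (a generic single-deletion sketch, not the
# paper's protocol): occlude one input patch at a time and correlate
# the model's output drop with the attribution mass on that patch.
def single_deletion_correlation(model, x, attribution, patch=16):
    # x: (1, C, H, W) input image; attribution: (H, W) heatmap
    model.eval()
    with torch.no_grad():
        logits = model(x)
        cls = logits.argmax(dim=1).item()          # predicted class
        base = logits[0, cls].item()
        drops, masses = [], []
        _, _, H, W = x.shape
        for i in range(0, H, patch):
            for j in range(0, W, patch):
                x_occ = x.clone()
                x_occ[:, :, i:i+patch, j:j+patch] = 0.0   # delete one patch
                drops.append(base - model(x_occ)[0, cls].item())
                masses.append(attribution[i:i+patch, j:j+patch].sum().item())
    # Higher correlation = attributions better predict model behavior.
    stacked = torch.stack([torch.tensor(drops), torch.tensor(masses)])
    return torch.corrcoef(stacked)[0, 1].item()
```

Scoring different architectures with a check like this is one way design choices can be compared for attribution quality; see the paper and code above for the actual evaluation protocol.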