Hubert Baniecki
@hbaniecki.com
PhD student, University of Warsaw
hbaniecki.com
Explaining similarity in vision-language encoders with weighted Banzhaf interactions
Check out the paper on arXiv: arxiv.org/abs/2508.05430
Code to be released soon
👆4/4
September 25, 2025 at 4:43 PM
Moreover, we derive three evaluation metrics to facilitate future work in this direction. 𝐅𝐈𝐱𝐋𝐈𝐏 achieves state-of-the-art faithfulness across the popular insertion/deletion and pointing game benchmarks (a sketch of the deletion curve follows below this post).
👇3/4
September 25, 2025 at 4:43 PM
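For context on the post above: a minimal, generic sketch of the standard deletion benchmark on a tabular model, not the paper's actual metrics for image–text similarity. The model, data, mean-imputation baseline, and the placeholder attribution used for ranking are all illustrative assumptions.

```python
# Minimal, generic sketch of a deletion curve (not the paper's exact protocol):
# remove the features an explanation ranks as most important first, track how the
# model's score drops, and summarize the curve (lower = more faithful for deletion).
# Model, data, baseline, and the placeholder attribution are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

x = X[0].copy()
baseline = X.mean(axis=0)                # "deleted" features are replaced by the mean
attribution = np.abs(x - baseline)       # placeholder ranking; use a real explainer here
order = np.argsort(-attribution)         # most important feature first

scores = [model.predict_proba([x])[0, 1]]
x_del = x.copy()
for i in order:                          # delete features one at a time
    x_del[i] = baseline[i]
    scores.append(model.predict_proba([x_del])[0, 1])

deletion_score = float(np.mean(scores))  # average prediction over the curve (AUC proxy)
print(f"Deletion score: {deletion_score:.3f}")
```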
We show that explaining vision–language interactions is essential to faithfully interpret models like OpenAI CLIP & Google SigLIP-2. 𝐅𝐈𝐱𝐋𝐈𝐏 is grounded in cooperative game theory; we analyze its intriguing properties in comparison to prior art like Shapley values (a sketch of the underlying interaction index follows below this post).
👇2/4
September 25, 2025 at 4:43 PM
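As a reference point for the post above, here is the textbook form of the (p-)weighted Banzhaf interaction index for a pair of players, next to the Shapley interaction index it is compared against. This is a minimal sketch, not necessarily the paper's exact formulation; in particular, reading the players as image patches and text tokens, with the game v as the image–text similarity, is an assumption based on the posts in this thread.

```latex
% Minimal sketch of the (p-)weighted Banzhaf interaction index for a pair {i, j},
% assuming a game v : 2^N -> R on n = |N| players (here, presumably, image patches
% and text tokens, with v a restricted image-text similarity -- an assumption).
\[
  \Delta_{ij} v(S) \;=\; v(S \cup \{i,j\}) - v(S \cup \{i\}) - v(S \cup \{j\}) + v(S),
  \qquad S \subseteq N \setminus \{i,j\},
\]
\[
  I_p(\{i,j\}) \;=\; \sum_{S \subseteq N \setminus \{i,j\}} p^{|S|}\,(1-p)^{\,n-2-|S|}\, \Delta_{ij} v(S),
\]
% i.e., every other player joins the coalition independently with probability p;
% p = 1/2 recovers the classical Banzhaf interaction index. The Shapley interaction
% index instead uses the weights |S|! (n-|S|-2)! / (n-1)!:
\[
  I_{\mathrm{Sh}}(\{i,j\}) \;=\; \sum_{S \subseteq N \setminus \{i,j\}}
  \frac{|S|!\,(n-|S|-2)!}{(n-1)!}\, \Delta_{ij} v(S).
\]
```

Varying p shifts how much weight small versus large coalitions receive, which is the knob that distinguishes the weighted Banzhaf family from the Shapley weighting.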
🎉 Our paper has been accepted at #NeurIPS2025! @neuripsconf.bsky.social
We introduce faithful interaction explanations of CLIP models (FIxLIP), offering a unique perspective on interpreting image–text similarity predictions.
👇1/4
September 25, 2025 at 4:43 PM
𝗖𝗧𝗘 improves the accuracy and stability of explanation estimation with negligible computational overhead, often achieving on-par error with 2–3× fewer samples, i.e. requiring 2–3× fewer model inferences (⌛ = 💰). A sketch of how such an error-vs-budget comparison can be measured follows below this post.

👇4/5
January 30, 2025 at 12:55 PM
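For context on the 2–3× claim above: a hypothetical sketch of how an error-vs-budget comparison can be measured for a single explanation value. It is not the 𝗖𝗧𝗘 method itself; the dataset, model, and the plain permutation-sampling Shapley estimator are illustrative assumptions.

```python
# Hypothetical sketch (not the CTE method itself) of measuring estimation error vs.
# inference budget: estimate one Shapley value with different budgets and compare the
# error against a high-budget reference. Dataset, model, and the permutation-sampling
# estimator below are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X, y = make_regression(n_samples=300, n_features=8, random_state=0)
model = RandomForestRegressor(n_estimators=30, random_state=0).fit(X, y)
x, feature = X[0], 3                              # explain one feature of one instance

def shapley_estimate(n_permutations: int) -> float:
    """Standard permutation-sampling estimate of one (marginal) Shapley value."""
    contributions = []
    for _ in range(n_permutations):
        order = rng.permutation(X.shape[1])
        pos = int(np.where(order == feature)[0][0])
        bg = X[rng.integers(len(X))]              # one background sample per permutation
        with_f, without_f = bg.copy(), bg.copy()
        with_f[order[:pos]] = x[order[:pos]]      # features preceding `feature` come from x
        without_f[order[:pos]] = x[order[:pos]]
        with_f[feature] = x[feature]              # add the explained feature on top
        contributions.append(model.predict([with_f])[0] - model.predict([without_f])[0])
    return float(np.mean(contributions))

reference = shapley_estimate(2000)                # high-budget "ground truth"
for budget in (50, 100, 200):                     # error at different sampling budgets
    errors = [abs(shapley_estimate(budget) - reference) for _ in range(5)]
    print(f"{budget:>4} permutations: mean abs. error = {np.mean(errors):.3f}")
```

The post's claim is that, with 𝗖𝗧𝗘, the error reached at a given budget matches what a standard estimator needs 2–3× more model inferences to reach.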
𝗖𝗧𝗘 yields more accurate explanations with smaller variance, as benchmarked with 4 popular methods (SHAP, SAGE, PDP, Expected Gradients) across 50 datasets and 2 model classes.

👇3/5
January 30, 2025 at 12:55 PM
🚀 Our paper proposing a new paradigm for more efficient estimation of machine learning explanations has been accepted at #ICLR2025!

This is joint work with Giuseppe Casalicchio, Bernd Bischl & Przemyslaw Biecek, to be presented in Singapore 🇸🇬

👇1/5
January 30, 2025 at 12:55 PM