Gunnar König
@gunnark.bsky.social
500 followers 200 following 12 posts
PostDoc @ Uni Tübingen · explainable AI, causality · gunnarkoenig.com
Pinned
gunnark.bsky.social
In many XAI applications, it is crucial to determine whether features contribute individually or only when combined. However, existing methods fail to reveal cooperation because they entangle individual contributions with those made via interactions and dependencies. We show how to disentangle them!
gunnark.bsky.social
In short: Many XAI papers are based on goals such as "transparency". But what does that mean? We argue that XAI methods should be motivated by concrete goals (e.g., explaining how to change an unfavorable prediction) instead of vague concepts (e.g., interpretability).

Section 3, Misconception 1
arxiv.org
Reposted by Gunnar König
jessicahullman.bsky.social
Looking forward to talking about our work on the value of explanation for decision-making at this workshop
Reposted by Gunnar König
bayesianboy.bsky.social
expressing appreciation for this scientific diagram
Reposted by Gunnar König
ulrikeluxburg.bsky.social
Time to figure out which provable guarantees one can(not) give for XAI! Workshop "Theory of Explainable Machine Learning", Dec 2 in Copenhagen as part of the ELLIS Unconference/EurIPS. Submission deadline: Oct 15.

sites.google.com/view/theory-...
eurips.cc/ellis/
Reposted by Gunnar König
ulrikeluxburg.bsky.social
I am hiring PhD students and/or postdocs to work on the theory of explainable machine learning. Please apply through ELLIS or IMPRS; deadlines are end of October/mid-November. In particular: women, where are you? Our community needs you!!!

imprs.is.mpg.de/application
ellis.eu/news/ellis-p...
gunnark.bsky.social
Not that I know of. But the method is relatively easy to implement. Please reach out if you would like to use it. I'm happy to assist!
gunnark.bsky.social
Sound interesting? Have a look at our paper!

Joint work with Eric Günther and @ulrikeluxburg.bsky.social.
arxiv.org
gunnark.bsky.social
In our recent AISTATS paper, we propose DIP, a novel mathematical decomposition of feature attribution scores that cleanly separates individual feature contributions from the contributions of interactions and dependencies.
gunnark.bsky.social
Dependencies are not only a neglected cooperative force but also complicate the definition and quantification of feature interactions. In particular, the contributions of interactions and dependencies may cancel each other out and must be disentangled to be fully revealed.
gunnark.bsky.social
For example, suppose we predict kidney function (Y) from creatinine (C) and muscle mass (M), and that C reflects Y but also M, which is not linked to Y. Here, M becomes useful once combined with C, as it allows us to subtract irrelevant variation from C. In other words, C&M cooperate via dependence!
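The creatinine/muscle-mass story can be checked with a tiny simulation. This is a hypothetical data-generating process chosen to match the example; the variable names, coefficients, and the `ols_mse` helper are assumptions for illustration, not code from the paper:

```python
import numpy as np

# Hypothetical setup: kidney function Y, muscle mass M independent
# of Y, and creatinine C reflecting both Y and M.
rng = np.random.default_rng(0)
n = 10_000
Y = rng.normal(size=n)
M = rng.normal(size=n)
C = Y + M + 0.1 * rng.normal(size=n)

def ols_mse(X, y):
    """Mean squared error of an ordinary least-squares fit (with intercept)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return float(np.mean((y - X1 @ beta) ** 2))

mse_m = ols_mse(M[:, None], Y)                # M alone: no signal about Y
mse_c = ols_mse(C[:, None], Y)                # C alone: signal blurred by M
mse_cm = ols_mse(np.column_stack([C, M]), Y)  # together: M is subtracted out of C

print(f"MSE with M: {mse_m:.3f}, with C: {mse_c:.3f}, with C and M: {mse_cm:.3f}")
```

M alone predicts nothing, yet adding it to C removes almost all remaining error: the two features cooperate via their dependence, exactly the effect the post describes.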
gunnark.bsky.social
Determining whether variables are relevant due to cooperation is crucial, as variables that cooperate must be considered jointly to understand their relevance. Notably, features cooperate not only through interactions but also through statistical dependencies, which existing methods neglect.
Reposted by Gunnar König
slds-lmu.bsky.social
Feature importance measures can clarify or mislead. PFI, LOCO, and SAGE each answer a different question.
Understand how to pick the right tool and avoid spurious conclusions: mcml.ai/news/2025-03...
@fionaewald.bsky.social @ludwig-bothmann.bsky.social @giuseppe88.bsky.social @gunnark.bsky.social
mcml.ai
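The linked post compares PFI, LOCO, and SAGE. As a quick reminder of the first of these, here is a minimal PFI sketch; the function, the toy model, and all numbers are illustrative assumptions, not code from the post:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Minimal permutation feature importance (PFI): the increase in MSE
    when one feature column is shuffled, which breaks its association
    with the target while preserving its marginal distribution."""
    rng = np.random.default_rng(seed)
    mse = lambda yhat: float(np.mean((y - yhat) ** 2))
    baseline = mse(predict(X))
    imp = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # shuffle feature j only
            scores.append(mse(predict(Xp)))
        imp[j] = np.mean(scores) - baseline
    return imp

# Toy check: the "model" uses only feature 0, so only its score is nonzero.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 2))
y = 3 * X[:, 0]
imp = permutation_importance(lambda X: 3 * X[:, 0], X, y)
```

Note that PFI measures importance for this model's error, which is a different question than LOCO (refitting without the feature) or SAGE (Shapley-based global importance) answer.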
Reposted by Gunnar König
ulrikeluxburg.bsky.social
Finally made it to bluesky as well ...
Reposted by Gunnar König
timvanerven.nl
And the video of Gunnar's talk is up on YouTube in case you missed it: youtu.be/7MrMjabTbuM

@gunnark.bsky.social
gunnark.bsky.social
I recall you had an iPad -- why did you switch?
Reposted by Gunnar König
fedeadolfi.bsky.social
A starter pack of people working on interpretability / explainability of all kinds, using theoretical and/or empirical approaches.

Reply or DM if you want to be added, and help me reach others!

go.bsky.app/DZv6TSS
Reposted by Gunnar König
dziadzio.bsky.social
Here's a fledgling starter pack for the AI community in Tübingen. Let me know if you'd like to be added!

go.bsky.app/NFbVzrA