Ignacy Stepka
@ignacyy.bsky.social
PhD student @ CMU MLD | Robustness, interpretability, time-series | https://ignacystepka.com
There are a few with good vibes and (somewhat) specialty coffee. Personally, I like KLVN (near Bakery Square), Arriviste (Shadyside), and Redhawk (Oakland). They're not super fancy, but way better than the well-known chains!
October 23, 2025 at 6:55 PM
📅 Tuesday 5:45 pm - 8:00 pm in Exhibit Hall poster no. 437

My colleague Łukasz Sztukiewicz will present our joint work (with @inverse-hessian.bsky.social) on the relationship between saliency maps and fairness as part of the Undergraduate and Master’s Consortium.

📄 Paper: arxiv.org/abs/2503.00234
Investigating the Relationship Between Debiasing and Artifact Removal using Saliency Maps
August 3, 2025 at 9:52 PM
📅 Monday 8:00 am - 12:00 pm in Room 700

Presenting our work on mitigating persistent client dropout in decentralized federated learning as part of the FedKDD workshop.

🌐 Project website: ignacystepka.com/projects/fed...
📄 Paper: openreview.net/pdf/576de662...
August 3, 2025 at 9:52 PM
📅 Tuesday 5:30 pm - 8:00 pm (poster no. 141) and Friday 8:55 - 9:15 (Room 801 A, talk)

I’ll be giving a talk and presenting a poster on robust counterfactual explanations.

🌐 Project website: ignacystepka.com/projects/bet...
📄 Paper: arxiv.org/abs/2408.04842
Counterfactual Explanations with Probabilistic Guarantees on their Robustness to Model Change
August 3, 2025 at 9:52 PM
Explore more:

📄 paper: arxiv.org/abs/2408.04842

👨‍💻 code: github.com/istepka/beta...

🌐 project page: ignacystepka.com/projects/bet...

👏 Big thanks to my co-authors Jerzy Stefanowski and Mateusz Lango!

#KDD2025 #TrustworthyAI #XAI 7/7🧵
May 12, 2025 at 12:51 PM
📊 Results: Across 6 datasets, BetaRCE consistently achieved target robustness levels while preserving explanation quality and maintaining a competitive robustness-cost trade-off. 6/7🧵
May 12, 2025 at 12:49 PM
You control both the confidence level (α) and the robustness threshold (δ), giving statistical guarantees that your explanation will survive model changes! For formal proofs, optimal SAM sampling strategies, and the full theoretical foundation, check out our paper! 5/7🧵
May 12, 2025 at 12:49 PM
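For intuition, here's a minimal sketch of what an (α, δ) check like this can look like, assuming a uniform Beta(1, 1) prior over the validity probability. The function name and prior choice are illustrative, not BetaRCE's exact implementation:

```python
# Hedged sketch: certify robustness from n sampled model changes,
# of which k left the counterfactual valid. Assumes a uniform
# Beta(1, 1) prior; the paper's estimator may differ in details.
from scipy.stats import beta

def robustness_certified(k: int, n: int, alpha: float, delta: float) -> bool:
    """With confidence alpha, is the probability that the explanation
    survives a model change at least delta?"""
    a, b = 1 + k, 1 + (n - k)                # Beta posterior after k/n successes
    lower_bound = beta.ppf(1 - alpha, a, b)  # lower credible bound at level alpha
    return lower_bound >= delta

# Example: 48 of 50 sampled model changes kept the explanation valid.
print(robustness_certified(k=48, n=50, alpha=0.9, delta=0.85))
```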
⚙️ Under the hood: BetaRCE explores a "Space of Admissible Models" (SAM) - representing expected/foreseeable changes to your model. Using Bayesian statistics, we efficiently estimate the probability that explanations remain valid across these changes. 4/7🧵
May 12, 2025 at 12:48 PM
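To make SAM concrete, here's a hedged illustration (my own stand-in, not the paper's API): treat retraining on bootstrap resamples as draws of foreseeable model changes, and count how often the counterfactual x_cf keeps its desired class.

```python
# Illustrative only: bootstrap-retrained forests stand in for the
# Space of Admissible Models; the paper defines SAM more generally.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sample_admissible_model(X, y, seed):
    """Draw one 'admissible' model: retrain on a bootstrap resample."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(X), size=len(X))
    return RandomForestClassifier(random_state=seed).fit(X[idx], y[idx])

def count_valid(x_cf, desired_class, X, y, n_models=50):
    """Count sampled models that still assign x_cf the desired class."""
    k = 0
    for seed in range(n_models):
        model = sample_admissible_model(X, y, seed)
        k += int(model.predict(x_cf.reshape(1, -1))[0] == desired_class)
    return k  # plug (k, n_models) into the Beta bound sketched above
```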
✅ Our solution: BetaRCE offers probabilistic guarantees of robustness to model change. It works with ANY model class, is post-hoc, and can enhance your current counterfactual methods. Plus, it lets you control the robustness-cost trade-off. 3/7🧵
May 12, 2025 at 12:48 PM
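One way to picture the post-hoc part (the generate call below is a placeholder for any existing counterfactual method, not BetaRCE's actual interface): produce candidates with your current explainer, then keep the first one that passes the probabilistic robustness check.

```python
# Placeholder interface: base_explainer.generate stands for any base
# counterfactual method; certify is an (alpha, delta) check as above.
def robust_counterfactual(x, base_explainer, certify, n_candidates=20):
    for x_cf in base_explainer.generate(x, n=n_candidates):
        if certify(x_cf):
            return x_cf  # first candidate meeting the robustness target
    return None  # no candidate was certifiably robust
```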
❌ This happens constantly in real-world AI systems. Current explanation methods don't address this well - they're limited to specific models, require extensive tuning, or lack guarantees about explanation robustness. 2/7🧵
May 12, 2025 at 12:48 PM