Ignacy Stepka
@ignacyy.bsky.social
PhD student @ CMU MLD | Robustness, interpretability, time-series | https://ignacystepka.com
This week I'm presenting some of my work at #KDD2025 in Toronto 🇨🇦
Let’s connect if you’re interested in privacy/gradient inversion attacks in federated learning, counterfactual explanations, or fairness and XAI!
Here’s where you can find me:
August 3, 2025 at 9:52 PM
📣 New paper at #KDD2025 on robust counterfactual explanations!
Imagine an AI tells you "Increase income by $200 to get a loan". You do it, but when you reapply, the model has been updated and rejects you anyway. We solve this issue by making CFEs robust to model changes! 1/7🧵
May 12, 2025 at 12:47 PM