Berk Ustun
@berkustun.bsky.social
2.6K followers 450 following 41 posts
Assistant Prof at UCSD. I work on safety, interpretability, and fairness in machine learning. www.berkustun.com
Reposted by Berk Ustun
lorisdanto.bsky.social
Who teaches an undergraduate principles of programming languages class? Looking for some inspiration to teach one at UCSD
berkustun.bsky.social
Time for XAI for Code? 🙃
Reposted by Berk Ustun
lawlessopt.bsky.social
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Think credit applicants who can never get a loan approved, or young patients who can never get an organ transplant, no matter how sick they are!
Reposted by Berk Ustun
lawlessopt.bsky.social
Excited to be chatting about our new paper "Understanding Fixed Predictions via Confined Regions" (joint work with @berkustun.bsky.social, Lily Weng, and Madeleine Udell) at #ICML2025!

🕐 Wed 16 Jul 4:30 p.m. PDT — 7 p.m. PDT
📍East Exhibition Hall A-B #E-1104
🔗 arxiv.org/abs/2502.16380
Understanding Fixed Predictions via Confined Regions
Machine learning models can assign fixed predictions that preclude individuals from changing their outcome. Existing approaches to audit fixed predictions do so on a pointwise basis, which requires ac...
arxiv.org
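A minimal sketch of the pointwise auditing the paper contrasts with, assuming a hypothetical linear credit model; the weights, feature names, and feasible action sets below are invented for illustration:

import itertools
import numpy as np

# Hypothetical linear model: approve iff w @ x + b >= 0.
# Features: [income_bracket, n_credit_lines, age_bracket]; age is immutable.
w = np.array([0.8, 0.5, -1.2])
b = -2.0

# Feasible values for the two actionable features.
actionable = [range(4), range(4)]  # income_bracket, n_credit_lines

def has_fixed_prediction(age_bracket):
    """Pointwise audit: enumerate every reachable point; the prediction is
    fixed if no feasible change to the actionable features yields approval."""
    for income, lines in itertools.product(*actionable):
        x = np.array([income, lines, age_bracket])
        if w @ x + b >= 0:
            return False
    return True

for age in range(4):
    print(f"age_bracket={age}: fixed={has_fixed_prediction(age)}")

Per the abstract, the paper's approach instead certifies fixed predictions over confined regions rather than enumerating reachable points one at a time.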
Reposted by Berk Ustun
jessicahullman.bsky.social
Explainable AI has long frustrated me by lacking a clear theory of what an explanation should do. Improve use of a model for what? How? Given a task, what's the max effect an explanation could have? It's complicated because most methods are functions of features & prediction but not the true state being predicted 1/
Reposted by Berk Ustun
mariadearteaga.bsky.social
Having a lot of FOMO not being able to be in person at #FAccT2025 but enjoying the virtual transmission 💻. Tomorrow Jakob will be presenting our paper "Perils of Label Indeterminacy: A Case Study on Prediction of Neurological Recovery After Cardiac Arrest".
screenshot of title and authors (Jakob Schoeffer, Maria De-Arteaga, Jonathan Elmer)
berkustun.bsky.social
Explanations don't help us detect algorithmic discrimination. Even when users are trained. Even when we control their beliefs. Even under ideal conditions... 👇
jskirzynski.bsky.social
Right to explanation laws assume explanations help people detect algorithmic discrimination.

But is there any evidence for that?

In our latest work w/ David Danks @berkustun, we show explanations fail to help people, even under optimal conditions.

PDF shorturl.at/yaRua
berkustun.bsky.social
*wrapfig entered the document*
Reposted by Berk Ustun
p1sh.bsky.social
“Science is a smart, low cost investment. The costs of not investing in it are higher than the risk of doing so… talk to people about science.” - @kevinochsner.bsky.social makes his case to the field #sans2025
berkustun.bsky.social
I tried to be nice but then they said that saying please and thanks costs millions.
Reposted by Berk Ustun
friedler.net
Hey AI folks - stop using SHAP! It won't help you debug [1], won't catch discrimination [2], and makes no sense for feature importance [3].

Plus - as we show - it also won't give recourse.

In a paper at #ICLR we introduce feature responsiveness scores... 1/

arxiv.org/pdf/2410.22598
Left: a feature-highlighting explanation generated by SHAP that shows multiple important features; however, these include features that cannot be changed (e.g., age, number of dependents) and features that, even if they were changed, would not result in a different outcome (e.g., credit utilization).

Right: a feature-highlighting explanation generated by our responsiveness scores showing only features that can be changed and which have the potential to result in a better outcome for the individual (multiple credit lines and monthly income).
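A rough sketch of the contrast, not the paper's algorithm: score each feature by the fraction of feasible single-feature changes that flip a denial, so immutable and non-responsive features score zero. The model, instance, and action sets here are invented:

import numpy as np

feature_names = ["age", "n_dependents", "credit_utilization",
                 "n_credit_lines", "monthly_income"]
x = np.array([24.0, 2.0, 0.9, 1.0, 2500.0])  # a denied applicant

w = np.array([0.0, 0.0, 0.0, 0.9, 0.0015])   # toy linear model
b = -6.0

def approve(z):
    return z @ w + b >= 0

# Feasible single-feature interventions; immutable features get none.
actions = {
    "credit_utilization": [0.5, 0.3, 0.1],
    "n_credit_lines": [2.0, 3.0, 4.0],
    "monthly_income": [3000.0, 4000.0, 5000.0],
}

# Responsiveness-style score: fraction of feasible changes that flip the denial.
for i, name in enumerate(feature_names):
    vals = actions.get(name, [])
    flips = sum(approve(np.where(np.arange(len(x)) == i, v, x)) for v in vals)
    score = flips / len(vals) if vals else 0.0
    print(f"{name:>20}: {score:.2f}")

Ranking features by this score surfaces only changes that can actually improve the outcome (credit lines, income), echoing the right-hand panel above.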
Reposted by Berk Ustun
haileyjoren.bsky.social
When RAG systems hallucinate, is the LLM misusing available information or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work w Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, @cyroid.bsky.social
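A toy rendering of that split as I read the setup; the paper derives the sufficiency label with an LLM-based autorater, whereas here it is just a boolean input:

from enum import Enum

class RAGFailure(Enum):
    NONE = "correct answer"
    INSUFFICIENT_CONTEXT = "retrieval failure: context lacks what's needed"
    CONTEXT_MISUSE = "generation failure: context sufficed, model erred"

def diagnose(answer_correct: bool, context_sufficient: bool) -> RAGFailure:
    # Cross the hallucination outcome with the sufficiency label to
    # disentangle retrieval failures from the LLM misusing its context.
    if answer_correct:
        return RAGFailure.NONE
    if context_sufficient:
        return RAGFailure.CONTEXT_MISUSE
    return RAGFailure.INSUFFICIENT_CONTEXT

print(diagnose(answer_correct=False, context_sufficient=True))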
Reposted by Berk Ustun
scheon.com
Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.

In our latest w @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse
Reposted by Berk Ustun
snagaraj.bsky.social
Many ML models predict labels that don’t reflect what we care about, e.g.:
– Diagnoses from unreliable tests
– Outcomes from noisy electronic health records

In a new paper w/@berkustun, we study how this subjects individuals to a lottery of mistakes.
Paper: bit.ly/3Y673uZ
🧵👇
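A quick simulation of that lottery under invented assumptions (synthetic data, a 20% label-flip rate, a logistic model; not the paper's analysis): which prediction an individual receives can hinge on the noise draw the training labels came from.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic cohort with true labels; observed labels flip with prob. 0.2.
n, d, p_flip, n_draws = 500, 5, 0.2, 50
X = rng.normal(size=(n, d))
y_true = (X @ rng.normal(size=d) > 0).astype(int)

preds = np.zeros((n_draws, n))
for t in range(n_draws):
    y_obs = np.where(rng.random(n) < p_flip, 1 - y_true, y_true)
    preds[t] = LogisticRegression().fit(X, y_obs).predict(X)

# Per-person instability: 0 = same prediction every draw, 0.5 = coin flip.
rate = preds.mean(axis=0)
instability = np.minimum(rate, 1 - rate)
print(f"{(instability > 0.1).mean():.0%} of individuals' outcomes depend on the label draw")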
Reposted by Berk Ustun
berkustun.bsky.social
is this a rhetorical question?
Reposted by Berk Ustun
jlkoepke.bsky.social
🧵 on the CFPB and less discriminatory algorithms.

last week, in its supervisory highlights, the Bureau offered a range of impressive new details on how financial institutions should be searching for less discriminatory algorithms.
Reposted by Berk Ustun
jchi-ucsd.bsky.social

Engaging discussions on the future of #AI in #healthcare at this week's ICHPS, hosted by @amstatnews.bsky.social.

JCHI's @kdpsingh.bsky.social shared insights on the safety & equity of #MachineLearning algorithms and examined bias in large language models.