Sujay Nagaraj
@snagaraj.bsky.social
60 followers 98 following 20 posts
MD/PhD student | University of Toronto | Machine Learning for Health
Reposted by Sujay Nagaraj
scheon.com
Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.

In our latest work with @anniewernerfelt.bsky.social @berkustun.bsky.social @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.
snagaraj.bsky.social
We’ll be at #ICLR2025, Poster Session 1 – #516!
Come chat if you’re interested in learning more!

This is work done with wonderful collaborators: Yang Liu, @fcalmon.bsky.social, and @berkustun.bsky.social.
snagaraj.bsky.social
Our algorithm can improve safety and performance by flagging regretful predictions for abstention or data cleaning.

For example, we demonstrate that, by abstaining from prediction using our algorithm, we can reduce mistakes compared to standard approaches:
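A minimal sketch of how an abstention rule like this could be wired up, assuming per-instance regret scores are already available (the function and threshold below are illustrative, not the paper's exact procedure):

import numpy as np

def selective_error(y_true, y_pred, regret_scores, abstain_frac=0.1):
    # Abstain on the instances with the highest estimated regret,
    # then report the error rate on the predictions we keep.
    n = len(y_true)
    n_abstain = int(abstain_frac * n)
    abstain_idx = np.argsort(regret_scores)[n - n_abstain:]
    keep = np.ones(n, dtype=bool)
    keep[abstain_idx] = False
    return np.mean(y_true[keep] != y_pred[keep])

Comparing this selective error against the plain error rate np.mean(y_true != y_pred) reproduces the kind of comparison described above.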
snagaraj.bsky.social
We develop a method that trains models over plausible clean datasets to anticipate regretful predictions, helping us spot when a model is unreliable at the individual level.
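A rough sketch of this idea under simplifying assumptions (binary labels, a single assumed flip rate, logistic regression as the base model; none of these specifics are from the paper):

import numpy as np
from sklearn.linear_model import LogisticRegression

def regret_scores(X, y_noisy, flip_rate=0.2, n_datasets=20, seed=0):
    # Sample plausible clean datasets by flipping each noisy binary label
    # with the assumed flip rate, train a model on each, and score how often
    # each instance's prediction changes across the resulting models.
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_datasets):
        flips = rng.random(len(y_noisy)) < flip_rate
        y_plausible = np.where(flips, 1 - y_noisy, y_noisy)
        preds.append(LogisticRegression(max_iter=1000).fit(X, y_plausible).predict(X))
    preds = np.array(preds)
    majority = (preds.mean(axis=0) >= 0.5).astype(int)
    # High disagreement across plausible models flags predictions to treat with caution.
    return (preds != majority).mean(axis=0)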
snagaraj.bsky.social
We capture this effect with a simple measure: regret.

Regret is inevitable with label noise, but it can tell us where models silently fail, and how we can guide safer predictions
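One way to write the idea down (my paraphrase of the post, not necessarily the paper's exact definition): if h_{\tilde{D}} is the model trained on the noisy labels and h_{D} is the model that would have been trained on the clean labels, the regret of a prediction for an individual x is

\mathrm{Regret}(x) = \mathbb{1}\left[\, h_{\tilde{D}}(x) \neq h_{D}(x) \,\right]

i.e., the prediction x actually receives differs from the one it would have received without label noise.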
snagaraj.bsky.social
This lottery breaks modern ML:

If we can’t tell which predictions are wrong, we can’t improve models, we can’t debug, and we can’t trust them in high-stakes tasks like healthcare.
snagaraj.bsky.social
We can frame this problem as learning from noisy labels.

Plenty of algorithms have been designed to handle label noise by predicting well on average, but we show how they still fail on specific individuals.
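A toy illustration of the "fails on specific individuals" point, with purely synthetic numbers: two predictors can have the same average accuracy while disagreeing on many individuals, so which people get the mistakes is effectively a lottery.

import numpy as np

rng = np.random.default_rng(0)
y_clean = rng.integers(0, 2, 10_000)

def predictions_from_one_noise_draw(y_clean, flip_rate, rng):
    # Stand-in for a model trained on one draw of noisy labels:
    # right on ~90% of instances, but *which* instances depends on the draw.
    flips = rng.random(len(y_clean)) < flip_rate
    return np.where(flips, 1 - y_clean, y_clean)

pred_a = predictions_from_one_noise_draw(y_clean, 0.10, rng)
pred_b = predictions_from_one_noise_draw(y_clean, 0.10, rng)

print((pred_a == y_clean).mean())  # ~0.90
print((pred_b == y_clean).mean())  # ~0.90, same average performance
print((pred_a != pred_b).mean())   # ~0.18, yet they disagree on many individuals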
snagaraj.bsky.social
Many ML models predict labels that don’t reflect what we care about, e.g.:
– Diagnoses from unreliable tests
– Outcomes from noisy electronic health records

In a new paper w/@berkustun, we study how this subjects individuals to a lottery of mistakes.
Paper: bit.ly/3Y673uZ
🧵👇
snagaraj.bsky.social
🧠 Key takeaway: Label noise isn’t static—especially in time series.

💬 Come chat with me at #ICLR2025 Poster Session 2!

Shoutout to my amazing colleagues behind this work:
@tomhartvigsen.bsky.social
@berkustun.bsky.social
snagaraj.bsky.social
🔬 Real-world demo:
We applied our method to stress detection from smartwatches, where we have noisy self-reported labels vs. clean physiological measures.

📈 Our model tracks the true time-varying label noise—reducing test error over baselines.
snagaraj.bsky.social
We propose methods to learn this function directly from noisy data.

💥 Results:
On 4 real-world time series tasks:

✅ Temporal methods beat static baselines
✅ Our methods better approximate the true noise function
✅ They work when the noise function is unknown!
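A minimal sketch of one way to set this up, assuming a learnable per-time-step flip matrix and a forward-correction-style objective (class and function names here are mine, not the paper's):

import torch
import torch.nn.functional as F

class TemporalNoise(torch.nn.Module):
    # Time-varying flip matrix Q_t for binary labels, one set of logits per time step.
    def __init__(self, seq_len):
        super().__init__()
        self.logits = torch.nn.Parameter(torch.zeros(seq_len, 2, 2))

    def forward(self, t):
        # Row i of Q_t holds P(noisy label = j | true label = i) at time t.
        return torch.softmax(self.logits[t], dim=-1)

def forward_corrected_loss(clean_probs, y_noisy, noise, t):
    # Push predicted clean-label probabilities through Q_t, then score against noisy labels.
    noisy_probs = clean_probs @ noise(t)  # (batch, 2) @ (2, 2)
    return F.nll_loss(torch.log(noisy_probs + 1e-8), y_noisy)

During training, this loss would be minimized jointly over the classifier producing clean_probs and the TemporalNoise parameters, one time step at a time; the paper's actual estimators may differ.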
snagaraj.bsky.social
📌 We formalize this setting:
A temporal label noise function defines how likely each true label is to be flipped—as a function of time.

Using this function, we propose a new time series loss function that is provably robust to label noise.
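In symbols (a standard forward-correction form that matches my reading of the post; the paper's exact loss may differ): with f(x_t) the vector of predicted clean-label probabilities at time t,

Q_t(i, j) = \Pr(\tilde{y}_t = j \mid y_t = i), \qquad \ell_t\big(f(x_t), \tilde{y}_t\big) = -\log\Big(\big[\, Q_t^{\top} f(x_t) \,\big]_{\tilde{y}_t}\Big)

so the model is scored on the noisy-label probabilities implied by Q_t rather than on its clean-label outputs directly.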
snagaraj.bsky.social
🕒 What is temporal label noise?

In many real-world time series (e.g., wearables, EHRs), label quality fluctuates over time
➡️ Participants fatigue
➡️ Clinicians miss more during busy shifts
➡️ Self-reports drift seasonally

Existing methods assume static noise → they fail here
snagaraj.bsky.social
Would be great to be added :)