Erica Chiang
@ericachiang.bsky.social
81 followers · 80 following · 13 posts
CS PhD student at Cornell :) CMU CS ‘23 https://erica-chiang.github.io
ericachiang.bsky.social
CONGRATS this is so exciting!!!
ericachiang.bsky.social
aww thank you!!! you too for your best paper 😌🫶🏼
ericachiang.bsky.social
Ahh thank you! ☺️
ericachiang.bsky.social
I can’t believe I’m saying this: our work received a Best Paper Award at #CHIL2025!! So so excited and grateful 🥰 Looking forward to day 2 of the conference with these awesome people :)
Reposted by Erica Chiang
nkgarg.bsky.social
I wrote about science cuts and my family's immigration story as part of The McClintock Letters organized by @cornellasap.bsky.social. Haven't yet placed it in a Houston-based newspaper but hopefully it's useful here

gargnikhil.com/posts/202506...
Science and immigration cuts · Nikhil Garg
gargnikhil.com
Reposted by Erica Chiang
dmshanmugam.bsky.social
New work 🎉: conformal classifiers return sets of classes for each example, with a probabilistic guarantee the true class is included. But these sets can be too large to be useful.

In our #CVPR2025 paper, we propose a method to make them more compact without sacrificing coverage.
A gif explaining the value of test-time augmentation (TTA) for conformal classification. The video begins with an illustration of TTA reducing the size of the predicted set of classes for a dog image, then explains that this happens because TTA raises the true class's predicted probability, even when that class is initially predicted to be unlikely.
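The prediction sets the post describes can be sketched with standard split conformal classification (a generic baseline, not the paper's compaction method; function and variable names here are illustrative):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal classification: return a set of classes for each
    test example such that the true class is included with probability
    >= 1 - alpha (marginally, over calibration and test draws)."""
    n = len(cal_labels)
    # Nonconformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")
    # A class enters the set when its score falls below the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

Larger sets arise exactly when the model spreads probability thinly across many classes, which is the inefficiency the paper targets.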
ericachiang.bsky.social
I really enjoyed (and learned a LOT from) working on this project with these wonderful co-authors:
@dmshanmugam.bsky.social
Ashley Beecy
Gabriel Sayer
@destrin.bsky.social
@nkgarg.bsky.social
@emmapierson.bsky.social
7/7
ericachiang.bsky.social
Our work underscores the importance of accounting for health disparities; we lay a foundation for doing so with a method to (1) estimate disease severity in the presence of health disparities and (2) identify disparity patterns that can inform public health interventions. 6/
ericachiang.bsky.social
The interpretability and identifiability of our model also allow us to learn fine-grained descriptions of disparities. Fitting our model to heart failure patient data from NewYork-Presbyterian, we identify groups that face each type of health disparity. 5/
ericachiang.bsky.social
We prove that *failing to* account for these disparities biases severity estimates. By jointly accounting for all three, our model more accurately recovers severity. Indeed, accounting for these disparities in real heart failure data does meaningfully shift severity estimates. 4/
ericachiang.bsky.social
We propose an interpretable disease progression model that captures 3 key disparities: certain patient groups may (1) start receiving care at higher disease severity levels, (2) experience faster disease progression, or (3) receive less frequent care conditional on severity. 3/
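As a toy illustration only (a hypothetical parameterization, not the paper's actual model), the three mechanisms can be simulated for a single patient like this:

```python
import numpy as np

def simulate_visits(entry_severity, progression_rate, visit_prob,
                    t_max=10, seed=0):
    """Toy simulation of the three disparity mechanisms:
    (1) entry_severity   -- severity when the patient first receives care,
    (2) progression_rate -- how fast severity grows per time step,
    (3) visit_prob       -- chance of an observed visit per time step
                            (taken as constant here for simplicity).
    Returns (visit times, severities observed at those visits)."""
    rng = np.random.default_rng(seed)
    times = np.arange(t_max, dtype=float)
    severity = entry_severity + progression_rate * times  # linear growth
    observed = rng.random(t_max) < visit_prob             # sparse visits
    return times[observed], severity[observed]
```

A group with higher entry_severity, higher progression_rate, or lower visit_prob generates systematically different observed trajectories, which is exactly the selection pattern that biases naive severity estimates.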
ericachiang.bsky.social
Disease progression models are often used to help healthcare providers diagnose and treat chronic diseases. But these models have historically failed to account for health disparities that bias the data they are trained on. 2/
ericachiang.bsky.social
I’m really excited to share the first paper of my PhD, “Learning Disease Progression Models That Capture Health Disparities” (accepted at #CHIL2025)! ✨ 1/

📄: arxiv.org/abs/2412.16406
Reposted by Erica Chiang
emmapierson.bsky.social
The US government recently flagged my scientific grant in its "woke DEI database". Many people have asked me what I will do.

My answer today in Nature.

We will not be cowed. We will keep using AI to build a fairer, healthier world.

www.nature.com/articles/d41...
My ‘woke DEI’ grant has been flagged for scrutiny. Where do I go from here?
My work in making artificial intelligence fair has been noticed by US officials intent on ending ‘class warfare propaganda’.
www.nature.com
ericachiang.bsky.social
check out the findings from our #dogathon 😍🐶 !!
kennypeng.bsky.social
Our lab had a #dogathon 🐕 yesterday where we analyzed NYC Open Data on dog licenses. We learned a lot of dog facts, which I’ll share in this thread 🧵

1) Geospatial trends: Cavalier King Charles Spaniels are common in Manhattan; the opposite is true for Yorkshire Terriers.
Reposted by Erica Chiang
gsagostini.bsky.social
Migration data lets us study responses to environmental disasters, social change patterns, policy impacts, etc. But public data is too coarse, obscuring these important phenomena!

We build MIGRATE: a dataset of yearly flows between 47 billion pairs of US Census Block Groups. 1/5
Reposted by Erica Chiang
harold.bsky.social
Excited to announce a new preprint from my lab (with @rishi-jha.bsky.social and Vitaly Shmatikov; my first as a first author!) about severe security vulnerabilities in LLM-based multi-agent systems:

“Multi-Agent Systems Execute Arbitrary Malicious Code”

arxiv.org/abs/2503.12188

1/12
A screenshot of the abstract of the paper, detailing our findings that several multi-agent frameworks can be hijacked to enable a complete security breach.
Reposted by Erica Chiang
kennypeng.bsky.social
(1/n) New paper/code! Sparse Autoencoders for Hypothesis Generation

HypotheSAEs generates interpretable features of text data that predict a target variable: What features predict clicks from headlines / party from congressional speech / rating from Yelp review?

arxiv.org/abs/2502.04382
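The core mechanism, a sparse autoencoder whose few active features per example can each be interpreted, can be sketched as a top-k encoder (an illustrative sketch of the general technique; the names and the top-k choice are assumptions, not the HypotheSAEs implementation):

```python
import numpy as np

def topk_sae_encode(X, W_enc, b_enc, k=8):
    """Top-k sparse autoencoder encoding: project embeddings X onto a
    wide feature dictionary, apply ReLU, then keep only the k largest
    activations per example so each text is described by few features."""
    acts = np.maximum(X @ W_enc + b_enc, 0.0)
    # Zero out everything except each row's top-k activations.
    drop = np.argsort(acts, axis=1)[:, :-k]
    np.put_along_axis(acts, drop, 0.0, axis=1)
    return acts
```

Features that fire on a coherent slice of examples and correlate with the target (clicks, party, rating) then become candidate hypotheses to label and test.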
Reposted by Erica Chiang
rajmovva.bsky.social
💡New preprint & Python package: We use sparse autoencoders to generate hypotheses from large text datasets.

Our method, HypotheSAEs, produces interpretable text features that predict a target variable, e.g. features in news headlines that predict engagement. 🧵1/
Reposted by Erica Chiang
sjgreenwood.bsky.social
Please repost to get the word out! @nkgarg.bsky.social and I are excited to present a personalized feed for academics! It shows posts about papers from accounts you’re following bsky.app/profile/pape...
Reposted by Erica Chiang
sjgreenwood.bsky.social
I'm excited to use my first post here to introduce the first paper of my PhD, "User-item fairness tradeoffs in recommendations" (NeurIPS 2024)!

This is joint work with Sudalakshmee Chiniah and my advisor @nkgarg.bsky.social

Description/links below: 1/