Aparna Balagopalan
@aparnabee.bsky.social
80 followers 64 following 1 post
PhD student @MIT. Previously: @Uoft/@VectorInst, @winterlightlabs, and @IITGuwahati
aparnabee.bsky.social
Excited to share our work on the impact of reporting delays on disparity audits in healthcare 😄 Project started at the Stanford RegLab Summer Institute last year, and continued with an amazing group of collaborators! @jennahgosciak.bsky.social will present this at FAccT -- do drop by if attending!
jennahgosciak.bsky.social
I am presenting a new 📝 “Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments” at @facct.bsky.social on Thursday, with @aparnabee.bsky.social, Derek Ouyang, @allisonkoe.bsky.social, @marzyehghassemi.bsky.social, and Dan Ho. 🔗: arxiv.org/abs/2506.13735
(1/n)
"Bias Delayed is Bias Denied? Assessing the Effect of Reporting Delays on Disparity Assessments"

Conducting disparity assessments at regular time intervals is critical for surfacing potential biases in decision-making and improving outcomes across demographic groups. Because disparity assessments fundamentally depend on demographic information, their efficacy is limited by the availability and consistency of demographic identifiers. While prior work has considered the impact of missing data on fairness, little attention has been paid to the role of delayed demographic data. Delayed data, while eventually observed, might be missing at the critical point of monitoring and action -- and delays may be unequally distributed across groups in ways that distort disparity assessments. We characterize such impacts in healthcare, using electronic health records of over 5M patients across primary care practices in all 50 states. Our contributions are threefold. First, we document the high rate of race and ethnicity reporting delays in a healthcare setting and demonstrate widespread variation in the rates at which demographics are reported across different groups. Second, through a set of retrospective analyses using real data, we find that such delays distort disparity assessments, and hence the conclusions drawn, across a range of consequential healthcare outcomes, particularly at more granular state-level and practice-level assessments. Third, we find that conventional methods for imputing missing race have limited ability to mitigate the effects of reporting delays on the accuracy of timely disparity assessments. Our insights and methods generalize to many domains of algorithmic fairness where delays in the availability of sensitive information may confound audits, thus deserving closer attention within a pipeline-aware machine learning framework.

Figure contrasting a conventional, static approach to conducting disparity assessments with the delay-aware analysis we conduct in this paper.
Our analysis (1) uses comprehensive health data from over 1,000 primary care practices and 5 million patients across the U.S., (2) timestamped information on the reporting of race to measure delay, and (3) retrospective analyses of disparity assessments under varying levels of delay.
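To make the mechanism concrete, here is a minimal, self-contained sketch (not from the paper; all data, group labels, and delay distributions are invented for illustration) of how outcome-correlated reporting delays can distort a disparity estimate computed at audit time, compared with the eventual estimate once all demographic labels have arrived:

```python
import random

random.seed(0)

def make_patient(group, base_rate, delay_if_outcome, delay_otherwise):
    """Toy patient: a group label, a binary outcome, and the number of
    days until the group label is reported (possibly outcome-dependent)."""
    outcome = random.random() < base_rate
    delay = random.randint(0, delay_if_outcome if outcome else delay_otherwise)
    return {"group": group, "outcome": outcome, "delay": delay}

# Hypothetical scenario: group A's labels arrive within 30 days regardless
# of outcome; in group B, patients *with* the outcome report much later.
patients = (
    [make_patient("A", 0.30, 30, 30) for _ in range(5000)]
    + [make_patient("B", 0.40, 180, 30) for _ in range(5000)]
)

def disparity(records):
    """Difference in outcome rates between groups B and A."""
    def rate(g):
        members = [p for p in records if p["group"] == g]
        return sum(p["outcome"] for p in members) / max(1, len(members))
    return rate("B") - rate("A")

# Eventual disparity (all labels reported) vs. what an audit at day 30
# sees, using only records whose group label has arrived by then.
eventual = disparity(patients)
at_day_30 = disparity([p for p in patients if p["delay"] <= 30])
print(f"eventual disparity: {eventual:.3f}, audit at day 30: {at_day_30:.3f}")
```

Because group B's delayed reports are concentrated among patients with the outcome, the day-30 audit undercounts B's outcome rate and can even flip the sign of the estimated disparity — the kind of distortion the retrospective analyses measure on real data.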
Reposted by Aparna Balagopalan
asiabiega.bsky.social
Our new -- open access -- #WWW2025 #TheWebConference2025 paper on algorithmic fairness in ranking, led by @aparnabee.bsky.social :

Until now, our fair ranking notions and interventions implicitly assumed that exposure in the results of any query is uniformly desirable.
What's in a Query: Polarity-Aware Distribution-Based Fair Ranking | Proceedings of the ACM on Web Conference 2025
dl.acm.org
Reposted by Aparna Balagopalan
angelinawang.bsky.social
Our new piece in Nature Machine Intelligence: LLMs are replacing human participants, but can they simulate diverse respondents? Surveys use representative sampling for a reason, and our work shows how LLM training prevents accurate simulation of different human identities.
Reposted by Aparna Balagopalan
kanarinka.bsky.social
Championing a Vision of Safe Cities that Centers Women and Trans-Queer Experiences -- our project featured on @mitdusp.bsky.social -- dusp.mit.edu/news/champio...
“A woman’s place is in a safe city: Designing feminist cities through Nirbhaya Funds” by Radhika Radhakrishnan in association with the MIT Data+Feminism Lab. In the backdrop, a train runs between Mumbai and Kolkata, with protesters holding up placards demanding justice and safety for cis-women, trans, and queer persons in both cities.
Reposted by Aparna Balagopalan
kanarinka.bsky.social
We are excited to launch "A Woman's Place is in a Safe City," a data story on the use of #NirbhayaFunds for digital surveillance in India, in collaboration with @mitdusp.bsky.social Data+Feminism Lab, POV Mumbai @thesafecityapp.bsky.social & 3 anonymised Kolkata-based NGOs. bit.ly/3EvqV3R 🧵Read on: