Angelina Wang
@angelinawang.bsky.social
1.9K followers 350 following 18 posts
Asst Prof at Cornell Info Sci and Cornell Tech. Responsible AI https://angelina-wang.github.io/
Pinned
angelinawang.bsky.social
Have you ever felt that AI fairness was too strict, enforcing fairness when it didn’t seem necessary? How about too narrow, missing a wide range of important harms? We argue that the way to address both of these critiques is to discriminate more 🧵
angelinawang.bsky.social
Grateful to win Best Paper at ACL for our work on Fairness through Difference Awareness with my amazing collaborators!! Check out the paper for why we think fairness has gone both too far and, at the same time, not far enough aclanthology.org/2025.acl-lon...
Reposted by Angelina Wang
jacyanthis.bsky.social
@angelinawang.bsky.social presents the "Fairness through Difference Awareness" benchmark. Fairness tests require no discrimination...

but the law supports many forms of discrimination! E.g., synagogues should hire Jewish rabbis. LLMs often get this wrong aclanthology.org/2025.acl-lon... #ACL2025NLP
Angelina Wang presents the benchmark with Jewish synagogue hiring as an example.
Reposted by Angelina Wang
rajiinio.bsky.social
Was beyond disappointed to see this in the AI Action Plan. Messing with the NIST RMF (which many private & public institutions currently rely on) feels like a cheap shot
Reposted by Angelina Wang
aolteanu.bsky.social
We have to talk about rigor in AI work and what it should entail. The reality is that impoverished notions of rigor not only lead to one-off undesirable outcomes but can have a deeply formative impact on the scientific integrity and quality of both AI research and practice 1/
Screenshot of the first page of a paper preprint titled "Rigor in AI: Doing Rigorous AI Work Requires a Broader, Responsible AI-Informed Conception of Rigor" by Olteanu et al. Paper abstract: "In AI research and practice, rigor remains largely understood in terms of methodological rigor -- such as whether mathematical, statistical, or computational methods are correctly applied. We argue that this narrow conception of rigor has contributed to the concerns raised by the responsible AI community, including overblown claims about AI capabilities. Our position is that a broader conception of what rigorous AI research and practice should entail is needed. We believe such a conception -- in addition to a more expansive understanding of (1) methodological rigor -- should include aspects related to (2) what background knowledge informs what to work on (epistemic rigor); (3) how disciplinary, community, or personal norms, standards, or beliefs influence the work (normative rigor); (4) how clearly articulated the theoretical constructs under use are (conceptual rigor); (5) what is reported and how (reporting rigor); and (6) how well-supported the inferences from existing evidence are (interpretative rigor). In doing so, we also aim to provide useful language and a framework for much-needed dialogue about the AI community's work by researchers, policymakers, journalists, and other stakeholders."
Reposted by Angelina Wang
hannawallach.bsky.social
Alright, people, let's be honest: GenAI systems are everywhere, and figuring out whether they're any good is a total mess. Should we use them? Where? How? Do they need a total overhaul?

(1/6)
angelinawang.bsky.social
I’ll be at both FAccT in Athens and ACL in Vienna this summer presenting these works, come say hi :)
angelinawang.bsky.social
2. 𝗿𝗮𝗰𝗶𝘀𝗺 ≠ 𝘀𝗲𝘅𝗶𝘀𝗺 ≠ 𝗮𝗯𝗹𝗲𝗶𝘀𝗺 ≠ … At FAccT 2025: Different oppressions manifest differently, and that matters for AI. Ex: neighborhoods segregate by race, but rarely by sex, shaping the harms we should target. arxiv.org/abs/2505.04038
Screenshot of paper title and author: "Identities are not Interchangeable: The Problem of Overgeneralization in Fair Machine Learning" by Angelina Wang
angelinawang.bsky.social
Instead, we should permit differentiating based on the context. Ex: synagogues in America are legally allowed to discriminate by religion when hiring rabbis. Work with Michelle Phan, Daniel E. Ho, @sanmikoyejo.bsky.social arxiv.org/abs/2502.01926
angelinawang.bsky.social
1. 𝗳𝗮𝗶𝗿𝗻𝗲𝘀𝘀 ≠ 𝘁𝗿𝗲𝗮𝘁𝗶𝗻𝗴 𝗴𝗿𝗼𝘂𝗽𝘀 𝘁𝗵𝗲 𝘀𝗮𝗺𝗲. At ACL 2025 Main: We diagnose issues like Google Gemini’s racially diverse Nazis as the result of equating fairness with racial color-blindness, which erases important group differences.
Screenshot of paper title and author list: "Fairness through Difference Awareness: Measuring Desired Group Discrimination in LLMs" by Angelina Wang, Michelle Phan, Daniel E. Ho, Sanmi Koyejo
angelinawang.bsky.social
In the pursuit of convenient definitions and equations of fairness that scale, we have abstracted away too much social context, enforcing equality between any and all groups. In two new works, we push back against two pervasive and pernicious assumptions:
angelinawang.bsky.social
Have you ever felt that AI fairness was too strict, enforcing fairness when it didn’t seem necessary? How about too narrow, missing a wide range of important harms? We argue that the way to address both of these critiques is to discriminate more 🧵
Reposted by Angelina Wang
emmapierson.bsky.social
The US government recently flagged my scientific grant in its "woke DEI database". Many people have asked me what I will do.

My answer today in Nature.

We will not be cowed. We will keep using AI to build a fairer, healthier world.

www.nature.com/articles/d41...
My ‘woke DEI’ grant has been flagged for scrutiny. Where do I go from here?
My work in making artificial intelligence fair has been noticed by US officials intent on ending ‘class warfare propaganda’.
www.nature.com
angelinawang.bsky.social
If you work in ML fairness, perhaps you tend to get asked similar sets of questions by ML-focused folks, such as what the best definition or equation for fairness is. For those interested, please read, and for those often asked these questions, feel free to pass on the site!
angelinawang.bsky.social
I've recently put together a "Fairness FAQ": tinyurl.com/fairness-faq. If you work in non-fairness ML and you've heard about fairness, perhaps you've wondered things like what the best definitions of fairness are, and whether we can train algorithms that optimize for it.
Reposted by Angelina Wang
nkgarg.bsky.social
*Please repost* @sjgreenwood.bsky.social and I just launched a new personalized feed (*please pin*) that we hope will become a "must use" for #academicsky. The feed shows posts about papers filtered by *your* follower network. It's become my default Bluesky experience bsky.app/profile/pape...
Reposted by Angelina Wang
koloskova.bsky.social
I am excited to announce that I will join the University of Zurich as an assistant professor in August this year! I am looking for PhD students and postdocs starting from the fall.

My research interests include optimization, federated learning, machine learning, privacy, and unlearning.
Reposted by Angelina Wang
mollyjongfast.bsky.social
Cutting $880 billion from Medicaid is going to have a lot of devastating consequences for a lot of people
angelinawang.bsky.social
Yes these are good points, and thanks for the pointer! But the trajectory does seem to be towards LLMs replacing human participants in certain cases. The presence of these companies, for instance, signals real-world use to me: www.syntheticusers.com, synthetic-humans.ai
Reposted by Angelina Wang
leahgreenberg.bsky.social
the richest man in the world has decided that your kids don't deserve special education programs
A screenshot of an email notifying Spotsylvania parents that funding for a program for youth with disabilities has been canceled
angelinawang.bsky.social
Our results differ from work that affirmatively shows LLMs can simulate human participants. We test if LLMs can match the distribution of human responses — not just the mean — and use more realistic free responses instead of multiple choice. The details matter!
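For readers curious what "matching the distribution, not just the mean" could look like in practice, here is a minimal, hypothetical sketch (not the paper's code; the 1-D scoring of free responses and all names are illustrative assumptions):

```python
# Minimal sketch (illustrative only): compare simulated vs. human responses on
# their full distribution, not just the mean. Assumes free-text responses have
# already been mapped to 1-D scores (e.g., a sentiment or agreement score).
import numpy as np
from scipy import stats

def compare_mean_and_distribution(human_scores, llm_scores):
    """Return the mean gap and a two-sample KS statistic.

    A small mean gap with a large KS statistic indicates the LLM matches the
    average human response but not the spread of responses.
    """
    human = np.asarray(human_scores, dtype=float)
    llm = np.asarray(llm_scores, dtype=float)
    mean_gap = abs(human.mean() - llm.mean())
    ks_stat, p_value = stats.ks_2samp(human, llm)
    return {"mean_gap": mean_gap, "ks_stat": ks_stat, "p_value": p_value}

# Toy example of the failure mode described above: the LLM collapses onto the
# average respondent, so means agree while the distributions clearly do not.
rng = np.random.default_rng(0)
human = rng.normal(loc=0.0, scale=1.0, size=500)   # diverse human responses
llm = rng.normal(loc=0.0, scale=0.2, size=500)     # same mean, little variance
print(compare_mean_and_distribution(human, llm))
```

In this toy setup the mean gap is near zero while the KS statistic is large, which is exactly the gap a mean-only evaluation would miss.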
angelinawang.bsky.social
Training data phrases like “Black women” are more often used in text *about* a group than in text *from* that group, so outputs to LLM prompts like “You are a Black woman” resemble what out-group members think the group is like more than what in-group members are actually like.
angelinawang.bsky.social
Our new piece in Nature Machine Intelligence: LLMs are replacing human participants, but can they simulate diverse respondents? Surveys use representative sampling for a reason, and our work shows how LLM training prevents accurate simulation of different human identities.
Reposted by Angelina Wang
kjfeng.me
📢📢 Introducing the 1st workshop on Sociotechnical AI Governance at CHI’25 (STAIG@CHI'25)! Join us to build a community to tackle AI governance through a sociotechnical lens and drive actionable strategies.

🌐 Website: chi-staig.github.io
🗓️ Submit your work by: Feb 17, 2025
A poster advertising the first workshop on sociotechnical AI governance with a description of the workshop's core themes and faces of the organizers.