Michelle L. Ding
@michelleding.bsky.social
91 followers · 58 following · 17 posts
She/Her. CS PhD Researcher @ Brown. Sociotechnical AI Governance. https://michelle-ding.github.io/
Reposted by Michelle L. Ding
valentinapy.bsky.social
💡We kicked off the SoLaR workshop at #COLM2025 with a great opinion talk by @michelleding.bsky.social & Jo Gasior Kavishe (joint work with @victorojewale.bsky.social and @geomblog.bsky.social) on "Testing LLMs in a sandbox isn't responsible. Focusing on community use and needs is."
Reposted by Michelle L. Ding
dippedrusk.com
Have you or a loved one been misgendered by an LLM? How can we evaluate LLMs for misgendering? Do different evaluation methods give consistent results?
Check out our preprint led by the newly minted Dr. @arjunsubgraph.bsky.social, and with Preethi Seshadri, Dietrich Klakow, Kai-Wei Chang, Yizhou Sun
Agree to Disagree? A Meta-Evaluation of LLM Misgendering
Numerous methods have been proposed to measure LLM misgendering, including probability-based evaluations (e.g., automatically with templatic sentences) and generation-based evaluations (e.g., with aut...
arxiv.org
michelleding.bsky.social
Hi #COLM2025! 🇨🇦 I will be presenting a talk on the importance of community-driven LLM evaluations based on an opinion abstract I wrote with Jo Kavishe, @victorojewale.bsky.social and @geomblog.bsky.social tomorrow at 9:30am in 524b for solar-colm.github.io

Hope to see you there!
Third Workshop on Socially Responsible Language Modelling Research (SoLaR) 2025
COLM 2025 in-person Workshop, October 10th at the Palais des Congrès in Montreal, Canada
solar-colm.github.io
Reposted by Michelle L. Ding
reniebird.bsky.social
I'll be presenting a position paper about consumer protection and AI in the US at ICML. I have a surprisingly optimistic take: our legal structures are stronger than I anticipated when I went to work on this issue in Congress.

Is everything broken rn? Yes. Will it stay broken? That's on us.
A poster for the paper "Position: Strong Consumer Protection is an Inalienable Defense for AI Safety in the United States"
Reposted by Michelle L. Ding
techpolicypress.bsky.social
With their 'Sovereignty as a Service' offerings, tech companies are encouraging the illusion of a race for sovereign control of AI while being the true powers behind the scenes, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
'Sovereignty' Myth-Making in the AI Race | TechPolicy.Press
Tech companies stand to gain by encouraging the illusion of a race for 'sovereign' AI, write Rui-Jie Yew, Kate Elizabeth Creasey, Suresh Venkatasubramanian.
www.techpolicy.press
Reposted by Michelle L. Ding
victorojewale.bsky.social
Excited to present "Towards AI Accountability Infrastructure: Gaps and Opportunities in AI Audit Tooling" at #CHI2025 tomorrow (today)!

🗓 Tue, 29 Apr | 9:48–10:00 AM JST (Mon, 28 Apr | 8:48–9:00 PM ET)
📍 G401 (Pacifico North 4F)

📄 dl.acm.org/doi/10.1145/...
michelleding.bsky.social
all welcomes are good welcomes even with a 4 year delay 👍
michelleding.bsky.social
Also...super grateful & happy to be continuing my research journey at Brown in a CS PhD under @geomblog.bsky.social and @harinisuresh.bsky.social 🌱Many papers to come (?)
Reposted by Michelle L. Ding
harinisuresh.bsky.social
@michelleding.bsky.social has been doing amazing work laying out the complex landscape of "deepfake porn" and distilling the unique challenges in governing it. We hope this work informs future AI governance efforts to address the severe harms of this content - reach out to us to chat more!
michelleding.bsky.social
Any synthetic content governance framework or method that aims to target AIG-NCII of adults should also consider its applicability to the malicious technical ecosystem and the three limitations we point out above. More during our presentation at chi-staig.github.io!
STAIG @ CHI 2025
chi-staig.github.io
michelleding.bsky.social
3. Adult AIG-NCII-specific methods often focus on red-teaming general-purpose image generation models, e.g., Stable Diffusion. While this is also important, it rests on an erroneous assumption of a "trustworthy technology" and "malicious users," when in fact the technology itself here is maliciously designed
michelleding.bsky.social
2. AIG-NCII prevention methods often conflate child sexual abuse material (CSAM) with NCII of adults. But methods for CSAM (which often rely on law enforcement databases) won't be as effective for adults due to different legal protections
michelleding.bsky.social
1. Transparency methods are very common in synthetic content governance frameworks, but they are not enough. DF porn is not the same as other deepfakes (e.g., political deepfakes) because human-detectable images are still harmful. In fact, many deepfake creators label images as "fake" to avoid accountability
michelleding.bsky.social
Then we review the current landscape of synthetic content governance methods as recorded in NIST AI 100-4 nvlpubs.nist.gov/nistpubs/ai/... and show 3 key limitations in governing AIG-NCII of adults
nvlpubs.nist.gov
michelleding.bsky.social
Technical prevention is then a challenge for sociotechnical AI governance. In our paper, we break down and map what we call the "malicious technical ecosystem" that is used to create AIG-NCII of adults, including open-source face swapping models and 200+ "nudifier" apps that are free & easy to use
michelleding.bsky.social
There is a lot of work on responding to AIG-NCII through improved take-down mechanisms (e.g., the Take It Down Act) and legal recourse for survivors (e.g., the DEFIANCE Act), but response without prevention places the burden of removal and justice-seeking on survivors and does nothing to stop the creation of NCII
michelleding.bsky.social
AI-generated NCII is a form of image-based sexual abuse that results in severe mental, physical, financial, and reputational damage, as well as a gendered chilling effect. myimagemychoice.org is an organization that documents this extensively through survivor testimonials
Home - #MyImageMyChoice
myimagemychoice.org