Willie Agnew
@willie-agnew.bsky.social
630 followers 800 following 85 posts
Queer in AI 🏳️‍🌈 | postdoc at cmu HCII | ostem |william-agnew.com | views my own | he/they
Reposted by Willie Agnew
queerinai.com
We’re grateful to have been able to help these scientists, engineers, and future medical professionals on their journeys, and want to help more! Please share this widely with your colleagues and networks to help us get this aid to those who need it. 5/5
Reposted by Willie Agnew
queerinai.com
Our Application Financial Aid Program provided over $250,000 to more than 250 LGBTQIA+ scholars from over 30 countries, allowing them to apply to grad and medical schools in the first place, apply to more schools, and keep paying for rent, groceries, and other essentials. 4/5
Reposted by Willie Agnew
queerinai.com
Applying to graduate schools is expensive. Queer in AI and oSTEM have been running the Financial Aid Program since 2020, aiming to alleviate the burden of application and test fees for queer STEM scholars applying to graduate programs. Applicants from all countries are welcome. 3/5
Reposted by Willie Agnew
queerinai.com
To make this program a grand success, and to ensure the most impact possible, please consider donating to support our cause at www.paypal.com/donate/?host... 2/5
Donate to oSTEM Incorporated
Help support oSTEM Incorporated by donating or sharing with your friends.
www.paypal.com
Reposted by Willie Agnew
queerinai.com
We are launching our Graduate School Application Financial Aid Program (www.queerinai.com/grad-app-aid) for 2025-2026. We’ll give up to $750 per person to LGBTQIA+ STEM scholars applying to graduate programs. Apply at openreview.net/group?id=Que.... 1/5
Grad App Aid — Queer in AI
www.queerinai.com
Reposted by Willie Agnew
queerinai.com
Attending COLM next week in Montreal? 🇨🇦 Join us on Thursday for a 2-part social! ✨ 5:30-6:30 at the conference venue and 7:00-10:00 offsite! 🌈 Sign up here: forms.gle/oiMK3TLP8ZZc...
Queer in AI @ COLM 2025. Thursday, October 9 5:30 to 10 pm Eastern Time. There is a QR code to sign up which is linked in the post.
willie-agnew.bsky.social
There are a lot of programs that say they are open to anyone with a PhD but only accept faculty 😑
Reposted by Willie Agnew
sauvik.me
📣 Accepted to #AIES2025: What do the audio datasets powering generative audio models actually contain? (led by @willie-agnew.bsky.social)

Answer: Lots of old audio content that is mostly English, often biased, and of dubious copyright / permissioning status.

Paper: www.sauvik.me/papers/65/s...
willie-agnew.bsky.social
This is never going to stop as long as these misinfo/propaganda giants exist.
Reposted by Willie Agnew
joachimbaumann.bsky.social
🚨 New paper alert 🚨 Using LLMs as data annotators, you can produce any scientific result you want. We call this **LLM Hacking**.

Paper: arxiv.org/pdf/2509.08825
We present our new preprint titled "Large Language Model Hacking: Quantifying the Hidden Risks of Using LLMs for Text Annotation".
We quantify LLM hacking risk through systematic replication of 37 diverse computational social science annotation tasks.
For these tasks, we use a combined set of 2,361 realistic hypotheses that researchers might test using these annotations.
Then, we collect 13 million LLM annotations across plausible LLM configurations.
These annotations feed into 1.4 million regressions testing the hypotheses. 
For a hypothesis with no true effect (ground truth p > 0.05), different LLM configurations yield conflicting conclusions.
Checkmarks indicate correct statistical conclusions matching ground truth; crosses indicate LLM hacking, i.e., incorrect conclusions due to annotation errors.
Across all experiments, LLM hacking occurs in 31-50% of cases even with highly capable models.
Since minor configuration changes can flip scientific conclusions from correct to incorrect, LLM hacking can be exploited to present anything as statistically significant.
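The failure mode the thread describes is easy to reproduce in miniature. Below is a minimal sketch (my own illustration under assumed numbers, not the paper's code, models, or tasks): when an annotator's error rate correlates with a covariate, a test on its labels can show a significant effect where the ground truth has none.

```python
# Toy simulation of "LLM hacking": annotation errors that correlate with a
# covariate manufacture a significant effect where the ground truth has none.
# All rates, sizes, and "configurations" here are hypothetical illustrations.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000

# Covariate (e.g., author group) and ground-truth labels with NO true effect:
# the true label rate is 30% in both groups.
x = rng.integers(0, 2, size=n)
y_true = rng.random(n) < 0.30

def annotate(y, err_group0, err_group1):
    """Simulate an LLM annotator whose flip rate depends on the covariate x."""
    flip = rng.random(len(y)) < np.where(x == 1, err_group1, err_group0)
    return np.where(flip, ~y, y)

def p_value(labels):
    """Chi-squared test: does the label rate differ between the two groups?"""
    table = [[labels[x == g].sum(), (~labels[x == g]).sum()] for g in (0, 1)]
    return stats.chi2_contingency(table)[1]

print(f"ground truth:            p = {p_value(y_true):.4f}")  # n.s., correctly
print(f"config A (10%/10% err):  p = {p_value(annotate(y_true, 0.10, 0.10)):.4f}")  # usually still n.s.
print(f"config B (5%/20% err):   p = {p_value(annotate(y_true, 0.05, 0.20)):.4f}")  # spuriously significant
```

The point mirrors the thread's claim: both hypothetical configurations can have similar overall accuracy, but whether their errors correlate with the covariate decides whether the statistical conclusion flips.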
Reposted by Willie Agnew
abeba.bsky.social
the secret to getting large orgs to not post pics of your keynote is to wear the keffiyeh on stage
Reposted by Willie Agnew
kashana.blacksky.app
I hate Supreme Court dissent culture. I think it made the left ok with moral victories instead of power.
Reposted by Willie Agnew
anthonymoser.com
I considered writing a long carefully constructed argument laying out the harms and limitations of AI, but instead I wrote about being a hater. Only humans can be haters.
I Am An AI Hater
I am an AI hater. This is considered rude, but I do not care, because I am a hater.
anthonymoser.github.io
Reposted by Willie Agnew
jasonkoebler.bsky.social
It is deeply selfish to settle this case, as surely most of the AI copyright lawsuits are going to be settled. The fact that the vast majority of lawsuits in this country are settled before tech giants face any real consequences is such a travesty www.wired.com/story/anthro...
Anthropic Settles High-Profile AI Copyright Lawsuit Brought by Book Authors
Anthropic faced the prospect of more than $1 trillion in damages, a sum that could have threatened the company’s survival if the case went to trial.
www.wired.com
Reposted by Willie Agnew
norabeckermd.bsky.social
Incredibly disappointed in my institution @umich.edu for caving to political pressure and choosing to stop offering gender-affirming care to minors.

Not excusable.
Reposted by Willie Agnew
abeba.bsky.social
"ChatGPT's sycophantic design led it to validate his most dangerous thoughts. When he expressed suicidal ideation, instead of challenging these thoughts or redirecting the conversation, the system would affirm & even romanticize his feelings." centerforhumanetechnology.substack.com/p/the-raine-...
The Raine v OpenAI Case: Engineering Addiction by Design
The Deliberate Design Patterns That Made ChatGPT Dangerous
centerforhumanetechnology.substack.com
Reposted by Willie Agnew
abeba.bsky.social
"The idea of robot/AI rights acts as a smoke screen, allowing theorists and futurists to fantasize about benevolently sentient machines with unalterable needs and desires protected by law." firstmonday.org/ojs/index.ph...
willie-agnew.bsky.social
The deadline for the Algorithmic Collective Action Workshop at NeurIPS'25 has been extended to August 29th! Please consider submitting work about power, control, resistance, and AI: acaworkshop.github.io
About the workshop – ACA@NeurIPS
acaworkshop.github.io
willie-agnew.bsky.social
I often hear that critical AI researchers should start providing more solutions, but I rarely hear that uncritical AI researchers should start causing fewer problems.
willie-agnew.bsky.social
cw: suicide
Mental healthcare in the US is broken, and into that gap are entering unregulated chatbots posing as therapists. They fundamentally cannot do everything a therapist can, and they're contributing to a lot of ongoing harm www.nytimes.com/2025/08/18/o...
Opinion | What My Daughter Told ChatGPT Before She Took Her Life
www.nytimes.com
Reposted by Willie Agnew
davidthewid.bsky.social
really frustrating to watch so many faculty going "omg ICE IS BAD! we must stop them!" after we fought for years to ban Palantir from recruiting on campus to build ICE's tools.
Reposted by Willie Agnew
wells.bsky.social
just woke up to the news & there shouldn't be any question that we're fully in fascist authoritarianism right now. we're here already.
willie-agnew.bsky.social
Still making Obama white too
bobkopp.net
“A legitimate PhD-level expert in anything,” they said.

“Show me a diagram of the US presidents since FDR, with their names and years in office under their photos,” I said.
A GPT-5 generated diagram of US presidents from Franklin D. “Roesevelt” to “Goorge W Bush”, “Baruck Obama”, and Donald Trump. No George HW Bush (though Ronald Reagan looks a fair bit like him), LBJ, or Joe Biden.