Adina Williams
@adinawilliams.bsky.social
760 followers 450 following 32 posts
NLP, Linguistics, Cognitive Science, AI, ML, etc. Job currently: Research Scientist (NYC) Job formerly: NYU Linguistics, MSU Linguistics
Pinned
adinawilliams.bsky.social
Our team is hiring a postdoc in (mechanistic) interpretability! The ideal candidate will have research experience in interpretability for text and/or image generation models and be excited about open science!

Please consider applying or sharing with colleagues: metacareers.com/jobs/2223953961352324
Reposted by Adina Williams
kmahowald.bsky.social
UT Austin Linguistics is hiring in computational linguistics!

Asst or Assoc.

We have a thriving group sites.utexas.edu/compling/ and a long proud history in the space. (For instance, fun fact, Jeff Elman was a UT Austin Linguistics Ph.D.)

faculty.utexas.edu/career/170793

🤘
UT Austin Computational Linguistics Research Group – Humans processing computers processing humans processing language
adinawilliams.bsky.social
I agree this thread's headline claim seems premature. Let me add our recent ACL Findings paper, with Dexter Ju and @hagenblix.bsky.social, which found syntactic simplification in at least some LMs, in a novel domain regeneration setting: aclanthology.org/2025.finding...
Reposted by Adina Williams
hagenblix.bsky.social
Ingeborg and I wrote a thing about "hype", and why we think that framing AI through that lens is increasingly inadequate - check it out!
Deflating “Hype” Won’t Save Us
By Hagen Blix & Ingeborg Glimmer
hagenblix.github.io
adinawilliams.bsky.social
My spouse's new book has a chapter getting into the weeds around AI and highly paid jobs... AI-powered deskilling, historical connections going back as far as the steam engine, economic reasoning pushing technical innovation, etc. Check it out 👇 I think you might like it.

bsky.app/profile/hage...
hagenblix.bsky.social
I wrote a book about AI, AI Fears, and Capitalism with my friend Ingeborg!
"Why We Fear AI" just went to the printers and comes out in March! You can pre-order it directly at the publisher @commonnotions.bsky.social or wherever you get your books
Quick🧵
Why We Fear AI — Common Notions Press
www.commonnotions.org
Reposted by Adina Williams
blackboxnlp.bsky.social
Have you heard about this year's shared task? 📢

Mechanistic Interpretability (MI) is quickly advancing, but comparing methods remains a challenge. This year at #BlackboxNLP, we're introducing a shared task to rigorously evaluate MI methods in language models 🧵
adinawilliams.bsky.social
Come by our panel at APS to share your thoughts, and ask us all the hard stuff!
adinawilliams.bsky.social
Check it out! Guy et al. explore the impact of format on function vectors, and invite further conversation about what it would mean to have universal goal representations in LLMs.

(I've hung around interp communities for a while, but this is my first mech-interp project. Feedback much appreciated!)
guydav.bsky.social
New preprint alert! We often prompt ICL tasks using either demonstrations or instructions. How much does the form of the prompt matter to the task representation formed by a language model? Stick around to find out 1/N
adinawilliams.bsky.social
Awesome, can't wait to read it; congrats Maya!
adinawilliams.bsky.social
This is such a fun example of LM weirdness (which also shows how they match form over fact!)

More linguistically: it looks like ending a query with "meaning" triggers the bot to accommodate the presupposition that the input contains an idiom! (Hard to run normal presupposition tests here tho)
gregjenner.bsky.social
Someone on Threads noticed you can type any random sentence into Google, then add “meaning” afterwards, and you’ll get an AI explanation of a famous idiom or phrase you just made up. Here is mine

• AI Overview
The idiom "you can't lick a badger twice" means you can't trick or deceive someone a second time after they've been tricked once. It's a warning that if someone has already been deceived, they are unlikely to fall for the same trick again.
Here's a more detailed explanation:
• Licking: "Licking" in this context means to trick or deceive someone.
• Badger: The badger is a wild animal, and the phrase likely originates from the historical sport of badger baiting where dogs were used to harass
adinawilliams.bsky.social
Fantastic news, congrats to you and to BU 🎉
adinawilliams.bsky.social
Content I'm so here for 🤩
adinawilliams.bsky.social
Happy to share that the paper describing the AILuminate v1.0 benchmark is now out! arxiv.org/abs/2503.05731
The benchmark is designed with @mlcommons.org to assess LLM risk and reliability across 12 hazard categories. AILuminate is available for testing models and helping ensure safer deployment!
AILuminate: Introducing v1.0 of the AI Risk and Reliability Benchmark from MLCommons
The rapid advancement and deployment of AI systems have created an urgent need for standard safety-evaluation frameworks. This paper introduces AILuminate v1.0, the first comprehensive industry-standa...
Reposted by Adina Williams
hagenblix.bsky.social
Our book "Why We Fear AI" is out today! Hopefully it can help make sense out of some of the terrifying stuff that's happening these days, and what AI and capitalism have to do with it!

Get it directly from the publisher or wherever you get your books!
www.commonnotions.org/why-we-fear-ai
Why We Fear AI — Common Notions Press
Reposted by Adina Williams
commonnotions.bsky.social
WHY WE FEAR AI is out today!

Industry insiders @hagenblix.bsky.social and Ingeborg Glimmer dive into the dark, twisted world of AI to demystify the many nightmares we have about it. One of the best ways to face your fear is to confront it—order a copy of WHY WE FEAR AI: buff.ly/1tWhkx8
Graphic featuring artwork from Why We Fear AI's cover. A black and white hand is reaching out toward text that says: "We'll see how capitalism and class shape both the actual tools called AI and the AI nightmares."
adinawilliams.bsky.social
Another aspect that appears notable to me is the removal of explicit evaluation and auditing language. Maybe it's my bias, being an eval person, but seeking and sharing evidence that the models actually work to our specifications seems pretty important.
adinawilliams.bsky.social
My spouse co-wrote a book!

It's about AI and what people's fears about it actually mean.

Go check it out 👇
hagenblix.bsky.social
Wow, authors' copies have just arrived! So cool (and kinda strange lol) to see our work in print!
Amazing job from @commonnotions.bsky.social! Love the cover design from Josh MacPhee <3

Get a copy here:
www.commonnotions.org/why-we-fear-ai
A book with a six-fingered hand (designed by Josh MacPhee), titled "Why We Fear AI: On the Interpretation of Nightmares"