Caleb Ziems
@calebziems.com
1.8K followers · 480 following · 8 posts
PhD student at Stanford NLP. Working on Social NLP and CSS. Previously at GaTech, Meta AI, Emory. 📍Palo Alto, CA 🔗 calebziems.com
Reposted by Caleb Ziems
myra.bsky.social
AI always calling your ideas “fantastic” can feel inauthentic, but what are sycophancy’s deeper harms? We find that in the common use case of seeking AI advice on interpersonal situations—specifically conflicts—sycophancy makes people feel more right & less willing to apologize.
Screenshot of paper title: Sycophantic AI Decreases Prosocial Intentions and Promotes Dependence
Reposted by Caleb Ziems
emmharv.bsky.social
I am so excited to be in 🇬🇷Athens🇬🇷 to present "A Framework for Auditing Chatbots for Dialect-Based Quality-of-Service Harms" by me, @kizilcec.bsky.social, and @allisonkoe.bsky.social, at #FAccT2025!!

🔗: arxiv.org/pdf/2506.04419
A screenshot of our paper, showing:

Title: A Framework for Auditing Chatbots for Dialect-Based Quality-of-Service Harms
Authors: Emma Harvey, Rene Kizilcec, Allison Koenecke
Abstract: Increasingly, individuals who engage in online activities are expected to interact with large language model (LLM)-based chatbots. Prior work has shown that LLMs can display dialect bias, which occurs when they produce harmful responses when prompted with text written in minoritized dialects. However, whether and how this bias propagates to systems built on top of LLMs, such as chatbots, is still unclear. We conduct a review of existing approaches for auditing LLMs for dialect bias and show that they cannot be straightforwardly adapted to audit LLM-based chatbots due to issues of substantive and ecological validity. To address this, we present a framework for auditing LLM-based chatbots for dialect bias by measuring the extent to which they produce quality-of-service harms, which occur when systems do not work equally well for different people. Our framework has three key characteristics that make it useful in practice. First, by leveraging dynamically generated instead of pre-existing text, our framework enables testing over any dialect, facilitates multi-turn conversations, and represents how users are likely to interact with chatbots in the real world. Second, by measuring quality-of-service harms, our framework aligns audit results with the real-world outcomes of chatbot use. Third, our framework requires only query access to an LLM-based chatbot, meaning that it can be leveraged equally effectively by internal auditors, external auditors, and even individual users in order to promote accountability. To demonstrate the efficacy of our framework, we conduct a case study audit of Amazon Rufus, a widely-used LLM-based chatbot in the customer service domain. Our results reveal that Rufus produces lower-quality responses to prompts written in minoritized English dialects.
Reposted by Caleb Ziems
yutongzhang.bsky.social
AI companions aren’t science fiction anymore 🤖💬❤️
Thousands are turning to AI chatbots for emotional connection – finding comfort, sharing secrets, and even falling in love. But as AI companionship grows, the line between real and artificial relationships blurs.
Reposted by Caleb Ziems
williamheld.com
Introducing CAVA: The Comprehensive Assessment for Voice Assistants

A new benchmark for evaluating the capabilities required for speech-in-speech-out voice assistants!

- Latency
- Instruction following
- Function calling
- Tone awareness
- Turn taking
- Audio Safety

TalkArena.org/cava
Comprehensive Assessment for Voice Assistants
CAVA is a new benchmark for assessing how well Large Audio Models support voice assistant capabilities.
Reposted by Caleb Ziems
joelmire.bsky.social
Reward models for LMs are meant to align outputs with human preferences—but do they accidentally encode dialect biases? 🤔

Excited to share our paper on biases against African American Language in reward models, accepted to #NAACL2025 Findings! 🎉

Paper: arxiv.org/abs/2502.12858 (1/10)
Screenshot of Arxiv paper title, "Rejected Dialects: Biases Against African American Language in Reward Models," and author list: Joel Mire, Zubin Trivadi Aysola, Daniel Chechelnitsky, Nicholas Deas, Chrysoula Zerva, and Maarten Sap.
calebziems.com
EgoNormia (egonormia.org) exposes a major gap in Vision-Language Models' understanding of the social world: they don't know how to behave when norms about the physical world *conflict* ⚔️ (<45% acc.)

But humans are naturally quite good at this (>90% acc.)

Check it out!

➡️ arxiv.org/abs/2502.20490
Reposted by Caleb Ziems
naitian.org · Feb 18
There's been a lot of work on "culture" in NLP, but not much agreement on what it is.

A position paper by me, @dbamman.bsky.social, and @ibleaman.bsky.social on cultural NLP: what we want, what we have, and how sociocultural linguistics can clarify things.

Website: naitian.org/culture-not-...

1/n
Culture is not trivia: sociocultural theory for cultural NLP. By Naitian Zhou and David Bamman from the Berkeley School of Information and Isaac L. Bleaman from Berkeley Linguistics.
Reposted by Caleb Ziems
echoshao8899.bsky.social
LM agents today primarily aim to automate tasks. Can we turn them into collaborative teammates? 🤖➕👤

Introducing Collaborative Gym (Co-Gym), a framework for enabling & evaluating human-agent collaboration! I've now gotten used to agents proactively seeking my confirmation or prompting my deeper thinking. (🧵 with video)
Reposted by Caleb Ziems
betsysneller.bsky.social
Bill Labov died this morning. I'm not coherent enough to talk about how important and influential and brilliant he was. I am very sad.

I was so lucky to know him, and I am grateful every day that he (and Gillian, and Walt, etc) built an academic field where kindness is expected.
Reposted by Caleb Ziems
williamheld.com
With an increasing number of Large *Audio* Models 🔊, which one do users like the most?

Introducing talkarena.org — an open platform where users speak to LAMs and receive text responses. Through open interaction, we focus on rankings based on user preferences rather than static benchmarks.
🧵 (1/5)
Talk Arena: Interactive Evaluation of Large Audio Models
calebziems.com
Maybe some starter packs for the Dyirbal noun classes?

1. most animate objects, men
2. women, water, fire, violence, and exceptional animals
3. edible fruit and vegetables
4. miscellaneous (includes things not classifiable in the first three)
rheinze.bsky.social
Some starter packs I plan to do when I get around to it
1. those that belong to the Emperor,
2. embalmed ones,
3. those that are trained,
4. suckling pigs,
5. mermaids,
6. fabulous ones,
7. stray dogs,
8. those included in the present classification,
9. those that tremble as if they were mad,
10. innumerable ones,
11. those drawn with a very fine camelhair brush,
12. others,
13. those that have just broken a flower vase,
14. those that from a long way off look like flies.
Reposted by Caleb Ziems
cfiesler.bsky.social
Hi Bluesky! You get to be the very first internet people to see my standup comedy debut. Because I know you’ll be nicer to me than the 12 year olds on TikTok. youtu.be/KqL2ahOvAgg?...
AI is not the GOAT. (Uh oh, your professor is attempting stand up comedy.)
YouTube video by Casey Fiesler
Reposted by Caleb Ziems
marcmarone.com
I noticed a lot of starter packs skewed towards faculty/industry, so I made one of just NLP & ML students: go.bsky.app/vju2ux

Students do different research, go on the job market, and recruit other students. Ping me and I'll add you!
Reposted by Caleb Ziems
mariaa.bsky.social
I'm recruiting 1-2 PhD students to work with me at the University of Colorado Boulder! Looking for creative students with interests in #NLP and #CulturalAnalytics.

Boulder is a lovely college town 30 minutes from Denver and 1 hour from Rocky Mountain National Park 😎

Apply by December 15th!
A photo of Boulder, Colorado, shot from above the university campus and looking toward the Flatirons.
Reposted by Caleb Ziems
chrisbail.bsky.social
Repost if you’ve participated in a Summer Institute in Computational Social Science. Let’s get #SICSS Bluesky going!
Reposted by Caleb Ziems
jmendelsohn2.bsky.social
I'm sharing materials from my academic job search last year! Includes research, teaching, and diversity statements, plus my UMD cover letter and job talk slides. I applied for a mix of iSchool, data sci, CS, and linguistics positions. Feel free to share!
juliamendelsohn.github.io/resources/
resources | Julia Mendelsohn
calebziems.com
I wanted to contribute to "Starter Pack Season" with one for Stanford NLP+HCI: go.bsky.app/VZBhuJ5

Here are some other great starter packs:

- CSS: go.bsky.app/GoEyD7d + go.bsky.app/CYmRvcK
- NLP: go.bsky.app/SngwGeS + go.bsky.app/JgneRQk
- HCI: go.bsky.app/p3TLwt
- Women in AI: go.bsky.app/LaGDpqg
Reposted by Caleb Ziems
emilioferrara.bsky.social
Ready for another Computational Social Science Starter Pack?

Here is number 2! More amazing folks to follow! Many students and the next gen represented!

go.bsky.app/GoEyD7d
calebziems.com
Thanks Emilio! And thanks for compiling these
Reposted by Caleb Ziems
rtommccoy.bsky.social
🤖🧠 I'll be considering applications for postdocs & PhD students to start at Yale in Fall 2025!

If you are interested in the intersection of linguistics, cognitive science, and AI, I encourage you to apply!

Postdoc link: rtmccoy.com/prospective_...
PhD link: rtmccoy.com/prospective_...
Top: syntax tree for the sentence "the doctor by the lawyer saw the artist"
Bottom: a continuous vector
calebziems.com
I'd love to join! :)