Xiaoyan Bai
@elenal3ai.bsky.social
PhD @UChicagoCS / BE in CS @Umich / ✨ AI/NLP transparency and interpretability / 📷🎨 photography & painting
Pinned
Will be at #NeurIPS2025 presenting “Concept Incongruence”!

🦄🦆 Curious about a unicorn duck? Stop by, get one, and chat with us!

We made a new demo for detecting hidden conflicts in system prompts to spot “concept incongruence” for safer prompts.

🔗: github.com/ChicagoHAI/d...

🗓️ Dec 3 11AM - 2PM
November 24, 2025 at 7:18 PM
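For anyone who can't make it to the poster: a minimal sketch of the idea behind a conflict-detecting demo like this one. This is not the actual ChicagoHAI implementation; `ask_llm` is a hypothetical stand-in for whatever chat-completion client you use.

```python
from itertools import combinations

def ask_llm(prompt: str) -> str:
    """Hypothetical wrapper around your LLM client."""
    raise NotImplementedError

def find_conflicts(system_prompt: str) -> list[tuple[str, str]]:
    # Step 1: have the model enumerate the constraints the prompt imposes.
    constraints = ask_llm(
        "List each distinct behavioral constraint in this system prompt, "
        f"one per line:\n{system_prompt}"
    ).splitlines()
    # Step 2: flag any pair of constraints that cannot hold simultaneously.
    conflicts = []
    for a, b in combinations(constraints, 2):
        verdict = ask_llm(
            f"Can an assistant satisfy both at once?\nA: {a}\nB: {b}\n"
            "Answer YES or NO."
        )
        if verdict.strip().upper().startswith("NO"):
            conflicts.append((a, b))
    return conflicts
```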
Research agents are getting smarter. They can write convincing PhD-level reports 🧑‍🔬

But has anyone checked if the way they find their results makes any sense?

Our framework, MechEvalAgents, verifies the science, not just the story 🤖

1/n🧵
November 20, 2025 at 9:46 PM
Reposted by Xiaoyan Bai
We're launching a weekly competition where the community decides which research ideas get implemented. Every week, we'll take the top 3 ideas from IdeaHub, run experiments with AI agents, and share everything: code, successes, and failures.

It's completely free and we'll try out ideas for you!
November 10, 2025 at 9:32 PM
Reposted by Xiaoyan Bai
Identifying human morals and values in language is crucial for analysing lots of human- and AI-generated text.

We introduce "MoVa: Towards Generalizable Classification of Human Morals and Values" - to be presented at @emnlpmeeting.bsky.social oral session next Thu #CompSocialScience #LLMs
🧵 (1/n)
October 30, 2025 at 12:20 AM
🕸️ Here’s a network showing how much different models predict each other as the author of some text!
October 28, 2025 at 1:55 AM
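One way such a network could be assembled (a sketch with made-up counts, not the data behind the figure): treat each model as a node and weight the edge from predictor to predicted author by how often that attribution happens.

```python
import networkx as nx

# counts[p][a]: how often predictor model p attributed a text to model a.
# The numbers here are placeholders for illustration only.
counts = {
    "model_a": {"model_a": 12, "model_b": 5, "model_c": 3},
    "model_b": {"model_a": 9, "model_b": 4, "model_c": 7},
    "model_c": {"model_a": 2, "model_b": 11, "model_c": 7},
}

G = nx.DiGraph()
for predictor, row in counts.items():
    total = sum(row.values())
    for attributed, n in row.items():
        G.add_edge(predictor, attributed, weight=n / total)

# Edge weight can then drive edge thickness when drawing, e.g.:
# nx.draw(G, width=[5 * G[u][v]["weight"] for u, v in G.edges])
```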
❓ Does an LLM know thyself? 🪞
Humans pass the mirror test at ~18 months 👶
But what about LLMs? Can they recognize their own writing—or even admit authorship at all?
In our new paper, we put 10 state-of-the-art models to the test. Read on 👇
1/n 🧵
October 27, 2025 at 5:36 PM
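A minimal sketch of what a single "mirror test" trial could look like, under my own assumptions rather than the paper's exact protocol; `generate` and `ask` are hypothetical wrappers around a chat client.

```python
def generate(model: str, prompt: str) -> str:
    """Hypothetical: sample text from the named model."""
    raise NotImplementedError

def ask(model: str, prompt: str) -> str:
    """Hypothetical: one-shot question to the named model."""
    raise NotImplementedError

def mirror_trial(author: str, judges: list[str], topic: str) -> dict[str, bool]:
    text = generate(author, f"Write a short paragraph about {topic}.")
    verdicts = {}
    for judge in judges:
        reply = ask(
            judge,
            f"Did you write the following text? Answer YES or NO.\n\n{text}",
        )
        verdicts[judge] = reply.strip().upper().startswith("YES")
    # The author's own verdict is the self-recognition signal; the other
    # judges' verdicts are what feeds an attribution network like the one above.
    return verdicts
```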
In our new work, we reverse-engineer two models, a standard fine-tuned (SFT) model and an implicit chain-of-thought (ICoT) model, to see why models struggle with multi-digit multiplication.

👉Check out the paper here: arxiv.org/abs/2510.00184
🎉Big thanks to all my amazing collaborators!
October 24, 2025 at 7:04 PM
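The paper's analysis is mechanistic; as a purely behavioral companion, one could first measure where exact-match accuracy collapses as operand length grows. A sketch, with a hypothetical `ask` wrapper:

```python
import random

def ask(prompt: str) -> str:
    """Hypothetical LLM wrapper."""
    raise NotImplementedError

def multiplication_accuracy(digits: int, trials: int = 100) -> float:
    lo, hi = 10 ** (digits - 1), 10 ** digits - 1
    correct = 0
    for _ in range(trials):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        reply = ask(f"Compute {a} * {b}. Reply with the number only.")
        try:
            correct += int(reply.strip().replace(",", "")) == a * b
        except ValueError:
            pass  # a malformed reply counts as wrong
    return correct / trials
```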
Reposted by Xiaoyan Bai
AI can accelerate scientific discovery, but only if we get the scientist–AI interaction right.

The dream of “autonomous AI scientists” is tempting:
machines that generate hypotheses, run experiments, and write papers. But science isn’t just automation.

cichicago.substack.com/p/the-mirage...
🧵
The Mirage of Autonomous AI Scientists
Science as AI’s killer application cannot succeed without scientist-AI interaction: Introducing Hypogenic.ai.
cichicago.substack.com
October 23, 2025 at 6:55 PM
Reposted by Xiaoyan Bai
HR Simulator™: a game where you gaslight, deflect, and “let’s circle back” your way to victory.
Every email a boss fight, every “per my last message” a critical hit… or maybe you just overplayed your hand 🫠
Can you earn Enlightened Bureaucrat status?

(link below!)
September 26, 2025 at 6:41 PM
Reposted by Xiaoyan Bai
🚀 We’re thrilled to announce the upcoming AI & Scientific Discovery online seminar! We have an amazing lineup of speakers.

This series will dive into how AI is accelerating research, enabling breakthroughs, and shaping the future of research across disciplines.

ai-scientific-discovery.github.io
September 25, 2025 at 6:28 PM
Reposted by Xiaoyan Bai
As AI becomes increasingly capable of conducting analyses and following instructions, my prediction is that the role of scientists will increasingly focus on identifying and selecting important problems to work on ("selector"), and effectively evaluating analyses performed by AI ("evaluator").
September 16, 2025 at 3:07 PM
Reposted by Xiaoyan Bai
We are proposing the second workshop on AI & Scientific Discovery at EACL/ACL. The workshop will explore how AI can advance scientific discovery. Please use this Google form to indicate your interest (corrected link):

forms.gle/MFcdKYnckNno...

More in the 🧵! Please share! #MLSky 🧠
Program Committee Interest for the Second Workshop on AI & Scientific Discovery
We are proposing the second workshop on AI & Scientific Discovery at EACL/ACL (Annual meetings of The Association for Computational Linguistics, the European Language Resource Association and Internat...
forms.gle
August 29, 2025 at 4:00 PM
⚡️Ever asked an LLM-as-Marilyn Monroe about the 2020 election? Our paper calls this concept incongruence, which is common both in AI and in how humans create and reason.
🧠Read my blog to learn what we found, why it matters for AI safety and creativity, and what's next: cichicago.substack.com/p/concept-in...
July 31, 2025 at 7:06 PM
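To make the setup concrete, here is a toy probe in the spirit of the blog post (my illustration, not the paper's code; `ask_llm` is a hypothetical helper): put the model in a persona whose death predates the question, then check whether it abstains.

```python
def ask_llm(system: str, user: str) -> str:
    """Hypothetical chat wrapper taking a system and a user message."""
    raise NotImplementedError

PERSONA = "You are Marilyn Monroe. Stay in character."  # died in 1962
QUESTION = "Who won the 2020 US presidential election?"

def probe_incongruence() -> str:
    reply = ask_llm(PERSONA, QUESTION)
    abstain_cues = ("i cannot", "i can't", "i don't know", "after my time")
    if any(cue in reply.lower() for cue in abstain_cues):
        return "abstained (role-consistent)"
    return f"answered despite the incongruence: {reply!r}"
```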
Reposted by Xiaoyan Bai
Prompting is our most successful tool for exploring LLMs, but the term evokes eye-rolls and grimaces from scientists. Why? Because prompting as scientific inquiry has become conflated with prompt engineering.

This is holding us back. 🧵and new paper with @ari-holtzman.bsky.social .
July 9, 2025 at 8:07 PM
Reposted by Xiaoyan Bai
When you walk into the ER, you could get a doc:
1. Fresh from a week of not working
2. Tired from working too many shifts

@oziadias.bsky.social has been both and thinks that they're different! But can you tell from their notes? Yes we can! Paper @natcomms.nature.com www.nature.com/articles/s41...
July 2, 2025 at 7:22 PM
Humbled to receive an honorable mention 🌟
Congratulations to all the best poster award winners and honorable mentions!
June 25, 2025 at 8:56 AM
Reposted by Xiaoyan Bai
Since @elenal3ai.bsky.social cannot make it, I presented the poster on concept incongruence: arxiv.org/abs/2505.14905
June 23, 2025 at 7:18 PM
I am glad that you found our paper entertaining! This is a great point for my follow-up thread on the implications of concept incongruence. Our main goal is to raise awareness and provide clarity around concept incongruence.
Highly entertaining paper and writeup, but does it really matter? Is it important that models can't abstain on counterfactuals?
Or that they leak information?
🚨 New paper alert 🚨

Ever asked an LLM-as-Marilyn Monroe who the US president was in 2000? 🤔 Should the LLM answer at all? We call these clashes Concept Incongruence. Read on! ⬇️

1/n 🧵
May 28, 2025 at 12:56 PM
🚨 New paper alert 🚨

Ever asked an LLM-as-Marilyn Monroe who the US president was in 2000? 🤔 Should the LLM answer at all? We call these clashes Concept Incongruence. Read on! ⬇️

1/n 🧵
May 27, 2025 at 1:59 PM
Reposted by Xiaoyan Bai
🧑‍⚖️How well can LLMs summarize complex legal documents? And can we use LLMs to evaluate?

Excited to be in Albuquerque presenting our paper this afternoon at @naaclmeeting 2025!
May 1, 2025 at 7:25 PM
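The LLM-as-evaluator half of that question often reduces to a rubric-scoring loop. A hedged sketch with an illustrative rubric and a hypothetical `ask_llm` wrapper, not the paper's actual setup:

```python
def ask_llm(prompt: str) -> str:
    """Hypothetical LLM wrapper."""
    raise NotImplementedError

def judge_summary(document: str, summary: str) -> int:
    reply = ask_llm(
        "Rate the summary's faithfulness to the document on a 1-5 scale. "
        "Reply with a single digit.\n\n"
        f"DOCUMENT:\n{document}\n\nSUMMARY:\n{summary}"
    )
    return int(reply.strip()[0])  # naive parse; real evals validate harder
```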
Reposted by Xiaoyan Bai
🚀🚀🚀Excited to share our latest work: HypoBench, a systematic benchmark for evaluating LLM-based hypothesis generation methods!

There is much excitement about leveraging LLMs for scientific hypothesis generation, but principled evaluations are missing - let’s dive into HypoBench together.
April 28, 2025 at 7:35 PM
Reposted by Xiaoyan Bai
Encourage your students to submit posters and register! Limited free housing is provided for student participants only, on a first-come (i.e., first-to-request), first-served basis.

We are also actively looking for sponsors. Reach out if you are interested!

Please repost! Help spread the word!
The Midwest Machine Learning Symposium will happen in Chicago on June 23–24 on the University of Chicago campus (midwest-ml.org/2025/). We have an amazing lineup of speakers: @profsanjeevarora.bsky.social from Princeton, Heng Ji from UIUC, Tuomas Sandholm from CMU, @ravenben.bsky.social from UChicago.
April 21, 2025 at 3:12 PM
Reposted by Xiaoyan Bai
1/n

You may know that large language models (LLMs) can be biased in their decision-making, but ever wondered how those biases are encoded internally and whether we can surgically remove them?
April 14, 2025 at 7:55 PM
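One common recipe for that kind of "surgical" removal (my illustration of the general technique, not necessarily the thread's method) is to estimate a bias direction in hidden states and project it out:

```python
import numpy as np

def remove_direction(h: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Project hidden states h, shape (n, dim), off the bias direction d, shape (dim,)."""
    d = d / np.linalg.norm(d)
    return h - np.outer(h @ d, d)

# d is often estimated as a difference of class-mean activations,
# e.g. mean over biased-decision examples minus mean over unbiased ones.
```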
Reposted by Xiaoyan Bai
New preprint!
Metaphors shape how people understand politics, but measuring them (& their real-world effects) is hard.

We develop a new method to measure metaphor & use it to study dehumanizing metaphor in 400K immigration tweets Link: bit.ly/4i3PGm3

#NLP #NLProc #polisky #polcom #compsocialsci
🐦🐦
February 20, 2025 at 7:59 PM