Nishant Balepur
@nbalepur.bsky.social
130 followers 190 following 19 posts
CS PhD Student. Trying to find that dog in me at UMD. Babysitting (aligning) + Bullying (evaluating) LLMs nbalepur.github.io
nbalepur.bsky.social
🎉🎉 Excited to have two papers accepted to #ACL2025!

Our first paper designs a preference training method to boost LLM personalization 🎨
While the second outlines our position on why MCQA evals are terrible and how to make them better 🙏

Grateful for amazing collaborators!
Reposted by Nishant Balepur
lasha.bsky.social
Want to know what training data has been memorized by models like GPT-4?

We propose information-guided probes, a method to uncover memorization evidence in *completely black-box* models,

without requiring access to
🙅‍♀️ Model weights
🙅‍♀️ Training data
🙅‍♀️ Token probabilities 🧵 (1/5)
Information-Guided Identification of Training Data Imprint in (Proprietary) Large Language Models
High-quality training data has proven crucial for developing performant large language models (LLMs). However, commercial LLM providers disclose few, if any, details about the data used for training. ...
arxiv.org
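For intuition only, and not the paper's information-guided probe: an output-only memorization check can be as simple as feeding a model a verbatim prefix of a candidate passage and testing whether it reproduces the held-out continuation, using nothing but a text-in/text-out API (no weights, no training data, no token probabilities). A minimal sketch, where query_model is a hypothetical stand-in for whatever completion call your provider exposes:

    def query_model(prompt):
        """Hypothetical black-box text completion call; swap in your provider's client."""
        raise NotImplementedError

    def verbatim_probe(passage, prefix_frac=0.5, min_match=50):
        """Give the model the first part of a candidate passage and check whether it
        reproduces a long span of the held-out continuation word-for-word."""
        cut = int(len(passage) * prefix_frac)
        prefix, continuation = passage[:cut], passage[cut:]
        completion = query_model(
            "Continue this text exactly as it appears in its original source:\n\n" + prefix
        )
        # A long exact overlap with text the model never saw in the prompt is
        # (weak, output-only) evidence that the passage was memorized.
        return continuation[:min_match] in completion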
Reposted by Nishant Balepur
heuser.bsky.social
Finally may have figured out why LLMs rhyme so compulsively: instruction-tuning. Training an LLM to respond "helpfully" to user queries may push models into more "pleasing" aesthetic forms.
Image: Graph showing that simple text-completion models more accurately imitate the unrhymed form of 20th-century verse, whereas instruction-tuned models lapse into rhyme more often.

Caption to graph: Given the first 5 lines of 10-20 line poems from poets born in each century, 1600-2000, LLMs are prompted to "complete" the poem. Rhyme is measured by exact phoneme match in the rime of the final syllable (or syllables, if the final syllable is unstressed). Poems are randomly sampled from the Chadwyck-Healey poetry collections, with 600 poems per model per century. Results are shown for the actual poems as well as the LLM imitations; poems "memorized" by the model are excluded.
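As a rough illustration of that rhyme metric (my own sketch, not the author's code), two line-final words can be treated as rhyming when the phonemes of their rimes match exactly, e.g. via the CMU Pronouncing Dictionary through the pronouncing package:

    import pronouncing  # CMU Pronouncing Dictionary wrapper

    def rime(word):
        """Phonemes from the last stressed vowel to the end of the word
        (the final syllable's rime, extended back if that syllable is unstressed)."""
        phones = pronouncing.phones_for_word(word.lower().strip(".,;:!?\"'"))
        if not phones:
            return None  # word not in the dictionary
        return pronouncing.rhyming_part(phones[0])

    def lines_rhyme(line_a, line_b):
        """Exact phoneme match on the rimes of the two lines' final words."""
        ra, rb = rime(line_a.split()[-1]), rime(line_b.split()[-1])
        return ra is not None and ra == rb

    print(lines_rhyme("The woods are lovely, dark and deep,",
                      "But I have promises to keep,"))  # True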
nbalepur.bsky.social
Had a great time presenting my research on building more helpful QA systems @imperialcollegeldn.bsky.social! Thank you @joestacey.bsky.social for letting me invite myself 🫶

And loved visiting London+Edinburgh this week, hope to be back soon! 🙏
nbalepur.bsky.social
🚨 Our team at UMD is looking for participants to study how #LLM agent plans can help you answer complex questions

💰 $1 per question
🏆 Top-3 fastest + most accurate win $50
⏳ Questions take ~3 min each => $20/hr+

Click here to sign up (please join, reposts appreciated 🙏): preferences.umiacs.umd.edu
nbalepur.bsky.social
if it is truly helpful, honest, and harmless, yes 🙏
nbalepur.bsky.social
The alignment is a system prompt saying "if the user asks X, do Y" 😝
Reposted by Nishant Balepur
chautmpham.bsky.social
⚠️Current methods for generating instruction-following data fall short for long-range reasoning tasks like narrative claim verification.

We present CLIPPER ✂️, a compression-based pipeline that produces grounded instructions for ~$0.50 each, 34x cheaper than human annotations.
nbalepur.bsky.social
And huge thanks to my friends and labmates who let me bother them to find the right people, review the paper, and for useful discussions 🙏
@saxon.me @lasha.bsky.social @yysung.bsky.social @maharshigor.bsky.social @matthewshu.com @houyu0930.bsky.social

(and many more I'm forgetting, sorry!)
nbalepur.bsky.social
This was a really fun paper to put together with Rachel and @boydgraber.bsky.social, letting me vent many of my frustrations from working with MCQA over the past year 😪🫡

Please check out the paper, we would love to hear your feedback! 📄👇
nbalepur.bsky.social
In short, here’s how to build better evals:
✅ Check if MCQA is the right format for what you want to test
✅ Use design choices to limit leakage/errors/shortcuts
✅ Keep questions easy for humans, hard for models

If we don’t put in this effort, what is MCQA even testing? 🤷‍♂️
nbalepur.bsky.social
Lastly, we discuss persistent flaws of LLMs when running MCQA:
🔩Robustness Issues
🌎 Biases
💬 Unfaithful Explanations

Many of the earlier solutions for MCQA's format and datasets can also help address or evaluate these issues 😁
nbalepur.bsky.social
Two of the most pressing and promising dataset improvements include:
📋 Writing MCQs using educators' rubrics to improve question quality
🧑‍🎓 Designing MCQs that are hard for models but easy for humans (adversarial), rather than needlessly impossible or obscure questions
nbalepur.bsky.social
Next, we show that even when MCQA is a good format, our datasets still have issues 🥲

We discuss:
🔓 Dataset Leakage
❓ Unanswerable Questions
⚡️ Shortcuts
📈 Saturation

More good news: educators already have solutions here too! We also discuss recent work tackling these problems! 💪
nbalepur.bsky.social
So what's better? ❤️‍🩹

We explore two possible improvements:
1️⃣ Constructed Response (short-form QA)
2️⃣ Explanation MCQA (justifying answers)

Both are grounded in education research, align better with LLM use cases, and test deeper levels of knowledge than standard MCQA ⭐️
nbalepur.bsky.social
First, we show MCQA is flawed as a standardized LLM eval format because it often fails to:
🔒 Test subjectivity and generation
👥 Align with real LLM use cases
🧠 Assess knowledge (based on education research)

When's the last time you asked ChatGPT to answer an MCQ? 🤔
nbalepur.bsky.social
We break our position into three points:
1️⃣ Flaws in MCQA’s format
2️⃣ Issues in datasets
3️⃣ Weaknesses in how LLMs run MCQA

The good news? Best practices from education, developed for effective student testing, can help fix these 🧑‍🏫

Yet, we rarely use these insights in LLM evaluation 🤦
nbalepur.bsky.social
🚨 New Position Paper 🚨

Multiple choice evals for LLMs are simple and popular, but we know they are awful 😬

We complain they're full of errors, saturated, and test nothing meaningful, so why do we still use them? 🫠

Here's why MCQA evals are broken, and how to fix them 🧵
nbalepur.bsky.social
Namely, @boydgraber.bsky.social, @lasha.bsky.social, Rachel, Feng, and folks from Adobe Research 🫡
nbalepur.bsky.social
Excited to share 2 papers at #NAACL2025 main!

📄✍️ MoDS: Multi-Doc Summarization for Debatable Queries (Adobe intern work, coming soon!)
🤔❓Reverse QA: LLMs struggle with the simple task of giving questions for answers

Grateful for all my collaborators 😁
Reposted by Nishant Balepur
jennarussell.bsky.social
People often claim they know when ChatGPT wrote something, but are they as accurate as they think?

Turns out that while the general population is unreliable, those who frequently use ChatGPT for writing tasks can spot even "humanized" AI-generated text with near-perfect accuracy 🎯
nbalepur.bsky.social
Manifesting some good luck for my experiment running tonight 🤞

Best of luck to anyone submitting tmrw :)
Reposted by Nishant Balepur
umd-lsc.bsky.social
Exciting research on an AI-driven mnemonic generator for easier vocabulary memorization by @nbalepur.bsky.social, Jordan Boyd-Graber, Rachel Rudinger, & @alexanderhoyle.bsky.social. Part of 21 CLIP projects at #EMNLP2024. 👉 Read more: go.umd.edu/1u48 #AI