Neel Bhandari
@neelbhandari.bsky.social
50 followers 85 following 12 posts
Masters Student @LTIatCMU | ML Scientist @PayPal | Open Research @CohereForAI Community | Previously External Research Student @MITIBMLab. Views my own.
Reposted by Neel Bhandari
akariasai.bsky.social
Real user queries often look different from the clean, concise ones in academic benchmarks - ambiguous, full of typos, and much less readable.
We show that even strong RAG systems quickly break under these conditions.
Awesome project led by
@neelbhandari.bsky.social and @tianyucao.bsky.social!!
Reposted by Neel Bhandari
akhilayerukola.bsky.social
These days RAG systems have gotten popular for boosting LLMs—but they're brittle💔. Minor shifts in phrasing (✍️ style, politeness, typos) can wreck the pipeline. Even advanced components don’t fix the issue.

Check out this extensive eval by @neelbhandari.bsky.social and @tianyucao.bsky.social!
neelbhandari.bsky.social
11/ This paper has been an incredible effort across institutions @ltiatcmu.bsky.social @uwcse.bsky.social. Huge thanks to my co-first author @tianyucao.bsky.social and co-authors @akhilayerukola.bsky.social @akariasai.bsky.social @maartensap.bsky.social ✨🚀
neelbhandari.bsky.social
10/ 📜 Paper: "Out of Style: RAG’s Fragility to Linguistic Variation": arxiv.org/abs/2504.08231
🔬 Code: github.com/Springcty/RA...

Read our paper for more details on the impact of scaling retrieved documents, the specific effects of each linguistic variation on RAG pipelines, and much more!
neelbhandari.bsky.social
9/ 🚨 Takeaway
RAG systems suffer major performance drops from simple linguistic variations.

Advanced techniques offer temporary relief, but real robustness demands fundamental changes - more resilient components and fewer cascading errors - in order to serve all users effectively.
neelbhandari.bsky.social
8/🛠️ Adding advanced techniques to vanilla RAG improves robustness... sometimes🫠
✅ Reranking improves performance on linguistic rewrites, but a gap with original queries remains.
⚠️ HyDE helps rewritten queries but hurts original queries, creating a false sense of robustness.
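For context, HyDE (Hypothetical Document Embeddings) retrieves with an embedding of an LLM-generated hypothetical answer instead of the raw query. A minimal sketch of the idea - `toy_generate` and `toy_embed` are stand-ins of my own, not the paper's code:

```python
def hyde_query_vector(query, generate, embed):
    """Embed a hypothetical answer passage instead of the raw query.

    `generate` stands in for an LLM call and `embed` for a dense
    encoder; both are illustrative placeholders.
    """
    hypothetical_doc = generate(f"Write a short passage answering: {query}")
    return embed(hypothetical_doc)

# Toy stand-ins so the sketch runs end to end.
def toy_generate(prompt: str) -> str:
    return "Thomas Jefferson drafted the Declaration of Independence."

def toy_embed(text: str) -> list[float]:
    # Real systems use a dense encoder; length is just a placeholder.
    return [float(len(text))]

vec = hyde_query_vector("who wrote the declaration?", toy_generate, toy_embed)
```

The failure mode described above follows from the design: retrieval quality now depends on the generated passage, so a clean query can be hurt if the hypothetical document drifts off-topic.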
neelbhandari.bsky.social
7/🤔Well, maybe scaling generation model size helps?

Scaling up LLM size helps narrow the performance gap between original and rewritten queries. However, this is not consistent across variations. Larger models occasionally worsen the impact, particularly with RTT variations.
neelbhandari.bsky.social
6/⚖️ RAG is more fragile than LLM-only setups

RAG’s retrieval-generation pipeline amplifies linguistic errors, leading to greater performance drops. On PopQA, RAG degrades by 23% vs. just 11% for the LLM-only setup.

⚠️The main culprit? Retrieval emerges as the weakest link
neelbhandari.bsky.social
5/🧩 Generation Fragility

Linguistic variations lead to generation accuracy drops: Exact Match score down by up to ~41%, Answer Match score by up to ~17%.

Structural changes from RTT are particularly damaging, significantly reducing response accuracy.
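For reference, Exact Match counts a prediction correct only if it string-matches a gold answer after normalization. A SQuAD-style sketch, assuming my own normalizer (not necessarily the paper's exact one):

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the normalized prediction equals any normalized gold answer."""
    return any(normalize(prediction) == normalize(g) for g in gold_answers)

exact_match("The Eiffel Tower!", ["eiffel tower"])  # True
```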
neelbhandari.bsky.social
4/📌Retrieval Robustness

Retrieval recall plummets by up to 40.41% under linguistic variations, especially for informal queries. Grammatical errors from typos and RTT notably degrade performance, highlighting retrievers' sensitivity to linguistic variation.
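A toy illustration of how a recall drop like this can be measured - made-up document IDs, not the paper's data:

```python
def recall_at_k(retrieved: list[str], gold: set[str], k: int = 5) -> float:
    """Fraction of gold documents found in the top-k retrieved list."""
    hits = sum(1 for doc_id in retrieved[:k] if doc_id in gold)
    return hits / len(gold)

# Hypothetical rankings for the same question asked two ways.
gold = {"d1"}
orig_ranking = ["d1", "d7", "d3", "d9", "d2"]   # clean query: gold doc ranked first
typo_ranking = ["d7", "d9", "d4", "d8", "d5"]   # typo-laden query: gold doc missed

drop = recall_at_k(orig_ranking, gold) - recall_at_k(typo_ranking, gold)
```

Averaging this per-query drop over a dataset gives the aggregate degradation reported above.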
neelbhandari.bsky.social
3/ We evaluate across an extensive experimental setup:⁣
🧲 2 Retrievers (Contriever, ModernBERT)⁣
🤖 9 open LLMs (3B–72B)⁣
📚 4 QA datasets (MS MARCO, PopQA, Natural Questions, EntityQuestions)⁣
🔁 50K+ linguistically varied queries per dataset
neelbhandari.bsky.social
2/🔍 We evaluated RAG robustness against four common linguistic variations:
✍️ Lower formality
📉 Lower readability
🙂 Increased politeness
🔤 Grammatical errors (from typos & from round-trip translations (RTT))
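A minimal sketch of the typo-style variation, using adjacent-character swaps as a stand-in (the paper's actual perturbation pipeline lives in the linked repo):

```python
import random

def add_typos(query: str, rate: float = 0.1, seed: int = 0) -> str:
    """Inject simple character-level typos (adjacent swaps) into a query.

    A toy stand-in for the typo-based grammatical-error variation;
    `rate` is the per-pair swap probability, seeded for reproducibility.
    """
    rng = random.Random(seed)
    chars = list(query)
    for i in range(len(chars) - 1):
        # Swap adjacent characters with probability `rate`,
        # skipping spaces to keep word boundaries intact.
        if chars[i] != " " and chars[i + 1] != " " and rng.random() < rate:
            chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

original = "who wrote the declaration of independence"
perturbed = add_typos(original, rate=0.3)
```

RTT variants would instead round-trip the query through a translation model and back, which changes structure rather than just characters.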
neelbhandari.bsky.social
1/🚨 𝗡𝗲𝘄 𝗽𝗮𝗽𝗲𝗿 𝗮𝗹𝗲𝗿𝘁 🚨
RAG systems excel on academic benchmarks - but are they robust to variations in linguistic style?

We find RAG systems are brittle. Small shifts in phrasing trigger cascading errors, driven by the complexity of the RAG pipeline 🧵
Reposted by Neel Bhandari
mauraquint.bsky.social
it's so important to make time for yourself, rest, treat yourself gently and with kind words. you, not me, I have to run myself ragged until I collapse in a pile of exhausted self-hatred but you should definitely self care.
neelbhandari.bsky.social
Not at all. I just hope the wake up call happens by the end of the month, with the help of a stern winter wind.