Hailey Joren
@haileyjoren.bsky.social
PhD Student @ UC San Diego. Researching reliable, interpretable, and human-aligned ML/AI.
Reposted by Hailey Joren
scheon.com
Denied a loan, an interview, or an insurance claim by machine learning models? You may be entitled to a list of reasons.

In our latest work with @anniewernerfelt.bsky.social, @berkustun.bsky.social, and @friedler.net, we show how existing explanation frameworks fail and present an alternative for recourse.
haileyjoren.bsky.social
Our work suggests that solving RAG hallucination problems requires moving beyond just improving retrieval—we need models that can accurately determine when retrieved information suffices for answering and abstain when appropriate confidence thresholds aren't met.
haileyjoren.bsky.social
Building on these insights, we developed a selective generation framework using both sufficient context signals and model confidence to decide when to respond vs. abstain—improving accuracy of responses by 2-10% for Gemini, GPT, and Gemma.
Line graph comparing selective generation methods, showing coverage vs. accuracy trade-offs. Purple lines (sufficient context + confidence) outperform gray lines (confidence only), especially for the HotpotQA dataset and the Gemini model.
Diagram of the selective generation pipeline. The input query and input context feed into both self-reported model confidence (gray box) and a Sufficient Context autorater label (purple box). These signals combine in a logistic regression model, which produces a score. The score is compared against a threshold set by the desired coverage; depending on the comparison, the system either proceeds with the model response (green box) or abstains (blue box).
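For readers who want to experiment with the idea, here is a minimal sketch of the pipeline in the diagram: a logistic regression over the two signals, with a threshold chosen from a desired coverage level. The feature layout, toy calibration data, and coverage value are illustrative assumptions, not the authors' code or datasets.

```python
# Minimal sketch of selective generation from two signals: self-reported
# confidence and a binary sufficient-context label. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_abstention_scorer(confidence, sufficient_ctx, is_correct):
    """Combine the two signals into one score predicting response correctness."""
    X = np.column_stack([confidence, sufficient_ctx])
    return LogisticRegression().fit(X, is_correct)

def threshold_for_coverage(scores, desired_coverage):
    """Pick the cutoff so that roughly `desired_coverage` of queries get answered."""
    return np.quantile(scores, 1.0 - desired_coverage)

def respond_or_abstain(scorer, confidence, sufficient_ctx, threshold):
    score = scorer.predict_proba([[confidence, sufficient_ctx]])[0, 1]
    return "respond" if score >= threshold else "abstain"

# Toy calibration split standing in for held-out labeled examples.
rng = np.random.default_rng(0)
conf = rng.uniform(size=200)
suff = rng.integers(0, 2, size=200)
correct = ((conf + 0.5 * suff + rng.normal(0.0, 0.3, size=200)) > 0.8).astype(int)

scorer = fit_abstention_scorer(conf, suff, correct)
scores = scorer.predict_proba(np.column_stack([conf, suff]))[:, 1]
tau = threshold_for_coverage(scores, desired_coverage=0.7)
print(respond_or_abstain(scorer, confidence=0.9, sufficient_ctx=1, threshold=tau))
```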
haileyjoren.bsky.social
Intriguingly, models sometimes generate correct answers despite insufficient context. We taxonomize these cases: parametric knowledge bridging information gaps, yes/no questions with a 50% chance of correctness, and instances where the context provides partial reasoning paths.
Table categorizing cases where models correctly answer questions despite insufficient context, including yes/no questions, limited choice questions, multi-hop fragments, partial information, and cases where parametric knowledge bridges gaps.
haileyjoren.bsky.social
We analyzed standard QA datasets through our sufficient context lens and found that a surprising share of instances lack sufficient information: ~56% for Musique, ~56% for HotpotQA, and ~23% for FreshQA. This highlights the magnitude of the information retrieval challenge.
Bar graph showing percentage of instances with sufficient context across datasets. FreshQA has highest sufficient context (77%), while HotpotQA and Musique have around 44-45% sufficient context.
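For concreteness, once an autorater has produced a sufficient/insufficient label per instance, the audit reduces to a simple tally. The hard-coded counts below are stand-ins chosen to roughly mirror the reported percentages, not the actual datasets.

```python
# Illustrative tally of sufficient-context rates per dataset; counts are stand-ins.
def sufficient_context_rate(labels):
    """Fraction of instances whose retrieved context was judged sufficient."""
    return sum(labels) / len(labels)

datasets = {
    "FreshQA":  [True] * 77 + [False] * 23,
    "HotpotQA": [True] * 44 + [False] * 56,
    "Musique":  [True] * 44 + [False] * 56,
}
for name, labels in datasets.items():
    rate = sufficient_context_rate(labels)
    print(f"{name}: {rate:.0%} sufficient, {1 - rate:.0%} insufficient")
```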
haileyjoren.bsky.social
Conversely, smaller models (Mistral 3, Gemma 2) struggle even with sufficient context, either hallucinating or failing to extract answers from the provided information. Neither class of model solves the fundamental RAG reliability challenge.
haileyjoren.bsky.social
A major finding: When context is sufficient, larger models (Gemini 1.5 Pro, GPT-4o, Claude 3.5) excel. But when it's insufficient, they're more likely to hallucinate than abstain—presenting incorrect answers with high confidence.
Bar chart comparing model performance on datasets stratified by sufficient context. Graph shows that larger models (Gemini, GPT, Claude) perform better with sufficient context but still hallucinate with insufficient context, while smaller models (Gemma) struggle across conditions.
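One way to reproduce this kind of stratified breakdown is to bucket each evaluated example by its sufficient-context label and count correct, hallucinated, and abstained responses. The field names, abstention phrases, and exact-match scoring below are simplifying assumptions for illustration.

```python
# Sketch of stratifying outcomes by the sufficient-context label.
from collections import Counter

ABSTAIN_PHRASES = {"i don't know", "unanswerable", "cannot answer"}

def outcome(row):
    response = row["response"].strip().lower()
    if response in ABSTAIN_PHRASES:
        return "abstain"
    return "correct" if response == row["gold_answer"].strip().lower() else "hallucinate"

def stratify_by_sufficiency(rows):
    buckets = {True: Counter(), False: Counter()}
    for row in rows:
        buckets[row["sufficient"]][outcome(row)] += 1
    return buckets

rows = [
    {"sufficient": True,  "response": "Paris",        "gold_answer": "Paris"},
    {"sufficient": False, "response": "Lyon",         "gold_answer": "Paris"},
    {"sufficient": False, "response": "I don't know", "gold_answer": "Paris"},
]
for sufficient, counts in stratify_by_sufficiency(rows).items():
    label = "sufficient context" if sufficient else "insufficient context"
    print(f"{label}: {dict(counts)}")
```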
haileyjoren.bsky.social
When RAG systems hallucinate, is the LLM misusing available information, or is the retrieved context insufficient? In our #ICLR2025 paper, we introduce "sufficient context" to disentangle these failure modes. Work with Jianyi Zhang, Chun-Sung Ferng, Da-Cheng Juan, Ankur Taly, and @cyroid.bsky.social.
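For concreteness, here is one way a sufficient-context autorater could be wired up: prompt a judge LLM to decide whether the retrieved context alone is enough to answer the query, independent of whether any particular response is correct. The prompt wording and the `call_llm` placeholder are assumptions for illustration, not the paper's autorater.

```python
# Hedged sketch of a sufficient-context autorater; prompt and interface are assumed.
def sufficient_context_prompt(query: str, context: str) -> str:
    return (
        "You are given a question and a retrieved context.\n"
        "Answer 'yes' if the context contains enough information to answer the\n"
        "question, and 'no' otherwise. Do not rely on outside knowledge.\n\n"
        f"Question: {query}\n\nContext:\n{context}\n\nSufficient (yes/no):"
    )

def label_sufficient_context(query: str, context: str, call_llm) -> bool:
    """Return True if the judge model says the context suffices to answer."""
    reply = call_llm(sufficient_context_prompt(query, context))
    return reply.strip().lower().startswith("yes")

# Usage with a stub judge that always answers "yes":
print(label_sufficient_context(
    "Who wrote Dune?",
    "Dune is a 1965 novel by American author Frank Herbert.",
    call_llm=lambda prompt: "yes",
))
```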