Fateme Hashemi Chaleshtori
@fatemehc.bsky.social
330 followers 220 following 9 posts
PhD student at Utah NLP, Human-centered Interpretability, Trustworthy AI
Reposted by Fateme Hashemi Chaleshtori
mtutek.bsky.social
Thrilled that FUR was accepted to @emnlpmeeting.bsky.social Main🎉

In case you can’t wait that long to hear about it in person, it will also be presented as an oral at @interplay-workshop.bsky.social @colmweb.org 🥳

FUR is a parametric test that assesses whether chains of thought (CoTs) faithfully verbalize a model's latent reasoning.
fatemehc.bsky.social
9/ We hope BriefMe encourages more Legal NLP development that directly aids legal professionals!
Check out our paper for the full methodology, human evaluation details, and comprehensive benchmarks.

What other legal NLP applications can we design using BriefMe? 🤔
fatemehc.bsky.social
8/ ⚖️ BriefMe extends Legal NLP by introducing a dataset of legal briefs, a type of legal document that has been overlooked until now. We've designed tasks that attorneys actually need in their daily work, opening up new research directions for assisting legal professionals.
fatemehc.bsky.social
7/ However, LLMs struggle with these complex tasks:
- Realistic argument completion: Llama-3.1-70B finds missing arguments only 18% of the time
- Case retrieval: Best method finds correct precedents in top-5 results just 31.4% of the time (recall@5; see the sketch after this post)

Lots of room for improvement! 📈
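Not from the paper's codebase, just for intuition: a minimal sketch of how a top-5 retrieval score like the 31.4% above could be computed, assuming each brief has a single gold precedent and a ranked candidate list (all names here are illustrative).

```python
from typing import Dict, List

def recall_at_k(ranked: Dict[str, List[str]], gold: Dict[str, str], k: int = 5) -> float:
    """Fraction of queries whose gold precedent appears among the top-k ranked candidates."""
    hits = sum(1 for query_id, candidates in ranked.items() if gold[query_id] in candidates[:k])
    return hits / len(ranked)

# Toy example: 1 of 2 briefs has its gold case in the top 5 -> 0.5
ranked = {"brief_1": ["case_a", "case_b", "case_c"], "brief_2": ["case_x", "case_y"]}
gold = {"brief_1": "case_b", "brief_2": "case_z"}
print(recall_at_k(ranked, gold, k=5))  # 0.5
```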
fatemehc.bsky.social
6/ Surprising finding: GPT-4o outperforms human-written headings!
🤖 GPT-4o: 4.3/5 avg. LLM-as-judge rating for both arg. summ. & comp.
🤵 Lawyers: 4.0/5 (summ.) and 3.9/5 (comp.) avg. rating
LLMs excel at summarization and guided completion tasks, requiring only minor edits.
fatemehc.bsky.social
5/ Evaluating generated text is challenging: traditional metrics (BLEU/ROUGE/...) are not aligned with human preferences. Instead, we built an LLM-as-judge using o3-mini, instructed with expert-written guidelines for brief headings, which proved more reliable than human raters!
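For readers who want a concrete picture (a hedged sketch, not the paper's actual judging pipeline): an LLM-as-judge call with o3-mini could look like the snippet below, with the expert-written heading guidelines placed in the system prompt. The prompt wording and the helper name are assumptions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder -- the paper uses expert-written guidelines for brief headings.
GUIDELINES = "Criteria for a good argument heading: persuasive, specific, legally grounded, ..."

def judge_heading(generated_heading: str, brief_context: str) -> str:
    """Ask o3-mini to rate a candidate argument heading on a 1-5 scale (illustrative helper)."""
    response = client.chat.completions.create(
        model="o3-mini",
        messages=[
            {"role": "system", "content": f"You are a judge of legal brief headings.\n{GUIDELINES}"},
            {"role": "user", "content": (
                f"Brief context:\n{brief_context}\n\n"
                f"Candidate heading:\n{generated_heading}\n\n"
                "Rate the heading from 1 (poor) to 5 (excellent) and briefly justify your score."
            )},
        ],
    )
    return response.choices[0].message.content
```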
fatemehc.bsky.social
4/ Our novel argument completion task tests if LLMs can identify WHERE exactly a missing argument should go in a brief's logical flow and WHAT that argument should be.
🧩 This realistic version is especially challenging: models must spot gaps in the ToCs with no guidance.
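To make the setup concrete, here is a minimal sketch (not the paper's construction code) of how a realistic argument-completion instance could be built: remove one heading from a brief's ToC, then ask a model where the gap is and what the missing argument should be. The function and field names are hypothetical.

```python
import random

def make_completion_instance(toc_headings: list[str], seed: int = 0) -> dict:
    """Drop one heading from a Table of Contents to create a gap-filling instance."""
    rng = random.Random(seed)
    gap_index = rng.randrange(len(toc_headings))
    visible_toc = [h for i, h in enumerate(toc_headings) if i != gap_index]
    return {
        "toc_with_gap": visible_toc,              # shown to the model, with no marker of where the gap is
        "gold_position": gap_index,               # WHERE the missing argument belongs
        "gold_heading": toc_headings[gap_index],  # WHAT the missing argument should say
    }

toc = [
    "I. The statute's plain text controls",
    "II. Legislative history confirms the plain reading",
    "III. Petitioner's contrary precedents are distinguishable",
]
print(make_completion_instance(toc))
```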
fatemehc.bsky.social
3/ We built BriefMe from Supreme Court briefs with 3 key tasks:
- Argument summarization
- Realistic/Guided Argument completion: filling in missing arguments within the Table of Contents (ToC)
- Case retrieval
Each assesses different practical aspects of legal reasoning.
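A hedged sketch of how the benchmark could be loaded and inspected from the Hugging Face Hub; the dataset ID and config names below are placeholders (the dataset link in this thread is truncated), so substitute the real ones.

```python
from datasets import load_dataset

DATASET_ID = "your-org/briefme"  # placeholder -- use the actual Hub path from the paper/thread

# Hypothetical config names matching the three tasks described above.
for task in ["argument_summarization", "argument_completion", "case_retrieval"]:
    ds = load_dataset(DATASET_ID, name=task, split="test")
    print(task, len(ds), ds.column_names)
```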
fatemehc.bsky.social
2/ Legal briefs are documents where attorneys present their arguments to judges, making the case for their client's position by interpreting the law and citing relevant precedents.
Most legal NLP work focuses on judicial opinions, but we target the attorney's perspective instead 🏛️
fatemehc.bsky.social
1/ 🚨NEW PAPER: "BriefMe: A Legal NLP Benchmark for Assisting with Legal Briefs", accepted to ACL Findings 2025!
We introduce the first benchmark specifically designed to help LLMs assist lawyers in writing legal briefs 🧑‍⚖️

📄 arxiv.org/abs/2506.06619
🗂️ huggingface.co/datasets/jw4...
Reposted by Fateme Hashemi Chaleshtori
mtutek.bsky.social
It has been amazing to work with @fatemehc.bsky.social, @anamarasovic.bsky.social and Yonatan Belinkov on this incredibly important topic.

I look forward to further work on the parametric faithfulness route!

Codebase (& data): github.com/technion-cs-...
GitHub - technion-cs-nlp/parametric-faithfulness