@justincurl.bsky.social
Reposted
lawfaremedia.org
@justincurl.bsky.social, @peterhenderson.bsky.social, Kart Kandula, and Faiz Surani warn that judges' use of AI or large language models to determine ordinary meaning transfers influence to unaccountable private interests and is structurally incompatible with the judicial role.
Judges Shouldn’t Rely on AI for the Ordinary Meaning of Text
Large language models are inherently shaped by private interests, making them unreliable arbiters of language.
www.lawfaremedia.org
justincurl.bsky.social
Read more in our article published on Lawfare here: lawfaremedia.org/article/judg...

We're also planning to write a longer follow-on law review article, so share any thoughts or comments you might have! (10/10)
justincurl.bsky.social
Most judges, we think, would be displeased to find their clerks taking instructions from OpenAI, regardless of whether the model had shown explicit bias toward the company. (9/10)
justincurl.bsky.social
Some analogize LLMs to law clerks (which few people take serious issue with). But while clerks are vetted and employed by judges, commercial LLMs are fully controlled by the companies that create them. (8/10)
justincurl.bsky.social
What matters here is NOT the specific values chosen but that companies are selecting values and enshrining them in their models at all.

Judges are supposed to interpret the law. But by consulting LLMs, they're effectively letting third parties help decide what the law means. (7/10)
justincurl.bsky.social
2. Anthropic’s early models were trained to follow the principles it selected (Constitutional AI).

3. When asked for example laws that could help guide regulation of tech companies, o3 refused to respond to queries mentioning OpenAI yet offered suggestions for Anthropic. (6/10)
justincurl.bsky.social
LLMs are built, prompted, fine-tuned, and filtered by private companies with their own agendas. For example…

1. DeepSeek refuses to answer questions related to sensitive topics in China. (5/10)
justincurl.bsky.social
Because LLMs are trained on billions of pages of text, some judges have viewed asking an LLM as a clever shortcut for finding a word's everyday meaning. But there's a catch: LLMs aren't neutral observers of language. (4/10)
justincurl.bsky.social
Why are judges consulting LLMs?

First, context: to resolve many cases, judges must decide the meaning of key words and phrases. In modern textual interpretation, words are given their “ordinary meaning”: essentially, whatever the average person would understand them to mean. (3/10)
justincurl.bsky.social
Yes, this is happening: An 11th Circuit federal judge asked LLMs whether “landscaping” covers installing a backyard trampoline and whether threatening someone at gunpoint counts as “physical restraint.”

And he’s not alone. Judges across the country are citing AI in their opinions. (2/10)
justincurl.bsky.social
Should judges use LLMs like ChatGPT to determine the meaning of legal text?

Whatever your answer, it’s already happening… @peterhenderson.bsky.social, Kart Kandula, Faiz Surani, and I explain why this is a dangerous idea in a recent article for Lawfare... 🧵 (1/10)