Reposted by @coffeenotsleep.bsky.social
This is why they can't fix it: it's not broken

It's *structurally indifferent to truth*
Again, @ft.com reporters or whoever else needs to hear this:

"Hallucinations" are not the result of "flaws," they are literally inherent in & inextricable from what LLM systems do & are.

Whether an "AI" tells you something that matches reality or something that doesn't, *it is working as designed*
May 14, 2025 at 5:42 AM