#LLLMs
I understand the accessibility importance of alt text, but at this point why aren't we just using LLLMs to provide it? They are pretty good at that type of task. Seems like they would be great for screen readers too, as well as for follow-up questions about an image.
October 21, 2025 at 1:30 AM
1 like
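The post above gestures at an image-to-alt-text pipeline. A minimal sketch of that idea follows; `vision_llm` is a stand-in for any real multimodal chat API, and the function names and prompt wording are illustrative assumptions, not any vendor's actual interface:

```python
# Sketch: generate alt text for an image with a multimodal LLM.
# `vision_llm` is a placeholder for a vision-capable chat API call;
# swap in a real client to use it against a hosted model.
def build_alt_text_prompt(context=""):
    """Prompt asking for concise, screen-reader-friendly alt text."""
    return (
        "Describe this image in one sentence of alt text for a "
        "screen reader. Be concrete; skip phrases like 'image of'. "
        + (f"Page context: {context}" if context else "")
    )

def generate_alt_text(image_bytes, vision_llm, context=""):
    """Send the image and prompt to the model, return trimmed alt text."""
    return vision_llm(image_bytes, build_alt_text_prompt(context)).strip()

# Stub model so the sketch runs without network access:
stub = lambda img, prompt: "A tabby cat asleep on a laptop keyboard. "
print(generate_alt_text(b"...", stub))  # A tabby cat asleep on a laptop keyboard.
```

A real screen-reader integration would also pass surrounding page text as `context`, which is what makes follow-up questions about the image possible.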
the very existence of LLLMs is a crime against humanity and needs to be treated as such.
October 18, 2025 at 3:20 PM
the nazis hated lllms too, people
think about what you're doing
i hope she doesn't block me, this is amazing content
October 1, 2025 at 5:49 PM
I agree. The LLM writing style feels horrible. But I wonder whether LLLMs will get better and lose their characteristic style. Worrying...
October 1, 2025 at 7:14 AM
1 like
Will Bain is another one who doesn't understand how LLLMs work.
They absolutely do not learn from users. It's a mistake many people make but one would hope that a Radio 4 bod interviewing someone about chatbots would understand this.

#r4today
August 26, 2025 at 7:03 AM
1 like
Ody
Dear tech companies.

How many times do we need to scream at you that your LLLMs are absolute garbage and that we don't want them?
August 23, 2025 at 6:52 PM
25 likes
This is the most charitable position on LLLMs that I am willing to entertain. I have yet to see a net benefit compared to other means of learning, however.
dan @danabra.mov · Aug 16
learning from a chatbot is definitely possible (i am using it for learning) but it requires approaching it with an adversarial mindset and only works in fields where you can verify the result
You don’t “learn” from a chatbot. You consume. There’s a difference.

They’re spewing slop and people are gobbling it up (and eroding their intelligence in the process).

Misinformation is everywhere. Please don’t rely on a chatbot for anything important.

They can’t replace real human connection
August 16, 2025 at 7:41 PM
1 like
it makes me laugh, bitterly, that the uk government is banning wikipedia whilst going all in on fucking lllms
August 12, 2025 at 8:47 AM
3 reposts 15 likes
Yeah, we maybe don't know much about human cognition, but to the extent we do, the current AI folks seem intent on ignoring it in favor of pretending LLLMs are almighty.

I'll listen to AI folks when they say they are working on error-detection systems rooted in reality to correct LLMs' inherent limitations.
I have a friend who does research on synaptic pruning, and I (skeptically) asked if LLMs offer anything for brain modeling: "Sure! The brain has lots of predictive systems; it just has error-correcting mechanisms too." Boosters want so badly to "LLMs all the way down" their way out of the error-correction bit, and you can't!
So let me just state in advance, and in all caps:

WE AREN’T HOLDING THIS TECHNOLOGY TO SOME ARTIFICIAL, IMPOSSIBLE STANDARD. WE ARE JUST ASKING WHETHER IT DOES THE THINGS THAT YOU BOOSTERS LOUDLY INSISTED IT WOULD DO. WE ARE HOLDING IT TO THE STANDARDS *YOU* SET OUT.
August 11, 2025 at 11:39 AM
5 likes
what if we had lllms write gofai
July 21, 2025 at 11:31 PM
2 likes
This is a perfect public demonstration of (some of) the critical shortcomings with Artificial Intelligence Large Language Learning Models (or LLLMs).
July 12, 2025 at 5:40 PM
2 likes
waiting eagerly for when electronics containing chips made with lllms hit the market. the value of preexisting electronics will go through the roof and i can sell my stuff to retire in peace
June 27, 2025 at 7:18 PM
I swear I've had almost the exact conversation with these lllms.
June 3, 2025 at 11:40 PM
1 like
Immigration wasn't a feminist or tech issue; we talk about LLLMs, and the "good guys" are iffy because usually it's a white woman who's in charge of "speaking and reporting" on POC

couldn’t imagine this

We were asking questions about this in 2016 and Basquiat painted about its predecessor in 1967
May 29, 2025 at 11:17 AM
5 reposts 25 likes
Aida Kostikova, Zhipin Wang, Deidamea Bajri, Ole Pütz, Benjamin Paaßen, Steffen Eger
LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models
https://arxiv.org/abs/2505.19240
May 27, 2025 at 2:09 PM
LLM-based classification, validated against expert labels, and topic clustering (via two approaches, HDBSCAN+BERTopic and LlooM). We find that LLM-related research increases over fivefold in ACL and fourfold in arXiv. Since 2022, LLLMs research grows [3/6 of https://arxiv.org/abs/2505.19240v1]
May 27, 2025 at 6:22 AM
survey, we conduct a data-driven, semi-automated review of research on limitations of LLMs (LLLMs) from 2022 to 2024 using a bottom-up approach. From a corpus of 250,000 ACL and arXiv papers, we identify 14,648 relevant papers using keyword filtering, [2/6 of https://arxiv.org/abs/2505.19240v1]
May 27, 2025 at 6:22 AM
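The first stage this thread describes (keyword filtering over a large paper corpus, before the LLM-based classification and topic clustering) can be sketched roughly as below. The keyword list and toy corpus are illustrative assumptions, not the authors' actual data:

```python
# Sketch of a keyword-filtering pass over paper abstracts, as a first
# cheap filter before LLM classification. Keywords are illustrative.
import re

KEYWORDS = ["hallucination", "limitation", "bias", "brittleness"]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

def filter_papers(papers):
    """Keep papers whose abstract mentions any limitation keyword."""
    return [p for p in papers if PATTERN.search(p["abstract"])]

# Toy two-paper corpus to show the filter in action:
corpus = [
    {"id": "2505.19240", "abstract": "A survey of hallucination in LLMs."},
    {"id": "0000.00000", "abstract": "A new optimizer for vision models."},
]
print([p["id"] for p in filter_papers(corpus)])  # ['2505.19240']
```

In the paper's pipeline the survivors of a filter like this would then go to an LLM classifier and to clustering (HDBSCAN+BERTopic, LlooM); the regex stage exists only to shrink 250,000 papers to a tractable candidate set.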
Aida Kostikova, Zhipin Wang, Deidamea Bajri, Ole Pütz, Benjamin Paaßen, Steffen Eger: LLLMs: A Data-Driven Survey of Evolving Research on Limitations of Large Language Models https://arxiv.org/abs/2505.19240 https://arxiv.org/pdf/2505.19240 https://arxiv.org/html/2505.19240
May 27, 2025 at 6:22 AM
2 reposts 1 quote
He does, mentioning that SO had moderation changes in 2014, making it quite hostile. That was his experience, but it's also been mine.

The article makes it clear that the decline began long before lllms
May 24, 2025 at 4:35 PM
Wah
I'm going to go ahead and tell you that, when confronted with new information that shows they are wrong, LLLMs are wildly better at accepting it than most human beings.
May 20, 2025 at 8:22 PM
Yes, LLLMs are bad, but you tell me now that you wouldn't go see this movie on opening night.
May 16, 2025 at 4:41 AM
1 repost 2 likes
Yeah, it's frustrating how people treat LLLMs as gospel, or at least interesting, while dreams are fucking amazing experiences, just so difficult to describe.

That the basic human response to both is utter boredom is a great pity.
May 9, 2025 at 8:09 PM
1 like
This is a great assessment. I think people don't understand hallucination rates because they think of LLLMs as an encyclopedia of information, when in reality they are more like "decision engines" that can be given the means to go out and find the relevant information rather than answering from memory.
This article is now predictably popular on Bsky but key parts of it are, well, hallucinations.

They try to claim "errors are rising" on "new reasoning systems" primarily based on Vectara hallucination leaderboard and one OpenAI document. So let's look what those actually show.
The newest and most powerful A.I. technologies — so-called reasoning systems from companies like OpenAI, Google and the Chinese start-up DeepSeek — are generating more errors, not fewer. As their math skills have notably improved, their handle on facts has gotten shakier.
May 6, 2025 at 9:24 PM
1 like
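The "decision engine plus retrieval" idea raised above is essentially retrieval-augmented generation. Here is a minimal sketch under stated assumptions: a toy word-overlap retriever stands in for a real search index, and `call_llm` is a placeholder for any chat-completion API, not a specific one:

```python
# Minimal retrieval-augmented sketch: rather than answering from memory,
# the model is handed retrieved documents to ground its answer.
def retrieve(query, documents, k=2):
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q & set(d.lower().split())))
    return scored[:k]

def answer(query, documents, call_llm):
    """Build a context-grounded prompt and pass it to the model."""
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQ: {query}"
    return call_llm(prompt)

docs = ["The Eiffel Tower is in Paris.", "Python was released in 1991."]
echo = lambda prompt: prompt  # stand-in LLM so the sketch runs offline
print("Paris" in answer("Where is the Eiffel Tower?", docs, echo))  # True
```

The design point matches the post: hallucination rates drop when the relevant facts arrive in the prompt, because the model is deciding over retrieved text instead of reciting parametric memory.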