conputer dipshit
@davidcrespo.bsky.social
web dev + hot dad. enjoy charts, unions, conputer games, philosophy. chicago crespo.business
I almost want to go the other way — a year ago it was a convenient excuse to pretend LLMs can never do anything. now it's essentially impossible to pretend that, so the term is if anything less misleading
January 29, 2026 at 8:19 PM
god I would love this. a collection of misunderstandings and a real time feed of people expressing them
January 29, 2026 at 8:04 PM
lmao that is not even much slower than the LLMs. I don't see how it's possible though without major help in the harness
January 29, 2026 at 8:02 PM
I think this is a reasonable defense, it really is what the vast majority of people think it means
people say all the time that LLMs produce a “statistical average” of their training data. this is so ill-specified as to be not even really wrong — at best you can say they are producing the most likely output *given the input*. so if the input is weird, the output is also weird
January 29, 2026 at 8:01 PM
now I want to see a parrot try to beat pokemon blue
January 29, 2026 at 7:52 PM
literally the only thing Siri can do reliably
January 29, 2026 at 7:51 PM
yeah I do kinda think the set of active anti posters has shrunk by more than half in the past couple of months. not everyone is posting about their conversion experience but they've gone quieter
January 29, 2026 at 7:37 PM
yeah, I will think about what that might mean in practice
January 29, 2026 at 7:35 PM
on bluesky or in the real world? I think in the real world (in the US at least) the proportion of devs using at least some shitty autocomplete is probably well over half. not sure about proper agents
January 29, 2026 at 7:18 PM
in this case it sounds like they were trying to classify whether someone had been a law enforcement officer based on their resume. that seems like a dubious enterprise, but to the extent a human being could do it, an LLM could definitely do it
January 29, 2026 at 7:15 PM
this is already what happened in the past month
January 29, 2026 at 6:54 PM
that was a little confusing. maybe a more straightforward way to put it is that I feel the Benderian perspective always wants to argue that the claim is empirically falsifiable, but then, when presented with falsifying evidence, retreat into a non-falsifiable version of the claim. hence:
prof. emily bender is walking down the sidewalk. I say “look, here is a non-trivial true argument produced by an LLM”. she says: that can’t be a non-trivial true argument because LLMs do not engage in the sorts of cognitive processes that produce non-trivial true arguments
January 29, 2026 at 6:47 PM
if the response is "sure, but the premise that you could do that is wrong because these systems aren't capable of that" — these arguments are proven wrong empirically on a daily basis by systems that can in fact do that. when they are, does that mean the problem wasn't there in the first place?
January 29, 2026 at 6:43 PM
this always comes up: is the real problem the reliability or lack thereof? like let's say I made a version of that stupid system that uses an LLM and works more or less as desired (say: human-level classification). what does that mean for the alleged problem of language and meaning?
January 29, 2026 at 6:43 PM
I don't know, to me the error of thinking the presence of a single word means something it doesn't isn't really the error of believing language and meaning are interchangeable. like if the system they built actually used a good LLM it would be pretty good, probably as good as a human or better!
January 29, 2026 at 6:43 PM
I would probably agree that all the problem situations you have in mind are real problems, but might disagree that failing to distinguish language from meaning is a significant cause or explanation of the problem
January 29, 2026 at 6:39 PM
oh heh you saw it
I’m retweeting and following this guy to see who comes out right in the end.

Will there be an anti-AI left? I think it’s inevitable. There’s an anti-vax left.
this is a bluesky bubble opinion. 800M+ people use chatgpt every week. I hate Facebook and Instagram and I wish they didn’t exist, but 3B and 2B people use them every month, respectively. it’s very hard to picture a mass or even a niche left backlash against them
January 29, 2026 at 6:36 PM
I'd say bluesky was overwhelmingly hostile to almost any LLM-related post until two or three months ago. I had a mildly viral post calling it an anti-LLM bubble
i think tech and ai positive people are going to be in for a big shock when anti-tech and anti-ai sentiment becomes a major part of leftwing politics going forward, especially as datacenters continue to destroy communities and raise electricity bills
January 29, 2026 at 6:36 PM
wild!
January 29, 2026 at 6:33 PM
to me the much bigger "language divorced from meaning" problem that article brings up is the article itself and the way they were able to shift blame to AI
January 29, 2026 at 5:29 PM
that news story is not a very good example because from the vague description it sounds like the system just flagged the word "officer", which is precisely what large language models do NOT do
January 29, 2026 at 5:28 PM
on the other hand I don't think my position assumes that words have one meaning or whatever — it's more like: meaning inheres in arbitrarily large arrangements of words, individual words are the limit case
January 29, 2026 at 5:23 PM
thanks, this is helpful. my immediate reaction is that it seems strange to think of the problem there as language divorced from meaning, but on reflection I can see that my framing above is sort of asking for this because I'm saying precisely that language can hold more of this meaning in itself
January 29, 2026 at 5:22 PM