Vincent Carchidi
vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind and will post about it. (It's confusing. Just go with it.)

https://philpeople.org/profiles/vincent-carchidi

All opinions entirely my own.
Pinned
Just in time for the holidays, my final post is live.

This is a long read, likely of interest to anyone curious about the uses and limitations of computational modeling in cognitive science, with attention to less-noticed issues in the field. 🧵

vincentcarchidi.substack.com/p/do-languag...
Do Language Models Learn Like Human Children?
On a history of unwittingly lifting cognitive weight, with comments on a personal motivation for studying a very unusual topic.
vincentcarchidi.substack.com
Not gonna name names, but when the paper came out in 2024 arguing that LLMs can't learn "impossible languages," one of the authors posted on X: 'We always knew Chomsky was wrong, and now we can prove it!'

That's what I mean by "bullshit."
I think the second group is prone to some bullshit, in the sense of using the outrage to force an academic settlement on issues they were already settled on.

But on the politics/activism side...I think it's basically right that this is more indisputably unforgivable than his foreign policy views.
February 4, 2026 at 4:55 PM
Three categories of Chomsky reactions:

- People who admired his activism and see this as a betrayal

- People who disliked his scholarly work and see this as a vindication

- People who admired his activism and scholarly work who likely feel, uh, confused atm
spent like half my life going to the mat for Chomsky and I have been humiliated and betrayed beyond measure like actually you know what aspects WAS wack. maybe those lexical functional grammar freaks were right. I can’t do this anymore
February 4, 2026 at 4:52 PM
Reposted by Vincent Carchidi
They called it AI and now I gotta debate the definition of intelligence with my barber
February 3, 2026 at 8:13 PM
R&R. I'll take it. And the review (just one) is *quite helpful.* Love to see it.
🚨The article has switched from "Reviewers Assigned" to "Under Review"🚨

It took 7 months but it's happening.
February 3, 2026 at 10:14 PM
Gonna be a bit before the paper is done, but I do think this line of thought can fruitfully interact with usage-based linguistics. Just have a hunch that it shouldn't be ignored.
Can of worms indeed, but to skip ahead to where I think this question ends up: if our language use is controlled by internal stimuli, why do our words fit situations and find coherence with the context-sensitive thoughts of others? E.g. why are my words suitable to this topic, and not another?
February 3, 2026 at 8:14 PM
Reposted by Vincent Carchidi
Everyone keeps trying to win an argument that isn’t being had, because the argument they’re actually in hasn’t been named.

Or as I like to call it, the social media experience.
February 3, 2026 at 7:56 PM
The US govt should be involved. But that aside (since we are currently withdrawing from everything), this is basically what I mean when I say the intelligence question is a distraction.

time.com/7364551/ai-i...
February 3, 2026 at 4:51 PM
Glad to see that Sky is back on the platform. Anyway
February 3, 2026 at 4:02 PM
These days, I don't have any more of a problem with someone saying an LLM is "intelligent" than if someone says their laptop "needs to breathe" as they go to remove an obstruction from its vent.
February 3, 2026 at 2:15 PM
Far from the only thing I've read recently that's given me this impression, but academic writing styles really used to be diverse, lively. It's like I'm reading something written by someone who wants me to read it.
February 3, 2026 at 2:48 AM
Have said similar things before, but: I don't doubt Amodei believes much if not most of what he publicly says. I also don't doubt his written statements are reviewed by in-house lawyers, regulatory specialists, and the company's marketing team before release.
My attempt to take Dario Amodei's new manifesto literally and seriously: as a call for more liberal democracy written by a prime example of the ways it can be overwhelmed nymag.com/intelligence...
February 2, 2026 at 6:31 PM
Cautiously sticking with my theory that this can all be summed up as "malicious incompetence."
February 2, 2026 at 5:39 PM
NYT wants to know where AI is heading. Assembles a group where fewer than half of the participants are actual researchers. And the rest have conflicts of interest.

www.nytimes.com/interactive/...
February 2, 2026 at 4:54 PM
You can imagine my opinion on this, but nice to see an anti-UG piece not in thrall to LLMs.

ojs.ub.uni-konstanz.de/zs/index.php...
Large Language Models: The best linguistic theory, a wrong linguistic theory, or no theory at all? | Journal of the Linguistic Society of Germany
ojs.ub.uni-konstanz.de
February 2, 2026 at 4:47 PM
Extremely early work, but interesting, and it will be important that a new science actually results.
February 1, 2026 at 8:40 PM
No surprise why people have an immune reaction to this sort of thing. They've been doing it from the very beginning.

www.dailymail.co.uk/news/article...
February 1, 2026 at 7:22 PM
From me, from the heart. I do not intend on making a habit of this kind of writing, but I felt I had something to say about this.

vincentcarchidi.substack.com/p/better-tha...
Better Than Chomsky
I will not make a habit of this, but a personal history is sometimes worthwhile. Here's mine.
vincentcarchidi.substack.com
February 1, 2026 at 2:32 PM
Reposted by Vincent Carchidi
I think many people see LLMs as a step along a teleological path to the kind of "powerful AI" that's been foretold in fiction for decades, and the empirical evidence, if you scrutinize it carefully, suggests something different and very unintuitive.
January 31, 2026 at 8:11 PM
I have my opinions about the direction policy should head (in the US) to do this, but if we want safe, public-facing AI systems that also retain the new capabilities they offer for public benefit - not gimmicks like shopping assistants and so forth - then we need an actual science of ML.
January 31, 2026 at 6:49 PM
Somebody's gotta write this piece. Surely there's a MacIntyre-AI-head out there somewhere.
Has anyone done an Alasdair MacIntyre-esque "Post-2020 AI uses language similar to pre-2020 AI as though it's a continuation of the project, but is actually doing something radically different" thesis? I'd read it.
More than the counter-factual though, I think *we* should look at the post-2020 modeling as kind of an aberration which is being made out to be a mere continuation of older efforts. (Subtweeting an entire subfield here.)
January 31, 2026 at 6:36 PM
As I think the reaction to the Anthropic 'skill acquisition' paper shows, Claude Code did not in fact change the minds of "critics" on here about whether AI is useful or whatever. What actually happened IMHO is that the AI community entered an echo chamber on here without realizing it.
January 31, 2026 at 4:41 PM
If you go back and listen to or read Chomsky's public comments on MeToo from the time, the strangest thing about them is how uncharacteristically not talkative about it he was when asked. Basically boilerplate sympathy.
Cool thoughts from Noam Chomsky
January 31, 2026 at 2:59 PM
Reposted by Vincent Carchidi
Bring back turning shit off when it doesn’t work. We’ve grown way too tolerant of broken shitty technology that doesn’t do what it says on the label on the promise that “it will improve over time”. Improve it first and then you can turn it on.
January 30, 2026 at 11:45 PM