Vincent Carchidi
vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind and will post about it. (It's confusing. Just go with it.)

https://philpeople.org/profiles/vincent-carchidi

All opinions entirely my own.
Pinned
Just in time for the holidays, my final post is live.

This is a long read, likely of interest to anyone curious about the uses and limitations of computational modeling in cognitive science, with attention to less-noticed issues in the field. 🧵

vincentcarchidi.substack.com/p/do-languag...
Do Language Models Learn Like Human Children?
On a history of unwittingly lifting cognitive weight, with comments on a personal motivation for studying a very unusual topic.
vincentcarchidi.substack.com
Ahh, an opportunity to post my favorite @dicknixon.bsky.social quote about books that are written so that readers don't actually have to think about them:
February 8, 2026 at 4:11 PM
The AP piece from a couple weeks ago is worth a read for a good snapshot of AI adoption in the US.

One of my takeaways: the notion of having a capable/eager intern available at a moment's notice seems like it's resolving a major bottleneck, but I think that's turning out to be highly case-specific.
I recommended taking a look at this survey recently. Adoption has gone up, but outside of data-heavy jobs, my reading is that the uses are pretty limited, if frequent, often revolving around forms of search. Tasks like coding look unrepresentative.

apnews.com/article/ai-w...
How Americans are using AI at work, according to a new Gallup poll
A new Gallup poll finds that American workers have adopted artificial intelligence into their work lives at a remarkable pace over the past few years.
apnews.com
February 8, 2026 at 3:33 PM
Feel free to interpret the last 5-7 min. as you please. I thought this was interesting given that it's applying the concerns about complexity and reliability in specific domains that sort of lagged after ChatGPT-3.5, but seems to be here pretty quickly for Genie.

www.youtube.com/watch?v=Xsae...
Is The Gaming Industry COOKED?
YouTube video by gameranx
www.youtube.com
February 8, 2026 at 1:44 AM
I have no idea if this piece will be accepted (I never know), but god damn it I love finding out how I was wrong about something I didn't even realize I could be wrong about.
R&R. I'll take it. And the review (just one) is *quite helpful.* Love to see it.
🚨The article has switched from "Reviewers Assigned" to "Under Review"🚨

It took 7 months but it's happening.
February 7, 2026 at 9:14 PM
Anything about Ludditism or using AI in Woke 2 is the epitome of a Bsky-only problem. Unless there's widespread job losses, people when faced with various real world problems will not rank "Can/Must I use an LLM at work?" all that highly.
February 7, 2026 at 8:06 PM
Interesting convo in the thread. Just going off this though, I do think a number of claims about LLM capabilities are actually economic judgments that are passed off as technical judgments.
There used to be a pretty common understanding that earlier AI systems could automate human activities without actually replicating human intelligence. So humans might need intelligence to, e.g., win at Chess or Go, but machines don't. I think that was unwisely lost/buried.
February 7, 2026 at 7:36 PM
Thinking that "malicious incompetence" may not end up being totally accurate in this particular case...

www.theguardian.com/us-news/2026...
NSA detected phone call between foreign intelligence and a person close to Trump
Whistleblower says that Tulsi Gabbard blocked agency from sharing report and delivered it to White House chief of staff
www.theguardian.com
February 7, 2026 at 6:10 PM
Can we infer from the proficiency of the latest coding agents that Claude Code is a model of the Coder Mind?
February 6, 2026 at 9:02 PM
A general thought, as I work on revisions to a paper: I do think this holds up, and it's not that Gemini/etc. isn't good at doing exactly what it's trained to do. It's that, at least in part, Gemini has nothing to say. It makes no arguments, no claims, nothing that gives food for thought.
The reports I get from Gemini Deep Research are not quite that. They...look like reports. And I can make use of them because I can do some kind of discernment. But I'm not sure it's RLHF so much as: LLMs don't make arguments. They produce things that look like arguments. There is a difference here.
February 6, 2026 at 5:46 PM
One day when the various Horrors have passed and I've caught up on my reading (lol), I gotta get around to these apes.

www.science.org/doi/10.1126/...
Evidence for representation of pretend objects by Kanzi, a language-trained bonobo
Secondary representations enable our minds to depart from the here-and-now and generate imaginary, hypothetical, or alternate possibilities that are decoupled from reality, supporting many of our rich...
www.science.org
February 5, 2026 at 11:42 PM
Reposted by Vincent Carchidi
Unfortunately, these days the world of AI is largely dominated by anti-intellectualism. And so the AI-peddlers come along and claim that "language is solved", as if having gotten reasonably good at language modeling weren't making questions about language all the more acutely important. 5/n
February 5, 2026 at 3:53 PM
Benchmarks don't measure competence. If they did, the white collar bloodbath would have happened a while ago. They measure something else, which appears tangentially related to what we might call a human competence in a certain domain.
February 4, 2026 at 7:26 PM
"The danger of the current technological trajectory is that an overemphasis on perfecting the science of control through AI will lead to the atrophy of the art of command and the misalignment of violence."

To be read more closely. Not an area where I think the balance is self-evident.
February 4, 2026 at 5:51 PM
Not gonna name names, but when the paper came out in 2024 arguing that LLMs can't learn "impossible languages," one of the authors posted on X: 'We always knew Chomsky was wrong, and now we can prove it!'

That's what I mean by "bullshit."
I think the second group is prone to some bullshit, in the sense of using the outrage to force an academic settlement on issues they were already settled on.

But on the politics/activism side...I think it's basically right that this is more indisputably unforgivable than his foreign policy views.
February 4, 2026 at 4:55 PM
Three categories of Chomsky reactions:

- People who admired his activism and see this as a betrayal

- People who disliked his scholarly work and see this as a vindication

- People who admired his activism and scholarly work who likely feel, uh, confused atm
spent like half my life going to the mat for Chomsky and I have been humiliated and betrayed beyond measure like actually you know what aspects WAS wack. maybe those lexical functional grammar freaks were right. I can’t do this anymore
February 4, 2026 at 4:52 PM
Reposted by Vincent Carchidi
They called it AI and now I gotta debate the definition of intelligence with my barber
February 3, 2026 at 8:13 PM
R&R. I'll take it. And the review (just one) is *quite helpful.* Love to see it.
🚨The article has switched from "Reviewers Assigned" to "Under Review"🚨

It took 7 months but it's happening.
February 3, 2026 at 10:14 PM
Gonna be a bit before the paper is done, but I do think this line of thought can fruitfully interact with usage-based linguistics. Just have a hunch that it shouldn't be ignored.
Can of worms indeed, but to skip ahead to where I think this question ends up: if our language use is controlled by internal stimuli, why do our words fit situations and find coherence with the context-sensitive thoughts of others? E.g., why are my words suitable to this topic, and not another?
February 3, 2026 at 8:14 PM
Reposted by Vincent Carchidi
Everyone keeps trying to win an argument that isn’t being had, because the argument they’re actually in hasn’t been named.

Or as I like to call it, the social media experience.
February 3, 2026 at 7:56 PM
The US govt should be involved. But that aside (since we are currently withdrawing from everything), this is basically what I mean when I say the intelligence question is a distraction.

time.com/7364551/ai-i...
February 3, 2026 at 4:51 PM
Glad to see that Sky is back on the platform. Anyway
February 3, 2026 at 4:02 PM
These days, I don't have any more of a problem with someone saying an LLM is "intelligent" than if someone says their laptop "needs to breathe" as they go to remove an obstruction from its vent.
February 3, 2026 at 2:15 PM
Far from the only thing I've read recently that's given me this impression, but academic writing styles really did use to be diverse, lively. It's like I'm reading something written by someone who wants me to read it.
February 3, 2026 at 2:48 AM
Have said similar things before, but: I don't doubt Amodei believes much if not most of what he publicly says. I also don't doubt his written statements are reviewed by in-house lawyers, regulatory specialists, and the company's marketing team before release.
My attempt to take Dario Amodei's new manifesto literally and seriously: as a call for more liberal democracy written by a prime example of the ways it can be overwhelmed nymag.com/intelligence...
February 2, 2026 at 6:31 PM