Vincent Carchidi
vcarchidi.bsky.social
Defense analyst. Tech policy. Have a double life in CogSci/Philosophy of Mind. (It's confusing. Just go with it.)

https://philpeople.org/profiles/vincent-carchidi

All opinions entirely my own.
Pinned
Sharing a new preprint on AI and philosophy of mind/cogsci.

I make the case that human beings exhibit a species-specific form of intellectual freedom, expressible through natural language, and this is likely an unreachable threshold for computational systems.

philpapers.org/rec/CARCBC
Vincent Carchidi, Computational Brain, Creative Mind: Intellectual Freedom as the Upper Limit on Artificial Intelligence - PhilPapers
Some generative linguists have long maintained that human beings exhibit a species-specific form of intellectual freedom expressible through natural language. With roots in Descartes’ effort to distin...
The Ilya interview is worthwhile, though most of it leans into a very lengthy discussion of the problems (he sees) with imagining a hypothetical AGI and its impacts. Which gets a little too far from being grounded IMO.
"The amount of pre-training data is very very staggering, and somehow a human being...with a tiny fraction of pre-training data, knows much less, but whatever they do know they know much more deeply. Somehow. Already at that age, you would not make mistakes that our AIs make."
November 25, 2025 at 8:41 PM
I just don't understand how the FedEx and UPS websites are as bad as they are. Pages that never load, endless loops, sign in credentials not recognized. It has to be intentional.
November 25, 2025 at 4:58 PM
I don't think there's some deep meaning behind this, but there is some convergence among the most vocal language-is-for-thought scholars and the language-is-for-communication scholars on LLMs not being sufficient for general intelligence. Not total, but it's there. Worth observing.
Incidentally, Marcus and Fedorenko come from schools of thought in CogSci that are very intensely opposed to one another! So the convergence there is interesting...
November 25, 2025 at 4:38 PM
Props to @benjaminjriley.bsky.social for bringing some CogSci to the tech world. Worth a read 👇
I’ve been running around asking tech execs and academics if language was the same as intelligence for over a year now - and, well, it isn’t. @benjaminjriley.bsky.social explains how the bubble is built on ignoring cutting-edge research into the science of thought www.theverge.com/ai-artificia...
November 25, 2025 at 2:42 PM
More a parlor trick than anything, but I guess they trained Gemini to explicitly say it hallucinated when a user seems to be calling out a hallucination (I imagine a lot of user prompts look like mine here in response to an obviously wrong answer).
November 25, 2025 at 3:17 AM
Just thinking about this...like I say in the thread, I think the targets are good, as ideals.

I'm just struck by how *direct* the pathway is from SV hype framing to the White House. This EO doesn't get written - at least the way it is - if there wasn't a very open channel between the two.
"The Genesis Mission will build an integrated AI platform to harness Federal scientific datasets...to train scientific foundation models and create AI agents to test new hypotheses, automate research workflows, and accelerate scientific breakthroughs."

www.whitehouse.gov/presidential...
Launching the Genesis Mission
By the authority vested in me as President by the Constitution and the laws of the United States of America, it is hereby ordered: Section 1.  Purpose.
November 25, 2025 at 12:51 AM
Of the opinion that the state of the art has improved since 2022 (I don't care if we call it "reasoning" or whatever for this), but I've been thinking about how this would have played out if expectations had been set appropriately by Altman and co. at that time.
Complete AI/debt saturation coverage this am:

@wsj.com @bloomberg.com
November 24, 2025 at 2:24 PM
I'll always instinctively side with those being threatened with obsolescence, but damn there really is a need for better AI critics
November 24, 2025 at 1:33 AM
Ended up having somewhat mixed feelings about the first half of this paper, but it turned out to be pretty solid re: testing and deployment of genAI for safety-critical systems. Echoes a bit of what I've written about before wrt lower standards for deployment of DL models in particular.
November 23, 2025 at 3:54 PM
Reposted by Vincent Carchidi
Journalist challenge: Use “Machine Learning” when you mean machine learning and “LLM” when you mean LLM. Ditch “AI” as a catch-all term, it’s not useful for readers and it helps companies trying to confuse the public by obscuring the roles played by different technologies. 🧪
November 22, 2025 at 4:50 PM
Worth a try if you've had more trouble finding quality papers recently 👇 Finding it useful already with a single, not very refined prompt.
That was fast. Tonight's bedtime research, I guess...
November 22, 2025 at 4:44 PM
Related panel discussion, also good (more wide ranging).

Choice quote from Tomasello: "If all life on earth was wiped out, ChatGPT would sit on my desk doing nothing, but my home heating system - run by a thermostat - would keep heating my house...it has a goal..."

www.youtube.com/watch?v=oJ9Z...
November 22, 2025 at 3:26 PM
Some people saying she wants to be the GA governor, but this doesn't read like a statement in the run up to a governor's race...
unfortunately, the images just got shared and oh boy it's uh

something
November 22, 2025 at 1:24 AM
To be read. Possibly low-hanging fruit, but Missy Cummings tends to be better than most.

openreview.net/forum?id=uEY...
Prohibiting Generative AI in any Form of Weapon Control
This position paper argues that the use of generative artificial intelligence (GenAI) to control, direct, guide or govern any weapon, either in situ or remotely, should be prohibited by government...
November 21, 2025 at 10:38 PM
Thinking about the Yale professors who fled the US out of fear of prosecution while Mamdani and Trump pal around in the Oval
November 21, 2025 at 10:04 PM
Pretty significant for those interested:

"[The Public Investment Fund's] representatives have begun telling international investors that it is all but unable to allocate any more money for the foreseeable future, six people with knowledge of those discussions said."

www.nytimes.com/2025/11/19/b...
Saudi Arabia’s Prince Has Big Plans, but His Giant Fund Is Low on Cash
November 21, 2025 at 3:08 PM
One ironic consequence of this (referencing G. Marcus) is that, if you want substantive, focused research in safety and reliability in ML/AI these days, you're more likely to find it in academia than industry.
Not looking to do a defense of him, but I would say these two basic things:

Too many resources have been poured into ML specifically;

In part a result of this (not only), insufficient resources have gone to making systems reliable upon deployment, particularly in sensitive domains.
November 20, 2025 at 6:05 PM
As someone who was very supportive of the original motivation for the ARC-AGI benchmark, I've come around to thinking there are two ways of looking at it:

(1) It is directionally significant that the first version could not be saturated until test-time "adaptation" techniques came around.
November 19, 2025 at 6:59 PM
This might have been meant as a hot take, but I do think there's a very substantive point here. And I also think a similar point applies to LLMs (as he says in the thread).
This prompts me to reflect on the exact differences between humans and other primates again. They seem to have the raw capability to do many things we do, but usually not the motivation to discover them independently. Hm...
Chimpanzees use 22 types of tools in the wild, but they are also capable of doomscrolling like the rest of us.

This video of Sugriva using Instagram is shocking because it's so similar to how humans use social media--engaging with content that triggers our emotions, like images of our friends and foes.
November 19, 2025 at 6:09 PM
Good talks by Mitchell and Gopnik (title only says Mitchell).

Centering on the question of whether cognition is even the right frame to evaluate AI models.

youtu.be/yKbeCvOKvMc?...
Melanie Mitchell, Evaluating Cognitive Capacities in AI Systems | Natural Philosophy Symposium 2025
YouTube video by Hopkins Natural Philosophy Forum
November 19, 2025 at 3:16 PM
I think these blinded tests have lost the plot. I don't normally say something is "academic" as a criticism, but this sort of thing is academic.

Fine-tuning and other sorts of stage-setting to boost the appearance of humanlike outputs in restricted domains is interesting yet not especially useful.
November 19, 2025 at 3:12 PM
Reposted by Vincent Carchidi
The issue with bubbles isn't that they're built completely on hoaxes. A few may be, but most are built on overexcitement about a very real technology and its promises. People just get way ahead of themselves, and think that everything that glitters must be gold.
November 19, 2025 at 1:52 PM