Owen Leonard
owenleonard.bsky.social
PhD student in English at UC Santa Barbara, working on critical AI/ML.

See my work at owenleonard.dev and check out the Center for the Humanities and Machine Learning at huml.ucsb.edu.
maybe blogs or listservs that I don’t know about are doing this work, but that whole ecosystem seems very diffuse
November 29, 2025 at 9:35 PM
I do think there may be a need for some kind of preprint server for humanities-inflected position papers—for instance, the DeepSeek-OCR release last month could have produced a lot of interesting 5-6 page reflections on word and image that wouldn’t really fit the journal format
November 29, 2025 at 9:35 PM
there’s an indexical aspect to stock photos like these that’s really lacking in AI images
October 3, 2025 at 2:05 AM
did the great tragedians ever consider simply instructing their readers to feel sad?
September 26, 2025 at 11:53 PM
something has been wrong with Chronicling’s (or LOC’s?) rate limiter for a while, because I had similar issues earlier this year
September 11, 2025 at 3:27 PM
that is, the cultural and economic forces that have shaped the development of AI are in many ways the same forces that have shaped the development of our notion of “the human” as such—making it a poor locus of resistance
August 20, 2025 at 3:59 AM
I think in general it’s somewhat irresponsible to stake your critique of AI on some stable, essential idea of what it means to “be human” or “act human”—not only are those categories extremely flexible, they inherit heavily from the same tradition of rationalism that got us here in the first place!
August 20, 2025 at 3:56 AM
(this objection is directed at Chollet, of course, not you)
August 15, 2025 at 11:47 PM
What good could ARC-AGI possibly be if systems can do well on it without exhibiting generalizable intelligence??
August 15, 2025 at 11:47 PM
If your response to someone achieving good performance on your benchmark is to complain that they didn’t do it the way you wanted, you have a bad benchmark
August 15, 2025 at 11:46 PM
nobody wants to drown hungry!
August 15, 2025 at 8:40 PM
If I’m reading this right, Sam is basically saying outright that the GPT-5 personality changes are in response to the “ChatGPT psychosis” panic??

If so, I’m surprised that OpenAI is so rattled.
August 13, 2025 at 12:20 AM
Of course LW’s conception of “forms of life” is anthropomorphic, and it’s hardly fair to expect him to have foreseen the present state of NLP, but the fact that LLMs exhibit linguistic competence *would* seem to challenge the idea that shared word-conventions must reflect agreement in forms of life.
August 12, 2025 at 7:09 PM
It’s just not obvious to me that the operations of an LLM in silico do not constitute a form of life, if a very alien one. I may be misreading PI but it seems to me that any experience which imparts linguistic competence qualifies as a form of life, and that LLM training is such an experience.
August 12, 2025 at 7:07 PM
PI also warns against “the temptation to invent a myth of meaning”—which so many (shallow) critiques of LLMs do by equating meaning to the favored signifiers of humanism (art, beauty, the soul, etc.)
August 12, 2025 at 6:15 PM
Yes, absolutely. The work then is to specify what those ways of meaning-making are and why they are important.
August 12, 2025 at 5:51 PM
in other words, I guess, is touching silicon so different from touching grass?
August 12, 2025 at 5:49 PM
Wittgenstein says that “we are talking about the spatial and temporal phenomenon of language, not some non-spatial, non-temporal phantasm”—but then again, Matt’s work so effectively reminds us that computation is not as non-spatial and non-temporal as it is sometimes made to seem.
August 12, 2025 at 5:49 PM
But I do think it’s worth asking where the invocation of the symbol grounding problem leads rhetorically wrt general-purpose value judgments about LLMs, and I think it’s often to the same kind of shallow, knee-jerk humanism that’s been a feature of AI discourse since time immemorial. end/
August 12, 2025 at 5:34 PM
To be clear, I don’t think that you’re doing this here and I do think that the symbol grounding problem is a much more salient and well-founded characterization of LLMs than glorified autocomplete, just linear algebra, stochastic parrots, and so on. 3/
August 12, 2025 at 5:30 PM
And I think the answer often involves an implicit or explicit appeal to some value-laden humanistic category (art, love, beauty, empathy, etc.) that is assumed to be accessible only to beings that use language in an embodied, grounded way. 2/
August 12, 2025 at 5:29 PM