Hannes Bajohr
hannesbajohr.de
@hannesbajohr.de
Working on AI, political theory, and the German philosophical tradition in the 20th century
German Dept #UCBerkeley
I forgot to add this recent preprint to this thread: "Surface Reading LLMs: Synthetic Text and its Styles" discusses non-computational ways for the humanities to approach the outputs of LLMs. Style, I contend, is a probabilistic notion we can still make use of.

bsky.app/profile/hann...
Read my new preprint "Surface Reading LLMs: Synthetic Text and its Styles." I argue that we should not only look at the depth behind LLMs, but also take seriously the surfaces with which they present us: as the principal plane on which we encounter them in the life-world.
Surface Reading LLMs: Synthetic Text and its Styles
Despite a potential plateau in ML advancement, the societal impact of large language models lies not in approaching superintelligence but in generating text surfaces indistinguishable from human writi...
arxiv.org
December 11, 2025 at 11:12 PM
My two-year-old's favorite genre (didn't know the word made it into English already!)
November 24, 2025 at 4:37 PM
Reposted by Hannes Bajohr
Welp.
November 22, 2025 at 1:50 PM
Put differently: This is "langue" in action.

Yet this, too, holds: Seen from the outside, as a reader of the readings, the quotation marks around "writing" and "reading" are invisible anyway.
November 14, 2025 at 7:59 PM
But when the production of text, in a simulation of "writing," and its reception, in a simulation of "reading," succeed to this extent, I find the claim that intention is the necessary locus of grounding increasingly implausible, and find the neostructuralist explanations all the more convincing.
November 14, 2025 at 7:59 PM
I am not making any deep theoretical claim here, but at least anecdotally, it seems less and less likely that holding on to the notion of "communicative intent" as the criterion for meaning makes much sense anymore. Yes, the models string tokens together.
November 14, 2025 at 7:59 PM
But this is the second point: The model has progressed from a model of writing to a model of _reading_. It can behave as if it were a receiver who, with Iser and Ingarden, fills in the "Leerstellen" and expands on the "points of indeterminacy" of the text.
November 14, 2025 at 7:59 PM
I am not saying the reading that NotebookLM produced is extraordinary - but it is nonetheless impressive to see what happens when this text _is_ read.
November 14, 2025 at 7:59 PM
In other words, the context of publication steered and curtailed the reception into non-reception. This was to be expected, especially since the press at the time focused on the fact that "AI cannot yet narrate," and thus used the novel to make a broader point.
November 14, 2025 at 7:59 PM
Two observations: First, this is the most serious interpretation of what appeared to pretty much all interpreters at the time of publication as a nonsense text – which was understood as necessarily nonsense, not least because it was written by/with an AI.
November 14, 2025 at 7:59 PM
Claude is now able to place the text as experimental writing and can extract the plot points rather well. Even more impressive is what NotebookLM does with the book (which I uploaded without the explanatory afterword) when asked to produce one of its "deep dive" podcast outputs (posted above).
November 14, 2025 at 7:59 PM
As I wrote in my afterword, whatever meaning the novel produced is, then, mostly an effect of its reception.

This was 2023. As the novel is being translated into English (by a human!), to be published next year with MIT Press, I tried the automatic summary again.
November 14, 2025 at 7:59 PM
When I was halfway done, I asked Claude - the only LLM at the time that could handle texts of up to 100 pages - to summarize it. The reply: Impossible - no structure or plot, and perhaps too much a text written for humans to be understood by a lowly AI like itself.
November 14, 2025 at 7:59 PM