Rob Horning
@robhorning.bsky.social
4.4K followers 250 following 450 posts
robhorning.substack.com
robhorning.bsky.social
that users could prefer a generated simulation to actual old clips for nostalgia purposes clarifies how nostalgia is about consuming "decontextualization" in itself — nostalgia negates history under the auspices of longing for it
nitishpahwa.com
YouTube has a legit library of recordings from quotidian settings (which are interesting, mostly as historical markers), but instead of promoting that, social media pushes soulless facsimiles solely meant to associate a feeling with a moment sans the immediate, substantive context
mugrimm.bsky.social
This is doing numbers on social media right now and it's so depressing how people truly yearn for this shit and want to preserve that feeling indefinitely like a mausoleum of false memories.
robhorning.bsky.social
social platforms are the last place one should go to try to find out "what people are saying" (though they may give hints on what your data suggests companies think you should believe to make you most manipulatable)
robhorning.bsky.social
any "social platform" seems likely to be overrun by generated text that enacts "new conspiracism"/parasocial participation in ideas that carry a libidinal charge
robhorning.bsky.social
"new conspiracism" doesn't explain anything but is a means for isolated individuals to experience "social validation" on demand, in the absence of a verifiable public — a way to intensify the gratification of parasociality www.nplusonemag.com/issue-51/pol...
 the new conspiracism has its technological basis in digital platforms and the rise of reactionary influencers and “conspiracy entrepreneurs.” Outlandish and pointless fantasies, like the conspiracies circulated by QAnon or the alleged staging of the Sandy Hook school shooting, exist to be recited and shared, acting as instruments of online influence and coordination rather than narratives to make sense of the world. They may identify enemies and reinforce prejudices, but they don’t explain anything or provide a political plan. The only injunction of the new conspiracist is that their claims get liked, shared, and repeated. Engagement — and revenue — is all.
robhorning.bsky.social
“infinite video” means not infinite entertainment but infinite boredom; the death drive incarnate
robhorning.bsky.social
it's perhaps self-evident that generative video makes the world more boring, but one could hope it would re-enchant those forms of visual experience that resist simulation
robhorning.bsky.social
the idea that some videos are intrinsically interesting to watch (regardless of whether they have any reference to events or things in themselves, any kind of auratic appeal) feels like it can't survive generative models, which make all forms of mere seeing trivial
robhorning.bsky.social
but it seems like there is something too naked about it; how does ideology work when it has not even a flimsy alibi? How do people enjoy overt simulations? What makes Disneyland fun?
robhorning.bsky.social
generated video allows consumers to inoculate themselves against events and representations that don't conform to their schema by instantly offering alternatives that soothe them and match their expectations: They can enjoy their own ideological interpellation as a movie, or an endless feed
robhorning.bsky.social
to some extent, all media does this — pattern reality ideologically and make some kinds of events seem normal and others unrepresentable; make some explanations for why things happen seem obvious and others inconceivable
robhorning.bsky.social
this from Yves Citton's Mythocracy is maybe useful for thinking about Sora 2 and other slop feeds: Generated video constitutes an "imaginary of power" that gives consumers pictures of how they've been trained to believe things are "supposed to be"
The imaginary of power is not, therefore, a 'theory' that comes along after the fact to provide the analytical explanation of the images circulating around us. Rather, it is a set of schemas that we experience insofar as we use them. It is a set of imagos, of 'expansive forms' (patterns, Gestalt) that shape our expectations inasmuch as we are able to reconfigure them. They are spectacles that help us see 'reality' only by filtering what we see of it.
robhorning.bsky.social
wonder if the ease and rapidity with which "AI" can generate right-wing fantasy images and propaganda makes them more convincing for their consumers — as though one shouldn't have to use one's own imagination to manifest the bigotry one insists on www.lrb.co.uk/blog/2025/se...
everyone knows the videos aren’t real, but I was missing the point: ‘It’s about us showing everyone what’s really happening.’
robhorning.bsky.social
agree, just think some find this kind of text reassuring—confirmation that they are right not to care about reading, writing, or any conventional sort of literacy
robhorning.bsky.social
LLMs mean that no one has to write anything they don't care about, but they also mean that "writing anything" will get equated with "not caring" for most people. (If you really cared, you would video yourself talking about it on your phone.)
robhorning.bsky.social
you can help generate so much slop that "the ear" would be deafened forever, and no one could ever call your own into question, and you can make all your necessary "discoveries" elsewhere, through some other means, in some realm of only right and wrong answers that makes "discovery" moribund anyway
robhorning.bsky.social
if all writing can be made merely functional and perfunctory, then the aesthetic quality that seemed inherent to it (what it takes an "ear" to appreciate) could be eradicated; no more écriture, just code
robhorning.bsky.social
if you don't experience "writing as discovery," LLMs allow you to experience the negation of that, and possibly even take joy in seeing others chagrined by the apparent invalidation of that cliche
robhorning.bsky.social
many people are not interested in "writing to discover what they are thinking," or to refine their thinking, etc. because they are not interested in making such discoveries or taking on the burden and the narcissism of having them
robhorning.bsky.social
not bad advice, but presumes that most people read and write to experience "charm, surprise, and strangeness" when the opposite may be the case www.nplusonemag.com/issue-51/the...
For publishers, editors, critics, professors, teachers, anyone with any say over what people read, the first step will be to develop an ear. Learn to tell — to read closely enough to tell — the work of people from the work of bots. Notice the poverty of the latter’s style, the artless syntax and plywood prose, and the shoddiness of its substance: the threadbare platitudes, pat theses, mechanical arguments. And just as important, read to recognize the charm, surprise, and strangeness of the real thing. So far this has been about as easily done as said. Until AI systems stop gaining in sophistication, it will become measurably harder. Required will be a new kind of literacy, an adaptive practice of bullshit detection.
robhorning.bsky.social
"Not Supposed to Break Down"
robhorning.bsky.social
What does it mean to "optimize" for this condition — to train users to enjoy it? Why is it most profitable for companies to train us in wanting to pay attention as a way of avoiding rather than seeking meaning? www.noemamag.com/the-last-day...
The problem is not just the rise of fake material, but the collapse of context and the acceptance that truth no longer matters as long as our cravings for colors and noise are satisfied. Contemporary social media content is more often rootless, detached from cultural memory, interpersonal exchange or shared conversation. It arrives fully formed, optimized for attention rather than meaning, producing a kind of semantic sludge, posts that look like language yet say almost nothing.
robhorning.bsky.social
seems indicative of how stagnant the ideas behind "AI" are that Baudrillard could write a critique of them in 1995 (The Perfect Crime) and none of it seems dated
danmcquillan.bsky.social
Why is AI so shit? Its shoddy outputs reflect the brittleness of an epistemology that relies fully on abstraction & reduction and actively eliminates relationality and adaptivity. As such, it doesn’t replace human activity but acts as its violent suppression.
Reposted by Rob Horning
hypervisible.blacksky.app
And how does the machine know your intent, you might ask? Well by constant surveillance.

Just kidding, the machine does not “know” your fucking “intent.”
While Ralph Lauren is an early adopter of AI technology, many fashion brands are building their own apps, says Shelley Bransten, corporate VP of global industry solutions at Microsoft. She says that the fashion industry is now shifting from “scroll-based” shopping, which involves looking through rows of thumbnails, to “goal-based” shopping, which deploys AI to surface results based on the customer’s specific needs at that moment. “The shopping experience is going to be more personalized, relevant, and more tied to the customer’s intent,” she says.