John Gallagher
@johnrgallagher.bsky.social
2.3K followers 1.1K following 690 posts
Using qualitative & computational methods to study writers on the internet. I study how machine learning experts communicate. I study the interaction between writers & audiences. Professor @ UIUC. https://publish.illinois.edu/johnrgallagher/
I am teaching Benjamin's "The Work of Art" this week. I'm irrationally excited. I'm ready to update every passage in the context of GenAI and social media.

“How does the cameraman compare with the painter? To answer this we take recourse to an analogy with a surgical operation.”
When working construction, I once lost feeling in both hands for a month after jackhammering in a basement while standing in 18 inches of ice water in January. I made a vow that my kids would never work a job that broke their bodies.
“Humans can just go get manual labor jobs if desk jobs are replaced by AI” is not going to provide enough jobs, and it is also going to produce massive societal anger, because manual labor jobs often *are terrible*, which is why people have worked hard to avoid them for generations.
Even if society provided food- and shelter-level needs for everyone, humans also need to perform useful work just for basic fulfillment. The vast majority of people cannot sit around all day letting AI and robots do everything.

(The definition of “useful work” here is malleable but still essential)
Maybe 20 pieces of candy in the past 48 hours was too much
a DH project I've wanted to see done (not do it myself) would be to correlate the rise of robot fiction with the rise of fiction about thinking animals. I've seen this pattern, but it's only my impression.
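To make the idea concrete, here is a minimal sketch of the correlation step, assuming yearly title counts per genre were already in hand. Every number here is an invented placeholder, not real bibliographic data:

```python
import numpy as np

# Invented placeholder counts, NOT real bibliographic data:
years = np.arange(1900, 1910)
robot_fiction = np.array([1, 1, 2, 2, 3, 5, 6, 8, 9, 12])
thinking_animals = np.array([2, 2, 3, 3, 4, 6, 7, 9, 11, 13])

# Raw correlation is inflated by the shared upward trend,
# so also correlate the year-over-year changes.
r_raw = np.corrcoef(robot_fiction, thinking_animals)[0, 1]
r_diff = np.corrcoef(np.diff(robot_fiction), np.diff(thinking_animals))[0, 1]
print(f"{years[0]}-{years[-1]}: raw r = {r_raw:.2f}, differenced r = {r_diff:.2f}")
```

The differenced correlation matters because two genres that both grow over time will correlate highly even if they have nothing to do with each other.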
intelligence is the thing which i have. admitting things are intelligent means considering them morally and socially equal to me. i will never consider a computer morally or socially equal to me. therefore no computer program will ever be intelligent
Reposted by John Gallagher
This sounds cynical—but represents a huge advance over empty “AGI” speculation.

It's a political question, not a technical one. Without social equality, models *cannot do* many kinds of work (e.g., negotiating agreements or managing workers). So they will only be human-equivalent if we decide they are.
intelligence is the thing which i have. admitting things are intelligent means considering them morally and socially equal to me. i will never consider a computer morally or socially equal to me. therefore no computer program will ever be intelligent
I periodically get upset that Bezos spent $100 million on his wedding but the media just shrugged, even celebrated it
When you see Mamdani looking all normal doing normal person things—petting bodega cats, riding public transport—you realize just how bizarre and estranged our average candidate for public office is. Rich martians in flesh suits
i'll throw some money at waymo, but humanoid robots are a long, long way off. I eat dinner with engineers building both robots and heavy machinery.
My neighbors put up Christmas lights today. I dislike the way capitalism has turned every moment into a holiday season.
There’s nothing that makes me more irrationally angry than Christmas music in the stores on November 1
I think the analogy is misplaced. It's information commodities, not physical ones.

But I would be investing in companies that made the machines that made the looms. In a gold rush, sell shovels.

In 2025, I'm all in on ASML. I'll ride them to the moon.
8 pieces of candy, one bag of chips (fun size), and a cookie. I have a tummy ache. happy Halloween!
I believe it’s content analysis
We have progressed from data collection to data analysis.
I think journals should have a template for reviewers.
I think a really useful way to interact with LLMs is to think of your audience as a vector space.
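A toy sketch of the metaphor: treat each audience as a vector and check which one a draft sits closest to. The bag-of-words vectors and tiny vocabulary below are my own illustration; real use would swap in learned embeddings:

```python
from collections import Counter
import math

def bow_vector(text, vocab):
    """Bag-of-words count vector over a fixed vocabulary."""
    counts = Counter(text.lower().split())
    return [counts[w] for w in vocab]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical audience "profiles" and draft, invented for illustration:
vocab = ["gradient", "embedding", "model", "students", "draft", "audience"]
audiences = {
    "ML researchers": "model embedding gradient model",
    "writing students": "draft audience students draft",
}
draft = "this draft asks students to think about audience"

draft_vec = bow_vector(draft, vocab)
for name, profile in audiences.items():
    print(f"{name}: cosine similarity {cosine(bow_vector(profile, vocab), draft_vec):.2f}")
```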
This is an excellent use case.
For those of you doing ML, do you find the model cards useful in any way? What would make model cards more effective?
I love the awkward in-progress prose of students writing out proposals for their final projects. It may be unvarnished, but there are ideas ready for polishing.
Reposted by John Gallagher
Karpathy starts by saying human supervision is needed only because models aren't yet capable enough. But if you look closely at the obstacle ("I want to learn... and become better as a programmer, not just get served mountains of code that I'm told works"), it doesn't go away "when we reach AGI." +
GPT5 can already do like 500-800 lines of code by itself. I find its coding skills quite useful, but I still gotta implement them.
There is more plot and character development in the first 6 chapters of The Count of Monte Cristo than in any *season* of a TV show I've seen in the past ten years.
The perplexity web browser is the end of online education.
Last Friday I gave a keynote at the University of Kentucky titled "A Spectrum of GenAI Use in the Classroom."

I'd love to bring this discussion to other institutions, and to anyone you think might be interested.
Yesterday, I spent an hour reading reactions from people I don't know to a podcast from journalists (Klein, Coates) who were talking about their writing in reaction to Kirk. My kiddo came up to me asking for a hug, worried about school. I need to get offline. I need to touch more grass.
When you watch children learn, when they're screaming because it's so hard or they shake because they don't understand, and you hug them and you talk and together you all stop for a snack, and then the goal finally happens, you realize that learning is not training a model.