Benjamin Riley
@benjaminjriley.bsky.social
Founder of Cognitive Resonance, a new venture dedicated to helping people understand human cognition and generative AI. Advocate for humans.
Pinned
In April 2025 I delivered a speech at the ASU+GSV ed-tech conference titled "AI Will Not Revolutionize Education." It touches on human cognition, gen AI, and the nature of scientific and social revolutions. I worked hard at this; I hope you'll watch and share.

www.youtube.com/watch?v=u0_t...
AI Will Not Revolutionize Education
YouTube video by Cognitive Resonance
www.youtube.com
Curious what those tech execs say to you, @reckless.bsky.social!
For over a year now I’ve been running around asking tech execs and academics whether language is the same as intelligence - and, well, it isn’t. @benjaminjriley.bsky.social explains how the bubble is built on ignoring cutting-edge research into the science of thought www.theverge.com/ai-artificia...
November 25, 2025 at 1:56 PM
Reposted by Benjamin Riley
nailed it:
November 25, 2025 at 1:04 PM
Grateful to The Verge for publishing my essay on why large language models are not going to achieve general intelligence or push the scientific frontier.

www.theverge.com/ai-artificia...
Is language the same as intelligence? The AI industry desperately needs it to be
The AI boom is based on a fundamental mistake.
www.theverge.com
November 25, 2025 at 12:49 PM
"Responding to AI by doubling down on the humanity of the humanities appears to be working...[O]ur ever-greater reliance on nonhuman interlocutors and assistants has given new value to the very fact of face-to-face exchange between humans."

AI improves education by forcing human resistance.
I’m a Professor. A.I. Has Changed My Classroom, but Not for the Worse.
www.nytimes.com
November 25, 2025 at 11:52 AM
Hello! One of Enron's incredibly junior investment bankers here. One thing we discussed a lot at the time is that we couldn't trace the business model of the company due to its Byzantine accounting structure. But Enron was so high-flying on Wall Street that we assumed we were missing something.
it's happening dot gif
November 25, 2025 at 12:45 AM
Report from the front lines of epistemological collapse. Of course this was happening on "X" at the very same time Grok was vomiting nonsense about Elon Musk as an infallible godhead.

If only there were physical spaces where humans were resisting this. We might call them "schools."
I wrote about the fake account blowup on X this weekend. A genuine post-truth nightmare, and proof that these companies have so thoroughly polluted their platforms and traded reality for profit that they've undermined the very idea of what the internet is supposed to be.
That MAGA Account Might Be a Troll From Pakistan
How X blew up its own platform with a new location feature
www.theatlantic.com
November 24, 2025 at 4:47 PM
Reposted by Benjamin Riley
This is a genuinely shockingly bad essay - I would not accept reasoning and evidence this poor from the freshmen I teach.
November 24, 2025 at 2:44 PM
I try not to do ad hominem around here, but this essay is so unbelievably stupid that I was stunned when I found it was authored by a NYT staffer.

There's a reason reporters generally have beats: so they actually know something about the topic they're writing about. This reporter simply does not.
November 24, 2025 at 2:23 PM
Cognitive automation for teachers poses the very same problems that cognitive automation poses for students. Astute observations here on how the cult of efficiency runs counter to just about everything we know about learning.
November 24, 2025 at 2:14 PM
Despite all the hype around superintelligence or whatever, there's still a very simple thing that AI models are completely incapable of: obeying a time limit. Why? And what does this suggest about the human relationship with time?

All that plus some Ferris Bueller in my latest essay.
On artificial time
Why can't you tell a chatbot how long to work on something?
open.substack.com
November 24, 2025 at 12:48 PM
Meta may be able to shut down its own internal research, but it can't stop parents from observing their kids.
Meta halted internal research that purportedly showed (young) people who stopped using Facebook became less depressed and anxious, according to an unredacted legal filing released on Friday. www.cnbc.com/2025/11/23/m...
Meta halted internal research suggesting social media harm, court filing alleges
Meta is alleged to have halted internal research suggesting social media harm, according to court documents.
www.cnbc.com
November 24, 2025 at 1:26 AM
Surprisingly, this post got traction last night. So now I want to tell you about Clay Shaw, and Jim Garrison's horrific homophobic prosecution of him, a nightmare of Les Misérables-like proportions.
This thread may be long, but there are lessons for us today in knowing what happened. Here goes...
November 23, 2025 at 12:53 PM
Another brilliant essay from @eryk.bsky.social; this time he writes the history of ChatGPT against the backdrop of mass loneliness and social distance. This history is still unfolding, and the signs are not encouraging for where we are headed.
What was ChatGPT? Now nearly three years old, we can look at OpenAI's LLM as a product of its time, optimized ever since to its earliest uses. While this period of deep disorientation and social isolation has been obscured from public memory, it remains embedded within the interface.
What Was ChatGPT?
A Chatbot Optimized for Social Distance Three years after the launch of ChatGPT, we can finally speak in hindsight about what it was and how it came to be. Its meteoric rise shocked the world, gather...
mail.cyberneticforests.com
November 23, 2025 at 12:43 PM
Historian Steven Mintz has suddenly become essential reading on AI and human cognition. "I could no longer assume that human minds had wrestled with the material."

Sometimes I say AI is a tool of cognitive automation, but it's also a tool of cognitive counterfeiting.

(Link below)
November 23, 2025 at 11:22 AM
That Oliver Stone made a movie lionizing Jim Garrison's ludicrous prosecution of an innocent man (Clay Shaw) is one of the most egregious historical transmogrifications in cinema. Imagine, in 20 years, a Stephen Miller hagiography starring Chalamet; that's the level of artistic crime we're talking about.
We do know who killed JFK. The Warren Commission was an incredibly thorough and good-faith effort to prove and document what had happened, which they did. The conspiracy theories arose in spite of best efforts to avoid them, but there really wasn't anything they could reasonably have done better.
My first political memory - exactly 62 years ago right now, a 4-year-old boy trying to understand his mom's tears - is still the biggest event in my lifetime. We still (IMO) don't *really* know who killed JFK, but we know the public's trust was shattered. It's a straight line to today's mess
November 23, 2025 at 12:16 AM
Who could have predicted this? Besides anyone with a basic understanding of how generative AI works, I mean?
Almost as soon as a consumer advocacy group began testing an A.I.-enabled toy bear, trouble began. Instead of chatting about homework or bedtime, testers said it sometimes spoke of knives and sexual topics. They warned that toys like it could allow children to stray into inappropriate exchanges.
A.I. Toy Bear Speaks of Sex, Knives and Pills, Consumer Group Warns
The chatter left startled adults unsure whether they heard correctly. Testers warned that interactive toys like this one could allow children to stray into inappropriate exchanges.
nyti.ms
November 22, 2025 at 10:34 PM
Pair this with @audreywatters.bsky.social's report from a recent conference at Duquesne, "the first time in my 15+ years as an education writer...where No was presented as a viable [and even] moral response to computing." [Link below]

Catholicism is arguably the center of AI resistance.
November 22, 2025 at 4:44 PM
Reposted by Benjamin Riley
We spent a year investigating billionaires for @washingtonpost.com.

We found: the wealthiest 100 Americans gave $1.1 billion to influence the 2024 elections — 140x more than they did in 2000. And almost all of that giving boosted Republicans.

washingtonpost.com/politics/int...
November 21, 2025 at 2:56 PM
New research supporting the radical proposition that trying to learn something works better than not trying.

theconversation.com/learning-wit...
Learning with AI falls short compared to old-fashioned web search
Doing the mental work of connecting the dots across multiple web queries appears to help people understand the material better compared to an AI summary.
theconversation.com
November 21, 2025 at 1:29 PM
Why not an early morning thread using this data to explore the concept of "identity-protective cognition," and why understanding how it operates may be vital to understanding the role of misinformation in our digital age? This will get two likes, surely, but here goes...
A horrifying graphic from Pew showing the fruits of the right-wing war on public health.

Only 48% of Republicans believe that vaccines prevent serious illness, while 80% of Democrats do.

This is not a difference of opinion about science. This is the result of a ruthless disinformation campaign.
November 21, 2025 at 12:12 PM
Reposted by Benjamin Riley
More Caesar the No Drama, Anti-Fascist Llama is the best character at Portland ICE protests! #iceout #portland
October 31, 2025 at 4:42 AM
Good day to be squinting.
I didn't predict what happens next in my contribution, but I did write about the time, three years ago, when someone told me there would be no writing classes within 2 years, about what I continue to value about the writing classroom, and about teaching a writing course about AI.
November 20, 2025 at 9:19 PM
"When private companies [OpenAI] can reach half-trillion-dollar valuations while remaining exempt from disclosure requirements designed to protect systemic stability, the public-private distinction no longer serves its intended function."

substack.com/inbox/post/1...
The Auditor Paradox: How OpenAI’s Governance Gap Exposes Silicon Valley’s Circular Capital Machine
A $500 billion company building artificial general intelligence relies on a twelve-person accounting firm.
substack.com
November 20, 2025 at 7:07 PM
Man, the discourse pivoted quickly from "superintelligence and abundance, coming soon!" to "uhhh economic collapse in the short run but that will also be good, here's why:..."
What if the A.I. bubble “is an inevitable part of developing and adopting a revolutionary tool that will fundamentally improve productivity and growth?” Mohamed El-Erian writes.
Opinion | A.I. Is a Bubble. Maybe That’s OK.
Investors’ excitement rightly reflects the potential transformation of the entire economy.
nyti.ms
November 20, 2025 at 6:27 PM
"It’s clear that schools have become a battleground for AI companies who desperately want to get their product ingrained into as many institutions as possible"

And @aft.org & @rweingarten.bsky.social are eagerly helping. OpenAI's announcement yesterday was replete with references to their partnership.
November 20, 2025 at 5:13 PM