Casey Newton
@caseynewton.mastodon.social.ap.brid.gy
140 followers · 0 following · 140 posts
Email salesman at Platformer and podcast co-host at Hard Fork. [email protected] [bridged from https://mastodon.social/@caseynewton on the fediverse by https://fed.brid.gy/ ]
caseynewton.mastodon.social.ap.brid.gy
Wrote about fresh signs of an AI bubble (and why the industry says the worries are mostly overblown) https://www.platformer.news/ai-bubble-2025/
We have entered the "weird financial tricks" phase of the bubble. A reliable indicator of bubbles is that companies concoct bizarre financial instruments and accounting measures to paper over their problems. In Fortune, Allie Garfinkel reports that venture capitalists are irked at some of the creative accounting going on in their portfolios. Some startups are reporting one-time sales as "recurring revenue." Others technically show "recurring revenue" for what are essentially trials.

And that's on the tame side. For crazy, you can look to Robinhood's suggestion that it might offer "tokenized" equity in OpenAI. By "tokenized," Business Insider reports, Robinhood means "blockchain-enabled representations of securities like stocks." In reality, the tokens have no connection to OpenAI equity whatsoever. But that doesn't mean that consumers shouldn't be able to gamble on the mirage!
caseynewton.mastodon.social.ap.brid.gy
In this edition we're also launching the Following feed, part of our new approach to curating links. Each day we're giving you three to five stories we're following, why we're following them, and what people are saying about them. Today we wrote about reactions to Sora.
Zelda Williams, daughter of the late comedian Robin Williams, begged people in an Instagram story to stop sending her Sora videos of her dad. "You're not making art, you're making disgusting, over-processed hotdogs out of the lives of human beings, out of the history of art and music, and then shoving them down someone else's throat hoping they'll give you a little thumbs up and like it," she wrote. "Gross."

Meanwhile, what do average users want? Fewer content restrictions. We found App Store reviews reading “It’s so censored it’s not even fun” and “the safeguards in place prevent you from making anything worth watching,” among other complaints.
caseynewton.mastodon.social.ap.brid.gy
I wrote about OpenAI's ambitious platform play and the Cambridge Analytica flashbacks it inspired. Plus, Sam Altman takes my question about enshittification risk https://www.platformer.news/openai-dev-day-2025-platform-chatgpt/
At launch, OpenAI is promising a more rigorous approach to data privacy. OpenAI will share only what it needs to with developers, executives said. (They essentially hand-waved through the details, though, so the actual mechanics will bear scrutiny.) Unlike Facebook, OpenAI has no friend graph to worry about — whatever might go wrong between you, ChatGPT, and a developer will likely not involve giving away the contact information of all of your friends.

At the same time, the AI graph may prove even riskier. ChatGPT stores many users’ most private conversations. Leaky data permissions, either intentional or accidental, could prove disastrous for users and the company. It only took one real privacy disaster to end Facebook’s platform ambitions; I can’t imagine it would take much more to end OpenAI’s.
caseynewton.mastodon.social.ap.brid.gy
Here's what everyone is saying about Sora. Of note: several OpenAI employees have posted about their concerns with the product https://www.platformer.news/sora-2-reactions-openai/
It makes OpenAI employees nervous. One of the more surprising reactions to the launch came from OpenAI's own employees, several of whom arched an eyebrow about the company's latest product. "AI-based feeds are scary," posted John Hallman, who works on pretraining at the company. "I won't deny that I felt some concern when I first learned we were releasing Sora 2. That said, I think the team did the absolute best job they possibly could in designing a positive experience." (Here's a funny response to that one.)

"I share a similar mix of worry and excitement," responded Boaz Barak, a member of OpenAI's technical staff. "Sora 2 is technically amazing but it’s premature to congratulate ourselves on avoiding the pitfalls of other social media apps and deepfakes."
caseynewton.mastodon.social.ap.brid.gy
Talked to OpenAI about its new parental controls and the effort to make ChatGPT safer for kids — by pushing more responsibility onto parents https://www.platformer.news/chatgpt-parental-controls-child-safety/
You can appreciate the new choices that OpenAI is giving parents here while also noting that parental controls push at least some of the responsibility for ChatGPT safety onto those same parents — and require them to create ChatGPT accounts of their own. In that, OpenAI joins Meta, Snap, TikTok, and other apps popular with teenagers that all but require parents to take on a part-time job learning and managing the settings of apps that change continuously and introduce new risks as they do.

At the same time, this is a feature that could absolutely save lives. It’s also a feature that no other chatbot maker has yet introduced, despite high-profile cases of harm on their platforms.
caseynewton.mastodon.social.ap.brid.gy
ChatGPT Pulse is a step toward becoming a proactive assistant that works for you in the background. It's also one more feed to scroll. Sound familiar? https://www.platformer.news/chatgpt-pulse-proactive-ai/
OpenAI says it wants ChatGPT to be helpful rather than addictive, and that’s the way I experience it today. But a feature like Pulse, which looks and feels like a social feed, could create other incentives. (Particularly given that the company is reportedly now also looking for an ads chief, as Alex Heath reported in his new newsletter Sources this week.)

Notably, many OpenAI executives and product leaders came from Meta, including new CEO of applications Fidji Simo. Over time, Facebook’s growth imperatives led it to greatly expand the reasons why it would send you a push notification, until eventually they had no real connection to the user at all. (Today Facebook notified me that someone had listed a 1987 Dodge Ram on Marketplace, a feature that I have never once used.)
caseynewton.mastodon.social.ap.brid.gy
Today is Platformer's fifth birthday! I wrote about the state of the newsletter economy, how the business is going, and what we're planning for year six https://www.platformer.news/platformer-year-five-lessons/
Three, we’re going to create an audio feed. For years now, readers have asked me to narrate each edition so you can listen to it as a podcast. (As pioneered by Ben Thompson’s great Stratechery.) I’ve resisted until now because I worried the added work would burn me out. (Candidly, I still do.) But the success of Hard Fork — one of the 100 best podcasts of all time, by the way! — has introduced our work to a lot of people who would rather listen to me than to read. I’m still in the early stages of planning this, and I expect to encounter many technical challenges along the way. But my hope is that a year from now you’ll get regular audio drops from Platformer that make your subscription feel valuable.
caseynewton.mastodon.social.ap.brid.gy
I wrote about how social media enabled Charlie Kirk while making our politics ever worse. https://www.platformer.news/thursday-newsletter-4/
X remains the platform of choice for most politicos, and continues to play an outsize role in shaping elite perception of our politics. The dark picture of America that takes root there spreads to TikTok, Instagram, and cable news, continuously warping Americans' perceptions of one another.

The vast majority of Americans continue to reject political violence. Civil war is for idiots and losers. But in the dark mirror of social media, we are only ever on the brink of all-out conflict. And the longer we stare at it, the more we risk that fiction turning into reality.
caseynewton.mastodon.social.ap.brid.gy
ChatGPT will soon alert parents when their children express thoughts of self-harm. Will that protect them — or simply drive them elsewhere? https://www.platformer.news/openai-teen-accounts-safety-senate-hearing/?ref=platformer-newsletter
Porn sites offer a relevant point of comparison. After the United Kingdom started to require adult sites to confirm users’ ages this summer, traffic to those sites plummeted. At the same time, sites that flouted the law thrived. Pornhub, for example, saw traffic drop 47 percent in the first two weeks after age-gating was introduced, according to the Guardian.

AI models are becoming smaller and more capable all the time; you can already run very good ones on a modern laptop. It seems entirely possible that teens who are savvy to OpenAI’s alert system will simply take their sensitive questions elsewhere.
caseynewton.mastodon.social.ap.brid.gy
After all that hand-wringing about the dangerous Chinese TikTok algorithm, it appears that Trump is going to let the new American TikTok just license it from China. Whatever!! https://www.platformer.news/tiktok-deal-bytedance-china-trump-bessent/
Congress also worried — perhaps unconstitutionally — that the Chinese government would pressure ByteDance to manipulate its recommendation algorithms to sow division in the United States. (Among other things, members of Congress complained that the app seemed too pro-Palestinian.)

Well, now we know what the deal will do to address the threat of ByteDance manipulating the content in Americans' TikTok feeds: nothing. The spun-out company will simply license the recommendations from its parent company, and that will be that.
caseynewton.mastodon.social.ap.brid.gy
The FTC's investigation into chatbots and children suggests that Republicans are *this* close to having an epiphany about AI safety https://www.platformer.news/ftc-chatbots-child-safety/
At the same time, some quarters of the administration appear this close to grasping what has been true all along: that creating a powerful, free, sycophantic digital companion and putting it in the hands of every child in America is not actually a great way to "beat China." It's reckless and has already ended in tragedy.
caseynewton.mastodon.social.ap.brid.gy
The AI as Normal Technology guys are back with a good new post on why they think AI transformation of society will take decades. I'm a fan, but had to ask — doesn't the effect AI is *already* having on education suggest things may go much faster? […]

[Original post on mastodon.social]
To all of these people, AI in the classroom is already an accepted fact of life, long before artificial general intelligence has arrived. And it happened with astonishing speed, because it turned out that even the original version of ChatGPT could do many school assignments as well as or better than the average student. That hasn't been true so far with most knowledge worker jobs. But what happens when it does become true? Will the fallout truly unfold over a smooth curve that takes decades to arrive?
caseynewton.mastodon.social.ap.brid.gy
NEW: Google tells me it will re-file a legal brief in which lawyers said "the open web is already in rapid decline" to clarify that they were only talking about open-web display advertising. (The open web is still in rapid decline, though.) […]
[Original post on mastodon.social]
caseynewton.mastodon.social.ap.brid.gy
A year ago today, Trump threatened to jail Zuckerberg for the rest of his life. Today, Trump is fighting Meta's battles around the world.

I wrote about everything that Zuckerberg has gotten from Trump 2.0 so far: https://www.platformer.news/trump-zuckerberg-meta-partnership-eu-dsa-ai-dma/

February

Trump issued a memo threatening to retaliate against countries that impose digital service taxes or heavy fines on "cutting-edge American technology companies." "American businesses will no longer prop up failed foreign economies through extortive fines and taxes," he wrote.

The memo had its intended effect. India dropped its tax on digital ads in March. New Zealand dropped its digital services tax in May. Canada, which had imposed a 3 percent tax on Meta and other tech giants, rescinded its tax in June as part of trade negotiations with the United States. Italy may follow.
caseynewton.mastodon.social.ap.brid.gy
Can Musk sue his way to the top of the App Store? I asked Stanford Law Prof. Mark Lemley about Musk's new lawsuit against OpenAI and Apple https://www.platformer.news/musk-sues-apple-openai-grok/
“To the contrary, it says Grok and X are not only on the store, but are among the very top apps in their fields,” Lemley told me. “X's complaint seems to be that Apple's own subjective listing of ‘must-have apps’ doesn't rank them highly enough. But there is no antitrust right to have Apple go out of its way to recommend your app.”
caseynewton.mastodon.social.ap.brid.gy
It's Hard Fork Friday! This week: the AI bubble, Jeff Horwitz on Meta's child-romancing chatbots, and a filthy song that is NOT the no. 1 country song in America right now https://www.nytimes.com/2025/08/22/podcasts/is-this-an-ai-bubble-metas-missing-morals-tiktok-shock-slop.html
caseynewton.mastodon.social.ap.brid.gy
It's my annual productivity post! What I'm still doing, what I stopped doing, and what I'm trying. Plus, the mundane and non-transformational ways I am using AI https://www.platformer.news/productivity-tools-ai-2025/
Thinking models have gotten surprisingly good at identifying potential sources — particularly academic ones. When writing about Grok last month, I wanted to talk to someone who had studied relationships between people and chatbots. ChatGPT led me to Harvard's Center for Digital Thriving, and suggested someone to talk to, along with their email address. I wound up interviewing them for the piece. The fact that thinking models can quickly analyze the academic literature about any subject and identify prominent researchers in the field, along with their email addresses and phone numbers, is beginning to save me a lot of Googling.
caseynewton.mastodon.social.ap.brid.gy
I wrote about the seeming contradiction in two new survey findings: 71 percent of Americans fear AI will take their job; and 95 percent of AI pilots at companies are failing to generate money. https://www.platformer.news/ai-job-loss-surveys-mit/
Americans fear AI will take their jobs — but companies can’t quite get it to work
What new surveys tell us about the messy state of AI diffusion
caseynewton.mastodon.social.ap.brid.gy
Chatbots can provide something *like* therapy, but it's not the genuine article — so I think it's good more states are cracking down on how companies present them https://www.platformer.news/ai-therapy-paxton-meta-character/
A Stanford University study, covered in Ars Technica last month, found that therapist-branded chatbots from Character.AI and other providers can encourage delusional thinking and express stigma toward people with certain mental health conditions. But one of its co-authors, Nick Haber, argued that AI likely does have positive applications to therapy, including in training human therapists and in helping clients with journaling and coaching.

That strikes me as true — and still not quite enough. Part of the problem here surely relates to language: the words "therapy" and "therapist" connote a level of trust and care that no automated system can provide. Tools like ChatGPT can clearly provide a convincing therapy-like experience — even one that has therapeutic benefits — but should never be mistaken for the genuine article.
caseynewton.mastodon.social.ap.brid.gy
This evening, I joined a small group of reporters for a wide-ranging, on-the-record dinner with Sam Altman and some of his top lieutenants. And yeah, Altman said — we're in an AI bubble https://www.platformer.news/sam-altman-gpt-5-interview-lightcap-turley/
Asked whether we're in an AI bubble, Altman said: "The answer is yes." Altman said he still believes that AI will produce massive returns to the economy. But "investors are over-excited as a whole," he said. Referring to a theoretical startup with a $750 million valuation that is just "three people and an idea," he said: "Someone's gonna get burned."

AI will produce big winners and big losers, he said. "Some of our competitors will flame out," he said. "And some will do pretty well." Across the whole landscape, though: "Someone is gonna lose a phenomenal amount of money."
caseynewton.mastodon.social.ap.brid.gy
Note to Platformer subscribers: this week's final edition will be going out *Friday* to accommodate some reporting this evening.