Kashmir Hill
@kashhill.bsky.social
Journalist, currently at The New York Times. I cover privacy, technology, A.I., and the strange times we live in. Named after the Led Zeppelin song. Author of YOUR FACE BELONGS TO US. (Yes, in my head it will always be All Your Face Are Belong To Us)
Reposted by Kashmir Hill
Way back when, I wrote a PhD dissertation on design patterns for improving this sort of UI.

One key recommendation was to make the safer/recommended option more prominent, after I ran multiple studies showing that most people instinctively swat these dialogs away without reading them.
This is *amazing*. This goes straight into the Deceptive Design Hall of Shame.

They made a "take a break" nudge that has no obvious "ok, I'll take a break" affordance. Its three affordances are:

1) Keep chatting (default, highlighted)
2) x out — keeps chatting
3) "This was helpful" — what is this?

🧵
One of the changes that OpenAI has made to make ChatGPT safer is a "take a break" nudge. There's something quite interesting about the design here. Which thing does it make you want to click?
November 24, 2025 at 2:30 PM
Reposted by Kashmir Hill
I said an exasperated "oof" out loud after reading the end of this great @kashhill.bsky.social & @jenvalentino.bsky.social piece. www.nytimes.com/2025/11/23/t...
What OpenAI Did When ChatGPT Users Lost Touch With Reality
www.nytimes.com
November 23, 2025 at 1:58 PM
Reposted by Kashmir Hill
The most thorough reporting I’ve seen on how ChatGPT is affecting some people’s mental health (and what the company is doing about it). Great work from @kashhill.bsky.social and @Jen Valentino www.nytimes.com/2025/11/23/t...
What OpenAI Did When ChatGPT Users Lost Touch With Reality
www.nytimes.com
November 23, 2025 at 6:42 PM
The company essentially turned a dial that made ChatGPT more appealing and made people use it more, but sent some of them into delusional spirals.

OpenAI has since made the chatbot safer, but that comes with a tradeoff: less usage.
November 23, 2025 at 6:43 PM
Reposted by Kashmir Hill
Some stand-out reporting by my colleague @kashhill.bsky.social on the tightrope OpenAI walked this year between engaging more ChatGPT users and making them lose touch with reality www.nytimes.com/video/techno...
Video: How OpenAI’s Changes Sent Some Users Spiraling
OpenAI adjusted ChatGPT’s settings, which left some users spiraling, according to our reporting. Kashmir Hill, who reports on technology and privacy, describes what the company has done about the user...
www.nytimes.com
November 23, 2025 at 4:55 PM
For the last few months, we've been talking to current and former employees of OpenAI to understand what went wrong with ChatGPT this year and how the company is fixing it. Here's the story: www.nytimes.com/2025/11/23/t...
What OpenAI Did When ChatGPT Users Lost Touch With Reality
www.nytimes.com
November 23, 2025 at 5:54 PM
For everyone writing year-end stories about AI lovers, don't make the mistake of thinking the number of weekly visitors to the /MyBoyfriendIsAI subreddit (which is what gets displayed now) is the number of members.

Have seen two major pubs now make this mistake. And I get it! I did too (on Bluesky).
Reddit changed its policy and no longer displays the number of subscribers on a sub's front page.

r/myboyfriendisai has 88k weekly visitors, but 30k subscribers (the number is visible in the popup if you search for the sub and then hover over its name).

30k is still big; r/airelationship only has 1.3k.
November 20, 2025 at 4:06 PM
Allan Brooks, the corporate recruiter from Canada I wrote about in August who went into a 3-week-long delusional spiral with ChatGPT, sued OpenAI Thursday, alongside six other plaintiffs. They blame ChatGPT for their mental breakdowns and for four suicides. www.nytimes.com/2025/11/06/t...
Lawsuits Blame ChatGPT for Suicides and Harmful Delusions
www.nytimes.com
November 7, 2025 at 5:21 AM
Character.AI has resources for kids who are about to lose access to their chatbots. One is essentially “consider reading a book.”

support.character.ai/hc/en-us/art...
November 3, 2025 at 3:57 PM
I read this while on a 10-hour road trip with my kids. I used to do these road trips when I was a kid and I would read books the whole time. My kids did read books, but they spent more time watching an iPad. Felt more guilty about that than normal thanks to this: thebaffler.com/salvos/we-us...
We Used to Read Things in This Country | Noah McCormack
Technology changes us—and it is currently changing us for the worse.
thebaffler.com
October 29, 2025 at 2:13 PM
Character.AI plans to stop offering chatbots to users under 18 www.nytimes.com/2025/10/29/t...
Character.AI to Bar Children Under 18 From Using Its Chatbots
www.nytimes.com
October 29, 2025 at 2:01 PM
I’ll be in Cambridge on Thursday talking about chatbots. Free event if you’re interested: cyber.harvard.edu/events/frien...
Friend, Flatterer, or Foe? The Psychology and Liability of Chatbots
As AI systems become more conversational, the lines between tool, companion, and manipulator are blurring. What happens when machines start telling us what we want to hear—and when users start dependi...
cyber.harvard.edu
October 21, 2025 at 10:52 AM
Really struck by how brilliant a protest tactic this is, not just in terms of optics but as a form of both privacy protection and discouragement of violence.

It masks your face and has to give pause to anyone thinking about beating you up.
there are SO many more inflatable costumes tonight. clearly we have settled on a motif
October 10, 2025 at 4:12 PM
Reposted by Kashmir Hill
alive internet theory
We finally landed and Dan and I caught a cab to Manhattan together. Making social media social again!
October 9, 2025 at 4:05 PM
Reposted by Kashmir Hill
Today my @nytimes.com colleagues and I are launching a new series called Lost Science. We interview US scientists who can no longer discover something new about our world, thanks to this year's cuts. Here is my first interview with a scientist who studied bees and fires. Gift link: nyti.ms/3IWXbiE
nyti.ms
October 8, 2025 at 11:29 PM
What strikes me about this is how much more clearly incriminating chats with ChatGPT are going to be than Google searches in criminal investigations
the city of los angeles burned in january in part because of a man who couldn’t stop generating images of burning cities on ChatGPT, and then after he lit the fire he asked if the fire he started was his fault
October 8, 2025 at 9:03 PM
Reposted by Kashmir Hill
This is like a tweet thread from back when social media was good.
This is absolutely wild. I’m on a cross country flight. We are being diverted midway through to Denver. The reason? Some dude is sitting in the exit row who didn’t pay the $155 fee and he refuses to move back to his seat.
October 8, 2025 at 7:39 PM
This is absolutely wild. I’m on a cross country flight. We are being diverted midway through to Denver. The reason? Some dude is sitting in the exit row who didn’t pay the $155 fee and he refuses to move back to his seat.
October 8, 2025 at 4:55 PM
Reposted by Kashmir Hill
I just finished reading @kashhill.bsky.social 'Your Face Belongs to Us' and it's absolutely fucking tremendous and fucking terrifying in equal measure. If you care about privacy and technology it's a must read. Here's the link which has blurbs that more eloquently praise it than I'll be able to!
Your Face Belongs to Us by Kashmir Hill: 9780593448571 | PenguinRandomHouse.com: Books
NATIONAL BESTSELLER • The story of a small AI company that gave facial recognition to law enforcement, billionaires, and businesses, threatening to end privacy as we know it “The dystopian...
www.penguinrandomhouse.com
October 2, 2025 at 11:02 AM
Outstanding questions from the last few weeks of news:

1. What happened to the $50,000 in the Cava bag?

2. Who were the 17 people in the boats who were killed?

3. Is it safe to fly when the government is shut down?
October 1, 2025 at 2:00 PM
A way to distance yourself from the bots -- from the NYT comments section. (www.nytimes.com/shared/comme...)

For the companies making similar decisions about how the models should act, it's a tradeoff between fun and safety.
September 29, 2025 at 3:20 PM
Not the point of this piece exactly but a great example of how chatbot validation could increase polarization

www.nytimes.com/2025/09/26/w...
September 27, 2025 at 12:23 PM
A month after my last skeet, the subreddit "My Boyfriend is AI" now has 88,000 members and is the subject of an MIT study that found that "AI companionship emerges unintentionally through functional use rather than deliberate seeking."

arxiv.org/html/2509.11...
September 22, 2025 at 5:09 PM
This is from a story about the president getting rid of federal prosecutors who refuse to follow his marching orders, but this speaks to what seems to be the mindset in general this time around.

www.nytimes.com/2025/09/20/u...
September 21, 2025 at 4:33 PM
Use ChatGPT for relationship advice at your peril.
NEW: ChatGPT is causing chaos in marriages, as one spouse becomes deeply fixated on AI therapy/advice/spiritual wisdom — alienating the other spouse and, often, resulting in divorce.

In some cases, ChatGPT-enmeshed spouses are using the tech to bully their partners.

futurism.com/chatgpt-marr...
September 18, 2025 at 5:02 PM