Helen Toner
@hlntnr.bsky.social
1K followers 590 following 51 posts
AI, national security, China. Part of the founding team at @csetgeorgetown.bsky.social‬ (opinions my own). Author of Rising Tide on substack: helentoner.substack.com
hlntnr.bsky.social
I honestly don't know how big the potential harms of personalization are—I think it's possible we end up coping fine. But it's crazy to me how little mindshare this seems to be getting among people who think about unintended systemic effects of AI for a living.
hlntnr.bsky.social
Thinking about this, I keep coming back to two stories—

1) how FB allegedly trained models to identify moments when users felt worthless, then sold that data to advertisers

2) how we're already seeing chatbots addicting kids & adults and warping their sense of what's real
hlntnr.bsky.social
AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.

Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:
Reposted by Helen Toner
mbogen.bsky.social
AI companies are starting to promise personalized assistants that “know you.” We’ve seen this playbook before — it didn’t end well.

In a guest post for @hlntnr.bsky.social’s Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media’s mistakes
Personalized AI is rerunning the worst part of social media's playbook
The incentives, risks, and complications of AI that knows you
open.substack.com
hlntnr.bsky.social
The 3 disagreements are:
1) How far can the current paradigm go?
2) How much can AI improve AI?
3) Will future AI still basically be tools, or will they be something else?

Thanks to @farairesearch for the invitation to do this talk! Transcript here:
helentoner.substack.com/p/unresolved...
Unresolved debates about the future of AI
How far the current paradigm can go, AI improving AI, and whether thinking of AI as a tool will keep making sense
helentoner.substack.com
hlntnr.bsky.social
Been thinking recently about how central "AI is just a tool" is to disagreements about the future of AI. Is it? Will it continue to be?

Just posted a transcript from a talk where I go into this + a couple other key open qs/disagreements (not p(doom)!).

🔗 below, preview here:
hlntnr.bsky.social
2 weeks left on this open funding call on risks from internal deployments of frontier AI models—submissions are due June 30.

Expressions of interest only need to be 1-2 pages, so still time to write one up!

Full details: cset.georgetown.edu/wp-content/u...
hlntnr.bsky.social
💡Funding opportunity—share with your AI research networks💡

Internal deployments of frontier AI models are an underexplored source of risk. My program at @csetgeorgetown.bsky.social just opened a call for research ideas—EOIs due Jun 30.

Full details ➡️ cset.georgetown.edu/wp-content/u...

Summary ⬇️
hlntnr.bsky.social
...But too many critics of those stasist ideas try to shove the underlying problems under the rug. With this post, I'm trying to help us hold both things at once.

Read the full post on AI Frontiers: www.ai-frontiers.org/articles/wer...
Or my substack: helentoner.substack.com/p/dynamism-v...
We’re Arguing About AI Safety Wrong | AI Frontiers
Helen Toner, May 12, 2025 — Dynamism vs. stasis is a clearer lens for criticizing controversial AI safety prescriptions.
www.ai-frontiers.org
hlntnr.bsky.social
From The Future and Its Enemies by Virginia Postrel:
Dynamism: "a world of constant creation, discovery, and competition"
Stasis: "a regulated, engineered world... [that values] stability and control"

Too many AI safety policy ideas would push us toward stasis. But...
hlntnr.bsky.social
Criticizing the AI safety community as anti-tech or anti-risk-taking has always seemed off to me. But there *is* plenty to critique. My latest on Rising Tide (xposted with @aifrontiers.bsky.social!) is on the 1998 book that helped me put it into words.

In short: it's about dynamism vs stasis.
hlntnr.bsky.social
New on Rising Tide, I break down 2 factors that will play a huge role in how much AI progress we see over the next couple years: verification & generalization.

How well these go will determine whether AI just gets super good at math & coding or masters many domains. Post excerpts:
hlntnr.bsky.social
Cognitive Revolution (🇺🇸): More insidery chat with @nathanlabenz.bsky.social getting into why nonproliferation is the wrong way to manage AI misuse, AI in military decision support systems, and a bunch of other stuff.

Clip on my beef with talk about the "offense-defense" balance in AI:
hlntnr.bsky.social
Stop the World (🇦🇺): Fun, wide-ranging conversation with David Wroe of @aspi-org.bsky.social on where we're at with AI, reasoning models, DeepSeek, scaling laws, etc etc.

Excerpt on whether we can "just" keep scaling language models:
hlntnr.bsky.social
2 new podcast interviews out in the last couple weeks—one for more of a general audience, one more inside baseball.

You can also pick your accent (I'm from Australia and sound that way when I talk to other Aussies, but mostly in professional settings I sound ~American)
hlntnr.bsky.social
cc @binarybits.bsky.social re hardening the physical world, @vitalik.ca re d/acc, @howard.fm re power concentration... plus many others I'm forgetting whose takes helped inspire this post. I hope this is a helpful framing for these tough tradeoffs.
hlntnr.bsky.social
I don't think this approach will obviously be enough, I don't think it's equivalent to "just open source everything YOLO," and I don't think any of my argument applies to tracking/managing the frontier or loss of control risks.

More in the full piece: helentoner.substack.com/p/nonprolife...
Nonproliferation is the wrong approach to AI misuse
Making the most of “adaptation buffers” is a more realistic and less authoritarian strategy
helentoner.substack.com
hlntnr.bsky.social
What to do instead? IMO the best option is to think in terms of "adaptation buffers," the gap between when we know a new misusable capability is coming and when it's actually widespread.

During that time, we need massive efforts to build as much societal resilience as we can.