David Manheim
@davidmanheim.alter.org.il
Humanity's future can be amazing - let's make sure it is.

Visiting lecturer at the Technion, founder https://alter.org.il, Superforecaster, Pardee RAND graduate.
Lastly, there's an argument that biotech in general can be dangerous because of pathogen applications, and advances in AIxBio can be worrying because of this.

True - but refusing to do safe things because very different ideas in another area of bio are scary is a bad objection!
(fin)
December 14, 2025 at 8:23 AM
(continued) to think a given new tech is on-net dangerous. Self-replication or an exponential trajectory could qualify, but this isn't that.

It seems far more likely that this enables cool and largely safe nanotech applications.
December 14, 2025 at 8:23 AM
And third, general advances in biodesign are terrifying!

And yes, all new technology can be misused, and that's certainly true for bio. But our (strong) prior should be that very little of the use of new technology is dangerous misuse, which means that we need some pretty strong reason...
December 14, 2025 at 8:23 AM
Second, does this allow creating more robust biological systems? (viruses, bacteria, etc.)

Nope. Even with AI, we're years away from understanding pathogens well enough to build novel things that work very differently. This other thread explains more:
bsky.app/profile/davi...
New RAND report on an important (and messy) question: When should we actually worry about AI being used to design a pathogen? What’s plausible now vs. near-term vs. later?
(1/12)
I helped convene two expert Delphi panels in AI + Bio to weigh in.

Full report:
www.rand.org/pubs/researc...
December 14, 2025 at 8:22 AM
First, prions.

Prions are terrifying, but this work doesn't relate to them. It's easy to hear "hard to sterilize" and think "prion" - but optimizing for elongated β-strands, or doing any other structural modification using similar methods, doesn't produce misfolding.
December 14, 2025 at 8:21 AM
Definitely worth a reminder, even though I should certainly know to do so!
October 23, 2025 at 6:21 PM
And I find myself strongly agreeing with almost everything Emma Ruttkamp-Bloem is saying in her #AIES2025 keynote about the future of AI ethics - despite continuing to worry that the vision for how to make AI systems more ethical does not sufficiently address future risks.
October 22, 2025 at 11:32 AM
Many things are explainable, but not understood by us. You were asserting that LLMs are understood, not that they are theoretically understandable.
October 13, 2025 at 12:14 PM
Good to see you at least read the abstract.

Now try the paper, especially the part about how the behaviors of attention heads evolve with respect to the training data distribution, and then tell me again that language models are directly coded.
October 5, 2025 at 6:21 PM
...what makes someone qualify as a Zionist, in your view?

Because if you mean supporting genocide and/or ethnic cleansing in Gaza, sure, that's obviously horrific. But if you mean wanting a 2-state solution instead of wanting all of Israel wiped off the map, I'm much more concerned about your view.
October 5, 2025 at 6:19 PM
I'm very unsure if you're shockingly ignorant for someone so confident, or shockingly confident for someone so ignorant, but I'm not sure it matters.

Anyways, here's an expert obviously disagreeing with you for you to ignore: arxiv.org/abs/2504.18274
Structural Inference: Interpreting Small Language Models with Susceptibilities
We develop a linear response framework for interpretability that treats a neural network as a Bayesian statistical mechanical system. A small perturbation of the data distribution, for example shiftin...
arxiv.org
October 5, 2025 at 6:15 PM
Bottom line: Treat AI-enabled bio risk as rising but still governable, and aim for clearer threat models, empirical monitoring, and adaptive policies.
(12/12)

And if you don't want to read the 100+ page report, read the 6-page @rand.org brief for details:
www.rand.org/pubs/researc...
When Should We Worry About AI Being Used to Design a Pathogen?
Concerns that artificial intelligence (AI) might enable pathogen design are increasing, but risks and timelines remain unclear. This brief describes a Delphi study of biology and AI experts who debate...
www.rand.org
October 5, 2025 at 3:54 PM
And back to the risk, norms matter, but aren’t enough: self-governance (reviews, responsible disclosure) helps, yet can’t reliably constrain determined actors or novel misuse. We’ll need coordinated regulatory and institutional guardrails, though they don't need to be intrusive. (11/12)
October 5, 2025 at 3:53 PM
But even with all the risk, we can invest in pandemic readiness that pays off regardless of origin—rapid diagnostics, scalable vaccines, surge capacity. These reduce incentives and impact even if controls are bypassed.

ASB's recent interview discussed this:
www.youtube.com/watch?v=pnfT...
(10/12)
AI-designed diseases are coming. Here's the defence plan. | Andrew Snyder-Beattie
YouTube video by 80,000 Hours
www.youtube.com
October 5, 2025 at 3:52 PM
And to prepare, there are some concrete safeguards to build:
– Strengthen global gene-synthesis screening.
– Add identity checks, experiment pre-screens, and audit trails for cloud/automated labs.
– Improve data governance for genomic/experimental datasets (quality + access control).
(9/12)
October 5, 2025 at 3:51 PM
Policy implications (pragmatic):
– Focus mitigations on plausible, actionable risks and misuse pathways now.
– Risk will increase if barriers fall, so we should monitor four vectors: clinical bioengineering, lab automation, high-fidelity simulations, and generally capable AI.
(8/12)
October 5, 2025 at 3:50 PM
Where expert views diverge: speed of capability gains. Some expect steady, marginal increases; others worry about threshold effects where capabilities jump quickly. Both agree monitoring is essential soon, but there is lots of genuine uncertainty about timelines. (7/12)
October 5, 2025 at 3:50 PM
Biology still pushes back: transmissibility has physical/biological ceilings; environmental stability trades off with other fitness traits, etc. Many constraints interact, so the limits, which we explain at length in the report, need to be understood in concert. (6/12)
October 5, 2025 at 3:50 PM
What AI helps with today: pattern-finding, hypothesis generation, identifying gene targets, and speeding iterative design.

Today, these are force multipliers for sophisticated or state actors, not push-button bioweapons - but the panels found no fundamental limits that would prevent that in the future.
(5/12)
October 5, 2025 at 3:50 PM