True - but refusing to do safe work because very different ideas in another area of bio are scary is a bad objection!
(fin)
It seems far more likely that this enables cool and largely safe nanotech applications.
And yes, all new technology can be misused, and that's certainly true for bio. But our (strong) prior should be that very little of the use of new technology is dangerous misuse, which means we need some pretty strong reason...
Nope. We're years away from AI providing enough understanding of pathogens to build novel things that work very differently. This other thread explains more:
bsky.app/profile/davi...
(1/12)
I helped convene two expert Delphi panels in AI + Bio to weigh in.
Full report:
www.rand.org/pubs/researc...
Prions are terrifying, but this work doesn't relate to them. It's easy to hear "hard to sterilize" and think "prion" - but optimizing for elongated β strands, or making any other structural modification with similar methods, doesn't produce that kind of misfolding.
Now try reading the paper, especially the parts about how the behaviors of attention heads in a model evolve with the training data distribution, and then tell me again that language models are directly coded.
Because if you mean supporting genocide and/or ethnic cleansing in Gaza, sure, that's obviously horrific. But if you mean wanting a 2-state solution instead of wanting all of Israel wiped off the map, I'm much more concerned.
Anyway, here's an expert who obviously disagrees with you, for you to ignore: arxiv.org/abs/2504.18274
(12/12)
And if you don't want to read the 100+ page report, read the 6-page @rand.org brief for details:
www.rand.org/pubs/researc...
ASB's recent interview discussed this:
www.youtube.com/watch?v=pnfT...
(10/12)
– Strengthen global gene-synthesis screening.
– Add identity checks, experiment pre-screens, and audit trails for cloud/automated labs.
– Improve data governance for genomic/experimental datasets (quality + access control).
(9/12)
– Focus mitigations on plausible, actionable risks and misuse pathways now.
– Risk will increase if barriers fall, so we should monitor four vectors: clinical bioengineering, lab automation, high-fidelity simulations, and generally capable AI.
(8/12)
Today, these are force multipliers for sophisticated or state actors, not push-button bioweapons - but the panels found no fundamental barriers to that changing in the future.
(5/12)