Andrew White 🐦‍⬛
@andrew.diffuse.one
2.9K followers 230 following 90 posts
Head of Sci/cofounder at futurehouse.org. Prof of chem eng at UofR (on sabbatical). Automating science with AI and robots in biology. Corvid enthusiast
andrew.diffuse.one
So we probably won't be getting a direct simulation of a whole virtual cell at meaningful timescales any time soon. Oh, and it would require 20x Earth's current power generation. 3/3

Read the analysis/blog post here: diffuse.one/p/d1-009
andrew.diffuse.one
It sounds insane, but remember there are 10^14 atoms in a human cell and 10^20 femtoseconds in a day. And across multiple simulation engines, it takes about 10^4 FLOPs per atom per femtosecond. 2/3
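A quick back-of-envelope check of that arithmetic (a sketch only; the three constants below are the thread's rough estimates, not measured values):

```python
# Back-of-envelope check of the virtual-cell compute estimate.
# All three constants are the rough figures quoted in this thread.
atoms_per_cell = 1e14            # ~10^14 atoms in a human cell
fs_per_day = 24 * 3600 * 1e15    # femtoseconds in a day (~8.6e19, i.e. ~10^20)
flops_per_atom_fs = 1e4          # ~10^4 FLOPs per atom per femtosecond of MD

total_flops = atoms_per_cell * fs_per_day * flops_per_atom_fs
print(f"{total_flops:.1e} FLOPs")  # ~8.6e+37, i.e. order 10^38
```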
andrew.diffuse.one
I finished my estimate of the compute required to make an atomic-resolution virtual cell: 10^38 FLOPs to simulate a human cell for 1 day. We should be able to do this simulation in 2074 using 200 TW of power. 1/3
andrew.diffuse.one
yea, those are the model thoughts. It has a lot of mistakes in its thoughts. But you've got a very good eye! We'll make sure the final paper has a pristine example of its thoughts.
andrew.diffuse.one
Our ether0 paper was accepted at NeurIPS 2025! Very proud of the FutureHouse team!
andrew.diffuse.one
Very good point - I can re-run without that phrase.
andrew.diffuse.one
If you don't put the phrase in quotes, it's an OR. So it was

"α" equation

which is equivalent to "α" OR equation
andrew.diffuse.one
You can also look at it over time. Here's the relative popularity of different animal models in research over time.

Anyway, found this to be interesting. More details about it here: diffuse.one/p/d2-003 3/3
andrew.diffuse.one
Here's one measuring the frequency of sample sizes - like how often people use 8 samples vs 12 samples when reporting research results. N=2 is apparently the most popular. 2/3
andrew.diffuse.one
Google scholar has a full-text index of nearly all research papers. You can use it to get counts for arbitrary phrases. I've been using this to measure popularity of things in science. For example, here's the popularity of Greek letters used in equations 1/3
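For concreteness, a hypothetical sketch of the querying side. Google Scholar has no official API, so the "About N results" count has to be read off the results page (fragile and rate-limited); only the query construction is shown here, and the function name is my own:

```python
# Hypothetical sketch of the phrase-counting setup. Only query construction
# is shown; the hit count must be scraped off the results page, which is
# fragile and rate-limited.
from urllib.parse import quote_plus

def scholar_url(phrase: str, exact: bool = True) -> str:
    # Quoting matters (see the reply upthread): an unquoted phrase behaves
    # like an OR of its terms, so wrap it in quotes for exact full-text counts.
    q = f'"{phrase}"' if exact else phrase
    return "https://scholar.google.com/scholar?q=" + quote_plus(q)

for letter in ["α", "β", "γ", "λ"]:
    print(scholar_url(f"{letter} equation"))
```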
andrew.diffuse.one
I've written up some thoughts on publishing for machines. 10M research papers are published per year and there are 227M total - machines will be the primary producers and readers of publications going forward. Humans simply cannot keep up. It's time to think about revising the scientific paper.
andrew.diffuse.one
We make evals at FutureHouse. It’s hard and it sucks. It’s also now the bottleneck, as we push against the boundary of human ability. HLE was a huge effort and produced many good questions, and we hope this analysis stimulates review of and improvements to the other HLE categories. 7/7
andrew.diffuse.one
We reviewed 150 of the questions in the chem and bio categories and found about 30% have peer-reviewed papers contradicting their ground-truth answers. Issues include confusing species with orders, misreading FDA guidelines, etc. All our notes are public. 5/7
andrew.diffuse.one
The HLE rubric wanted questions to have “objectively correct, univocal” ground-truth answers. You can find multiple peer-reviewed papers that contradict the statement "Oganesson was the rarest noble gas in 2002 as a percentage of terrestrial matter" 4/7
andrew.diffuse.one
It’s a clever question. But it’s not really about frontier science. Multiple papers have shown that Oganesson is not a gas (it’s predicted to be a semiconducting solid), it’s not noble (it’s reactive), and it isn’t included in any "terrestrial matter" tables of noble gases. 3/7
andrew.diffuse.one
The design process of HLE required the questions to be unanswerable by contemporary LLMs. That led to many gotcha-style questions like the one below. It’s a trick question – in 2002, a few atoms of the group 18 element Oganesson were made for a few milliseconds. 2/7
andrew.diffuse.one
HLE has recently become the benchmark to beat for frontier agents. We at FutureHouse took a closer look at the chem and bio questions and found about 30% of them are likely invalid based on our analysis and third-party PhD evaluations. 1/7
andrew.diffuse.one
I just noticed it has sound lol. It's amazing
Reposted by Andrew White 🐦‍⬛
alignbio.bsky.social
1/4
🚀 Announcing the 2025 Protein Engineering Tournament.

This year’s challenge: design PETase enzymes, which degrade the type of plastic in bottles. Can AI-guided protein design help solve the climate crisis? Let’s find out! ⬇️

#AIforBiology #ClimateTech #ProteinEngineering #OpenScience
andrew.diffuse.one
I have written up a 3.5k-word, 10-figure essay on how to write a reward function while avoiding reward hacking in chemistry. It covers all the ridiculous reward hacks we had to guard against while training ether0, our scientific reasoning model.

diffuse.one/p/m1-000
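To make the theme concrete, here is a minimal sketch of one such guard (an illustration only, assuming RDKit; not ether0's actual reward): reject SMILES that don't parse, and score molecules in canonical form so the same molecule rewritten differently can't be farmed for reward twice.

```python
# Minimal illustration of one reward-hacking guard (not ether0's reward).
from rdkit import Chem

_seen: set[str] = set()

def reward(smiles: str) -> float:
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return 0.0           # invalid chemistry earns nothing
    canonical = Chem.MolToSmiles(mol)
    if canonical in _seen:
        return 0.0           # block farming reward via rewrites of one molecule
    _seen.add(canonical)
    return 1.0               # placeholder for a real property score
```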
andrew.diffuse.one
Although the discovery here is exciting, we are not claiming that we have cured dry AMD. Fully validating this hypothesis as a treatment for dry AMD will require human trials, which will take much longer.

Blog: www.futurehouse.org/research-ann...
Paper: arxiv.org/abs/2505.13400
Demonstrating end-to-end scientific discovery with Robin: a multi-agent system | FutureHouse
www.futurehouse.org
andrew.diffuse.one
The code for this is really minimal - similar to Google Co-Scientist, we used multiple agents (from our platform in this case) and tournament-style rankings to select ideas. We're open-sourcing it next week, along with all the trajectories.
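Until the release, here's a rough sketch of what tournament-style ranking generally looks like (my assumption about the technique, not FutureHouse's code): pairwise matchups judged by some comparison function, aggregated into Elo-style ratings.

```python
# Sketch of tournament-style idea ranking: round-robin pairwise matchups
# with Elo-style rating updates. compare() is a stand-in judge; in practice
# an LLM agent would pick the better idea.
import itertools
import random

def compare(a: str, b: str) -> str:
    return random.choice([a, b])  # stand-in for an LLM judge

def rank_ideas(ideas: list[str], k: float = 32.0) -> list[tuple[str, float]]:
    rating = {idea: 1000.0 for idea in ideas}
    for a, b in itertools.combinations(ideas, 2):
        winner = compare(a, b)
        expected_a = 1 / (1 + 10 ** ((rating[b] - rating[a]) / 400))
        score_a = 1.0 if winner == a else 0.0
        rating[a] += k * (score_a - expected_a)
        rating[b] += k * ((1 - score_a) - (1 - expected_a))
    return sorted(rating.items(), key=lambda kv: -kv[1])

print(rank_ideas(["idea A", "idea B", "idea C"]))
```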