Alexander Berger
@albrgr.bsky.social
1.7K followers 650 following 290 posts
CEO of Open Philanthropy
Reposted by Alexander Berger
davidmanheim.alter.org.il
New RAND report on an important (and messy) question: When should we actually worry about AI being used to design a pathogen? What’s plausible now vs. near-term vs. later?
(1/12)
I helped convene two expert Delphi panels in AI + Bio to weigh in.

Full report:
www.rand.org/pubs/researc...
albrgr.bsky.social
If you are a funder interested in getting involved, get in touch - we would love to be a resource! We're increasingly working with other donors and are eager to help them find highly cost-effective opportunities.
albrgr.bsky.social
More resources are needed across these different theories of change.

Other reasons why right now is an especially leveraged time: AI advancements have created better research tools, attracted researchers to the field, and increased policy opportunities.
albrgr.bsky.social
On building the field's capacity: scholarships, fellowships, and educational initiatives like MATS and BlueDot Impact have built out impressive talent pipelines. MATS reports that 80% of its alumni are working on AI safety!
albrgr.bsky.social
On technical and policy safeguards: Redwood Research's work on loss-of-control scenarios, Theorem's work on developing formal verification methods, and several think tanks' work on technical AI governance show how progress is possible.
albrgr.bsky.social
The rest of the post describes our experience from ~10 years in this space, which shows that philanthropy can move the needle.

On visibility into frontier AI R&D: we've supported benchmarks like Percy Liang's CyBench, public data work from @epochai.bsky.social, and research from @csetgeorgetown.bsky.social
albrgr.bsky.social
The upshot: when other donors come to us for advice, we can recommend funding opportunities that we believe are *2-5x more cost-effective* than the marginal grants we make with Good Ventures' funding.
albrgr.bsky.social
There are four key reasons other funders are needed:

(1) There are highly cost-effective grants not in Good Ventures' scope
(2) AI policy needs a diverse funding base
(3) Other orgs can make bets we're missing
(4) Generally, AI safety and security is still underfunded!
albrgr.bsky.social
To begin: AI is rapidly advancing, which gives funders a narrow window to make a leveraged difference.
albrgr.bsky.social
People sometimes assume that Open Phil “has it covered” on philanthropy for AI safety & security. That’s not right: some great opportunities really need other funders. Liz Givens and I make the case for why (and why now) in the final post of our series.
www.openphilanthropy.org/research/ai...
albrgr.bsky.social
Despite its importance and increasing salience, there are still relatively few funders in this space. Tomorrow we’ll post Part 3, making the case for why now is an especially high-leverage time for more philanthropists to get involved.
albrgr.bsky.social
The third is capacity: we aim to grow and strengthen the fields of research and practice responding to these challenges. This includes support for fellowship programs, career development, conferences, and educational initiatives.
albrgr.bsky.social
The second is designing and implementing technological and policy safeguards. This includes both technical AI safety & security and a range of AI governance work:
albrgr.bsky.social
In practice, our grantmaking approach has three prongs.

The first is increasing visibility into cutting-edge AI R&D, with the goal of better understanding AI’s capabilities and risks. This includes supporting AI model evals, threat modeling, and building public understanding.
albrgr.bsky.social
Today, we've scaled our work on AI safety and security significantly. Our work on risks focuses on worst cases, but we aim to strike a number of important balances:
albrgr.bsky.social
Ten years later, the landscape has changed drastically: AI is much more advanced and has risen hugely in geopolitical importance. There is greater empirical evidence and expert agreement about the catastrophic risks it could pose.
albrgr.bsky.social
The strategic landscape was very unclear when we first entered the field. As a result, we mostly funded early-stage research and field-building efforts to increase the number of people taking these questions seriously.
albrgr.bsky.social
Since 2015, seven years before the launch of ChatGPT, Open Phil has been funding efforts to address potential catastrophic risks from AI.

In a new post, Emily Oehlsen and I discuss our history in the area and our current strategy.
www.openphilanthropy.org/research/ou...
bsky.app/profile/alb...
albrgr.bsky.social
The next post, also co-authored with Emily Oehlsen, is on our approach to AI safety and security. It discusses our history in the area and our main grantmaking strategies for mitigating worst-case AI risks.

That will be out tomorrow, so look out for another long thread!
albrgr.bsky.social
But the vast majority of our work here is on market and policy failures around worst-case risks posed by AI.

We think these risks could be very grave, and that philanthropy is especially well-placed to contribute:
albrgr.bsky.social
The rest of the piece considers progress vs. safety in the context of AI, which we think is the most consequential technology currently being developed.

In recent years, our global health and abundance work has aimed to correct some market failures around the benefits of AI:
albrgr.bsky.social
In practice we look for pragmatic compromises. To address the chance that metascience work could increase catastrophic risks (e.g., better bio might make bioterrorism easier), we decided to target >=20% of that portfolio being net positive (but not optimized) for biosafety.