Vikram Venkatram
@vikramvenkatram.bsky.social
190 followers 370 following 38 posts
Research Analyst at @CSETGeorgetown on the Biotechnology team. Georgetown Center for Security Studies and Georgetown School of Foreign Service alum.
vikramvenkatram.bsky.social
Focusing on bio: one provision is a federal funding requirement for DNA synthesis screening, a useful tool in the toolbox for limiting biological risk.

Check out the piece @stephbatalis.bsky.social and I wrote breaking down the kinds of decisions screeners have to make: thebulletin.org/2025/04/how-...
How to stop bioterrorists from buying dangerous DNA
The companies that sell synthesized DNA to scientists need to screen their customers, lest dangerous sequences for pathogens or toxins fall into the wrong hands.
vikramvenkatram.bsky.social
More on the recent AI Action Plan! @csetgeorgetown.bsky.social's work is very relevant.
vikramvenkatram.bsky.social
Ultimately, though, a chilling effect on state-driven AI legislation could severely harm innovation by eroding foundational AI governance infrastructure.

The Action Plan's implementation remains to be seen, but its approach should take care not to nip useful state regulation in the bud.
vikramvenkatram.bsky.social
The plan does clarify that restrictions shouldn't interfere with prudent state laws that don't harm innovation.
And it's true that a complex thicket of onerous state laws governing AI could make it harder for AI companies to comply, harming innovation.
vikramvenkatram.bsky.social
In the current environment, states are better positioned than the federal government to pass these laws.

They can also serve as a sandbox for experimentation and debate, allowing for innovation in governance; the best approaches can inspire other states to follow suit.
vikramvenkatram.bsky.social
State laws provide a critical avenue for building governance infrastructure: things like workforce capacity, information-sharing regimes, standardized protocols, incident reporting, etc.

These help provide clarity for companies and are crucial for innovation.
vikramvenkatram.bsky.social
A recent @thehill.com piece by @minanrn.bsky.social, @jessicaji.bsky.social, and me introduces the topic of governance infrastructure.

It discusses the recently proposed ban on state AI regulation, which would have gone much further and, thankfully, did not pass.

thehill.com/opinion/tech...
vikramvenkatram.bsky.social
Yesterday's new AI Action Plan has a lot worth discussing!

One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."

This could be cause for concern.
vikramvenkatram.bsky.social
Factors like robust third-party auditing, strong information-sharing incentives, and shared resources and workforce development enhance, rather than reduce, innovation.

As such, we argue that the proposed moratorium would be counterproductive, undermining the very goals it aims to achieve.
vikramvenkatram.bsky.social
These debates are worth having, but miss a crucial factor: AI governance infrastructure, which states are best-positioned to build.

This infrastructure advances the moratorium's stated goals: it helps developers innovate, strengthens consumer trust, and preserves U.S. national security.
vikramvenkatram.bsky.social
Proponents of the moratorium argue that reducing burdensome regulations will speed up innovation, and that the federal government should lead in regulating AI anyway.

Opponents cite congressional gridlock, partisanship, and a lack of meaningful federal tech regulation as proof that state laws are needed.
vikramvenkatram.bsky.social
The recent reconciliation bill, which passed the House and will face a Senate vote soon, would place a 10-year moratorium on state-level AI regulation.

Whether this is a good idea has been hotly debated.
vikramvenkatram.bsky.social
Banning state-level AI regulation is a bad idea!

One crucial reason is that states play a critical role in building AI governance infrastructure.

Check out this new op-ed by @jessicaji.bsky.social, me, and @minanrn.bsky.social on this topic!

thehill.com/opinion/tech...
vikramvenkatram.bsky.social
It's heartbreaking to see people dying from preventable disease.

AMR is a global problem, and people die from it everywhere. But as with many other problems, it affects the poor most harshly.

As a global community, we must fund more AMR research, and find ways to get drugs to those in need.
vikramvenkatram.bsky.social
AMR is a multi-pronged issue. Accessibility (ensuring that all people who need antimicrobial drugs can get them), stewardship (ensuring the proper prescription and use of the drugs), and R&D (developing new drugs to replenish a thin global pipeline) are all key.
vikramvenkatram.bsky.social
The study focuses on carbapenem-resistant Gram-negative bacterial infections in 2019, finding that in the eight low- and middle-income countries (LMICs) analyzed, only 6-9% of infections were treated properly.

These are treatable infections, but without access to the right antibiotics, they kill.
vikramvenkatram.bsky.social
Antimicrobial resistance is a huge issue and an oft-forgotten killer. It kills more people each year than HIV/AIDS or malaria.

This article is fascinating: it points out that while much of the AMR prevention discussion focuses on overuse of antimicrobials, underuse can also be a major issue.
Reposted by Vikram Venkatram
stephbatalis.bsky.social
"Red-teaming" isn't a catch-all term (or methodology!) to evaluate AI safety. So, what else do we have in the toolbox?

In our recent blog post, we explore the different questions we can ask about safety, how we can start to measure them, and what it means for AIxBio. Check it out! ⬇️
csetgeorgetown.bsky.social
⚖️ New Explainer!

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work?

In their new explainer, @jessicaji.bsky.social, @vikramvenkatram.bsky.social & @stephbatalis.bsky.social break down the different fundamental types of AI safety evaluations.
vikramvenkatram.bsky.social
AI safety evaluations fall into two fundamental categories: model safety evals and contextual safety evals.

The former evaluate just the model's output, in a vacuum. The latter test how models perform in a real-world context or use case.
vikramvenkatram.bsky.social
Looking to understand how safety evals work, how different evals differ, and what they do and don't tell us?

Check out this new @csetgeorgetown.bsky.social blog post by @jessicaji.bsky.social, @stephbatalis.bsky.social, and me breaking down different types of AI safety evaluations!
vikramvenkatram.bsky.social
Amidst all the discussion about AI safety, how exactly do we figure out whether a model is safe?

There's no perfect method, but safety evaluations are the best tool we have.

That said, different evals answer different questions about a model!