Mina Narayanan
@minanrn.bsky.social
170 followers 280 following 24 posts
Research Analyst @CSETGeorgetown | AI governance and safety | Views my own
minanrn.bsky.social
In other words, Congress is still in the early days of governing AI but so far seems more focused on understanding and harnessing AI’s potential than addressing its downsides. Make sure to take a deeper dive into our analysis here 🧵6/6 eto.tech/blog/ai-laws...
Exploring AI legislation in Congress with AGORA: Risks, Harms, and Governance Strategies – Emerging Technology Observatory
Using AGORA to explore AI legislation enacted by U.S. Congress since 2020
minanrn.bsky.social
Fewer legislative docs directly tackle risks or undesirable consequences from AI (such as harm to infrastructure) than propose strategies such as government support, convening, or institution-building 🧵5/6
minanrn.bsky.social
Very few enactments leverage performance requirements, pilots, new institutions, or other governance strategies that place concrete requirements on AI systems or represent investments in maturing or scaling up AI capabilities 🧵4/6
minanrn.bsky.social
Most of Congress’s 147 enactments focus on commissioning studies of AI systems, assessing their impacts, providing support for AI-related activities, convening stakeholders, & developing additional AI-related governance docs 🧵3/6
minanrn.bsky.social
We find that Congress has enacted many AI-related laws & provisions which are focused more on laying the groundwork to harness AI's potential – often in nat'l sec contexts – than on placing concrete demands on AI systems or directly tackling their specific, undesirable consequences 🧵2/6
minanrn.bsky.social
Check out the second @csetgeorgetown.bsky.social @emergingtechobs.bsky.social blog from @sonali-sr.bsky.social and me, where we explore the strategies, risks, and harms addressed by AI-related laws enacted by Congress between Jan 2020 and March 2025 🧵1/6 eto.tech/blog/ai-laws...
minanrn.bsky.social
Shared some thoughts on the AI Action Plan's recs around shaping state-level AI activity last week -- essentially, the plan's attempt to pressure states to abandon AI restrictions risks hurting U.S. national security www.defenseone.com/technology/2...
How the White House AI plan helps, and hurts, in the race against China
While one tech advocate called the new plan “a critical component” of efforts to outpace China, another criticized it as a “Silicon Valley wishlist.”
Reposted by Mina Narayanan
vikramvenkatram.bsky.social
Yesterday's new AI Action Plan has a lot worth discussing!

One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."

This could be cause for concern.
minanrn.bsky.social
Stay tuned for the second blog, which examines the governance strategies, risk-related concepts, and harms covered by this legislation! 🧵3/3
minanrn.bsky.social
We find that, contrary to conventional wisdom, Congress has enacted many AI-related laws and provisions — most of which apply to military and public safety contexts 🧵2/3
minanrn.bsky.social
Check out the first blog in a two-part series from @sonali-sr.bsky.social and me, where we use data from @csetgeorgetown.bsky.social @emergingtechobs.bsky.social AGORA to explore ✨AI-related legislation that was enacted by Congress between January 2020 and March 2025✨
eto.tech/blog/ai-laws... 🧵1/3
minanrn.bsky.social
Check out the latest AGORA roundup from @emergingtechobs.bsky.social , which highlights some overlooked AI provisions in the Big Beautiful Bill!
emergingtechobs.bsky.social
✨ The AI moratorium has been struck down ⚡ but what else does the Big Beautiful Bill have to say about AI? Check out the latest AGORA update 📷 to learn about the provisions on border security, Medicare, and more! Link in thread 🧵👇
minanrn.bsky.social
The 10 yr moratorium on state AI laws will hurt U.S. nat'l security & innovation if enacted. In our piece in @thehill.com , @jessicaji.bsky.social , @vikramvenkatram.bsky.social , & I argue that states support the very infrastructure needed for a vibrant U.S. AI ecosystem
thehill.com/opinion/tech...
Reposted by Mina Narayanan
vikramvenkatram.bsky.social
Banning state-level AI regulation is a bad idea!

One crucial reason is that states play a critical role in building AI governance infrastructure.

Check out this new op-ed by @jessicaji.bsky.social, me, and @minanrn.bsky.social on this topic!

thehill.com/opinion/tech...
Reposted by Mina Narayanan
vikramvenkatram.bsky.social
Amidst all the discussion about AI safety, how exactly do we figure out whether a model is safe?

There's no perfect method, but safety evaluations are the best tool we have.

That said, different evals answer different questions about a model!
csetgeorgetown.bsky.social
⚖️ New Explainer!

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work?

In their new explainer,
@jessicaji.bsky.social, @vikramvenkatram.bsky.social &
@stephbatalis.bsky.social break down the different fundamental types of AI safety evaluations.
minanrn.bsky.social
@ifp.bsky.social recently published a searchable database of all AI Action Plan submissions, many of which cover topics that overlap with CSET's submission! Check out CSET's recs here: cset.georgetown.edu/publication/... and compare it to others here: www.aiactionplan.org
AI Action Plan Database
A database of recommendations for OSTP's AI Action Plan.
Reposted by Mina Narayanan
csetgeorgetown.bsky.social
What does the EU's shifting strategy mean for AI?

CSET's @miahoffmann.bsky.social & @ojdaniels.bsky.social have a new piece out for @techpolicypress.bsky.social.

Read it now 👇
miahoffmann.bsky.social
If you’ve ever wondered what the EU and elephants have in common – or are wondering now – read my latest piece with @ojdaniels.bsky.social! We take a look at what the EU’s new innovation-friendly regulatory approach might mean for the global AI policy ecosystem www.techpolicy.press/out-of-balan...
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's movements could change the global landscape.
Reposted by Mina Narayanan
timrudner.bsky.social
Check out our paper on the quality of interpretability evaluations of recommender systems:

cset.georgetown.edu/publication/...

Led by @minanrn.bsky.social and Christian Schoeberl!

@csetgeorgetown.bsky.social
minanrn.bsky.social
[6/6] Our findings suggest the importance of standards for AI evaluations and a capable workforce to assess the efficacy of these evaluations. If researchers understand & measure facets of AI trustworthiness differently, policies for building trusted AI systems may not work
minanrn.bsky.social
[5/6] Here are the evaluation approaches we identified:
minanrn.bsky.social
[4/6] We find that research papers (1) do not clearly differentiate explainability from interpretability, (2) combine up to five evaluation approaches, & (3) more often test whether systems are built according to design criteria than whether systems work in the real world
minanrn.bsky.social
[3/6] Our new report examines how researchers evaluate claims about the explainability & interpretability of recommender systems – a type of AI system that often uses explanations