Mia Hoffmann
@miahoffmann.bsky.social
160 followers 240 following 42 posts
AI governance, harms and assessment | Research fellow @csetgeorgetown.bsky.social
Posts Media Videos Starter Packs
miahoffmann.bsky.social
🤖✨ New report with @partnershipai.bsky.social!
AI agents pose new risks. Monitoring is essential to ensure effective oversight and intervention when needed. Our paper presents a framework for real-time failure detection that takes into account stakes, reversibility and affordances of agent actions.
Reposted by Mia Hoffmann
vikramvenkatram.bsky.social
Yesterday's new AI Action Plan has a lot worth discussing!

One interesting aspect is its statement that the federal government should withhold AI-related funding from states with "burdensome AI regulations."

This could be cause for concern.
Reposted by Mia Hoffmann
csetgeorgetown.bsky.social
⚖️ New Explainer!

Effectively evaluating AI models is more crucial than ever. But how do AI evaluations actually work?

In their new explainer,
@jessicaji.bsky.social, @vikramvenkatram.bsky.social &
@stephbatalis.bsky.social break down the different fundamental types of AI safety evaluations.
Reposted by Mia Hoffmann
hlntnr.bsky.social
💡Funding opportunity—share with your AI research networks💡

Internal deployments of frontier AI models are an underexplored source of risk. My program at @csetgeorgetown.bsky.social just opened a call for research ideas—EOIs due Jun 30.

Full details ➡️ cset.georgetown.edu/wp-content/u...

Summary ⬇️
miahoffmann.bsky.social
Finally, and critically: central data collection and dissemination of lessons learned means that harms only have to occur once for everyone to mitigate their risk. This prevents recurrence and builds user and consumer confidence, which is essential for widespread AI adoption.
miahoffmann.bsky.social
Incident tracking also reveals new, unexpected AI failure modes that we aren’t yet mitigating against. Over time, systematic data collection can help detect emerging risks and new types of harms, a critical benefit given the fast pace of AI innovation and deployment.
miahoffmann.bsky.social
Over time, incident data can be used to evaluate the effectiveness of new safety policies and regulations through before-and-after comparisons. This helps refine governance policies through a direct feedback loop.
miahoffmann.bsky.social
Using real-world data on what works and what doesn’t to guide AI safety research will help us innovate more quickly and build reliable systems that are safe to deploy. In this way, incident reporting can help prioritize and direct AI safety research to where it is most effective.
miahoffmann.bsky.social
AI incidents also shed light on the effectiveness of existing safety efforts. We might learn where current technical standards or risk management processes are insufficient to protect people from harm, revealing critical gaps that can be addressed by AI safety research.
miahoffmann.bsky.social
For instance, we can learn about *how* the use of AI results in harm, e.g. through misuse, user error or AI failure. This information helps channel resources to the right kinds of safety efforts, since preventing misuse requires different measures than addressing operator error.
miahoffmann.bsky.social
Why should the government do this?
What makes AI risk management so tricky is predicting how deploying an AI system can go wrong. AI incidents are a rich source of information about AI harms, harm mechanisms, AI failure modes and more. Leveraging those insights can make AI use safer.
miahoffmann.bsky.social
Broadly speaking, an AI incident reporting regime has 4 core parts:
1) Incident detection;
2) Reporting to oversight bodies and inclusion in incident database;
3) Performance of impact assessments and root cause analyses; and
4) Dissemination of lessons learned
miahoffmann.bsky.social
First, a definition. AI incidents are situations in which a deployed AI system is implicated in harm, e.g. when an AI recruiting tool makes a biased hiring decision. Incidents are varied and often take unexpected forms, so go check out the AIID for more real-world examples! incidentdatabase.ai
Welcome to the Artificial Intelligence Incident Database
The starting point for information about the AI Incident Database
incidentdatabase.ai
miahoffmann.bsky.social
Today, @csetgeorgetown.bsky.social published our recommendations for the U.S. AI Action Plan. One of them is a CSET evergreen: implement an AI incident reporting regime for AI used by the federal government. Why? Short answer: because we can learn a ton from incidents! Long answer: 👇
Reposted by Mia Hoffmann
csetgeorgetown.bsky.social
🚨We're hiring — only a few days left to apply!🚨

CSET is looking for a Media Engagement Specialist to amplify our research. If you're a strategic communicator who can craft press releases, media pitches, & social content, apply by March 17, 2025! cset.georgetown.edu/job/media-en...
Media Engagement Specialist | Center for Security and Emerging Technology
The Center for Security and Emerging Technology, under the School of Foreign Service, is a research organization focused on studying the security impacts of emerging technologies, supporting academic ...
cset.georgetown.edu
Reposted by Mia Hoffmann
csetgeorgetown.bsky.social
What does the EU's shifting strategy mean for AI?

CSET's @miahoffmann.bsky.social & @ojdaniels.bsky.social have a new piece out for @techpolicypress.bsky.social.

Read it now 👇
miahoffmann.bsky.social
If you’ve ever wondered what the EU and elephants have in common - or are wondering now - read my latest piece with @ojdaniels.bsky.social! We take a look at what the EU’s new innovation-friendly regulatory approach might mean for the global AI policy ecosystem: www.techpolicy.press/out-of-balan...
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's movements could change the global landscape.
www.techpolicy.press
Reposted by Mia Hoffmann
miahoffmann.bsky.social
Thirdly, and most importantly, this decision reveals that the new European Commission is buying into the false narrative of innovation versus regulation, which already dominates - and paralyzes - US tech policy.