Owen J. Daniels
@ojdaniels.bsky.social
370 followers 830 following 35 posts
Writing on AI, security, & democracy. Associate Director of Analysis & Andrew W. Marshall Fellow at CSET. Probably behind deadline. Working on a book on AI and military affairs for Polity Press. https://cset.georgetown.edu/staff/owen-daniels/
Reposted by Owen J. Daniels
foreignaffairs.com
“If Washington’s new AI strategy does not adequately account for open models, American AI companies, despite their world-leading models, will risk ceding international AI influence to China,” write @ojdaniels.bsky.social and @hannadohmen.bsky.social.
China’s Overlooked AI Strategy
Beijing is using soft power to gain global dominance.
www.foreignaffairs.com
Reposted by Owen J. Daniels
csetgeorgetown.bsky.social
The new AI Action Plan touts the strategic importance of open models.

That's not news to China, where developers like DeepSeek have made the PRC the leader in open-weights AI.

In @foreignaffairs.com, CSET's @ojdaniels.bsky.social & @hannadohmen.bsky.social explain how the U.S. can respond.
Reposted by Owen J. Daniels
deweyam.bsky.social
I’m proud to share @csetgeorgetown.bsky.social’s Annual Report!

From Congressional testimony and groundbreaking research to essential data tools & translations, our team continues to shape critical emerging tech policy conversations.

Grateful to everyone who makes this work possible.
Reposted by Owen J. Daniels
deweyam.bsky.social
The future of AI leadership requires thoughtful policy. @CSETGeorgetown just submitted our response to @NSF's RFI on the Development of an Artificial Intelligence Action Plan. Here's what we recommend: 🧵[1/]
Reposted by Owen J. Daniels
csetgeorgetown.bsky.social
What does the EU's shifting strategy mean for AI?

CSET's @miahoffmann.bsky.social & @ojdaniels.bsky.social have a new piece out for @techpolicypress.bsky.social.

Read it now 👇
miahoffmann.bsky.social
If you’ve ever wondered what the EU and elephants have in common - or are wondering now - read my latest piece with @ojdaniels.bsky.social! We take a look at what the EU’s new innovation-friendly regulatory approach might mean for the global AI policy ecosystem www.techpolicy.press/out-of-balan...
Out of Balance: What the EU's Strategy Shift Means for the AI Ecosystem | TechPolicy.Press
Mia Hoffmann and Owen J. Daniels from Georgetown’s Center for Security and Emerging Technology say Europe's movements could change the global landscape.
www.techpolicy.press
Reposted by Owen J. Daniels
deweyam.bsky.social
U.S. AI firms, take note! An op-ed by @sambresnick.bsky.social and @colemcfaul.bsky.social in @barrons.com argues that China's "good enough" AI strategy (think DeepSeek) could disrupt the market.

It's about affordable tech. Huawei playbook 2.0? Open-source Chinese models are closing the gap, fast.
ojdaniels.bsky.social
DeepResearch prompt: "'The End of History and the Last Man' but make it AI"
ojdaniels.bsky.social
I've left a lot out: the profit opportunities and risks of increasingly agentic systems got a lot of airtime; emerging research on scheming, sabotage, and survival instincts of LLMs and frontier models was prominent; and practical ethics policy ideas abounded. Looking forward to sharing more ideas soon
ojdaniels.bsky.social
Across the risk spectrum, the question arose time and again: where do we actually need AI solutions? Is it actually helpful to have AI help us find common ground on political disagreements, for example? Do we want more tech in our democratic processes? www.science.org/doi/10.1126/...
AI can help humans find common ground in democratic deliberation
Finding agreement through a free exchange of views is often difficult. Collective deliberation can be slow, difficult to scale, and unequally attentive to different voices. In this study, we trained a...
www.science.org
ojdaniels.bsky.social
France’s announcement at the summit that it was tapping its nuclear power industry for data centers grabbed headlines, but nuclear power is not necessarily a panacea for all of AI’s energy issues. It remains a globally significant space to watch. thebulletin.org/2024/12/ai-g...
AI goes nuclear
Big tech is turning to old reactors (and new ones) to power the energy-hungry data centers that artificial intelligence systems need. But the downsides of nuclear power—like potential nuclear weapons ...
thebulletin.org
ojdaniels.bsky.social
Environmental and energy concerns will only continue to grow with scaling, and rightfully earned much discussion. Even with model innovations like DeepSeek R1, which is cheaper and more efficient to train, consumption for inference will remain high.
ojdaniels.bsky.social
The AISIs have different structures and stakeholders and are attuned to particular research ecosystems, meaning they're not 1-1 matches from one nation to the next, but they can still facilitate exchange. They'll obviously face some geopolitical headwinds amid tech competition.
ojdaniels.bsky.social
Despite disappointment at executive messaging, the AI Safety Institutes leading safety work at the national level could be ideal vehicles for developing and disseminating testing, evaluation, and safety best practices. Saw some impressive presentations at side events www.aisi.gov.uk/work/safety-...
Safety cases at AISI | AISI Work
As a complement to our empirical evaluations of frontier AI models, AISI is planning a series of collaborations and research projects sketching safety cases for more advanced models than exist today, ...
www.aisi.gov.uk
ojdaniels.bsky.social
JD Vance's comments on Europe's "excessive regulation" were well covered, but EC Pres von der Leyen and Macron also championed getting out of the private sector's way. My colleague @miahoffmann.bsky.social wrote a thread about why this attitude could be troubling for Europe bsky.app/profile/miah...
miahoffmann.bsky.social
There have been a ton of AI policy developments coming out of the EU these past weeks, but one deeply concerning one is the withdrawal of the AI Liability Directive (AILD) by the European Commission. Here’s why:
ojdaniels.bsky.social
A few thoughts on the outcomes of the AI Action Summit in Paris. The summit laid out some grand goals for AI governance (and covered them at length in the civil society portion), but the government-led portion of the summit was largely about AI enthusiasm.

www.linkedin.com/pulse/ai-act...
The AI Action Summit: Some Perspective on Paris
The recently concluded AI Action Summit in Paris, which comprised civil society and official government meetings, commenced last week with some ambitious goals. As the Center for Security and Emerging...
www.linkedin.com
Reposted by Owen J. Daniels
miahoffmann.bsky.social
There have been a ton of AI policy developments coming out of the EU these past weeks, but one deeply concerning one is the withdrawal of the AI Liability Directive (AILD) by the European Commission. Here’s why:
ojdaniels.bsky.social
Great, thought-provoking discussion and panels at AI Safety Connect in Paris today. How various national iterations of AI safety institutes cooperate (or don’t) in the year ahead will be a major area to watch.