Miranda Bogen
@mbogen.bsky.social
Director of the AI Governance Lab @cendemtech.bsky.social / responsible AI + policy
It's happening. OpenAI is piloting ads in ChatGPT. openai.com/index/our-ap...

In introducing ads to ChatGPT, OpenAI is starting down a risky path. (1/5)
January 16, 2026 at 7:50 PM
Reposted by Miranda Bogen
And sure enough, OpenAI just announced it would be introducing ads to ChatGPT.

Good thing @mbogen.bsky.social & I wrote about the incentives this would create for AI companies, and how those incentives were likely to shape the user experience. TL;DR: it's not great!

#itsthebusinessmodel
New report from @mbogen.bsky.social & yours truly, on how the big AI companies are trying to make money and what it means for all of us.

I am more proud of the title than I have any right to be.
🚨NEW REPORT from CDT’s @mbogen.bsky.social & @nathaliemarechal.net: Risky Business: Advanced AI Companies’ Race for Revenue. It explores how frontier AI companies’ business models and structures can shape user rights, safety, and the future of AI. cdt.org/insights/ris...
January 16, 2026 at 7:32 PM
Reposted by Miranda Bogen
a recent New York State audit of NYC's Local Law 144, the law ostensibly designed to regulate potential bias and discrimination in automated employment tools, is fairly scathing in its assessment of how implementation and enforcement of the law are going.

simply put, LL 144 does not work.
January 8, 2026 at 3:11 PM
Reposted by Miranda Bogen
New report from @mbogen.bsky.social & yours truly, on how the big AI companies are trying to make money and what it means for all of us.

I am more proud of the title than I have any right to be.
🚨NEW REPORT from CDT’s @mbogen.bsky.social & @nathaliemarechal.net: Risky Business: Advanced AI Companies’ Race for Revenue. It explores how frontier AI companies’ business models and structures can shape user rights, safety, and the future of AI. cdt.org/insights/ris...
January 7, 2026 at 8:36 PM
Reposted by Miranda Bogen
New from CDT: “A Roadmap for Responsible Approaches to AI Memory” by @mbogen.bsky.social & Ruchika Joshi explores how AI systems store, recall, and use info—and what that means for privacy, transparency, and user control. cdt.org/insights/a-r...
December 12, 2025 at 1:30 AM
Reposted by Miranda Bogen
[NeurIPS '25] Our oral slot and poster session on "Machine Unlearning Doesn't Do What You Think: Lessons for Generative AI Policy and Research" are tomorrow, December 4! [https://arxiv.org/abs/2412.06966]

Oral: 3:30-4pm PST, Upper Level Ballroom 20AB

Poster 1307: 4:30-7:30pm PST, Exhibit Hall C-E
December 3, 2025 at 9:02 PM
The CFPB proposed a new rule under which it would no longer recognize disparate impact liability when enforcing the Equal Credit Opportunity Act. This would eliminate a key protection against discrimination in access to credit, including when AI is involved.
www.federalregister.gov/documents/20...
Equal Credit Opportunity Act (Regulation B)
The Consumer Financial Protection Bureau (Bureau or CFPB) is issuing a proposed rule for public comment that amends provisions related to disparate impact, discouragement of applicants or prospective ...
https://www.federalregister.gov/documents/2025/11/13/2025-19864/equal-credit-opportunity-act-regulation-b
November 14, 2025 at 5:59 PM
Reposted by Miranda Bogen
🚨Call for policy proposals

If AI adoption is not slowing down, policy governing safety and security practices needs to speed up. This is where you come in.
October 16, 2025 at 2:42 PM
Reposted by Miranda Bogen
AI companies are starting to build more and more personalization into their products, but there's a huge personalization-sized hole in conversations about AI safety/trust/impacts.

Delighted to feature @mbogen.bsky.social on Rising Tide today, on what's being built and why we should care:
July 22, 2025 at 12:49 AM
AI companies are starting to promise personalized assistants that “know you.” We’ve seen this playbook before — it didn’t end well.

In a guest post for @hlntnr.bsky.social’s Rising Tide, I explore how leading AI labs are rushing toward personalization without learning from social media’s mistakes
Personalized AI is rerunning the worst part of social media's playbook
The incentives, risks, and complications of AI that knows you
open.substack.com
July 21, 2025 at 6:32 PM
Reposted by Miranda Bogen
Personalization is political. Very excited to share a piece I co-authored with @mbogen.bsky.social as a Google Public Policy Fellow @cendemtech.bsky.social!

cdt.org/insights/its...
It’s (Getting) Personal: How Advanced AI Systems Are Personalized
This brief was co-authored by Princess Sampson. Generative artificial intelligence has reshaped the landscape of consumer technology and injected new dimensions into familiar technical tools. Search e...
cdt.org
May 5, 2025 at 4:51 PM
Reposted by Miranda Bogen
From CDT’s @mbogen.bsky.social: “As #AI companies are racing to put out increasingly advanced systems, they also seem to be cutting more and more corners on safety, which doesn’t add up.” www.ft.com/content/8...
OpenAI slashes AI model safety testing time
Testers have raised concerns that its technology is being rushed out without sufficient safeguards
www.ft.com
April 11, 2025 at 6:29 PM
Reposted by Miranda Bogen
To truly understand AI’s risks & impacts, we need sociotechnical frameworks that connect the technical with the societal. Holistic assessments can guide responsible AI deployment & safeguard safety and rights.

📖 Read more: cdt.org/insights/ado...
Adopting More Holistic Approaches to Assess the Impacts of AI Systems
by Evani Radiya-Dixit, CDT Summer Fellow As artificial intelligence (AI) continues to advance and gain widespread adoption, the topic of how to hold developers and deployers accountable for the AI systems they implement remains pivotal. Assessments of the risks and impacts of AI systems tend to evaluate a system’s outcomes or performance through methods like […]
cdt.org
January 16, 2025 at 5:47 PM
Reposted by Miranda Bogen
A new explainer from CDT’s Amy Winecoff + @mbogen.bsky.social dives into the fundamentals of hypothesis testing, how auditors can apply it to AI systems, & where it might fall short. Using simulations, we show its role in detecting bias in a hypothetical hiring algorithm. cdt.org/insights/hyp...
Hypothesis Testing for AI Audits
Introduction AI systems are used in a range of settings, from low-stakes scenarios like recommending movies based on a user’s viewing history to high-stakes areas such as employment, healthcare, finance, and autonomous vehicles. These systems can offer a variety of benefits, but they do not always behave as intended. For instance, ChatGPT has demonstrated bias […]
cdt.org
January 16, 2025 at 7:23 PM
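[Editor's note: a minimal sketch of the kind of test the explainer describes. A permutation test asks whether an observed gap in selection rates between two groups could plausibly arise by chance. All group sizes, rates, and names below are hypothetical assumptions for illustration, not figures or code from the CDT explainer.]

import numpy as np

# Hypothetical hiring-algorithm outcomes (1 = candidate advanced).
# The counts and rates below are illustrative assumptions.
rng = np.random.default_rng(0)
group_a = rng.binomial(1, 0.30, size=500)  # ~30% selection rate
group_b = rng.binomial(1, 0.22, size=500)  # ~22% selection rate

observed_gap = group_a.mean() - group_b.mean()

# Null hypothesis: group membership has no effect on selection.
# Simulate it by shuffling group labels and recomputing the gap.
pooled = np.concatenate([group_a, group_b])
n_a = len(group_a)
simulated_gaps = np.empty(10_000)
for i in range(10_000):
    rng.shuffle(pooled)
    simulated_gaps[i] = pooled[:n_a].mean() - pooled[n_a:].mean()

# Two-sided p-value: how often chance alone produces a gap this large.
p_value = np.mean(np.abs(simulated_gaps) >= abs(observed_gap))
print(f"observed gap = {observed_gap:.3f}, p = {p_value:.4f}")

A small p-value would suggest the disparity is unlikely to be noise, which is the kind of statistical evidence the explainer discusses, along with the limits of relying on it.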
Reposted by Miranda Bogen
NEW REPORT: CDT AI Governance Lab’s Assessing AI report looks at the rise of complex automated systems, which demand a robust ecosystem for managing risks and ensuring accountability. cdt.org/insights/ass... cc: @mbogen.bsky.social
January 16, 2025 at 5:37 PM
Reposted by Miranda Bogen
@upturn.org is hiring for a research associate! Excellent opportunity to work with some fantastic folks! www.upturn.org/join/researc...
Upturn Seeks a Research Associate
This position is ideal for someone who is excited about sharp, interdisciplinary research on a range of topics related to technology, policy, and justice.
www.upturn.org
December 17, 2024 at 1:13 PM
Reposted by Miranda Bogen
howdy!

the Georgetown Law Journal has published "Less Discriminatory Algorithms." it's been very fun to work on this w/ Emily Black, Pauline Kim, Solon Barocas, and Ming Hsu.

i hope you give it a read — the article is just the beginning of this line of work.

www.law.georgetown.edu/georgetown-l...
November 18, 2024 at 4:40 PM