Miranda Bogen
@mbogen.bsky.social
Director of the AI Governance Lab @cendemtech.bsky.social / responsible AI + policy
For deeper analysis, CDT’s recent report “Risky Business: Advanced AI Companies’ Race for Revenue” explores the array of business models that advanced AI companies are currently implementing or considering, including advertising, and how they are likely to affect users. cdt.org/insights/ris... (5/5)
January 16, 2026 at 7:50 PM
AI companies should be extremely careful not to repeat the many mistakes made in, and harms that have resulted from, the adoption of personalized ads on social media and around the web. (4/5)
People are using chatbots for all sorts of reasons, including as companions and advisors. There’s a lot at stake when that tool tries to exploit users’ trust to hawk advertisers’ goods. (3/5)
Even if AI platforms don’t share data directly with advertisers, business models based on targeted advertising put really dangerous incentives in place when it comes to user privacy. This decision raises real questions about how business models will shape AI in the long run. (2/5)
This sets a dangerous precedent for AI more broadly: without guardrails to avoid harmful outcomes, a huge variety of decisions affecting people’s financial stability, health, and liberty will reflect histories of overt discrimination, a resurgence disguised by an illusion of neutrality.
November 14, 2025 at 5:59 PM
The CFPB is responsible for addressing AI’s role in credit discrimination, and this proposed rule disregards that responsibility. The agency should instead direct its efforts to ensuring creditors implement fairness testing to help them prevent discrimination when adopting AI.
Disparate impact is particularly important as AI is increasingly used in making fundamental decisions. Bias in AI’s training and design degrades its performance for certain protected groups, even without overt discriminatory intent. Disparate impact recognizes this.
AI itself doesn’t have “intent,” and people have no real transparency into how creditors use AI in any aspect of credit transactions. This makes it incredibly difficult, if not impossible, to show that AI was used to intentionally discriminate against an applicant.
Reposted by Miranda Bogen
To truly understand AI’s risks & impacts, we need sociotechnical frameworks that connect the technical with the societal. Holistic assessments can guide responsible AI deployment & safeguard safety and rights.

📖 Read more: cdt.org/insights/ado...
Adopting More Holistic Approaches to Assess the Impacts of AI Systems
January 16, 2025 at 5:47 PM