kate brennan
@kate-brennan.bsky.social
associate director of ainowinstitute.org
my latest for @techpolicypress.bsky.social on the US v. Google search antitrust case and just how badly the court missed the opportunity to contend with Google’s power in the genAI market. genAI won’t magically unseat Google’s market dominance, as the court suggests; it will only deepen it
The Google search antitrust remedies trial decision fails to seriously contend with the tremendous advantages the company holds in the generative AI market due to its search monopoly, says Kate Brennan, associate director at the AI Now Institute.
Decision in US vs. Google Gets it Wrong on Generative AI | TechPolicy.Press
Google will continue to enjoy the fruits of its search monopoly to secure its success in the generative AI market, writes Kate Brennan.
www.techpolicy.press
September 11, 2025 at 3:43 PM
Lofty claims to “innovation” should not put people at risk and AI firms should not be given a get-out-of-jail free card. We wrote for @techpolicypress.bsky.social how weak regulation is just as bad as none at all, and today we can see the fruits of this develop: www.techpolicy.press/the-storm-cl...
The Storm Clouds Looming Past the State Moratorium: Weak Regulation is as Bad as None | TechPolicy.Press
Blind trust in the benevolence of AI firms is not an option, write AI Now Institute's Kate Brennan, Sarah Myers West, and Amba Kak.
www.techpolicy.press
September 10, 2025 at 6:53 PM
5/ Shockingly, people can apply to the sandbox before they even have an incorporated business. This means that a firm with no clear understanding of its product risks can effectively claim that the benefits of its hypothetical product outweigh the risks and receive immunity.
September 10, 2025 at 6:51 PM
4/ Speaking of the risks, they are narrowly defined. Companies are not required to mention high-impact risks that many people face from the deployment of AI systems, including rising prices, depreciating wages, discrimination, or privacy violations.
September 10, 2025 at 6:51 PM
3/ Companies must state in their applications that they are mitigating consumer risks, but there’s no enforcement mechanism to ensure they actually follow through. This means that we will be exposed to risky AI products for up to 10 years with no legal recourse.
September 10, 2025 at 6:50 PM
2/ Cruz will try to say that the sandbox is temporary. But AI companies can renew their participation for up to 8 additional years, preventing agencies from enforcing the law against them for 10 years. (Remember: the proposed moratorium was also ten years!)
September 10, 2025 at 6:50 PM
1/ In the SANDBOX Act, Senator Cruz unveiled a federal sandbox program for AI companies. A federal sandbox exempts companies from complying with the law for two years, in effect making it no different from a moratorium. shorturl.at/TSKn4
September 10, 2025 at 6:50 PM
In today’s Senate Commerce Hearing the White House endorsed support for federal preemption of state AI laws. The fight against preemption did not disappear with the moratorium—in fact, Sen. Cruz introduced a bill today putting us directly on the path to preemption. A thread on its risks below: 🧵
Anthropic CEO Dario Amodei’s recent Times op-ed on AI regulation seems like a reasonable middle ground. But it is also a reminder of a threat on the horizon: an industry-scripted federal standard that would effectively eclipse state legislation, write Kate Brennan, Sarah Myers West, and Amba Kak.
The Storm Clouds Looming Past the State Moratorium: Weak Regulation is as Bad as None | TechPolicy.Press
Blind trust in the benevolence of AI firms is not an option, write AI Now Institute's Kate Brennan, Sarah Myers West, and Amba Kak.
www.techpolicy.press
September 10, 2025 at 6:49 PM
congrats to the supreme court for ignoring decades of circuit precedent, twisting logic to avoid textbook definitions (despite being obsessed with, uh, textualism), and undermining the equal protection clause to ensure trans kids can't receive the medical care they deserve
NEW: In U.S. v. Skrmetti, a closely watched case on a Tennessee law barring certain medical care for transgender minors, the court holds that Tennessee's law can remain in place.
June 18, 2025 at 4:29 PM
the proposed moratorium on state AI laws is dangerous. a welcome chorus (with unlikely allies!) is rising against the ban. our latest in @techpolicypress.bsky.social argues we must use this momentum to demand more accountability from AI firms and protect against weak, industry co-opted regulation
Anthropic CEO Dario Amodei’s recent Times op-ed on AI regulation seems like a reasonable middle ground. But it is also a reminder of a threat on the horizon: an industry-scripted federal standard that would effectively eclipse state legislation, write Kate Brennan, Sarah Myers West, and Amba Kak.
The Storm Clouds Looming Past the State Moratorium: Weak Regulation is as Bad as None | TechPolicy.Press
Blind trust in the benevolence of AI firms is not an option, write AI Now Institute's Kate Brennan, Sarah Myers West, and Amba Kak.
www.techpolicy.press
June 11, 2025 at 6:08 PM
Reposted by kate brennan
this is excellent
NEW REPORT: Artificial Power, our 2025 Landscape Report, is out.

Today’s AI isn’t just being used by us, it’s being used on us. We urgently need to reclaim public power over the future trajectory of AI. Another path is possible.

Read the report: ainowinstitute.org/2025-landscape
June 4, 2025 at 6:37 PM
Reposted by kate brennan
“We're not interested in discussing whether or not an individual technology like ChatGPT is good. We're asking whether it's good for society that these companies have unaccountable power,” says @kate-brennan.bsky.social in @wired.com
June 4, 2025 at 1:05 PM
if you’ve been looking for an all-in-one resource to explain why it’s troubling for tech companies to push AI into every corner of our social, political, and economic lives, you might love our latest report from @ainowinstitute.bsky.social called Artificial Power: ainowinstitute.org/publications...
Artificial Power: 2025 Landscape Report
In the aftermath of the “AI boom,” this report examines how the push to integrate AI products everywhere grants AI companies - and the tech oligarchs that run them - power that goes far beyond their d...
ainowinstitute.org
June 3, 2025 at 2:38 PM
A remedy proposal in one antitrust case may seem niche, but the deregulatory patterns are written on the wall. This is a time we need more--not less--scrutiny of how AI companies are shaping our economic, political, and cultural lives for our loss and their profit (2/2)
March 13, 2025 at 6:58 PM
Bold enforcement remedies are crucial to meet this moment in AI shaped by Big Tech dominance. My latest for @techpolicypress.bsky.social argues that removing AI divestiture as a remedy in the Google search monopoly case fits the troubling anti-regulatory patterns taking shape around the world (1/2)
Last week, the DOJ removed the AI divestiture remedy from its revised proposed final judgment. It was a bold and necessary requirement, argues AI Now Institute Associate Director Kate Brennan, but it appears to have succumbed amidst anti-regulatory headwinds taking shape across the globe.
Felled by the Deregulatory Headwinds: DOJ’s Reversal on AI Divestiture in the Google Search Case | TechPolicy.Press
AI Now Institute Associate Director Kate Brennan writes that the DOJ relaxed its posture on a provision in its proposed remedies to Google's Search monopoly.
buff.ly
March 13, 2025 at 6:58 PM
I spoke with @jasonplautz.bsky.social about how essential energy dominance is to this administration's policy of AI boosterism, and the harmful effects this is sure to have on climate and communities:
David Sacks may not be an energy wonk. But as Trump's AI and Crypto czar, his work to unshackle the tech industry will have huge implications for the grid and the climate. Where does he stand? www.eenews.net/articles/ene...
Energy is AI’s barrier to entry. David Sacks knows it.
President Donald Trump’s adviser on artificial intelligence wants the U.S. to go big on data centers. Powering the tech boom is a harder question.
www.eenews.net
February 12, 2025 at 8:09 PM
Reposted by kate brennan
OpenAI furious DeepSeek might have stolen all the data OpenAI stole from us

🔗 www.404media.co/openai-furio...
January 29, 2025 at 3:43 PM
Reposted by kate brennan
In a new opinion piece for @nytimes.com, AI Now’s Chief AI Scientist @heidykhlaaf.bsky.social and Co-Executive Director @smw.bsky.social warn that AI may threaten, rather than preserve, national security.

Read more: www.nytimes.com/2025/01/27/o...
Opinion | Our Military Is Adopting A.I. Way Too Fast
The military is integrating A.I. into its deadly systems too quickly, and Trump will only accelerate a dangerous situation.
www.nytimes.com
January 27, 2025 at 4:05 PM
Reposted by kate brennan
As someone who has reported on AI for 7 years and covered China tech as well, I think the biggest lesson to be drawn from DeepSeek is the huge cracks it illustrates with the current dominant paradigm of AI development. A long thread. 1/
January 27, 2025 at 2:12 PM
Reposted by kate brennan
At a convening of worker advocates in California organized by @ucblaborcenter.bsky.social, @ambakak.bsky.social told @khari.bsky.social, “Labor has been at the forefront of rebalancing of power and asserting that the public has a say in determining how and under what conditions this tech is used.”
January 17, 2025 at 7:19 PM
If the California fight is any indication, however, even the lightest-touch regulation in the bill will face massive industry lobbying—a deeply troubling prospect. (3/3)
January 9, 2025 at 4:08 PM
Watching LA burn and fire hydrants run dry knowing that large-scale AI systems consume millions of gallons of water is as urgent a "catastrophic risk" as those that may emerge from frontier models in the future (2/3)
January 9, 2025 at 4:08 PM
I spoke with MIT Tech Review about reviving the failed California AI safety bill SB 1047 in New York and how the bill overlooks material harms AI is posing to people, workers, and the climate right now: (1/3) www.technologyreview.com/2025/01/09/1...
A New York legislator wants to pick up the pieces of the dead California AI bill
The new bill being drafted in New York aims to regulate advanced AI systems while addressing concerns with the California bill.
www.technologyreview.com
January 9, 2025 at 4:08 PM
In the past *five days alone* we’ve seen the NDAA authorize dozens of troubling AI provisions and heard the Biden Administration tease fast-tracking data center construction for AI. One report shouldn't shift our attention away from these.
December 20, 2024 at 7:55 PM
My statement on the Bipartisan House Task Force Report on AI on @techpolicypress.bsky.social. In one breath the report cautions against material risks posed by large-scale AI, and in the next encourages widespread, uncritical adoption of AI across the economy. www.techpolicy.press/reactions-to...
December 20, 2024 at 7:54 PM