Frank Pasquale
@frankpasquale.bsky.social
Law professor; author (The Black Box Society; New Laws of Robotics).

Interested in law & technology, AI, political economy, art, and social theory.
On Marclay’s “The Clock”: “Spanning a century of cinema and numerous genres, this hypnotic, often humorous looped film examines media’s role as both mirror and escape, urging us to confront time as both lived experience and cinematic illusion.”
www.frieze.com/article/roun...
The 25 Best Works of the 21st Century
Join us as we count down the defining works that have shaped contemporary art since 2000
www.frieze.com
November 25, 2025 at 2:30 AM
Reposted by Frank Pasquale
This August as I sheltered indoors from wildfire smoke, I got a moving note from MIT statisticians - my disability story had informed a fundamental insight about wildfire smoke forecasting that could someday help AI systems protect public health.

Here's the story:

natematias.com/portfolio/20...
Is it safe to go outside? Including disability voices in tests of wildfire smoke forecasts
What counts as a useful pollution forecast when you’re trying to manage your physical health?
natematias.com
November 25, 2025 at 1:17 AM
I'm very happy to have published a chapter in this excellent, open-access collection, "Being Human in the Digital World":
www.cambridge.org/core/service...
Kudos to expert editors Beate Roessler and Valerie Steeves, as well as fellow authors Julie Cohen, David Lyon, Helen Nissenbaum, & more!
www.cambridge.org
November 25, 2025 at 1:23 AM
Reposted by Frank Pasquale
New article with @mvaldeb.bsky.social out now in the @jcultecon.bsky.social:

"(Re)Inventing influence: valuation and justification in the influencer marketing industry". shorturl.at/HzPCi

We explore how influencer marketing companies and their algorithmic tools define and measure “influence”.
(Re)Inventing influence: valuation and justification in the influencer marketing industry
This article examines how influencer marketing companies and their algorithmic tools define and measure ‘influence’ in Latin America. Drawing on Luc Boltanski and Laurent Thévenot's ‘orders of wort...
shorturl.at
November 6, 2025 at 1:04 PM
Reposted by Frank Pasquale
The 17x strike policy was testified about in court too. #smh
Instagram’s former head of safety and well-being Vaishnavi Jayakumar testified the company had a “17x” strike policy for accounts that engaged in the trafficking of humans for sex.

“You could incur 16 violations and upon the 17th violation, your account would be suspended.”

time.com/7336204/meta...
7 Allegations Against Meta in Newly Unsealed Filings
Court filings allege Meta tolerated sex trafficking, hid harms to teens, and prioritized growth over user safety for years.
time.com
November 23, 2025 at 8:54 PM
Reposted by Frank Pasquale
Paper suggests users are “unaware of the extent of the AI’s influence, rendering them more susceptible to it … a mechanism wherein AI systems amplify biases, which are further internalized by humans, triggering a snowball effect where small errors in judgement escalate into much larger ones.” (2024)
How human–AI feedback loops alter human perceptual, emotional and social judgements - Nature Human Behaviour
Glickman and Sharot reveal a human–AI feedback loop, where AI amplifies subtle human biases, which are then further internalized by humans. This cycle, observed across various domains, leads to substa...
www.nature.com
November 20, 2025 at 1:55 PM
Reposted by Frank Pasquale
New from @jpquintais.bsky.social & me: a critical and wide-ranging analysis of the obligations of all general-purpose AI model providers under the EU AI Act. We give a deep ~50-page, 300-footnote treatment and find so many tensions, loopholes, inconsistencies, and more.

Link: files.michae.lv/papers/Veale...
November 18, 2025 at 1:56 PM
Reposted by Frank Pasquale
Important and timely work by health law scholar extraordinaire @jennoliva.bsky.social featured on @pbsnews.org — discussing the AI battle between private insurers (who use AI to deny claims) & patients (who use new AI tools to draft effective appeals of those denials). 🔥🔥🔥 www.pbs.org/newshour/sho...
How patients are using AI to fight back against denied insurance claims
As health insurers increasingly rely on artificial intelligence to process claims, denials have been on the rise. In 2023, about 73 million Americans on Affordable Care Act plans had their claims for ...
www.pbs.org
November 22, 2025 at 11:10 PM
“The position of fiction is plummeting irretrievably while other forms of storytelling are damaging or misusing the tools of narrative.”
thebaffler.com/odds-and-end...
November 22, 2025 at 7:59 PM
Reposted by Frank Pasquale
“AI workers said they distrust the models they work on because of a consistent emphasis on rapid turnaround time at the expense of quality.”
Meet the AI workers who tell their friends and family to stay away from AI
When the people making AI seem trustworthy are the ones who trust it the least, it shows that incentives for speed are overtaking safety, experts say
www.theguardian.com
November 22, 2025 at 5:12 PM
Reposted by Frank Pasquale
The revisions show that the “CDC cannot currently be trusted as a scientific voice,” said Demetre Daskalakis, who formerly led the agency’s center responsible for respiratory viruses and immunizations. www.washingtonpost.com/health/2025/...
Under RFK Jr., CDC promotes false vaccines-autism link it once discredited
The CDC’s website now says health authorities ignored evidence of a potential connection between vaccines and autism, despite dozens of studies showing no link.
www.washingtonpost.com
November 20, 2025 at 12:19 PM
“Just 18 per cent of voters supported the effort to stop states regulating AI. Research by Pew in September found that about half of Americans feared AI would be detrimental to forming relationships.”
www.ft.com/content/e087...
November 21, 2025 at 10:59 PM
Reposted by Frank Pasquale
Judges have become ‘human filters’ as AI in Australian courts reaches ‘unsustainable phase’, chief justice says www.theguardian.com/law/2025/nov...
Judges have become ‘human filters’ as AI in Australian courts reaches ‘unsustainable phase’, chief justice says
Stephen Gageler warns the speed of AI’s development could be outstripping people’s ability to ‘comprehend its potential risks and rewards’
www.theguardian.com
November 21, 2025 at 12:39 PM
Reposted by Frank Pasquale
Studying philosophy does make people better thinkers, according to new research on more than 600,000 college grads

Philosophy majors rank higher than all other majors on verbal and logical reasoning. theconversation.com/studying-phi... #philosophy #skills #thinking #PhilosophySky #philsky
Studying philosophy does make people better thinkers, according to new research on more than 600,000 college grads
Philosophers are fond of saying that their field boosts critical thinking. Two of them decided to put that claim to the test.
theconversation.com
November 21, 2025 at 1:04 PM
Reposted by Frank Pasquale
Relying on ChatGPT to teach you about a topic leaves you with shallower knowledge than Googling and reading about it, according to new research that compared what more than 10,000 people knew after using one method or the other.

Shared by @gizmodo.com: buff.ly/yAAHtHq
November 21, 2025 at 11:48 AM
"Rather than relying on the unsupervised inferences of large models, the system develops a basis in clear “if A, then B” judgements, thereby refining its AI solutions and industrial operating system."
www.sinification.com/p/chinas-str...
November 20, 2025 at 2:32 PM
“Complicated budget scoring and procedural arcana haven’t stopped yawning deficits, but they have made the process so cumbersome and unintuitive that agencies and Congress have given up.”
hypertext.niskanencenter.org/p/democrats-...
Democrats’ Wile E. Coyote Problem
For two decades, Democrats thought they could outrun the broken operating systems of government through managerial excellence. It didn’t work before, and it won’t work now.
hypertext.niskanencenter.org
November 19, 2025 at 10:53 PM
Reposted by Frank Pasquale
New paper by Sean Westwood:

With current technology, it is impossible to tell whether survey respondents are real or bots. Among other things, this makes it easy for bad actors to manipulate outcomes. No good news here for the future of online-based survey research.
November 18, 2025 at 7:16 PM
“Our first point of contact with most information is rarely the information itself but some lossily compressed derivative that’s already been processed and strained through a dozen layers of reinterpretation.”
nymag.com/intelligence...
November 17, 2025 at 2:05 PM
China’s “most subtle piece of deep infrastructure is its more than 70-million-person industrial workforce. Thanks to intense buildup of complex manufacturing supply chains, Chinese factory managers, engineers, and workers have decades of process knowledge.”
www.foreignaffairs.com/china/real-c...
The Real China Model
Beijing’s enduring formula for wealth and power.
www.foreignaffairs.com
November 15, 2025 at 11:58 PM
“Good politics also requires…a healthy public sphere, in which at least the most egregiously bad ideas and bad actors are subject to sufficient scrutiny that they are weeded out.”
chrisdillow.substack.com/p/on-incompe...
November 15, 2025 at 3:11 PM
“Under Josef Stalin, Soviet agronomist Trofim Lysenko advanced pseudoscientific doctrines to align with party ideology, while orchestrating the purge of biologists and geneticists who upheld empirical standards.”
techpolicy.press/how-politica...
November 15, 2025 at 1:13 PM
Reposted by Frank Pasquale
“Theirs is one of at least six defamation cases filed in the United States in the past two years over content produced by A.I. tools that generate text and images.” www.nytimes.com/2025/11/12/b... @nytimes.com #ArtificialIntelligence
Who Pays When A.I. Is Wrong?
www.nytimes.com
November 14, 2025 at 3:53 AM
“Strategic pursuit of societal ignorance is a threat to achievements in areas ranging from public health to the rule of law and international stability.”
techpolicy.press/the-united-s...
November 13, 2025 at 2:14 AM
Here comes a “world mediated not just by publications or social networks but by omnipurpose AI products that assure us they’re ‘maximally truth-seeking’ or ‘objective’ as they simply tell” algorithmically personalized audiences what they want to hear.
nymag.com/intelligence...
Elon Musk’s Grokipedia Is a Warning
The centibillionaire’s Wikipedia clone is ridiculous. It’s also a glimpse of the future.
nymag.com
November 13, 2025 at 1:21 AM