Meredith Broussard, PhD
@merbroussard.bsky.social
19K followers · 410 following · 300 posts
Critical AI, data journalism, literary nonfiction. Professor at NYU. Author, "More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech." meredithbroussard.com
Reposted by Meredith Broussard, PhD
datasociety.bsky.social
“Privacy-preserving” isn’t as private as you might think. Our new brief, published in collab w @powerswitchaction.org & Coworker, exposes how so-called “privacy-preserving” technologies can actually enable *more* worker surveillance — and what we can do about it. datasociety.net/library/the-...
Reposted by Meredith Broussard, PhD
hypervisible.blacksky.app
Altman claims the company didn’t anticipate people not wanting their deepfakes to say “offensive things or things that they find deeply problematic,” which sounds like a lie but is also indicative of how they recklessly release tech into the world.
OpenAI wasn’t expecting Sora’s copyright drama
It felt “more different to images than people expected.”
www.theverge.com
Reposted by Meredith Broussard, PhD
hypervisible.blacksky.app
“One of the negative consequences AI is having on students is that it is hurting their ability to develop meaningful relationships with teachers, the report finds. Half of the students agree that using AI in class makes them feel less connected to their teachers.”
Rising Use of AI in Schools Comes With Big Downsides for Students
A report by the Center for Democracy and Technology looks at teachers' and students' experiences with the technology.
www.edweek.org
Reposted by Meredith Broussard, PhD
richardfletcher.bsky.social
A thread on how people's use of generative AI has changed in the last year - based on survey data from 6 countries (🇬🇧🇺🇸🇫🇷🇩🇰🇯🇵🇦🇷).

First, gen AI use has grown rapidly.

Most people have tried out gen AI at least once (61%), and 34% now use it on a weekly basis - roughly doubling from 18% a year ago.
Reposted by Meredith Broussard, PhD
hypervisible.blacksky.app
“…that Sora is being used for stalking and harassment will likely not be an edge case, because deepfaking yourself and others into videos is one of its core selling points.”

Far from an edge case, it’s the primary use case.
Stalker Already Using OpenAI's Sora 2 to Harass Victim
A journalist claims that her stalker used Sora 2, the latest video app from OpenAI, to churn out videos of her.
futurism.com
merbroussard.bsky.social
This is the first year I have seen multiple students taking notes by hand because they have a deeper understanding of how information is most effectively encoded in their brains. It’s exciting to see!
merbroussard.bsky.social
I definitely thought about getting printed coursepacks for this semester, but I don’t know where to order them!
merbroussard.bsky.social
Shoutout to all the first-year students who left home, grew tiny mustaches, and are debuting them in-person at parents' weekend.
merbroussard.bsky.social
King Chunk
explore.org
Meet your FAT BEAR WEEK 2025 champion.

Chunk the Hunk. The Chunkster. 32 Chunk.

All hail the new king of Brooks River 👑
Reposted by Meredith Broussard, PhD
jbakcoleman.bsky.social
One of the loudest bells tolling for social science right now is that the decade of abundant data on humans is coming to a close. Whether you work with digital trace data or surveys, LLM pollution is a serious problem, even as the wide rollout of LLMs creates an urgent need for social science.
dingdingpeng.the100.ci
A lot of psych is already conducted with online convenience samples & ppl are probably excited about silicon samples bc it would allow them to crank out more studies for even less 💸

How about we reconsider the idea that sciencey science involves collecting one's own data.
www.science.org/content/arti...
AI-generated ‘participants’ can lead social science experiments astray, study finds
Data produced by “silicon samples” depends on researchers’ exact choice of models, prompts, and settings
www.science.org
Reposted by Meredith Broussard, PhD
404media.co
New: landlords are demanding potential tenants hand over employer login credentials so a tool can verify their income. We were sent screenshots of the tool, Argyle, downloading much more data than necessary to approve the renter. "Opt-out means no housing" www.404media.co/landlords-de...
Landlords Demand Tenants’ Workplace Logins to Scrape Their Paystubs
Screenshots shared with 404 Media show tenant screening services ApproveShield and Argyle taking much more data than they need. “Opt-out means no housing.”
www.404media.co
Reposted by Meredith Broussard, PhD
alondra.bsky.social
"Data centers are proliferating in VA and a blind man in [MD] is suddenly contending with sharply higher power bills...It’s an increasingly dramatic ripple effect of the AI boom as energy-hungry data centers...[pull]...households into paying for the digital economy" www.bloomberg.com/graphics/202...
AI Data Centers Are Sending Power Bills Soaring
Wholesale electricity costs as much as 267% more than it did five years ago in areas near data centers. That’s being passed on to customers.
www.bloomberg.com
Reposted by Meredith Broussard, PhD
carlquintanilla.bsky.social
“.. OpenAI spent more on marketing and equity options for its employees than it made in revenue in the first half of 2025. That single fact sums up where we are in the AI cycle more neatly than anything we could ever write, so we’ll just end there.”

@financialtimes.com
www.ft.com/content/908d...
Reposted by Meredith Broussard, PhD
nitasha.bsky.social
Whose video did OpenAI take to train Sora? @kevinschaul.bsky.social and I are back on our training data bullshit to find out. Gift link 🎁 t.co/c3Zg4gcYIT
Reposted by Meredith Broussard, PhD
hypervisible.blacksky.app
“This worldwide update includes the ability for parents, as well as law enforcement, to receive notifications if a child—in this case, users between the ages of 13 and 18—engages in chatbot conversations about self harm or suicide.”
OpenAI Adds Parental Safety Controls for Teen ChatGPT Users. Here’s What to Expect
OpenAI’s review process for teenage ChatGPT users who are flagged for suicidal ideation includes human moderators. Parents can expect an alert about alarming prompts within hours.
www.wired.com