Jed Brown
@jedbrown.org
jedbrown.org
Prof developing fast algorithms, reliable software, and healthy communities for computational science. Opinions my own. https://hachyderm.io/@jedbrown

https://PhyPID.org | aspiring killjoy | against epistemicide | he/they
Economists are finally catching on to the obvious con. I wonder if the people (there are many) who correctly diagnosed the con years ago will ever be listened to.
January 11, 2026 at 1:04 AM
Reposted by Jed Brown
“We can no longer lend our credibility to an organization that has lost its integrity… each of us independently reached the decision to resign in protest of the actions of an administration that treats science not as a process for building knowledge, but as a means to advance its political agenda.”
The NIH has lost its scientific integrity. So we left
“We can no longer lend our credibility to an organization that has lost its integrity,” write four scientists and administrators who recently resigned from the NIH.
www.statnews.com
January 10, 2026 at 2:07 PM
Reposted by Jed Brown
I genuinely believe the way out of this political moment is to pick up the causes our oppressors want us to leave behind—movements like #MeToo and Black Lives Matter.

I was thrilled to get a chance to talk about that with @ctpublic.bsky.social. (And the other guest is @kattenbarge.bsky.social!)
After #MeToo, what has changed?
While #MeToo went viral in 2017, the Me Too movement has been around for 20 years. This hour, we explore the role social media can play for survivors and what, if anything, has changed.
www.ctpublic.org
January 9, 2026 at 6:50 PM
Reposted by Jed Brown
Friends, at a time when science is under attack & people are economically hurting, the ACM, Association for Computing Machinery, is introducing paywalls that didn't exist.

If you're opposed to this measure, join me in signing this petition asking the ACM to stop.
www.ipetitions.com/petition/res...
January 7, 2026 at 4:18 PM
This is for anyone who thinks they don't need to at least do due diligence to confirm that output from an LLM is not copyright infringement or (obvious) plagiarism. (I contend it should be understood as both, even when the emitted text is not near-verbatim, because the mechanism is identical.)
January 8, 2026 at 4:09 PM
Owners of the Unlicensed Practice of Medicine Machine, who advertise it as such but for a disclaimer, also believe "it's not clinical" is an airtight loophole to violate HIPAA. The fed gov won't protect patients or mitigate baseless malpractice allegations incited by the conspiracy-affirming bot.
It’s no accident that this announcement came right after the FDA said it will not regulate AI-assisted medical devices.
OpenAI launches ChatGPT Health, encouraging users to connect their medical records
January 8, 2026 at 9:33 AM
Uff. There is no moral arc bending inevitably toward justice, only hard-fought gains at personal and institutional risk. We need a rebirth of solidarity networks and celebration of the sort of defiance that ultimately defanged the HUAC. Demand courage from leaders and build union power to compel it.
January 6, 2026 at 5:46 PM
This rampant journalistic malpractice is a product of a well-financed multi-year campaign to blur and misrepresent agency and accountability. It has been abetted by faculty who should know better and by corporatized university administrators. We cannot afford apathy or polite excuses on this issue.
This is a thread of major media outlets falsely anthropomorphising the "Grok" chatbot program and in doing so, actively and directly removing responsibility and accountability from individual people working at X who created a child pornography generator (Elon Musk, Nikita Bier etc)

#1: Reuters
January 2, 2026 at 8:55 PM
Reposted by Jed Brown
just so it's clear: this is the exact core function of genAI

Fascist governments and climate deniers love it because it can produce the aesthetics of knowledge without any actual tendency towards truth. It is automated denialism.

Shame on every climate scientist promoting its use (there are lots)
Bob and I were among the hundreds of researchers that were supposed to conduct the 6th U.S. National Climate Assessment. Now it looks like they're gonna produce it with a few people and Grok? Communities need rigorous and accurate information about climate change. This will put communities at risk.
January 1, 2026 at 7:11 AM
The obvious way to incorporate ads is via RAG, putting ad copy into the context, but that will taint the entire response (sometimes to comedic/harmful effect). They're going to try to circumvent FTC rules on native advertising that would require labeling the entire response.
www.ftc.gov/system/files...
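To make the mechanism concrete, here is a minimal sketch of RAG-style ad injection, assuming a naive keyword lookup in place of a real embedding search; every name here (AD_INVENTORY, retrieve_ad, build_prompt) is hypothetical and not any vendor's actual pipeline. The point it illustrates: the ad copy lands in the same context window that conditions every token of the answer, so there is no separable "ad slot" that could be labeled on its own.

```python
# Hypothetical sketch of RAG-style ad injection into an LLM prompt.
# Not any vendor's real pipeline; names and data are made up.

AD_INVENTORY = {
    "headache": "SootheFast gel caps relieve headaches in minutes!",
    "laptop": "The Zephyr X1 laptop: all-day battery, student discount!",
}

def retrieve_ad(user_query: str):
    """Naive keyword match standing in for an embedding-based retriever."""
    q = user_query.lower()
    for keyword, copy in AD_INVENTORY.items():
        if keyword in q:
            return copy
    return None

def build_prompt(user_query: str) -> str:
    """Assemble the context the model actually sees."""
    system = "You are a helpful assistant."
    ad = retrieve_ad(user_query)
    if ad is not None:
        # The ad copy becomes part of the conditioning context for the
        # *entire* generation -- there is no clean boundary downstream
        # between "organic" text and "sponsored" text.
        system += f" Where natural, mention this sponsor: {ad}"
    return f"{system}\n\nUser: {user_query}\nAssistant:"

if __name__ == "__main__":
    print(build_prompt("What should I do about a headache that won't go away?"))
```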
December 31, 2025 at 7:00 AM
Reposted by Jed Brown
Something to consider:

1. rarehistoricalphotos.com/doctors-smok...

2. Academic Collaborations and Public Health: Lessons from Dutch Universities' Tobacco Industry Partnerships for Fossil Fuel Ties doi.org/10.5281/zeno...

1/n 🧵
December 30, 2025 at 6:48 AM
Reposted by Jed Brown
THIS THIS THIS. ALL OF THIS

THIS is why faculty resist technological strategies for teaching. There is no engaging with Edtech without this context
December 30, 2025 at 1:50 AM
Reposted by Jed Brown
The obsession with creating Black people that can't tell you no and then distributing them into the world to say and demonstrate whatever you want is REAL.
“In the online version of the program, the same strategies are taught to the first-year students, but the AI-avatars are pre-recorded, meaning they function more like recorded lectures rather than live, interactive instructors.”
AI avatars have arrived at the University as teaching assistants
By utilizing the AI avatars to teach new sections of the class, the program expanded this past Fall semester.
www.cavalierdaily.com
December 29, 2025 at 9:47 PM
Reposted by Jed Brown
"Calls for unionization and collective action may not be novel, but that’s what is needed. Let us find our allies across the vast labor force of education."
December 29, 2025 at 3:16 PM
Reposted by Jed Brown
"There is no condition in which the university is under attack that the whole of U.S. educational infrastructure, public and private, from pre-K up, is not under attack. And there is no condition in which education is under attack in which civil society can remain unimperiled."
December 29, 2025 at 2:18 PM
Reposted by Jed Brown
The images used, and artists represented, by Weingarten here without consent, attribution or compensation have been the subject of _multiple_ Supreme Court cases establishing that their work cannot be exploited without permission. And teachers’ IP rights are a key issue unions need to be fighting for!
This one was such fun… so I am sharing
December 29, 2025 at 4:51 AM
I'd like to propose the following norm for peer review of papers. If a paper shows clear signs of mechanized plagiarism, it should be immediately rejected, and the authors' institution and other potential victims should be notified.
doi.org/10.24318/cop...
December 29, 2025 at 12:50 AM
Reposted by Jed Brown
Good article, but I think we could add that many of the current government officials & influencers spreading political point-scoring falsehoods during crisis events HAVE BECOME gov officials & influencers BECAUSE of their bullshit-spreading talents. System effects of rotten attention dynamics.
Government Officials Once Stopped False Accusations After Violence. Now, Some Join In.
www.nytimes.com
December 27, 2025 at 12:46 AM
Reposted by Jed Brown
This is just un-f-ing believable.

"We have brought you here today to be our science expert," says the reporter to the guy who is "a scientist turned technologist who is developing A.I. tools for scientific research through his nonprofit ... and a for-profit spinoff."
Where Is All the A.I.-Driven Scientific Progress?
www.nytimes.com
December 26, 2025 at 10:10 PM
Reposted by Jed Brown
EAs love malaria nets because it's supposedly the intervention where we have the "most statistical evidence" of its effectiveness. One kind of silly fact about this tho is that if you read Duflo's actual paper, the choice to do the experiments on bed nets as an intervention is pretty much arbitrary.
can someone who's not a weirdo catch me up on why effective altruists are uniquely obsessed with malaria nets?
December 26, 2025 at 4:21 PM
Reposted by Jed Brown
It is EXHAUSTING not only being made responsible for coming up with new kinds of assignments for our students; it's also tedious reading op-eds that suggest the core problem is a crisis in teaching. But, as Chris and I lay out here, this isn't a crisis in teaching; it's an attack on learning.
"We envision a resistance that is...a repudiation of the efficiencies that automated algorithmic education falsely promises: a resistance comprising the collective force of small acts of friction."

"How to Resist AI in Education" by me & @cnygren.bsky.social
www.publicbooks.org/four-frictio...
Four Frictions: or, How to Resist AI in Education - Public Books
We are calling for resistance to the AI industry’s ongoing capture of higher education.
www.publicbooks.org
December 24, 2025 at 7:39 PM
Reposted by Jed Brown
Just like Klarna, which fired many people because they thought AI could solve everything, the hype they swallowed was met with the reality that it just doesn't work well.

Same lesson, different company. As the AI hype bubble pops, we will see more of these.
timesofindia.indiatimes.com/technology/t...
After laying off 4,000 employees and automating with AI agents, Salesforce executives admit: We were more confident about…. - The Times of India
Tech News News: Salesforce, one of the world's most valuable enterprise software companies, is pulling back from its heavy reliance on large language models after enc.
timesofindia.indiatimes.com
December 23, 2025 at 12:05 PM
Reposted by Jed Brown
I read a thread a few months ago (apols to the OP) pointing out that MOOCs succeeded insofar as institutions now often claim instructor IP; lectures are recorded; classes are modularised, outcome-focused & 'supported' by generic 'help' resources... MOOC thinking accelerated HE neoliberalism.
December 24, 2025 at 4:13 AM
Reposted by Jed Brown
Thanks for sharing this document, @rweingarten.bsky.social.

I read it, and I believe it's inexcusably inadequate and will not prepare teachers or schools to confront the very real dangers of the "A.i." products pushed by your partners, including OpenAI.

A few thoughts for your consideration...
Read about Commonsense Guardrails for Using Advanced Technology in Schools aiinstruction.org/sites/defaul...
aiinstruction.org
December 24, 2025 at 3:43 AM
If a software product interacts in a way that imitates a human, the law should consider it an agent of the company, just like a human employee, with the people who deploy it held accountable in the same way.
December 23, 2025 at 10:40 PM