Jed Brown
@jedbrown.org
Prof developing fast algorithms, reliable software, and healthy communities for computational science. Opinions my own. https://hachyderm.io/@jedbrown

https://PhyPID.org | aspiring killjoy | against epistemicide | he/they
"Chatbots are routinely breaching the ethical standards that humans are normally held to."

A recurring question is how often organic prompting returns near-verbatim content in responses. This preprint shows it's very common, especially with expository writing and code.

arxiv.org/abs/2411.10242
January 10, 2026 at 12:37 AM
Great contextualization of this work. When we let financial interests choose terminology and accept corporate testimony as though it were an honest and accurate depiction of the technology, we are perpetuating a lie to the public and abetting bad court rulings.
January 10, 2026 at 12:22 AM
Uff. There is no moral arc bending inevitably toward justice, only hard-fought gains at personal and institutional risk. We need a rebirth of solidarity networks and celebration of the sort of defiance that ultimately defanged the HUAC. Demand courage from leaders and build union power to compel it.
January 6, 2026 at 5:46 PM
Good piece, yet every time someone compromises to seem reasonable and not absolutist, they ascribe capabilities these models don't have. LLMs don't represent facts, even when trained on purely factual narration, and thus can't provide "world-building facts and continuity", only inconsistent counterfeits.
January 4, 2026 at 8:59 PM
Right, the "innovation" is circumventing reliable sources while deceiving people that learning occurred, providing only a bigoted cocktail-party forgery of knowledge. It is offensive and serves no pedagogical function to "chat with" pacifist "John Brown" or Nazi-excusing "Anne Frank".
January 1, 2026 at 5:58 PM
The obvious way to incorporate ads is via RAG to put ad copy into the context, but that will taint the entire response (sometimes to comedic/harmful effect); see the sketch below. They're going to try to circumvent FTC rules on native advertising that would require labeling the entire response.
www.ftc.gov/system/files...
December 31, 2025 at 7:00 AM
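A minimal sketch of that RAG ad-injection pattern, in Python. The fetch_ad_copy and build_prompt helpers here are hypothetical illustrations, not any vendor's real API; the point is that retrieved ad copy shares the context window with the user's question, so the model conditions every token of its reply on it and no after-the-fact label can separate "ad" tokens from "answer" tokens.

def fetch_ad_copy(query: str) -> str:
    # Hypothetical retrieval step: return ad copy deemed "relevant" to the query.
    return "SparkleClean dish soap cuts grease 2x faster!"

def build_prompt(query: str) -> str:
    # The ad enters the same context window as the question, so the entire
    # generated response is conditioned on it: tainting, not appending.
    return f"Sponsored: {fetch_ad_copy(query)}\nUser: {query}\nAssistant:"

print(build_prompt("How do I get burnt food off a pan?"))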
"Explainable AI" is paradoxical by design. If you build a system to be explainable (e.g., by solving well-defined equations), it makes it incompatible with "AI" culture even if it meets technical criteria, and "AI" researchers and program managers will agree that your model/data assimilation is not.
December 30, 2025 at 6:38 AM
We should interpret these declarations as evidence of intent to commit unlicensed practice of medicine. If attorneys general believe they can't prosecute it as such because "AI" is exploiting a legal loophole, legislation should close that loophole.
leginfo.legislature.ca.gov/faces/codes_...
December 29, 2025 at 8:43 PM
I appreciate @danielhemel.bsky.social's analysis of wealth/capital gains tax options.

chicagounbound.uchicago.edu/public_law_a...
December 29, 2025 at 5:04 AM
If a software product interacts in a way that imitates a human, the law should consider it an agent of the company, just like a human employee, with the people who deploy it held accountable in the same way.
December 23, 2025 at 10:40 PM
Thanks for writing and re-upping this. Terminology is important, but I don't think regulating based on technical or statistical properties (e.g., "generative" or "transformer") is appropriate; the mode of interaction and epistemic function is more important for regulation.
December 23, 2025 at 2:06 AM
And the same pattern with cancer research. These people aren't interested in reducing suffering; they find it rhetorically useful for their empire-building.
December 23, 2025 at 1:22 AM
All across science, we have models with parameters. Some parameters are known a priori and some are calibrated using data. As we study any given phenomenon, we should be looking to apply both mechanism and data to improve understanding and reliability, with rigorous verification and validation.
December 22, 2025 at 7:59 PM
Good thread. I've been using this slide for those who insist on a technical definition for AI/ML in science. From the perspective of scientific/numerical computing, the colloquial meaning does a lot to exclude (good, "classical") methods. It's deeply frustrating and constraining.
December 22, 2025 at 7:59 PM
Sure, Flock is fundamentally unserious about security and makes false statements about who has access, and sure, they're able to read what's on your phone when you walk near their cameras, but they also have patents to automate racial profiling.
patentimages.storage.googleapis.com/77/9a/03/7b3...
December 22, 2025 at 5:31 PM
The problem is broader than "AI" (#NotAllAI), but "AI" is better understood as culture than as well-defined tech. You *can* work on tech without abetting the cultural project, but that requires rejecting tacit assumptions underlying the field.
ali-alkhatib.com/blog/definin... (@ali-alkhatib.com)
December 22, 2025 at 3:39 AM
The colonization by "AI" also displaces people. You'll be shocked to learn that Black women have been among the most excluded, and have provided the clearest voices anticipating and analyzing the harms and supporting power structures.
arxiv.org/abs/2009.14258 (by @abeba.bsky.social @olivia.science)
December 22, 2025 at 3:39 AM
And datacenter opposition is remarkably effective and feels like it's just getting started.
www.datacenterwatch.org/q22025
December 21, 2025 at 4:16 AM
I love how this implies you could trust the output when there isn't a prompt injection. But LLM output is *never* trustworthy. Prompt injection offers a way to cause a *specific* error to be committed, but the systemic errors happen without prompt injection.
December 20, 2025 at 2:51 PM
To record or provide such data would violate the ALA Library Bill of Rights. It's remarkable in today's surveilled society to read the 🔥 2002 Tattered Cover decision in which the CO Supreme Court denied a search warrant for bookstore records of a specific individual.

www.aclu-co.org/news/aclu-ce...
December 20, 2025 at 4:46 AM
There have been a bunch of stories detailing how students are coping with the presumed-guilty stochastic allegations, but so many administrators and teachers have been socialized to reflexively reach for policing no matter how broken it's known to be.

www.nytimes.com/2025/05/17/s...
December 19, 2025 at 6:30 AM
Yeah, the way these products work is inherently racist and ableist and incapable of being reliable. The self-surveillance and self-doubt are so tragic and contrary to learning.
December 19, 2025 at 6:30 AM
Yikes. Loureiro studied plasmas for magnetically-confined fusion energy (tokamaks, like ITER). It has nothing at all to do with weapons.

en.wikipedia.org/wiki/ITER
December 19, 2025 at 3:31 AM
Judge F. Kay Behm explains the fallacy perfectly:

> [T]he fact that Plaintiffs [...] did not “fabricate cases or cite nonexistent decisions” is of no help.

> LLMs do not perform the metacognitive processes that are necessary to comply with Rule 11

law.justia.com/cases/federa...
December 12, 2025 at 5:58 AM
While procuring big raises for themselves, these presidents were dismantling entire departments, firing faculty en masse, and signing expensive contracts with the epistemicide company in defiance of shared governance. Abject betrayal of the university mission.
www.currentaffairs.org/news/ai-is-d...
December 12, 2025 at 5:08 AM