Zahid R Chaudhary
@zahidrc.bsky.social
160 followers 110 following 16 posts
Princeton prof / Marx / Freud / “post-truth” politics
Pinned
zahidrc.bsky.social
Publication date: Nov 4, 2025 🎉
Reposted by Zahid R Chaudhary
kevinmkruse.bsky.social
As a reminder, when Jack Posobiec was in the US Navy, they assessed his full capabilities and concluded that he was best suited to collecting samples for urinalysis
justinbaragona.bsky.social
Incredible.

Jack Posobiec references the earliest version of antifa -- the anti-fascists in the Weimar Republic who were opposed to the Nazi Party -- as the bad guys.
Reposted by Zahid R Chaudhary
annakornbluh.bsky.social
repealing the right to culture
nehafge3403.bsky.social
The cuts at the National Endowment for the Humanities go far deeper than at any of these agencies - almost 70% of its 179 staff were terminated, despite no change to the agency's budget.

There were no savings here - only ideological destruction. #NEH
conradhackett.bsky.social
How much smaller the federal workforce is now vs. a year ago
Ed Dept 42%
OPM 33%
HUD 31%
Treasury 29%
Defense (civilian jobs) 22%
Small Biz Admin 21%
Energy 20%
Interior 15%
HHS 14%
SSA 11%
EPA 10%
www.nytimes.com/interactive/...
zahidrc.bsky.social
Fascinating piece—worth reading
clairelwilmot.bsky.social
I’ve been struck by how the British far right is using deepfake technology — less to deceive about specific events, more to tap into fascistic affects & desires. This is really frightening.

My dispatch from grim corners of the Internet, for @lrb.co.uk online.

www.lrb.co.uk/blog/2025/se...
Claire Wilmot | Fascistic Dream Machines
Part of the misunderstanding of the deepfake threat stems from the idea that it is a problem of bad information, rather...
www.lrb.co.uk
Reposted by Zahid R Chaudhary
bostonreview.bsky.social
Bertrand Russell once received letters from Sir Oswald Mosley, the founder of the British Union of Fascists, inviting him to a debate. Russell not only declined but replied quite generally that “nothing fruitful or sincere could ever emerge from association between us.” @olufemiotaiwo.bsky.social
How Can We Live Together? - Boston Review
Ezra Klein is wrong: shame is essential.
www.bostonreview.net
Reposted by Zahid R Chaudhary
rbreich.bsky.social
The richest man on earth owns X.

The second richest man on earth is about to be a major owner of TikTok.

The third richest man owns Facebook, Instagram, and WhatsApp.

The fourth richest man owns The Washington Post.

See the problem here?
zahidrc.bsky.social
Yes—“politics of exposure” in the journal ELH, and “paranoid publics” in the journal History of the Present
Reposted by Zahid R Chaudhary
zohrankmamdani.bsky.social
"How Are the Very Rich Feeling About New York’s Next Mayor?"

A Dramatic Reading of The Recent New York Times Dispatch from the Hamptons.

Presented by The Gilded Age's Morgan Spector.
Reposted by Zahid R Chaudhary
dael.bsky.social
I’m not a member of the AHA, but maybe those that are could ask leadership some questions about why the organization is posting job ads for a historian in ICE’s “Human Rights Violator Law Division”?

careers.historians.org/jobs/2164421...
Historian in Washington, DC for Human Rights Violator Law Division, U.S. Immigration and Customs Enforcement
Exciting opportunity in Washington, DC for Human Rights Violator Law Division, U.S. Immigration a...
careers.historians.org
Reposted by Zahid R Chaudhary
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why, in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation, and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Zahid R Chaudhary
samwang.bsky.social
And this is a big fat reason to join the AAUP. Faculty, postdocs, graduate students, join - and support academic freedom everywhere! www.aaup.org/join
Reposted by Zahid R Chaudhary
hzeavin.bsky.social
I'm on this search committee too-- happy to answer questions etc.
carlosfnorena.bsky.social
NEW: Tenure-track position in History at UC Berkeley in the GLOBAL HISTORY OF TECHNOLOGY.

We are casting a wide net here: *all* periods, places, and fields are under consideration.

I'm on the search committee, so do let me know if you have questions.
Assistant Professor – Global History of Technology - Department of History
University of California, Berkeley is hiring. Apply now!
aprecruit.berkeley.edu
Reposted by Zahid R Chaudhary
will-davies.bsky.social
An intrinsically hallucinatory technology carried along by an intrinsically hallucinatory financial-media complex
maxread.info
is the "a.i." "bubble" "bursting"? or, four ways to consider a vibe shift maxread.substack.com/p/is-the-ai-...
Eons ago, I wrote a piece called “The A.I. backlash backlash,” about a pendulum swing, then occurring in what I suppose we’d call “the discourse,” against a previously dominant cycle of A.I. backlash (which itself was a reaction to a dominant cycle of A.I. hype that dated back to the debut of ChatGPT). At the time, L.L.M. chatbots had improved significantly over the preceding 18 months; many people had managed to incorporate A.I. into their work in ways that seemed useful to them; and the “vibe” in Silicon Valley, as New York Times columnist Kevin Roose wrote at the time, had “shifted” to anticipate so-called “artificial general intelligence” on a short timeline. In the hothouse hubs of A.I. Discourse (X.com, Substack, Bluesky), the hype was bubbling up, and the skeptics and critics seemed to be in retreat.

But, as the man says, Want to feel old? That was March. In the five months since Ezra Klein wrote in his Times column that “person after person… has been coming to me saying… We’re about to get to artificial general intelligence,” Meta has announced efforts to reorganize and downsize its A.I. division; NVIDIA’s “tepid” revenue forecast is suggesting a wide slowdown; Sam Altman is warning that “investors as a whole are overexcited about AI”; and Gary Marcus, prince of L.L.M. haters, is on his fifth or sixth victory lap. The renewed hype has sputtered; the most fervent enthusiasts have become disillusioned; critics reign triumphant: The backlash to the backlash to the backlash has arrived.

But does this mean--as many recent headlines would have it--that the “A.I. bubble is popping”? The answer depends, annoyingly, on how you define “A.I.,” and how you define “bubble,” and, also, how you define “popping.”

Is the “A.I.” of “A.I. bubble” the entire field of machine learning? Only large language models? Only chatbots? The implementation thereof into pre-existing software? And is the “bubble” of “A.I. bubble” excessive equity valuations? Inflated expectations for or faith in L.L.M. performance? Excessive industry and management directives around A.I. use? Too many annoying guys on X.com talking too much about “A.I.”?

And, maybe most importantly, what would it mean for it to “pop”? A stock market crash and a recession? A few V.C.s losing their shirts? An “A.I. winter”? Google removing “A.I. Overview” and a reversion to the pre-L.L.M. web? Fewer annoying guys on X.com?
Reposted by Zahid R Chaudhary
misterjabsticks.bsky.social
Fascinating resource for educators who are thinking about ways to have viable assignments without AI.

Run by @annakornbluh.bsky.social and @ehayot.bsky.social

against-a-i.com
Reposted by Zahid R Chaudhary
bildoperationen.bsky.social
Gavin Newsom's strategy of centrist liberal «populism» seems to involve wholeheartedly embracing AI slop. However, rather than demonstrating a possible «progressive» use of generative AI, this example shows how quickly such a use can lead to the uncritical adoption of fascist aesthetics
1/
Screenshot from x.com
The account «Governor Newsom Press Office» retweeting an AI image of Newsom with absurdly swollen muscles and an American flag, in the style of a propaganda painting. The original post says «IN GAVIN WE TRUST», and the Press Office comments: «AN HONOR! THANK YOU!»
Reposted by Zahid R Chaudhary
keckb33.bsky.social
"The assumptions and conclusions of the Great Barrington Declaration were wrong in 2020 and are still wrong in 2025. When Macedo and Lee were challenged on the substance of their critiques by actual epidemiologists and clinicians in a prominent literary journal, the Boston Review, they dug in."