M.J. Crockett
@mjcrockett.bsky.social
Professor of Psychology & Human Values at Princeton | Cognitive scientist curious about technology, narratives, & epistemic (in)justice | They/She 🏳️‍🌈
www.crockettlab.org
nature careers stop publishing unhinged AI advice challenge
November 11, 2025 at 10:29 PM
I'm wondering why ChatGPT is currently the #1 result in the Apple App Store for a "chatbot therapist" search (on my phone at least), what with all the suicides and OpenAI's updated ToS that says "don't use this as a therapist"
November 11, 2025 at 10:20 PM
Oof 😂
November 11, 2025 at 3:09 AM
Pop-up AI defacing your article critical of AI captures a lot of what it feels like to work in this space.
@lmesseri.bsky.social
November 7, 2025 at 3:47 PM
This is especially rich given that Wiley's "AI Guidelines for Researchers" makes a point of urging authors to "protect your content while using AI". What they mean is: "make sure no other AI company gets hold of your IP, because then we can't use it ourselves."

www.wiley.com/en-us/publis...
November 3, 2025 at 6:58 PM
Spotted on LinkedIn... a bad AI summary of our new paper on risks of AI in research.

Please make it stop.
October 23, 2025 at 9:58 PM
I love @abeba.bsky.social's characterization of “critique as service”. I hope this work will be received in that spirit. 19/19
October 21, 2025 at 8:24 PM
When we make claims about broad cognitive processes based on studies of DEAD cognition, this too is an illusion of generalizability. LLMs might mimic (WEIRD) human performance on DEAD tasks, but this doesn’t mean they’re a good model of “human cognition”. 8/
October 21, 2025 at 8:24 PM
As we move from observing cognition in the world, to the lab, to computerized tasks, to online platforms, the targets of those observations are narrowed to cognition that is DEAD: Decontextualized, Engineered, Anonymized and Disembodied. 7/
October 21, 2025 at 8:24 PM
Second, individual laboratory experiments probe an insufficient range of stimuli and contexts to defend broad claims about cognition and behavior. Some describe this as a “generalizability crisis”. We trace the history of this critique... 6/
October 21, 2025 at 8:24 PM
First, cognitive science is WEIRD: our participants aren’t sufficiently diverse to generalize across populations. An illusion of generalizability arises when we believe studies of WEIRD participants can generalize to all humans. 4/
October 21, 2025 at 8:24 PM
Glad Science collected this data (though the results are entirely unsurprising). GenAI cannot reliably summarize scientific papers: it sacrifices accuracy for simplicity.

And shame on publishers who are pushing genAI summaries on readers. Great way to accelerate an epistemic apocalypse.
September 21, 2025 at 2:37 PM
A company exploiting loneliness to sell its product is also a one-stop shop for tools that detect AI deception *and* evade said detection
May 25, 2025 at 12:16 PM
Next week! I'm excited to host @adambecker.bsky.social in conversation with Catherine Clune-Taylor and Allison Carruth. Co-sponsored by Princeton's UCHV and CITP. Free and open to the public!

uchv.princeton.edu/events/scien...
April 18, 2025 at 3:32 PM
Poppies from my neighbour 🥰
April 15, 2025 at 9:52 PM
Key insights from @cbarrie.bsky.social et al. 👇

Given how many papers have already been preprinted/published using proprietary models, this is very concerning.

“State-of-the-art products are extremely fragile in replication terms and subject to forces well beyond a given researcher’s power.”
February 22, 2025 at 9:26 PM
LLM transparency has many dimensions. Two key ones are: do we know what's in the training data? And can we inspect the code and run it on our local network? LLMs vary considerably along these dimensions. "Proprietary" LLMs, which tend to be developed by for-profit companies, fare poorly on both.
February 22, 2025 at 9:26 PM
LLMs have many exciting applications for psychology research; e.g. in my lab we're fine-tuning an LLM to classify moral concepts in news headlines. But we cannot ignore the ethical costs that characterize many LLMs: labor exploitation, intellectual property theft, environmental destruction, & more.
February 22, 2025 at 9:26 PM
We can't talk about LLMs without acknowledging the dire state of science in the US right now, one that is intimately connected to tech billionaires seeking to dismantle higher ed and replace human labor with AI.
February 22, 2025 at 9:26 PM
Politely requesting a third dinner
January 29, 2025 at 12:03 AM
Everything is terrible but here is a snowy boi
January 28, 2025 at 1:49 AM
The irony of being asked to confirm you are not a bot… to watch a video about a bot
December 16, 2024 at 11:55 AM
I posted nearly identical threads on here & X/Twitter about why it's not bad to leave X/Twitter; check out the differences in engagement in the first ~24h. I expected this but not to this degree!
December 3, 2024 at 4:34 PM
Now out in @science.org: misinformation exploits outrage to spread online. www.science.org/doi/10.1126/...

Doing this work was way harder than it had to be, thanks to Big Tech. I want to highlight our lead analyst @killianmcloughlin.bsky.social for his heroic perseverance to bring you this paper 🧵
December 2, 2024 at 3:39 PM