Computational Cognitive Science
@compcogsci.bsky.social
280 followers 150 following 12 posts
Account of the Computational Cognitive Science Lab at Donders Institute, Radboud University
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
“university leaders … must act to help us collectively turn back the tide of garbage software, which fuels harmful tropes (e.g. so-called lazy students) and false frames (e.g. so-called efficiency or inevitability) to obtain market penetration and increase technological dependency”

3/🧵
Against the Uncritical Adoption of 'AI' Technologies in Academia
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...
doi.org
Reposted by Computational Cognitive Science
davidhiggins.bsky.social
"With dismay we witness our university leadership making soulless choices that hollow out our institutions from within and erode the critical and self-reflective fabric of academia".

[from Guest et al., 'Against the Uncritical Adoption of "AI" Technologies in Academia']
ox.ac.uk
NEW: Oxford will be the first UK university to give all staff and students free ChatGPT Edu access, from this academic year.

ChatGPT Edu is built for education, with enhanced privacy and security.
Graphic from the University of Oxford, featuring an image of a glowing, digital brain with the text: 'Generative AI at Oxford'. Highlights that ChatGPT Edu is now available to all staff and students. Includes a link for more information: ox.ac.uk/gen-ai
Reposted by Computational Cognitive Science
cgsunit.bsky.social
Today's the day for my anti-AI zine volume 2: "Human Perspectives on the Latest AI Hype Cycle" 🎉

Enjoy the fruits of my focus these past few months and learn from many great people!

Scanned zine to print your own and the full text and references are available at padlet.com/laurenUU/antiAI
Front and back cover of the Zine sitting among Japanese maple leaves. Front cover has the title "Human Perspectives on the Latest AI Hype Cycle" with subtitle "AI Sucks and You Should Not Use It, Volume 2"
along with the date of October 2025 and author Lauren Woolsey.

Back cover has the text "References available on the back of this unfolded sheet and at padlet.com/laurenUU/antiAI" along with a QR code to that link. Then it has the text "Share with a friend, light the world! Connect w/ me: @cgsunit.bsky.social" Pages 2 and 3 of the Zine, open among tree leaves.

Page 2 starts with handwritten "First...some backstory!" and then the text reads as follows: "Version Volume 1 of this zine, (June 2025), is called “Why GenAI Sucks and you should not use it.” I gave copies to my friends, did swaps at Grand Rapids Zine Fest, and shared the digital scan with hundreds of folks. It’s been great to connect with a community of humans who also think AI sucks! Since June, more great folks have added to the conversation. Let me introduce a few here..."

Page 3 is titled Anthony Moser and has the following text: "“I am an AI hater. This is considered rude, but I do not care, because I am a hater.” So opens this most excellent essay (posted August 2025). 
You absolutely need to read it. Also, it has 24 linked resources, if my Zine v1.1 list wasn’t enough to get you started being a hater." Pages 4 and 5 of the Zine, open among tree leaves.

Page 4 is titled Olivia Guest and has the text: "1. Look at Guest’s incredible collection promoting Critical AI Literacy (CAIL): olivia.science/ai . 2. Discover a framework to define AI in “What Does 'Human-Centred AI' Mean?” (July 2025). 3. Share with educator friends Guest et al: “Against the Uncritical Adoption of 'AI' Technologies in Academia” (September 2025). Such a helpful paper for advocacy!"

Page 5 is titled Ali Alkhatib and has the following text: "“AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.” -from his essay Defining AI. Ali is on my recent radar because he’s starting “AI Skeptics Reading Group” the same month that this Zine launches (October 2025)! If you're a reader, check out the book list on p. 7 here!" Pages 6 and 7 of the Zine, in partial shadow from tree leaves and surrounded by Japanese maple leaves.

Page 6 is titled Distributed AI Research (DAIR) Institute and has the text: "Great projects DAIR supports: Data Workers Inquiry (work led by Dr. Milagros Miceli), Mystery AI Hype Theater 3000 (by E. Bender and A. Hanna), Possible Futures workshop and Zine series. Timnit Gebru is founder and executive director of DAIR and co-author of the “TESCREAL Bundle” research paper. (Read it!)"

Page 7 is titled Further Reading and has a drawn stack of books with the following titles and publication months: Resisting AI (08/22), Blood in the Machine (09/23), The AI Mirror (06/24), Taming Silicon Valley (09/24), Why We Fear AI (03/25), More Everything Forever (04/25), The AI Con (05/25), Empire of AI (05/25). There are notes for The AI Con that the authors run the podcast mentioned on page 6 and that it is the book that the Reading Group from page 5 started on 10/13/25. The page ends with the text "Authors and full titles in reference list!" and a signature from Lauren "Double U."
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
“[Critical AI literacy] is akin to knowledge of how to properly use inferential statistics and thus avoid accidentally being fooled by the results of our experiments in a flawed search for statistical significance.”
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
Cover page, Table 1, and Table 2 of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
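The inferential-statistics analogy in the quote above (being "fooled by the results of our experiments in a flawed search for statistical significance") can be made concrete with a small simulation. This is my own illustrative sketch, not material from the preprint: it runs many two-group comparisons in which no real effect exists and counts how often a naive test still comes out "significant".

```python
import random

random.seed(1)

def false_positive_rate(n_tests=2000, n=30):
    """Simulate n_tests null experiments (both groups drawn from the
    same distribution) and count how often a naive two-sample t test
    crosses the conventional significance threshold anyway."""
    hits = 0
    for _ in range(n_tests):
        a = [random.gauss(0, 1) for _ in range(n)]
        b = [random.gauss(0, 1) for _ in range(n)]
        ma, mb = sum(a) / n, sum(b) / n
        va = sum((x - ma) ** 2 for x in a) / (n - 1)
        vb = sum((x - mb) ** 2 for x in b) / (n - 1)
        t = (ma - mb) / ((va / n + vb / n) ** 0.5)
        # for n = 30 per group, |t| > 2.0 is roughly p < .05
        if abs(t) > 2.0:
            hits += 1
    return hits / n_tests

rate = false_positive_rate()
print(f"'significant' results with no real effect: {rate:.1%}")
```

Roughly one in twenty null experiments clears the threshold by chance; reporting only those runs is exactly the flawed search for significance the quote warns about, and the same literacy applies to evaluating claims made on behalf of AI systems.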
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
“[AI companies] enable and enact misconduct such as fabrication of data, plagiarism, as well as questionable research practises (QRPs)”
olivia.science
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
“Being able to detect & counteract all these 3 together comprises the bedrock of skills in research methods in a time when AI is used uncritically (see Table 1). The inverse: not noticing these are at play, or even promoting them, could be seen as engaging in questionable research practises (QRPs)”
The three aforementioned related themes sketched out in this section will play out in the AI-social psychology relationships we will examine — namely:

a. misunderstanding of the statistical models which constitute contemporary AI, leading to inter alia thinking that correlation implies causation (Guest, 2025; Guest & Martin, 2023, 2025a, 2025b; Guest, Scharfenberg, & van Rooij, 2025; Guest, Suarez, et al., 2025);

b. confusion between statistical versus cognitive models when it comes to their completely non-overlapping roles when mediating between theory and observations (Guest & Martin, 2021; Morgan & Morrison, 1999; Morrison & Morgan, 1999; van Rooij & Baggio, 2021);

c. anti-open science practices, such as closed source code, stolen and opaque collection and use of data, obfuscated conflicts of interest, lack of accountability for models’ architectures, i.e. statistical methods and input-output mappings are not well documented (Barlas et al., 2021; Birhane & McGann, 2024; Birhane et al., 2023; Crane, 2021; Gerdes, 2022; Guest & Martin, 2025b; Guest, Suarez, et al., 2025; Liesenfeld & Dingemanse, 2024; Liesenfeld et al., 2023; Mirowski, 2023; Ochigame, 2019; Thorne, 2009).
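Point (a), that correlation does not imply causation, can be illustrated with a toy confounder simulation (my own sketch, not the authors'): two variables that never influence each other still correlate strongly because both track a hidden common cause.

```python
import random

random.seed(0)

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# z is a hidden common cause; x and y have no causal link to each other
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]
x = [zi + random.gauss(0, 0.5) for zi in z]
y = [zi + random.gauss(0, 0.5) for zi in z]

print(f"correlation between x and y: {corr(x, y):.2f}")
```

By construction the true correlation is 0.8 (shared variance 1 over total variance 1.25), yet intervening on x would do nothing to y — a statistical association with no causal relation behind it.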
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
What an amazing zine 🤩 Check it out! Thank you for this work @cgsunit.bsky.social 🙏✨
cgsunit.bsky.social
Reposted by Computational Cognitive Science
holdspacefree.bsky.social
1/ I'm only through abstract, crying. Ive felt so alone & nearly (but not) crushed by the organizational & $ power pushing #AIHype & its surveillance, leaving consent & respect behind. #CrisisTextLine #TheTrevorProject #988Lifeline #SAMHSA #NIH #Lyssn #Oxevision #LIO #Nabla.ai #KaiserPermanente #EHR
irisvanrooij.bsky.social
Check this out!
🌟 👇 👍 🧪
psyarxivbot.bsky.social
Critical Artificial Intelligence Literacy for Psychologists: https://osf.io/dkrgj
Reposted by Computational Cognitive Science
holdspacefree.bsky.social
2/ I am SO GRATEFUL to the authors for their integrity, care, persistence, and brilliance. Constantly under attack & trolled, but I see them selectively & effectively flip trolls back on themselves. Still that's not fun or easy. And they don't let it stop their work, because it's a growing legacy.
Reposted by Computational Cognitive Science
holdspacefree.bsky.social
3/ #SuicideResearch feed PLEASE READ the paper (pre-print linked above). Please, NEVER participate in "predictive" behavioral health research based on amassed data. It's so offensive. Many in the feed need no reminder ❤️. I wrote more here, example from #EHR
reformcrisistextline.com/suicide-pred...
Suicide Prediction Models, What’s Missing? Part 1. Consent - Reform Crisis Text Line
reformcrisistextline.com
Reposted by Computational Cognitive Science
holdspacefree.bsky.social
4/ Mainly wanting to express my appreciation. I have so many unwritten papers to call things out, that i just cant get to, and i may never get to. Having a theoretical framework like this is SO IMPORTANT.

For-profit "care" systems + #AIHype = MASSIVE bias to discard consent & respect for persons.
Reposted by Computational Cognitive Science
savasavasava.myatproto.social
so so grateful for all of @olivia.science's incredible work.
olivia.science
Reposted by Computational Cognitive Science
markrubin.bsky.social
"In this paper, we unpacked why we think psychologists need to be on high alert — not just to avoid another replication crisis, but to avoid the total collapse of our science."
olivia.science
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
Ran into this poster on our campus today 😊
Poster of the Open Letter: Stop the uncritical adoption of ‘AI’ technologies in academia, with QR code
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
“AI is not unstoppable. Besides, if we contribute to its colonization of the university, we contribute to eroding the future and deskilling “students and ourselves,” enhancing environmental destruction.” — @saramartin.bsky.social

webs.uab.cat/saramartinal... @olivia.science
WHY WE NEED TO BE WARY ABOUT INTRODUCING AI INTO OUR TEACHING AND RESEARCH: COMMENTING ON GUEST ET AL. – Sara Martín Alegre
webs.uab.cat
Reposted by Computational Cognitive Science
olivia.science
"The authors see the direct link between fascism and AI, for fascistic regimes always attack academic integrity and prefer voters to be uneducated and uncritical. I find the analogy with tobacco most pertinent: knowing of the mortal risks we can now choose to smoke or not at our own peril." ‼️
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
Social psychologists can know better than anyone in psychology that we do not want to let QRPs ruin trust in our science, again.

“Ultimately, contemporary AI is research misconduct.”
olivia.science
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
🌟 New preprint 🌟, by @olivia.science and me:

📝 Guest, O., & van Rooij, I. (2025). *Critical Artificial Intelligence Literacy for Psychologists*. doi.org/10.31234/osf...

🧪
Table 1

Core reasoning issues (first column), which we name after the relevant numbered section, are characterised using a plausible quote. In the second column are responses per row; also see the named section for further reading, context, and explanations.

See paper for full details: Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Reposted by Computational Cognitive Science
cloudquistador.bsky.social
Excellent. Psychological research is an essential part of our collective toolkit (as is, I insist, psychoanalytical research). As @olivia.science says, these must be protected from the 'AI' onslaught. Lacan observed that psych, under capitalism, often led to the jailhouse. 'AI' is a force multiplier
olivia.science
Reposted by Computational Cognitive Science
rjbinney.bsky.social
“Ultimately, contemporary AI is research misconduct”
olivia.science
Reposted by Computational Cognitive Science
richarddmorey.bsky.social
Also - contrast b/w the response when I advocate teaching R instead of SPSS -- "No hurry, let's not rush into it" (still waiting) -- & others re: use of LLMs -- "It's inevitable, we need to implement it ASAP!" -- is telling. Learning to code is freeing. Overhyped LLMs create dependency.
Excerpt from Guest & van Rooij, 2025:

As Danielle Navarro (2015) says about shortcuts through using inappropriate technology, which chatbots are, we end up digging ourselves into “a very deep hole.” She goes on to explain:

"The business model here is to suck you in during
your student days, and then leave you dependent on
their tools when you go out into the real world. [...]
And you can avoid it: if you make use of packages
like R that are open source and free, you never get
trapped having to pay exorbitant licensing fees." (pp.
37–38)