Computational Cognitive Science
@compcogsci.bsky.social
Account of the Computational Cognitive Science Lab at Donders Institute, Radboud University
Reposted by Computational Cognitive Science
You think LLM-based chatbots can help students learn? Think again.

Take a few minutes to listen to Dr. @drtanksley.bsky.social's clear explanation of why this is a very bad, harmful idea, especially for Black students.

#Critical_AI_Literacy

www.youtube.com/watch?v=5mtc...
Howard University AI Panel
YouTube video by Tiera Tanksley
www.youtube.com
November 22, 2025 at 7:56 PM
Reposted by Computational Cognitive Science
LLM = a lossy database
I don't know which paper this is an extract from, but it's essentially just a lossy database. There's no learning in any sense other than metaphorical; the anthropomorphism is there to sell products, not to help explain. 1/n
A new paper argues that current generative AI tools offer little benefit for genuine learning unless students already have substantial prior knowledge. genAI gives probabilistic summaries, not the kind of support that builds expertise.
November 23, 2025 at 11:59 AM
Reposted by Computational Cognitive Science
"It is noteworthy here that in all likelihood, the wealthy would still continue to receive high-quality personal instruction while the less wealthy would be taught by these potentially problematic LLMs due to resource constraints." — me and @samhforbes.bsky.social

☹️
November 23, 2025 at 3:11 PM
Reposted by Computational Cognitive Science
The same applies to medicine.
Rich people will have human doctors.
Poor people will have LLMs.
"It is noteworthy here that in all likelihood, the wealthy would still continue to receive high-quality personal instruction while the less wealthy would be taught by these potentially problematic LLMs due to resource constraints." — me and @samhforbes.bsky.social

☹️
They love to say it's a way to help underserved groups of children too. Extra scary.

Forbes, S. H. & Guest, O. (2025). To Improve Literacy, Improve Equality in Education, Not Large Language Models. Cognitive Science. doi.org/10.1111/cogs...

PDF: philpapers.org/archive/FORT...
November 23, 2025 at 3:33 PM
Reposted by Computational Cognitive Science
Perhaps counterintuitively, guardrails require full-blown cognition in the case of models trained on data from the web, which obviously also contains inappropriate content. At that point, only human cognition can sort this data into what is appropriate for a child and what is not.

www.ru.nl/en/research/...

3/n
Don’t believe the hype: AGI is far from inevitable | Radboud University
Will AI soon surpass the human brain? If you ask employees at OpenAI, Google DeepMind and other large tech companies, it is inevitable. However, researchers at Radboud University and other institutes ...
www.ru.nl
November 17, 2025 at 6:00 AM
Reposted by Computational Cognitive Science
cool pincer movement if you truly grasp:

AI and any concept relating to it, like so-called guardrails, are a scam in the deepest sense, like a perpetual motion machine or a ouija board — and not only a scam like a pyramid scheme, which is a possible way to make money if you are first in, first out

🧵

1/n
"Powered by OpenAI’s GPT-4o model by default...tests repeatedly showed that the AI toy dropped its guardrails the longer a conversation went on, until hitting rock bottom on incredibly disturbing topics."
AI-Powered Stuffed Animal Pulled From Market After Disturbing Interactions With Children
FoloToy says it's suspended sales of its AI-powered teddy bear after researchers found it gave wildly inappropriate and dangerous answers.
futurism.com
November 17, 2025 at 5:51 AM
Reposted by Computational Cognitive Science
"Ultimately, the collective strategy of AI companies threatens to deskill precisely those people who are essential for society to function. (...) automation of knowledge and culture by private companies is a worrying prospect – conjuring dystopian and outright fascistic scenarios."
“While the AI industry claims its models can “think,” “reason,” and “learn,” their supposed achievements rest on marketing hype and stolen intellectual labor. In reality, AI erodes academic freedom, weakens critical reading, and subordinates the pursuit of knowledge to corporate interests.”
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
www.project-syndicate.org
November 16, 2025 at 8:16 PM
Reposted by Computational Cognitive Science
No paywall here: archive.ph/ZHGCm
November 17, 2025 at 12:54 PM
Reposted by Computational Cognitive Science
Olivia Guest and Iris van Rooij (both professors of Computational Cognitive Science) have written a punchy polemic about the smoke, mirrors, and profound societal dangers they see in the unregulated rush to adopt AI.
“While the AI industry claims its models can “think,” “reason,” and “learn,” their supposed achievements rest on marketing hype and stolen intellectual labor. In reality, AI erodes academic freedom, weakens critical reading, and subordinates the pursuit of knowledge to corporate interests.”
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
www.project-syndicate.org
November 17, 2025 at 12:29 PM
Reposted by Computational Cognitive Science
We reject the use of generative artificial intelligence for reflexive qualitative research

by Tanisha Jowsey, Virginia Braun, Victoria Clarke, Deborah Lupton, Michelle Fine :: SSRN papers.ssrn.com/sol3/papers....
We write as 416 experienced qualitative researchers from 38 countries, to reject the use of generative artificial intelligence (GenAI) applications for Big Q Qu
papers.ssrn.com
November 17, 2025 at 8:36 PM
Reposted by Computational Cognitive Science
Yet again: can we accept that the current technology of generative AI is an inherently fascistic product, and should be treated as such?

Just as fascism is not inevitable, neither is the invasion of genAI into every aspect of society - especially education and academia.
“While the AI industry claims its models can “think,” “reason,” and “learn,” their supposed achievements rest on marketing hype and stolen intellectual labor. In reality, AI erodes academic freedom, weakens critical reading, and subordinates the pursuit of knowledge to corporate interests.”
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
www.project-syndicate.org
November 18, 2025 at 12:11 AM
Reposted by Computational Cognitive Science
"Critical washing" is "encouraging AI use while being 'aware of the risks'".

Page 7
November 16, 2025 at 4:15 PM
Reposted by Computational Cognitive Science
We have a name for that:
Critical Washing
bsky.app/profile/adol...
"Critical washing" is "encouraging AI use while being 'aware of the risks'".

Page 7
November 18, 2025 at 12:29 PM
Reposted by Computational Cognitive Science
“While the AI industry claims its models can “think,” “reason,” and “learn,” their supposed achievements rest on marketing hype and stolen intellectual labor. In reality, AI erodes academic freedom, weakens critical reading, and subordinates the pursuit of knowledge to corporate interests.”
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
www.project-syndicate.org
November 15, 2025 at 5:24 PM
Reposted by Computational Cognitive Science
just sharing here as I was reminded it's on github, in case people wanna make copies: github.com/oliviaguest/...

the cogsci hexagon: A visual depiction of the connections between the Cognitive Sciences
November 16, 2025 at 2:16 PM
Reposted by Computational Cognitive Science
i can't possibly promote this work enough. it's packed with wisdom and clarity and everyone should read it.

— from two astounding people @olivia.science and @irisvanrooij.bsky.social that you should follow here.

worth your time.👇

www.project-syndicate.org/commentary/a...
AI Is Hollowing Out Higher Education
Olivia Guest & Iris van Rooij urge teachers and scholars to reject tools that commodify learning, deskill students, and promote illiteracy.
www.project-syndicate.org
November 16, 2025 at 10:19 PM
Reposted by Computational Cognitive Science
Scientists and scholars in AI and its social impacts call on von der Leyen to retract #AIHype statement.

@olivia.science
@abeba.bsky.social
@irisvanrooij.bsky.social
@alexhanna.bsky.social
@rocher.lc
@danmcquillan.bsky.social
@robin.berjon.com
& many others have signed

www.iccl.ie/press-releas...
Scientists call on the President of the European Commission to retract AI hype statement
Experts in AI call on the President of the European Commission to retract unscientific AI hype statement she made in the budget speech.
www.iccl.ie
November 10, 2025 at 9:48 AM
Reposted by Computational Cognitive Science
Oh duh, how could I forget the reference. Read the whole thing, it's excellent.

Guest, O. et al (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. doi.org/10.5281/zeno...
Against the Uncritical Adoption of 'AI' Technologies in Academia
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...
doi.org
November 9, 2025 at 10:05 AM
Reposted by Computational Cognitive Science
Our resources page has been updated with @danmcquillan.bsky.social's Resisting AI book (he researched Gen AI as an instrument for fascism before a lot of us knew!) and @olivia.science's Critical AI website full of her academic research on the harms.
stopgenai.com/related-link...
November 13, 2025 at 3:06 PM
Reposted by Computational Cognitive Science
I signed it, what about you?

Please consider doing so. Thanks.
November 10, 2025 at 6:31 PM
Reposted by Computational Cognitive Science
“Researching and reflecting on the harms of AI is not itself harm reduction. It may even contribute to rationalizing, normalizing, and enabling harm. Critical reflection without appropriate action is thus quintessentially critical washing."

-- @marentierra.bsky.social et al. (2025)
Critical AI Literacy: Beyond hegemonic perspectives on sustainability
How can universities resist being coopted and corrupted by the AI industries’ agendas? Originally published here: https://rcsc.substack.com/p/critical-ai-literacy-beyond-hegemonic
zenodo.org
November 14, 2025 at 7:16 AM
Reposted by Computational Cognitive Science
Thanks @marentierra.bsky.social !
It is so important to emphasise this. I am constantly arguing with my colleagues who do not seem to understand this point while happily writing books on AI and education. They have an "idealised middle ground" view on this issue, which is deeply infuriating.
November 14, 2025 at 10:03 AM