Computational Cognitive Science
@compcogsci.bsky.social
280 followers 150 following 12 posts
Account of the Computational Cognitive Science Lab at Donders Institute, Radboud University
Reposted by Computational Cognitive Science
olivia.science
💛🚫🤖 No AI Gods, No AI Masters 🤖🚫💛

I am massively excited to share the backstory ACADEMIC SHENANIGANS behind our Open Letter (& so this paper below too) — and as always big thanks to my co-authors {@irisvanrooij.bsky.social & @marentierra.bsky.social}:
www.civicsoftechnology.org/blog/no-ai-g...

1/n
Reposted by Computational Cognitive Science
olivia.science
"ChatGPT burst onto a damaged academic scene, because facial recognition software was already being used for surveillance and so-called predictive policing, e-proctoring was already enabling us to spy on our students, and self-driving cars were already a couple of years away for about a decade." 5/n
In the academic year 2022/2023, ChatGPT burst onto an already damaged academic scene, compromised and eroded because facial recognition software was already being used for surveillance and so-called predictive policing, e-proctoring was already enabling us to spy on our students, and self-driving cars were already a couple of years away for about a decade. In some sense the singularity was already here: our critical thinking was stuck, stale, and stagnant on the exact phraseology that our own Artificial Intelligence Bachelor's and Master's programmes were meant to be skeptical of — hype, marketing, and nonsense AI products. This is something we, as seasoned academics, know about from previous AI summers and winters: the false promise of the automated thinking machine to be built in "2 months" (McCarthy et al., 1955, p. 2). For example, Olivia has for five years been teaching students the pre-history of AI and past boom and bust cycles in AI as a science, in part to try to temper the tide. Each year this got harder, as students arrived with increasingly entrenched beliefs against critically evaluating AI, a situation aggravated by our colleagues assigning uncritical reading material authored by non-experts. Additionally, Iris has written several blogposts (van Rooij, 2022, 2023a/b) which prefigure her reasoning for advancing "critical AI literacy" (CAIL; a term inspired by Rutgers' initiative), and for proposing that we, as a School of AI, take university-wide responsibility for developing and teaching CAIL. Indeed, Iris teamed up with Barbara to do exactly this.
Reposted by Computational Cognitive Science
kerblooee.bsky.social
"Psychology is meant to study humans, not patterns at the output of biased statistical models." It baffles me this needs to be said, but here we are. There are already viral studies from respected scientists suggesting we can learn something about human cognition from LLMs. Scary & disgraceful.
irisvanrooij.bsky.social
🌟 New preprint 🌟, by @olivia.science and me:

📝 Guest, O., & van Rooij, I. (2025). *Critical Artificial Intelligence Literacy for Psychologists*. doi.org/10.31234/osf...

🧪
Table 1

Core reasoning issues (first column), which we name after the relevant numbered section, are characterised using a plausible quote. In the second column are responses per row; also see the named section for further reading, context, and explanations.

See paper for full details: Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Reposted by Computational Cognitive Science
smwordsmith.bsky.social
I have heard almost all of these pro-AI arguments at one point or another. I love these well-reasoned, thoughtful possible responses to each and every one of them. A much more academic way of saying, Kindly, Fuck Off.
irisvanrooij.bsky.social
🌟 New preprint 🌟, by @olivia.science and me:

📝 Guest, O., & van Rooij, I. (2025). *Critical Artificial Intelligence Literacy for Psychologists*. doi.org/10.31234/osf...

🧪
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
“Being able to detect & counteract all these 3 together comprises the bedrock of skills in research methods in a time when AI is used uncritically (see Table 1). The inverse: not noticing these are at play, or even promoting them, could be seen as engaging in questionable research practises (QRPs)”
The three aforementioned related themes sketched out in this section will play out in the AI-social psychology relationships we will examine — namely:

a. misunderstanding of the statistical models which constitute contemporary AI, leading to, inter alia, thinking that correlation implies causation (Guest, 2025; Guest & Martin, 2023, 2025a, 2025b; Guest, Scharfenberg, & van Rooij, 2025; Guest, Suarez, et al., 2025);

b. confusion between statistical versus cognitive models when it comes to their completely non-overlapping roles when mediating between theory and observations (Guest & Martin, 2021; Morgan & Morrison, 1999; Morrison & Morgan, 1999; van Rooij & Baggio, 2021);

c. anti-open science practices, such as closed source code, stolen and opaque collection and use of data, obfuscated conflicts of interest, and lack of accountability for models' architectures, i.e. statistical methods and input-output mappings are not well documented (Barlas et al., 2021; Birhane & McGann, 2024; Birhane et al., 2023; Crane, 2021; Gerdes, 2022; Guest & Martin, 2025b; Guest, Suarez, et al., 2025; Liesenfeld & Dingemanse, 2024; Liesenfeld et al., 2023; Mirowski, 2023; Ochigame, 2019; Thorne, 2009).
Reposted by Computational Cognitive Science
emp1.bsky.social
I need to find the time, which, sadly, will not be this week.
Most likely, I will not fully agree with what I *guess* is said here, but chances are that I will concur and sympathise with the main point and the general approach.
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
Cover page of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1
Table 1 of Guest & van Rooij (2025), https://doi.org/10.31234/osf.io/dkrgj_v1
Table 2 of Guest & van Rooij (2025), https://doi.org/10.31234/osf.io/dkrgj_v1
Reposted by Computational Cognitive Science
olivia.science
important on LLMs for academics:

1️⃣ LLMs are usefully seen as lossy content-addressable systems

2️⃣ we can't automatically detect plagiarism

3️⃣ LLMs automate plagiarism & paper mills

4️⃣ we must protect literature from pollution

5️⃣ LLM use is a CoI

6️⃣ prompts do not cause output in authorial sense
5 Ghostwriter in the Machine
A unique selling point of these systems is conversing and writing in a human-like way. This is eminently understandable, although wrong-headed, when one realises these are systems that essentially function as lossy content-addressable memory: when input is given, the output generated by the model is text that stochastically matches the input text. The reason text at the output looks novel is that, by design, the AI product performs an automated version of what is known as mosaic or patchwork plagiarism (Baždarić, 2013) — due to the nature of input masking and next-token prediction, the output essentially uses similar words in similar orders to what it has been exposed to. This makes the automated flagging of plagiarism unlikely, which is also true when students or colleagues perform this type of copy-paste and then thesaurus trick, and true when so-called AI plagiarism detectors falsely claim to detect AI-produced text (Edwards, 2023a). This aspect of LLM-based AI products can be seen as an automation of plagiarism and especially of the research paper mill (Guest, 2025; Guest, Suarez, et al., 2025; van Rooij, 2022): the “churn[ing] out [of] fake or poor-quality journal papers” (Sanderson, 2024; Committee on Publication Ethics, …). Either way, even if the courts decide in favour of companies, we should not allow these companies with vested interests to write our papers (Fisher et al., 2025), or to filter what we include in our papers. We do not operate based only on legal precedents, but also on our own ethical values and scientific integrity codes (ALLEA, 2023; KNAW et al., 2018), and we have a direct duty to protect the literature from pollution, as with previous crises and in general. In other words, the same issues as in previous sections play out here: essentially, every paper produced using chatbot output must now declare a conflict of interest, since the output text can be biased in subtle or direct ways by the company who owns the bot (see Table 2).

Seen in the right light — AI products understood as content-addressable systems — we see that framing the user, the academic in this case, as the creator of the bot’s output is misplaced. The input does not cause the output in an authorial sense, much like input to a library search engine does not cause relevant articles and books to be written (Guest, 2025). The respective authors wrote those, not the search query!
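To make the "lossy content-addressable memory" framing in the excerpt above concrete, here is a minimal toy sketch in Python. It is not from the paper: the corpus, the bigram counts, and the generate function are illustrative assumptions at a vastly smaller scale than any LLM. The point is only that the sampler produces text that "looks novel" while being a patchwork of what it has stored.

```python
# Toy illustration (not from the paper): a next-token sampler over a tiny corpus,
# meant only to make the "lossy content-addressable memory" framing concrete.
import random
from collections import defaultdict

corpus = (
    "the model predicts the next token . "
    "the model recombines words it has seen . "
    "the output reuses similar words in similar orders ."
).split()

# Record which tokens follow each token: a minimal stand-in for next-token
# prediction learned from exposure to text.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(prompt: str, length: int = 8, seed: int = 0) -> str:
    """Stochastically continue the prompt using only fragments of the corpus."""
    rng = random.Random(seed)
    tokens = prompt.split()
    for _ in range(length):
        candidates = following.get(tokens[-1])
        if not candidates:      # nothing stored after this token: stop
            break
        tokens.append(rng.choice(candidates))
    return " ".join(tokens)

print(generate("the model"))
# The continuation reads as "new" text, yet every word transition is retrieved
# from the corpus: the sampler cannot emit a pairing it was never exposed to.
```

Real LLMs condition on long contexts with learned weights rather than raw counts, so this is only a caricature, but the qualitative point carries over: output is a stochastic recombination of material the system has ingested.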
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
We used to speak of ‘sloppy science’ when there were QRPs. Now we have slop science 😔
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
Reposted by Computational Cognitive Science
davedun.bsky.social
Fantastic thread (and pre-print) on Critical AI Literacy in Psychology.

The final line from the introduction is brutal: “Ultimately, current AI is research malpractice.”
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
See also our position piece

Guest, O., Suarez, M., Müller, B., et al. (2025). **Against the Uncritical Adoption of 'AI' Technologies in Academia**. Zenodo. lnkd.in/eXDApbxJ @olivia.science @marentierra.bsky.social
Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.
Reposted by Computational Cognitive Science
olivia.science
Fixing problems for and by tech can totally be seen as pro technology and it's indeed a disgusting and harmful situation, perhaps useful 1/2 bsky.app/profile/oliv...
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g., generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
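As a reading aid for the Figure 1 caption above, here is a minimal sketch of its set relations in Python. It is not from the position piece: the only membership the caption states outright is that GANs and Boltzmann machines sit in the intersection of the generative and ANN subsets, so every other assignment below is an illustrative assumption based on the terms the caption lists.

```python
# Toy sketch (not from the position piece): the Figure 1 set relations as Python
# sets. Only the GAN/BM placement is spelled out in the caption; the remaining
# memberships are illustrative assumptions.
llms = {"BERT"}
anns = {"BERT", "AlexNet", "GAN", "BM"}
generative = {"GAN", "BM", "LDA", "QDA"}
chatbots = {"ELIZA", "A.L.I.C.E.", "Jabberwacky"}

ai = llms | anns | generative | chatbots   # the hatched superset in the figure

# The caption's worked example: GANs and Boltzmann machines fall in the purple
# region, i.e. the intersection of the generative-model and ANN subsets.
assert {"GAN", "BM"} <= (generative & anns)

# Proprietary, closed-source products (e.g. ChatGPT, Siri) are deliberately left
# out: their implementation cannot be verified, so any placement would be an
# educated guess (cf. Dingemanse 2025).
print(sorted(generative & anns))   # ['BM', 'GAN']
```

Leaving the closed-source products out of the sets mirrors the caption's own caveat: without access to the implementation, any classification of them is a guess rather than a verifiable membership claim.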
Reposted by Computational Cognitive Science
olivia.science
Also especially 2/2 bsky.app/profile/oliv...
olivia.science
2. the strange but often repeated cultish mantra that we need to "embrace the future" — this is so bizarre given, e.g. how destructive industry forces have proven to be in science, from petroleum to tobacco to pharmaceutical companies.

(Section 3.2 here doi.org/10.5281/zeno...)
4/n
3.2 We do not have to ‘embrace the future’ & we can turn back the tide

It must be the sheer magnitude of [artificial neural networks’] incompetence that makes them so popular.
Jerry A. Fodor (2000, p. 47)

Related to the rejection of expertise is the rejection of imagining a better future and the rejection of self-determination free from industry forces (Hajer and Oomen 2025; Stengers 2018; van Rossum 2025). Not only AI enthusiasts, but even some scholars whose expertise concentrates on identifying and critically interrogating ideologies and sociotechnical relationships — such as historians and gender scholars — unfortunately fall prey to the teleological belief that AI is an unstoppable force. They embrace it because alternative responses seem too difficult, incompatible with industry developments, or non-existent. Instead of falling for this, we should “refuse [AI] adoption in schools and colleges, and reject the narrative of its inevitability” (Reynoldson et al. 2025, n.p.; also Benjamin 2016; Campolo and Crawford 2020; CDH Team and Ruddick 2025; Garcia et al. 2022; Kelly et al. 2025; Lysen and Wyatt 2024; Sano-Franchini et al. 2024; Stengers 2018). Such rejection is possible and has historical precedent, to name just a few successful examples: Amsterdammers kicked out cars, rejecting that cycling through the Dutch capital should be deadly. Organised workers died for the eight-hour workday, the weekend, and other workers’ rights, and governments banned chlorofluorocarbons from fridges to mitigate ozone depletion in the atmosphere. And we know that even the tide itself famously turns back. People can undo things; and we will (cf. Albanese 2025; Boztas 2025; Kohnstamm Instituut 2025; van Laarhoven and van Vugt 2025). Besides, there will be no future to embrace if we deskill our students and selves, and allow the technology industry’s immense contributions to climate crisis
Reposted by Computational Cognitive Science
olivia.science
Bonus bsky.app/profile/oliv...

🌿🩷🌝
olivia.science
I collected some materials on critical AI from my perspective; hope it's useful: olivia.science/ai

"CAIL is as an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as to tell apart nonsense hype from true theoretical computer scientific claims"
Reposted by Computational Cognitive Science
heleline.bsky.social
In case your space for academics hasn't already read these
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Reposted by Computational Cognitive Science
luizbento.bsky.social
I haven't read it yet, but I already recommend it.
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Reposted by Computational Cognitive Science
707kat.bsky.social
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Reposted by Computational Cognitive Science
olivia.science
please think about this deeply & consult with experts who do NOT have Conflicts of Interest! See:

Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. doi.org/10.5281/zeno...

bsky.app/profile/oliv...
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Reposted by Computational Cognitive Science
707kat.bsky.social
olivia.science
I collected some materials on critical AI from my perspective; hope it's useful: olivia.science/ai

"CAIL is as an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as to tell apart nonsense hype from true theoretical computer scientific claims"
Reposted by Computational Cognitive Science
anthonymoser.com
this is not a bug, it is a feature

all of these companies have a fucking platform. that's because they are trying to replace direct human-human relationships with mediated human-platform-human transactions
hypervisible.blacksky.app
“One of the negative consequences AI is having on students is that it is hurting their ability to develop meaningful relationships with teachers, the report finds. Half of the students agree that using AI in class makes them feel less connected to their teachers.”
Rising Use of AI in Schools Comes With Big Downsides for Students
A report by the Center for Democracy and Technology looks at teachers' and students' experiences with the technology.
www.edweek.org
Reposted by Computational Cognitive Science
olivia.science
@ec.europa.eu please reconsider, read our work: Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. Zenodo. doi.org/10.5281/zeno...
Reposted by Computational Cognitive Science
irisvanrooij.bsky.social
“university leaders … must act to help us collectively turn back the tide of garbage software, which fuels harmful tropes (e.g. so-called lazy students) and false frames (e.g. so-called efficiency or inevitability) to obtain market penetration and increase technological dependency”

3/🧵
Against the Uncritical Adoption of 'AI' Technologies in Academia
Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these col...
doi.org
Reposted by Computational Cognitive Science
davidhiggins.bsky.social
"With dismay we witness our university leadership making soulless choices that hollow out our institutions from within and erode the critical and self-reflective fabric of academia".

[from Guest et al., 'Against the Uncritical Adoption of "AI" Technologies in Academia']
ox.ac.uk
NEW: Oxford will be the first UK university to give all staff and students free ChatGPT Edu access, from this academic year.

ChatGPT Edu is built for education, with enhanced privacy and security.
Graphic from the University of Oxford, featuring an image of a glowing, digital brain with the text: 'Generative AI at Oxford'. Highlights that ChatGPT Edu is now available to all staff and students. Includes a link for more information: ox.ac.uk/gen-ai
Reposted by Computational Cognitive Science
cgsunit.bsky.social
Today's the day for my anti-AI zine volume 2: "Human Perspectives on the Latest AI Hype Cycle" 🎉

Enjoy the fruits of my focus these past few months and learn from many great people!

Scanned zine to print your own and the full text and references are available at padlet.com/laurenUU/antiAI
Front and back cover of the Zine sitting among Japanese maple leaves. Front cover has the title "Human Perspectives on the Latest AI Hype Cycle" with subtitle "AI Sucks and You Should Not Use It, Volume 2"
along with the date of October 2025 and author Lauren Woolsey.

Back cover has the text "References available on the back of this unfolded sheet and at padlet.com/laurenUU/antiAI" along with a QR code to that link. Then it has the text "Share with a friend, light the world! Connect w/ me: @cgsunit.bsky.social" Pages 2 and 3 of the Zine, open among tree leaves.

Page 2 starts with handwritten "First...some backstory!" and then the text reads as follows: "Volume 1 of this zine (June 2025) is called “Why GenAI Sucks and you should not use it.” I gave copies to my friends, did swaps at Grand Rapids Zine Fest, and shared the digital scan with hundreds of folks. It’s been great to connect with a community of humans who also think AI sucks! Since June, more great folks have added to the conversation. Let me introduce a few here..."

Page 3 is titled Anthony Moser and has the following text: "“I am an AI hater. This is considered rude, but I do not care, because I am a hater.” So opens this most excellent essay (posted August 2025). 
You absolutely need to read it. Also, it has 24 linked resources, if my Zine v1.1 list wasn’t enough to get you started being a hater." Pages 4 and 5 of the Zine, open among tree leaves.

Page 4 is titled Olivia Guest and has the text: "1. Look at Guest’s incredible collection promoting Critical AI Literacy (CAIL): olivia.science/ai . 2. Discover a framework to define AI in “What Does 'Human-Centred AI' Mean?” (July 2025). 3. Share with educator friends Guest et al: “Against the Uncritical Adoption of 'AI' Technologies in Academia” (September 2025). Such a helpful paper for advocacy!"

Page 5 is titled Ali Alkhatib and has the following text: "“AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power.” -from his essay Defining AI. Ali is on my recent radar because he’s starting “AI Skeptics Reading Group” the same month that this Zine launches (October 2025)! If you're a reader, check out the book list on p. 7 here!" Pages 6 and 7 of the Zine, in partial shadow from tree leaves and surrounded by Japanese maple leaves.

Page 6 is titled Distributed AI Research (DAIR) Institute and has the text: "Great projects DAIR supports: Data Workers Inquiry (work led by Dr. Milagros Miceli), Mystery AI Hype Theater 3000 (by E. Bender and A. Hanna), Possible Futures workshop and Zine series. Timnit Gebru is founder and executive director of DAIR and co-author of the “TESCREAL Bundle” research paper. (Read it!)

Page 7 is titled Further Reading and has a drawn stack of books with the following titles and publication months: Resisting AI (08/22), Blood in the Machine (09/23), The AI Mirror (06/24), Taming Silicon Valley (09/24), Why We Fear AI (03/25), More Everything Forever (04/25), The AI Con (05/25), Empire of AI (05/25). There are notes for The AI Con that the authors run the podcast mentioned on page 6 and that it is the book that the Reading Group from page 5 started on 10/13/25. The page ends with the text "Authors and full titles in reference list!" and a signature from Lauren "Double U."