Dr J. Rosenbaum
@jrosenbaum.com.au
1.9K followers 1.8K following 1.6K posts
Artist and researcher working with AI perceptions of gender. PhD, nerd, muso, they/them pronouns 🏳️‍⚧️🇦🇺 links page at minxdragon.com
jrosenbaum.com.au
I do! It is such a cool ring! And yes I genuinely thought it was deliberate.
Reposted by Dr J. Rosenbaum
gemmacroad.bsky.social
AI uses more confident language when it's making things up than when it's actually right.

We're dealing with pattern-matching systems that don't know when they don't know something.

medium.com/@gemma.croad...
Are we too quick to trust the output of AI?
Why the most convincing AI outputs might be the most dangerous
medium.com
Reposted by Dr J. Rosenbaum
dieworkwear.bsky.social
this is the jacket's breast pocket lining pulled up to make it look like a pocket square
Stephen Miller in a gray suit, white shirt, and gray tie. A close-up of his breast pocket.
Reposted by Dr J. Rosenbaum
thegodpodcast.com
Either you're against fascism, or you're foolish enough to believe it won't come for you.
Reposted by Dr J. Rosenbaum
kattenbarge.bsky.social
I think people take for granted the ability to post freely on the internet. Our ability to do so is endangered. Social media isn't going away, but the powers that be are hyper-focused on making it an echo chamber
Reposted by Dr J. Rosenbaum
thegodpodcast.com
Buried in the polls, with every American screaming for the Epstein files, the walls are closing in on Tangerine Palpatine.

Sending troops to every city you can in a desperate attempt to make martial law happen is not strength. It’s the last gasp of a dying regime.
Reposted by Dr J. Rosenbaum
aussiemusicfan.bsky.social
Lol at the timing of this Anthropic ad I just saw:
Guardian website screenshot of news: Deloitte to pay money back to Albanese government after using AI in $440,000 report. Google News screenshot from Anthropic: Deloitte will make Claude available to 470,000 people across its global network
jrosenbaum.com.au
I have no idea what is going on, but I feel myself blorboing this person instantly
Reposted by Dr J. Rosenbaum
unixbigot.aus.social.ap.brid.gy
I never would have opted for total cyber-conversion if I knew how many hours I would be spending debugging YAML files.

#tootfic #microfiction #poweronstorytoot
jrosenbaum.com.au
I’m so glad I got a shiny certificate for reviewing an extremely lengthy AI-generated paper.
I’m guaranteed to be reviewer two on this one!
jrosenbaum.com.au
I'd be surprised if there weren't already some with Garfield's boobs.
(IYKYK, I'll go touch grass now)
Reposted by Dr J. Rosenbaum
checrawford.bsky.social
This isn’t exactly what happened in our campaign. I mean, he did grow hair and spew lava, and luckily wasn’t poisoned haha. But after I drew Slizzard doing that, it kinda reminded me of romance book covers so I just continued that weird direction x)

#dnd #dungeonsanddragons #comic #ttrpg
Reposted by Dr J. Rosenbaum
hausofdecline.bsky.social
They brought the post back lol. Finish the job. Ban Singal. Rescue your platform.
hausofdecline.bsky.social
Bluesky deleted my post complaining about the CEO lol. I did not delete it myself! Full on Lowtaxing from Jay here.
Reposted by Dr J. Rosenbaum
abeba.bsky.social
Robot personhood/rights is conceptually bogus and legally puts more power/rights in the hands of those that develop and deploy robots/AI systems

firstmonday.org/ojs/index.ph...
Reposted by Dr J. Rosenbaum
alexhanna.bsky.social
it's not about the cheeky episode where Janeway and Tom Paris both get turned into reptiles and then it is implied they had little tadpoles
thegavinsheehan.bsky.social
This was shared in our work chat, and before reading it, I wrote, "I haven't even clicked the article, and I'm already going to guess: Is it about Tuvix?"

And I was right. Because no one gives a shit about any other issue on Voyager except Tuvix. #StarTrek #StarTrekVoyager
Star Trek legends deliver verdict on Voyager's most controversial moment after almost 30 years
Case closed!
www.radiotimes.com
Reposted by Dr J. Rosenbaum
irisvanrooij.bsky.social
You may also appreciate our position paper, below :)

bsky.app/profile/oliv...
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set theoretic view on various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Dr J. Rosenbaum
irisvanrooij.bsky.social
It’s telling who is more concerned about being able to use AI tech to push out more and more research papers than about AI corrupting our science, and destroying our democracy and planet. Wouldn’t surprise me if these ppl are good friends with the folks thinking we shouldn’t be “racist” to computers
Reposted by Dr J. Rosenbaum
scalzi.com
199 years after the last one, way to string along your fanbase, Ms. Shelley
Reposted by Dr J. Rosenbaum
shelleybwoke.bsky.social
We are fucking doomed!
hammancheez.bsky.social
""I think we are telling them that we're here to govern," DelBene added. "And I guess the question is, are they serious about governing?""

do they think 'republicans are unserious' is some kind of devastating line

not campaigning on morality or humanity or what is America but 'wE liKe poLicY'
Democrats' playbook to beat Republicans in 2 years: work with them now
Democrats have a plan to take power in Washington back from Republicans in two years: work with them now.
abcnews.go.com
Reposted by Dr J. Rosenbaum
hypervisible.blacksky.app
“Are we trying to cure cancer? Obsolete knowledge workers? Build robots? Compete with Instagram? Buy some more runway? For now, the answer seems to be, ‘Hey, check out Sora, the app where you can make Sam Altman dance.’”
Sora and the Sloppy Future of AI
What’s a little deepfake shitposting between friends?
nymag.com
Reposted by Dr J. Rosenbaum
adamtots.bsky.social
When I was a kid, I saw diversity all over the media I consumed because we all understood it was a good thing. Now suddenly any display of diversity is considered dangerous. Wonder why.
Reposted by Dr J. Rosenbaum
irisvanrooij.bsky.social
“As the AI summer rolls on with heatwave upon heatwave, we directly experience its damage. We witness severe deskilling to academic reading, to essay writing, to deep thinking, even to scholarly discussions between students, which are all now seen as acceptably outsourced to AI products”
oatp.fediscience.org.ap.brid.gy
No AI Gods, No AI Masters — Civics of Technology https://www.civicsoftechnology.org/blog/no-ai-gods-no-ai-masters
No AI Gods, No AI Masters
By: Olivia Guest, Iris van Rooij, Barbara Müller, Marcela Suárez

In the years and months leading up to writing _Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia_ and our position piece titled _Against the Uncritical Adoption of 'AI' Technologies in Academia_, we have struggled to convince some of our colleagues and students of the deskilling impact of these technologies. These trials and tribulations are perhaps well known to those who agree with us. Yet they may pass unnoticed by others, or are sometimes even misunderstood by students. What is our problem with AI technology in education? Why do we need critical perspectives on AI?

AI products have frustrating and harmful drawbacks in their varied proposed use cases. They have issues such as a sordid history; shady business practices; a shameful labour and general human rights record; a horrendous pattern of producing misinformation and of corrupting the scientific record; sexist and racist output; and clear harms to the environment through pollution, water and energy consumption, and land use. Therefore, as academics, we have a professional responsibility to students to teach them about these issues without corporate interference (Guest et al. 2025; Suarez et al. 2025). We refuse to be mired in terminological squabbles, so herein by **AI we mean any displacement technology** that is harmful to people, is obfuscatory of cognitive labour, and deskills (as defined in Guest, 2025).

#### Setting the Stage

In this short piece, we wish to **a)** draw attention to **the explicitly damaging effect these technologies have in learning and research environments** (for more see Guest et al. 2025). The gist is that our employers and responsible colleagues, such as committees in charge of academic conduct, have not trod carefully and thoughtfully when it comes to AI technology in academia, allowing the full-blown normalisation of AI technologies and their introduction into our software systems and educational infrastructure. While framed as so-called 'tools', these technologies are rather harmful technological scams. Additionally, we wish to **b)** elaborate more deeply on the interorganisational issues which are at play in such contexts and which result in **downgrading these aforementioned worries to secondary and tertiary in favour of other false arguments and priorities**. Herein we will cover a little on the now widespread use of LLM-based chatbots, and prior to that image generators, as consumer products which also target students, from 2022 onwards. Our efforts here have been to foster questioning and rejection of these tools in learning settings. Finally, we will touch on the moment which pushed us over the proverbial edge into writing and sharing the first draft of the open letter.

#### The Events and Tipping Point

In the academic year 2022/2023, ChatGPT burst onto an already damaged academic scene: compromised and eroded because facial recognition software was already being used for surveillance and so-called predictive policing, e-proctoring was already enabling us to spy on our students, and self-driving cars had already been a couple of years away for about a decade. In some sense the singularity was already here: our critical thinking was stuck, stale, and stagnant on the exact phraseology that our own Artificial Intelligence Bachelor's and Master's programmes were meant to be skeptical of — hype, marketing, and nonsense AI products. This is something we, as seasoned academics, know about from previous AI summers and winters: the false promise of the automated thinking machine to be built in "2 months" (McCarthy et al., 1955, p. 2).

For example, Olivia has for five years been teaching students the pre-history of AI and past boom and bust cycles in _AI as a Science_, in part to try and temper the tide. Each year this got harder as students came with increasingly entrenched beliefs against critically evaluating AI, a situation that was aggravated by our colleagues assigning uncritical reading material authored by non-experts. Additionally, Iris has written several blogposts (van Rooij, 2022, 2023a/b) which figure in her reasoning for advancing "critical AI literacy" (CAIL; a term inspired by Rutgers' initiative) — and in proposing that we, as a School of AI, take university-wide responsibility for developing and teaching CAIL. Indeed, Iris teamed up with Barbara to do exactly this.

Meanwhile, many academics not only looked the other way, but ran with it (van Rooij, 2023a). They bent the knee, willingly, knowingly, or otherwise, to the whims of the technology industry. In so doing, they promoted AI as 'tools' — as so-called conversational partners, for instance to generate and refine research ideas, or for students to improve their assignments — as well as many other incoherent and damaging intrusions of AI technologies into our already fragile scholarly ecosystem. Often, such promotion was accompanied by nonsense arguments, such as 'using AI in education is inevitable', or that 'use of AI in education is necessary to teach students to apply it responsibly' (usually without explaining what 'responsibly' means in light of the harms).

In contrast, _we_ cannot look the other way. As the AI summer rolls on with heatwave upon heatwave, we directly experience its damage. We witness severe deskilling to academic reading, to essay writing, to deep thinking, even to scholarly discussions between students, which are all now seen as acceptably outsourced to AI products (Guest, 2025). This is partially why Iris proposed critical AI literacy (CAIL) as an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as **to tell apart nonsense hype from true theoretical computer scientific claims** (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such realisations are possible only when one is educated on the principles behind AI that stem from the intersection of computer and cognitive science, but cannot be learned if interference from the technology industry is unimpeded. Unarguably, rejection of this nonsense is also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.

A case in point is our interactions when a university centre introduced an 'AI feedback tool' for use in written assignments. Marcela informed relevant university-level bodies and highlighted the severe deskilling potential for both teachers and students. The centre's response followed a typical pattern of asking for technopositive colleagues to be roped in as so-called relevant stakeholders. But who are the relevant stakeholders in a university if not teachers and students? These sorts of responses derail conversation about decisions that risk severe deskilling, and solidify our university's promotion of AI products which do not comply with data privacy regulations. In so doing, we ignore ethical concerns raised by experts and experienced teachers.

Finally, in the summer of 2025, Olivia snapped when a series of troubling events led to a climax of weaponisation of students against faculty. Documents like the Netherlands Code of Conduct for Research Integrity, as well as related codes from around the world, the law in some cases, and our own personal ethical codes, could (if followed and applied) proscribe undue interference in academic and pedagogical practice. However, cases such as these, where students are mouthpieces of industry, accidentally or otherwise, and supported by colleagues, are not only particularly worrisome for the harm they cause to the students themselves, but also to the whole of the academic ecosystem. Such situations hollow out our institutions from within, creating bad actors out of our own students, PhD candidates, and colleagues.

#### Facing the Harms

The university is meant to be a safe space from external influence. Additionally, the right of academic freedom is in place to protect both faculty and students from industry's creeping power, which often seeks to exercise undue influence over universities. However, with this right comes the responsibility to be upfront about conflicts of interest, and indeed any entanglements with industry, just as we indicate affiliations and grant numbers on research outputs. It also comes with the ability as academics to reject any such conflicts, for example, to remove ourselves from compromising relationships if we do not consent to them. Violations of these are: our colleagues deciding to implicate us in scientifically questionable conduct; or our ruling bodies deciding for us that our university can outsource IT infrastructure to Microsoft, and not only that, but without any possibility to halt AI nonsense mushrooming up in Outlook.

Embracing AI products makes what were once serious transgressions acceptable and even desirable through normalising: stealing ideas from others, erasing authors, infringing authorship rights; while at the same time harming our planet. In many ways it was inevitable for us to pen an Open Letter, and in fact we may even be too late for some students, who submit AI slop and are not even able to format their references according to a style manual, nor credit authors for their ideas, but are nonetheless in their final year. Any technological future, any academic pursuit of AI, will have to contend with these events. We as critical scholars of AI, as social, behavioural, and cognitive scientists, will have to pick up the pieces left behind by the technology industry's most recent attack on academia. We only hope there are indeed pieces left; and no matter what, we will continue to fight for space for students to hone their skills — think, write, program, research — unimpeded and independently.

#### The Open Letter: Action

All this has had a visceral effect on us as academics, which spilled out in the _Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia_ — a process that also involved inviting on board other Netherlands-based colleagues who wished to lend their words to the text and sign their names below. At present, _anybody_ from _anywhere_ who agrees can still sign it, but we first wanted to concentrate our efforts on the Netherlands, to really make a difference at our local and national levels. We also hope other countries' academics join in to pressure their respective organisations. A letter in a similar spirit also appears here: _An open letter from educators who refuse the call to adopt GenAI in education_.

In order to inform and build solidarity with allied colleagues, we have captured our counterarguments to the AI industry's rhetoric in a position piece: _Against the Uncritical Adoption of 'AI' Technologies in Academia_. In it we analyse misuse of terminology, debunk tropes, dismantle false frames, and provide helpful pointers to relevant work.

#### **References**

Guest, O. (2025). What does 'human-centred AI' mean? _arXiv preprint arXiv:2507.19960_. DOI: https://doi.org/10.48550/arXiv.2507.19960

Guest, O., Suarez, M., Müller, B., et al. (2025). Against the uncritical adoption of 'AI' technologies in academia. _Zenodo_. DOI: https://doi.org/10.5281/zenodo.17065099

McCarthy, J., et al. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. URL: http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf

Suarez, M., Müller, B., Guest, O., & van Rooij, I. (2025). Critical AI literacy: Beyond hegemonic perspectives on sustainability [Substack newsletter]. _Sustainability Dispatch_. DOI: https://doi.org/10.5281/zenodo.15677840 URL: https://rcsc.substack.com/p/critical-ai-literacy-beyond-hegemonic

van Rooij, I. (2022). Against automated plagiarism. DOI: https://doi.org/10.5281/zenodo.15866638

van Rooij, I. (2023a). Stop feeding the hype and start resisting. DOI: https://doi.org/10.5281/zenodo.16608308

van Rooij, I. (2023b). Critical lenses on 'AI'. URL: https://irisvanrooijcogsci.com/2023/01/29/critical-lenses-on-ai/

van Rooij, I. (2025). AI slop and the destruction of knowledge. DOI: https://doi.org/10.5281/zenodo.16905560
www.civicsoftechnology.org
jrosenbaum.com.au
A good therapist is worth their weight in platinum. Mine is the same. I love her so much! I've had therapists who didn't want me to swear in sessions! How am I supposed to be my authentic self if I can't swear?