Deneen Senasi
@dsenasi.bsky.social
820 followers 2.8K following 110 posts
Shakespeare and Donne scholar, Humanities advocate, library and museum acolyte, erstwhile ballerina: "To be or not to be, that is the question . . . "
Pinned
dsenasi.bsky.social
Just want to say thanks for this lovely, welcoming space. I write and teach about writing and teaching, the creation of knowledge in the humanities and its transfer across contexts, tracing strands of public, digital, and medical humanities as instruments of action in the world we all share.
Reposted by Deneen Senasi
biblioracle.bsky.social
Inspired by @marcwatkins.bsky.social and wanting to share his piece with my newsletter subscribers, I put my two cents in. open.substack.com/pub/engagede...
Reposted by Deneen Senasi
biblioracle.bsky.social
I strongly urge everyone to not just read this warning from @marcwatkins.bsky.social, but heed it, and be vocal and forceful pushing back against using AI to grade student writing. This must be anathema if we're going to have a world where learning means something. substack.com/inbox/post/1...
The Dangers of using AI to Grade
Nobody Learns, Nobody Gains
substack.com
dsenasi.bsky.social
Ah yes, recognition for the mediocrity machine. Agreed -- this is stupid arguing for a celebration of stupid.
dsenasi.bsky.social
👇👇👇
tedmccormick.bsky.social
Generative AI, in both form and content, and whether looked on favourably or critically, seems to embody a collective hopelessness about the prospect of human learning and creativity, if not human knowledge altogether. It’s as if climate change had fans.
Reposted by Deneen Senasi
charleswlogan.bsky.social
"To be a Luddite today is to refuse the fatalism of techno-inevitability and to demand that technology serve the many, not just the few. It is to assert that questions of labor, agency, and justice must come before speed, efficiency, and scale," writes Courtney C. Radsch.
We should all be Luddites | Brookings
Courtney Radsch discusses rehabilitating the idea of Luddites as people concerned with the control and impact of technology.
www.brookings.edu
dsenasi.bsky.social
Funny how resistance to AI in public space is described as "vandalism," but outright theft of the work of writers/artists is treated as an acceptable practice in the name of the technology's dubious claims to instrumentality and value.

www.nytimes.com/2025/10/07/s...
A Debate About A.I. Plays Out on the Subway Walls
www.nytimes.com
Reposted by Deneen Senasi
davidmbarnett.bsky.social
Weird how they put all their efforts into using technology to replace human creativity and art and then say creative arts degrees are useless.
Reposted by Deneen Senasi
hypervisible.blacksky.app
“One of the negative consequences AI is having on students is that it is hurting their ability to develop meaningful relationships with teachers, the report finds. Half of the students agree that using AI in class makes them feel less connected to their teachers.”
Rising Use of AI in Schools Comes With Big Downsides for Students
A report by the Center for Democracy and Technology looks at teachers' and students' experiences with the technology.
www.edweek.org
Reposted by Deneen Senasi
charleswlogan.bsky.social
"Too many people are busily promoting a version of 'AI' literacy that is simply training students how to use and consume 'AI' 'properly' – whatever that means – and refusing to admit that there may be no ethical usage of a fundamentally unethical, abusive technology," writes Audrey Watters.
Without Our Consent
When I wrote last week’s round-up of “AI”-related news, I didn’t include any of OpenAI’s product releases, mostly because it’s 2025 and I’m exhausted by this game that tech companies and tech journali...
2ndbreakfast.audreywatters.com
Reposted by Deneen Senasi
rboomhower.bsky.social
“The only way to write something good is to write what you want to write and believe in the validity of its subject and don’t give a damn about anybody else.”
William Zinsser
Reposted by Deneen Senasi
irisvanrooij.bsky.social
“As the AI summer rolls on with heatwave upon heatwave, we directly experience its damage. We witness severe deskilling to academic reading, to essay writing, to deep thinking, even to scholarly discussions between students, which are all now seen as acceptably outsourced to AI products”
oatp.fediscience.org.ap.brid.gy
No AI Gods, No AI Masters — Civics of Technology https://www.civicsoftechnology.org/blog/no-ai-gods-no-ai-masters
No AI Gods, No AI Masters
#### Civics of Technology Announcements

**Next Tech Talk:** The next Tech Talk will be held on October 7 at 8:00 Eastern Time. Register here or on our events page. Come join our community in an informal discussion about tech, education, the world, whatever is on your mind!

**Next Book Club:** We’re reading _Culpability_ by Bruce Holsinger. Join us to discuss on Tuesday, October 14th, 2025 at 8pm EST. Be sure to register on our events page!

**Latest Book Review:** _The Mechanic and the Luddite_, by Jathan Sadowski (2025).

By: Olivia Guest, Iris van Rooij, Barbara Müller, Marcela Suárez

In the years and months leading up to writing _Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia_ and our position piece titled _Against the Uncritical Adoption of 'AI' Technologies in Academia_, we have struggled to convince some of our colleagues and students of the deskilling impact of these technologies. These trials and tribulations are perhaps well known to those who agree with us. Yet they may pass unnoticed by others, or are sometimes even misunderstood by students. What is our problem with AI technology in education? Why do we need critical perspectives on AI?

AI products have frustrating and harmful drawbacks in their varied proposed use cases. They have issues such as a sordid history; shady business practices; a shameful labour and general human rights record; a horrendous pattern of producing misinformation and of corrupting the scientific record; sexist and racist output; and clear harms to the environment through pollution, water and energy consumption, and land use. Therefore, as academics, we have a professional responsibility to students to teach them about these issues without corporate interference (Guest et al., 2025; Suarez et al., 2025).
We refuse to be mired in terminological squabbles, so herein by **AI we mean any displacement technology** that is harmful to people, is obfuscatory of cognitive labour, and deskills (as defined in Guest, 2025).

#### Setting the Stage

In this short piece, we wish to **a)** draw attention to **the explicitly damaging effect these technologies have in learning and research environments** (for more see Guest et al., 2025). The gist is that our employers and responsible colleagues, such as committees in charge of academic conduct, have not trodden carefully and thoughtfully when it comes to AI technology in academia, allowing the full-blown normalisation of AI technologies and their introduction into our software systems and educational infrastructure. While framed as so-called ‘tools’, these technologies are rather harmful technological scams. Additionally, we wish to **b)** elaborate more deeply on the interorganisational issues at play in such contexts, which result in **downgrading these aforementioned worries to secondary and tertiary status in favour of other false arguments and priorities**. Herein we will cover a little of the now widespread use of LLM-based chatbots, and before them image generators, as consumer products that have targeted students from 2022 onwards. Our efforts here have been to foster questioning and rejection of these tools in learning settings. Finally, we will touch on the moment which pushed us over the proverbial edge into writing and sharing the first draft of the open letter.

#### The Events and Tipping Point

In the academic year 2022/2023, ChatGPT burst onto an already damaged academic scene: compromised and eroded because facial recognition software was already being used for surveillance and so-called predictive policing, e-proctoring was already enabling us to spy on our students, and self-driving cars had already been a couple of years away for about a decade.
In some sense the singularity was already here: our critical thinking was stuck, stale, and stagnant on the exact phraseology that our own Artificial Intelligence Bachelors and Masters programmes were meant to be skeptical of: hype, marketing, and nonsense AI products. This is something we, as seasoned academics, know about from previous AI summers and winters: the false promise of the automated thinking machine to be built in "2 months" (McCarthy et al., 1955, p. 2). For example, Olivia has for five years been teaching students the pre-history of AI and past boom-and-bust cycles in _AI as a Science_, in part to try to temper the tide. Each year this got harder, as students arrived with increasingly entrenched beliefs against critically evaluating AI, a situation aggravated by our colleagues assigning uncritical reading material authored by non-experts. Additionally, Iris has written several blogposts (van Rooij, 2022, 2023a/b) which prefigure her reasoning for advancing “critical AI literacy” (CAIL; a term inspired by Rutgers' initiative) and for proposing that we, as a School of AI, take university-wide responsibility for developing and teaching CAIL. Indeed, Iris teamed up with Barbara to do exactly this.

Meanwhile, many academics not only looked the other way but ran with it (van Rooij, 2023a). They bent the knee, willingly, knowingly, or otherwise, to the whims of the technology industry. In so doing, they promoted AI as ‘tools’: as so-called conversational partners, for instance to generate and refine research ideas or for students to improve their assignments, as well as many other incoherent and damaging intrusions of AI technologies into our already fragile scholarly ecosystem. Often, such promotion was accompanied by nonsense arguments, such as ‘using AI in education is inevitable’, or ‘use of AI in education is necessary to teach students to apply it responsibly’ (usually without explaining what ‘responsibly’ means in light of the harms).
In contrast, _we_ cannot look the other way. As the AI summer rolls on with heatwave upon heatwave, we directly experience its damage. We witness severe deskilling to academic reading, to essay writing, to deep thinking, even to scholarly discussions between students, which are all now seen as acceptably outsourced to AI products (Guest, 2025). This is partially why Iris proposed critical AI literacy (CAIL) as an umbrella for all the prerequisite knowledge required to have an expert-level critical perspective, such as the ability **to tell apart nonsense hype from true theoretical computer scientific claims** (see our project website). For example, the idea that human-like systems are a sensible or possible goal is the result of circular reasoning and anthropomorphism. Such realisations are possible only when one is educated in the principles behind AI that stem from the intersection of computer and cognitive science, but they cannot be learned if interference from the technology industry goes unimpeded. Rejection of this nonsense is of course also possible through other means, but in our context our AI students and colleagues are often already ensnared by uncritical computationalist ideology. We have the expertise to fix that, but not always the institutional support.

A case in point is our interactions when a university centre introduced an ‘AI feedback tool’ for use in written assignments. Marcela informed the relevant university-level bodies and highlighted the severe deskilling potential for both teachers and students. The centre's response followed a typical pattern: asking for technopositive colleagues to be roped in as so-called relevant stakeholders. But who are the relevant stakeholders in a university if not teachers and students? These sorts of responses derail conversation about decisions that risk severe deskilling, and solidify our university's promotion of AI products which do not comply with data privacy regulations.
In so doing we ignore ethical concerns raised by experts and experienced teachers. Finally, in the summer of 2025, Olivia snapped when a series of troubling events culminated in the weaponisation of students against faculty. Documents like the Netherlands Code of Conduct for Research Integrity, related codes from around the world, the law in some cases, and our own personal ethical codes could, if followed and applied, proscribe undue interference in academic and pedagogical practice. However, cases such as these, where students become mouthpieces of industry, accidentally or otherwise, and are supported in this by colleagues, are particularly worrisome not only for the harm they cause to the students themselves but also for the harm to the whole academic ecosystem. Such situations hollow out our institutions from within, creating bad actors out of our own students, PhD candidates, and colleagues.

#### Facing the Harms

The university is meant to be a space safe from external influence. Additionally, the right of academic freedom is in place to protect both faculty and students from industry’s creeping power, which often seeks to exercise undue influence over universities. However, with this right comes the responsibility to be upfront about conflicts of interest, and indeed any entanglements with industry, just as we indicate affiliations and grant numbers on research outputs. It also comes with the ability, as academics, to reject any such conflicts: for example, to remove ourselves from compromising relationships if we do not consent to them. Violations of these include our colleagues deciding to implicate us in scientifically questionable conduct, or our ruling bodies deciding for us that our university can outsource IT infrastructure to Microsoft, and not only that, but without any possibility of halting the AI nonsense mushrooming up in Outlook.
Embracing AI products normalises what were once serious transgressions, making them acceptable and even desirable: stealing ideas from others, erasing authors, infringing authorship rights, all while harming our planet. In many ways it was inevitable that we would pen an Open Letter, and in fact we may even be too late for some students, who submit AI slop in their final year yet cannot format their references according to a style manual, nor credit authors for their ideas. Any technological future, any academic pursuit of AI, will have to contend with these events. We as critical scholars of AI, as social, behavioural, and cognitive scientists, will have to pick up the pieces left behind by the technology industry’s most recent attack on academia. We only hope there are indeed pieces left; and no matter what, we will continue to fight for space for students to hone their skills, to think, write, program, and research, unimpeded and independently.

#### The Open Letter: Action

All this has had a visceral effect on us as academics, which spilled out in the _Open Letter: Stop the Uncritical Adoption of AI Technologies in Academia_, a process that also involved inviting on board other Netherlands-based colleagues who wished to lend their words to the text and sign their names below it. At present, _anybody_ from _anywhere_ who agrees can still sign it, but we first wanted to concentrate our efforts on the Netherlands, to really make a difference at our local and national levels. We also hope academics in other countries join in to pressure their respective organisations.
A letter in a similar spirit also appears here: _An open letter from educators who refuse the call to adopt GenAI in education_. In order to inform and build solidarity with allied colleagues, we have captured our counterarguments to the AI industry's rhetoric in a position piece, _Against the Uncritical Adoption of 'AI' Technologies in Academia_. In it we analyse misuse of terminology, debunk tropes, dismantle false frames, and provide helpful pointers to relevant work.

#### References

Guest, O. (2025). What Does 'Human-Centred AI' Mean? _arXiv preprint arXiv:2507.19960_. DOI: https://doi.org/10.48550/arXiv.2507.19960

Guest, O., Suarez, M., Müller, B., et al. (2025). Against the Uncritical Adoption of 'AI' Technologies in Academia. _Zenodo_. DOI: https://doi.org/10.5281/zenodo.17065099

McCarthy, J., et al. (1955). A proposal for the Dartmouth summer research project on artificial intelligence. URL: http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf

Suarez, M., Müller, B., Guest, O., & van Rooij, I. (2025). Critical AI Literacy: Beyond hegemonic perspectives on sustainability [Substack newsletter]. _Sustainability Dispatch_. DOI: https://doi.org/10.5281/zenodo.15677840 URL: https://rcsc.substack.com/p/critical-ai-literacy-beyond-hegemonic

van Rooij, I. (2022). Against automated plagiarism. DOI: https://doi.org/10.5281/zenodo.15866638

van Rooij, I. (2023a). Stop feeding the hype and start resisting. DOI: https://doi.org/10.5281/zenodo.16608308

van Rooij, I. (2023b). Critical lenses on 'AI'. URL: https://irisvanrooijcogsci.com/2023/01/29/critical-lenses-on-ai/

van Rooij, I. (2025). AI slop and the destruction of knowledge. DOI: https://doi.org/10.5281/zenodo.16905560
www.civicsoftechnology.org
Reposted by Deneen Senasi
hypervisible.blacksky.app
“The opera will tell two parallel stories: one set in 2030 focusing on a tech entrepreneur who has created a humanoid AI, the other set in 1813 telling the story of the luddite leader, George Mellor, as he faces the gallows.”
‘Let’s learn from that history’: opera looks to luddites for how to deal with AI
New work by Ben Crick and Kamal Kaan suggests we could benefit from knowing more about the ‘machine-wreckers’
www.theguardian.com
Reposted by Deneen Senasi
irishlittimes.bsky.social
Take a moment to enjoy This Moment by Eavan Boland
Reposted by Deneen Senasi
hypervisible.blacksky.app
OpenAI is essentially a social arsonist, developing and releasing tools that hyper-scale the most racist, misogynistic, and toxic elements of society, lowering the barriers for all manner of abuse. The so-called guardrails make a pinky swear look like an ironclad contract.
This social app can put your face into fake movie scenes, memes and arrest videos
The new Sora social app from ChatGPT maker OpenAI encourages users to upload video of their face so their likeness can be put into AI-generated clips.
www.washingtonpost.com
Reposted by Deneen Senasi
luckytran.com
"If we lose hope, we're doomed."

We must continue Dr. Jane Goodall's mission and all fight for the future of the planet.
Reposted by Deneen Senasi
irishlittimes.bsky.social
"You become a good writer just as you become a good carpenter: by planing down your sentences." Anatole France
Reposted by Deneen Senasi
richraho.bsky.social
Pope releases World Communications Day theme: “Preserving Human Voices & Faces” writing: “Humanity today has possibilities that were unimaginable a few years ago…while these tools offer efficiency & reach, they cannot replace the uniquely human capacities for empathy, ethics & moral responsibility.”
Reposted by Deneen Senasi
rboomhower.bsky.social
“We cannot live only for ourselves. A thousand fibers connect us with our fellow men; and among those fibers, as sympathetic threads, our actions run as causes, and they come back to us as effects.”
Herman Melville, who died on this day in 1891
Reposted by Deneen Senasi
kpw1453.bsky.social
The bronze head of the goddess Sulis Minerva, which was discovered in 1727 during the construction of a sewer under Stall Street in Bath. Now part of the museum collections at The Roman Baths in Bath. 📸 My own. #FindsFriday #RomanBritain #Bath
Reposted by Deneen Senasi
phdhurtbrain.bsky.social
“The reward of labour is life. Is that not enough?”

-Morris, News from Nowhere
William Morris wallpaper pattern with cute ribbons and filigreed leaves in such a way that it makes you want to work a lot
Reposted by Deneen Senasi
afamiglietti.bsky.social
An elegant weapon for a more civilized age
Card catalog in black and white