Peter Tarras
@petertarras.bsky.social
2.7K followers 1.9K following 1.2K posts
Postdoc @jacculturelmu.bsky.social | Blog: http://medisi.hypotheses.org | Book History | Manuscript Studies | Provenance | MENA Intellectual History | SciCom | #FirstGen https://www.naher-osten.uni-muenchen.de/personen/wiss_ma/peter-tarras/index.html
Reposted by Peter Tarras
petertarras.bsky.social
biblioracle.bsky.social
I strongly urge everyone to not just read this warning from @marcwatkins.bsky.social, but heed it, and be vocal and forceful pushing back against using AI to grade student writing. This must be anathema if we're going to have a world where learning means something. substack.com/inbox/post/1...
The Dangers of using AI to Grade
Nobody Learns, Nobody Gains
substack.com
petertarras.bsky.social
bcnjake.bsky.social
"There's no ethical use case for AI in the classroom" has become my personal Carthago delenda est.

I even take time to explain to my students how we won't use it in class because it's dehumanizing. To their credit, they seem to get it.

CETERUM AUTEM CENSEO INTELLIGENTIA ARTIFICIALIS ESSE DELENDAM
AI is dehumanizing

I’ve saved the most important part for last. AI is dehumanizing. Using AI means saying that the people involved in what you’re trying to do don’t matter. Because AI platforms are built on theft, using AI says that the people whose work is being stolen don’t matter. If I used AI to grade your work, which is now possible with Canvas integrations, I would be saying your work is not worth taking seriously enough to read myself. It’s saying you don’t matter enough to take seriously. A world where AI grades AI-generated submissions from an AI-generated assignment is not a world where people and the work they do matter, and I refuse to live in that world.

The dehumanizing aspect of AI is even worse when you consider the effects of AI on you as a student. Using AI as a substitute for doing the work yourself robs you of your voice. It prevents you from developing your own sense of who you are, what you believe, and how you express that in your own unique way and replaces it with statistically generated slop literally incapable of forming a novel thought. In a world that works to dehumanize us every day—to say who we are and the relationships we form don’t matter—I don’t want to add to that work. I don’t want to do this because the relationships you form matter. Your voice matters. You matter.

In 19th-century England, a group of artisan weavers banded together to fight against mechanization. Tools like the water frame and spinning jenny allowed factory owners to produce a worse product at a price so low that the quality of the finished product didn’t matter. These artisans, the Luddites, weren’t opposed to technology. They were opposed to technology dehumanizing people and denying them dignity. Like the Luddites, I’m not opposed to technology and embrace it when it makes our lives better. But like the Luddites, I have an obligation to resist technology that dehumanizes people in my community, and I think AI does that. So, there’s no AI in this class.
petertarras.bsky.social
eicathomefinn.bsky.social
'I’ve seen it said that OpenAI’s motto should be “better to beg forgiveness than ask permission”, but that cosies it up preposterously. Its actual motto seems to be “we’ll do what we want and you’ll let us, bitch”.'

Marina Hyde in fine fettle.
It’s Sam Altman: the man who stole the rights from copyright. If he’s the future, can we go backwards? | Marina Hyde
His AI video generator Sora 2 has been reviled for pinching the work of others. One giant leap for Sam: for everyone else, not so much, says Guardian columnist Marina Hyde
www.theguardian.com
Reposted by Peter Tarras
thetolkienist.com
"Life without God: grandson of JRR #Tolkien on how losing his own faith shaped his novels on Catholicism" [Catholic Herald]

"... I still have two circular prayer cards, the size of coins, on which he inscribed the Pater Noster ..."

thecatholicherald.com/article/a-wo...
Reposted by Peter Tarras
bryonycoombs.bsky.social
I think my previous post got censored, but I do think you need to see this little guy and his enormous tooth - as he points to where it hurts! 🦷
(From a medical ms - Edinburgh CRC ms 314)
Reposted by Peter Tarras
rurouniphoenix.bsky.social
Anyone have access to Sebastian Brock's translation of Jacob of Serugh's Homily on the Seven Sleepers?
Reposted by Peter Tarras
svanimpe.bsky.social
#EarlyModern meme! #BookHistory
The bottom of a page of printed text, showing three hands (or printers' fists) pointing at each other. The spiderman meme: three spidermen pointing at each other.
Reposted by Peter Tarras
petertarras.bsky.social
Now I'm interested: Who else had to undo 'AI' in academic texts? What were your experiences with this?
kristelzilmer.bsky.social
My experience earlier this year: I had to spend a few days rescuing a text that I myself had written and which had then been sent through LLM by a colleague and turned into something completely different. Such a huge waste of time and mental energy.
petertarras.bsky.social
We see this in other fields too, where skill sets are no longer applied to creating things, but to correcting ‘AI’ slop. I think this is definitely a huge waste of time, and the idea that ‘AI’ could save time in research is a big lie. But perhaps it’s important to make this experience ... (5/x)
Reposted by Peter Tarras
ochjs.bsky.social
With the new term commencing, take a look at our Michaelmas 2025 Term Programme!

Here you'll find activities sponsored by the OCHJS, the Centre of Hebrew and Jewish Studies (AMES, University of Oxford) and the Faculty of Theology and Religion!

www.ochjs.ac.uk/news/term-pr...

#oxford
Reposted by Peter Tarras
jensfoell.de
Wait, is that headline saying that they're giving him a discount?
Reposted by Peter Tarras
fongsaiyuk.bsky.social
Yes, I can confirm it is also the case in the field of biology and medicine. In my opinion, increased use of AI is leading to less accuracy. For me that is very frustrating.
petertarras.bsky.social
Did they respond and offer reasons why they had your text altered by an LLM?
Reposted by Peter Tarras
petertarras.bsky.social
Recently gave a colleague feedback on a paper that was partly written with ChatGPT. What did I learn from this and how should we deal with cases like this? Here are some thoughts 🧵(1/x)
Reposted by Peter Tarras
laurenrndll.bsky.social
CFP: “Shaping the Word: the Form and Use of Biblical Manuscripts in the Early Medieval West” at Durham University in July 2026. We are interested in a wide range of papers exploring ways in which scriptural texts (produced roughly c. 500-1000) were presented and used.
Durham University, 2–5 July 2026
From c. 500–1000, Christian scriptures were produced and used in a diverse range of forms and contexts. A manuscript may include a single biblical text (the psalter, a gospel), a collection of texts (the Hexateuch, the gospels), or, rarely, a complete “New Testament” or “Bible” in the modern sense. The distinctiveness of a manuscript is shown by its content and textual affiliation, its palaeographical and codicological characteristics, and its paratextual features – from illustrations of biblical narratives, author portraits, and illuminated lettering to canon tables, capitula, prefatory materials, and glosses. Once in circulation, a manuscript’s contexts of use may also vary. Different uses correspond to different users with distinct and perhaps conflicting priorities or goals. Production and use(s) may occur at the same site, or at far-distant times and places.

This conference aims to explore topics related to both the physical presentation and the use of scriptural manuscripts produced in the Early Medieval period (c. 500–1000 CE). We welcome paper proposals from scholars working in all areas of this field, including PhD students. Whatever the specific topic, priority may be given to papers that also relate it to the wider focus of the conference on both “form” (or “production”) and “use”. We hope to be able to cover presenters’ full conference costs with the exception of travel.
Titles and abstracts of proposed papers should be submitted to Lauren Randall (lauren.m.randall@durham.ac.uk), copied to Francis Watson (francis.watson@durham.ac.uk), by Monday 17 November. Abstracts should not exceed 150 words. Papers should be 25 minutes, allowing 20 minutes for discussion. There will be keynote papers/presentations. Please contact us if you have any questions!

This event forms part of our sub-project “Text, Format, and Reader”, focused on Codex Amiatinus and funded by the Glasgow-based “Paratexts Seeking Understanding” project (Templeton Religion Trust).
Reposted by Peter Tarras
hopesteffen.bsky.social
This is an excellent thread on the issue of using generative AI in research. One very important point is that evaluating research & research-based arguments requires work done by human agency. A machine cannot change its mind, because a machine has no mind.
petertarras.bsky.social
Recently gave a colleague feedback on a paper that was partly written with ChatGPT. What did I learn from this and how should we deal with cases like this? Here are some thoughts 🧵(1/x)
petertarras.bsky.social
Because there are negative consequences, even if not everyone feels them yet. So we should be honest, not blame, but express our perspective: Hey, when you give me something like this to read, I feel like it's not really from you and I feel like we’re wasting our time. (7/end)
petertarras.bsky.social
... because that’s what we need to discuss with colleagues and administrators. Why is ChatGPT being used? What knowledge do people have about how it works? Do they understand why its use devours more time and not less? What are the negative consequences, including for our networks of trust? (6/x)