Alasdair Stewart
@abestew.bsky.social
310 followers 380 following 120 posts
Sociologist | Socialist | Neurodivergent | Linux & FLOSS advocate | he/they
abestew.bsky.social
22 items received so far and the only thing not going back is the Clarks shoes, which despite the decrease in quality over the years - especially for the price - you can at least depend on for sizing once you find the right size.
abestew.bsky.social
Comparing them side by side to figure out exactly where/how the precise measurements listed online are taken, it looks like my old 32 inch chinos are equivalent to 30 inch chinos from the place I ordered. I can't even force 30 inches of measuring tape around my waist.
abestew.bsky.social
Ok this is getting ridiculous. Ordered 31 inch chinos. They arrived today and are significantly larger in the waist than my old 32 inch chinos, and even larger than my old 32 inch jeans. Have all clothing sizes inflated?
abestew.bsky.social
With another "small" jacket arriving that would need to be above average height to fit, it's looking like my only option is going to be buying a sowing machine... Can't end up looking any worse making my own clothes than look with mix of 10+ year old clothes and clothes that look 2 sizes too big...
abestew.bsky.social
Buying new clothes - or trying to - is a reminder of why I never buy anything. Men's "small" jacket & it's a tent on me with sleeves nearly fully covering my hands. "Small" seems to mean 5ft 10inch at the least. This is why I have a 25 year old jacket that's falling apart that I've been unable to replace.
Reposted by Alasdair Stewart
lornaslater.bsky.social
We can build a people's railway for the 21st century 🛤️
abestew.bsky.social
My slides from today's AI Symposium CoSS Breakout Session:

"Prompting Against the Grain - Generating Learning in AI Interactions"

A polemic on critically engaging with genAI in teaching to deflate hype & challenge claims of an inevitable genAI future.

sgsssonline.github.io/genai-guidan...
Critical GenAI Literacies – Prompting Against the Grain
sgsssonline.github.io
abestew.bsky.social
It's the same in Maryhill: a big stretch of Maryhill Road is flags, with more scattered down some side streets.
abestew.bsky.social
To repeat, there are ways to get semi-decent rough feedback with genAI. The perpetual problem is that the whole genAI business model - colossal debt now for (apparent) untold riches in the distant future - relies upon misrepresenting genAI models as more competent and capable than they actually are.
abestew.bsky.social
All this prompt is going to do is give students a mix of OK and terrible advice on what to change, set false expectations of their potential grade, and increase the number of emails where students use genAI to try and challenge the grade they received.
abestew.bsky.social
Other obvious issues:
- It is a terrible prompt lacking any information about discipline, course, level of study, ILOs, etc.
- I doubt ChatGPT has many essays + high quality feedback in its training data.
- Sycophancy results in overly positive feedback on average.
- Doesn't teach feedback literacy.
abestew.bsky.social
ChatGPT now has a 'use cases' page for students, where it is unsurprising to find the first is also the worst.

If you maintain awareness of its flaws, genAI can be a semi-decent sounding board, but it gives some truly awful (and opinionated) advice on what to improve.

chatgpt.com/use-cases/st...
abestew.bsky.social
Obsidian graph timelapse animation of my notes created and links made between them over past five years.
abestew.bsky.social
Hot take: Too many academics in their critiques of genAI share the same impoverished understanding of technology as the tech bros.
abestew.bsky.social
You can also see in its 'thinking' that it is set up to assume and proceed as much as possible. When I include instructions to clarify specific information before proceeding, I can see in the thinking summaries where it 'decides' it can assume & proceed regardless.
abestew.bsky.social
Even when the instructions/prompts are framed explicitly and clearly as "review", "explain", or similar, without repeated "DO NOT ..." reminders 5 Thinking can still persist with the default "here is a rewrite of your text" behaviour, with minimal (or no) explanation for its changes.
abestew.bsky.social
The worst aspect of GPT-5, in my opinion, is that its 'eagerness' to make assumptions and do work on the user's behalf is ridiculous. The "delegate-by-default" behaviour was always bad, but it has reached a whole new level of awfulness, especially with ChatGPT 5 Thinking.
abestew.bsky.social
Ridiculous over-extrapolation of findings from a pre-print paper that looked at a very specific form of genAI use, and for which the authors have already released a FAQ in response to the unhinged media reporting.

www.media.mit.edu/projects/you...
Project Overview ‹ Your Brain on ChatGPT – MIT Media Lab
Check project's website: https://www.brainonllm.com. With today's wide adoption of LLM products like ChatGPT from OpenAI, humans and businesses engage and u…
www.media.mit.edu
Reposted by Alasdair Stewart
abestew.bsky.social
Edit: Just noticed I forgot to add the final screenshot. 🤦
abestew.bsky.social
AI is not a neutral tool. It is developed and deployed with intended use cases in mind - and also to entrain users to apprehend and use it in specific ways. In our discussions with students, we need to instead tackle head on how that hard-trained default behaviour undermines learning and integrity.
abestew.bsky.social
When this is how AI responds, it's no wonder students report understanding the principles but anxiety over whether the way they are using it complies.

University guidance needs to end the 'use responsibly' messaging - which, like the alcohol and gambling industries, AI companies are also keen to push.
abestew.bsky.social
A simple naive question and it writes a full draft with citations added.

The two options are now effectively "do you want me to plagiarise that for you" or "do you want to manually mask the plagiarism" - phrased in a way that suggests it's fine to paraphrase & submit AI slop.
abestew.bsky.social
Next it offers to identify relevant literature. Whilst it speaks at times of a 'research base' and 'reading list', it explains where to cite for what claim and, without having read anything at all, produces a reference list 'you can plug straight into your essay'. And it gets worse from here...
abestew.bsky.social
You can now basically just prompt "OK" for it to continue doing the work for you.

Across each step it frames the responses in a way that would suggest this is all perfectly fine. Indeed, it starts blurring authorship by suggesting the user merely slots content in and speaks of the thesis it generated as "your thesis".