Liam Hogan
@liamhogan.bsky.social
18K followers 3.7K following 440 posts
Librarian & Historian. Twitter migrant. Researching Slavery - Memory - Power. Blog > https://medium.com/@Limerick1914/ hcommons > https://hcommons.org/members/liamhogan/
Pinned
liamhogan.bsky.social
Can’t help but feel that we are living in an era of overwhelming anti-intellectualism on every front, with AI being the corporate vanguard of this cultural emptiness.
Reposted by Liam Hogan
ndrew.bsky.social
every single tech idea is like “soon our robots will be capable of playing catch with your kid, freeing you up to spend more time working on your employers’ spreadsheets”
Reposted by Liam Hogan
jessdkant.bsky.social
One curious thing on this site about speaking out against the encroachment of LLMs is that inevitably you get accused of being anti-tech. I don’t hate technology. I’ve used machine learning in my own code before. But I also recognize that oligarchs are so hellbent on pushing this tech for a reason.
Reposted by Liam Hogan
sundersays.bsky.social
Guardian investigation into Facebook groups with 600k members in which the kind of extreme dehumanising language that socialises racist violence is rife. Facebook has become more permissive towards extreme content in the last year.
www.theguardian.com/world/2025/s...
Far-right Facebook groups are engine of radicalisation in UK, data investigation suggests
Rioters were influenced by network that exposes hundreds of thousands of Britons to racist disinformation, Guardian research indicates
www.theguardian.com
Reposted by Liam Hogan
sonjadrimmer.bsky.social
“AI” isn’t a tool or technology or even a cluster of technologies with a misleading name. It’s the infrastructure at the foundation of a form of capitalism dependent on data brokering. We should be teaching our students about this and not teaching them about “responsible” use.
Reposted by Liam Hogan
tressiemcphd.bsky.social
It is, far and away, the most challenging thing I’ve encountered since entering the academy. And that is saying a lot. I might be working on this but I keep putting it aside because I’m not medicated enough to describe how demoralizing it all is.
mcopelov.bsky.social
We have allowed the lazy, grifting Silicon Valley charlatans into the front door, & in doing so, we have learned just how many of our own colleagues & administrators simply are not interested in the actual business of education. It's incredibly demoralizing.

bsky.app/profile/mcop...
erinkaylockwood.bsky.social
Thanks so much for your reporting on this, Ben. I just got an email from my campus's IT office triumphantly advertising free student access to four different AI models -- at the same time as we have a hiring freeze, caps on grad enrollment, and a 7% budget cut -- and wanted to scream.
Reposted by Liam Hogan
tamaranopper.bsky.social
Alice Walker said, “When we find our place, we know. Everything else can seem a distraction. We settle in to enjoy the beauty of Life itself.”
Reposted by Liam Hogan
davidgilbert.bsky.social
NEW

WIRED led the way in reporting on Elon Musk's efforts to dismantle the US government. My colleagues and I spoke to 100s of employees at dozens of agencies to understand what happened.

This is the definitive story of DOGE as told by those who experienced it

www.wired.com/story/oral-h...
The Story of DOGE, as Told by Federal Workers
WIRED spoke with more than 200 federal workers in dozens of agencies to learn what happened as the Department of Government Efficiency tore through their offices.
www.wired.com
Reposted by Liam Hogan
juliametraux.bsky.social
I really hope there's a correction on it saying why they edited the story. It was the right thing to do, obviously, but accountability for harmful writings is the right thing too.
jael.bsky.social
politico has now edited their story to eliminate references to “the autism problem” and deleted their bsky post about it

someone got caught
Reposted by Liam Hogan
roxanegay.bsky.social
I cannot tell you how grim the AI in higher ed situation is. Many of the students have completely surrendered to letting AI do their homework, badly, I might add. How do you fix this? Truly, what the hell do we do, beyond what grading can address, which isn't a solution?
Reposted by Liam Hogan
olivia.science
Finally! 🤩 Our position piece: Against the Uncritical Adoption of 'AI' Technologies in Academia:
doi.org/10.5281/zeno...

We unpick the tech industry’s marketing, hype, & harm; and we argue for safeguarding higher education, critical thinking, expertise, academic freedom, & scientific integrity.
1/n
Abstract: Under the banner of progress, products have been uncritically adopted or even imposed on users — in past centuries with tobacco and combustion engines, and in the 21st with social media. For these collective blunders, we now regret our involvement or apathy as scientists, and society struggles to put the genie back in the bottle. Currently, we are similarly entangled with artificial intelligence (AI) technology. For example, software updates are rolled out seamlessly and non-consensually, Microsoft Office is bundled with chatbots, and we, our students, and our employers have had no say, as it is not considered a valid position to reject AI technologies in our teaching and research. This is why in June 2025, we co-authored an Open Letter calling on our employers to reverse and rethink their stance on uncritically adopting AI technologies. In this position piece, we expound on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and to b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity. We include pointers to relevant work to further inform our colleagues.

Figure 1. A cartoon set-theoretic view of various terms (see Table 1) used when discussing the superset AI (black outline, hatched background): LLMs are in orange; ANNs are in magenta; generative models are in blue; and finally, chatbots are in green. Where these intersect, the colours reflect that, e.g. generative adversarial network (GAN) and Boltzmann machine (BM) models are in the purple subset because they are both generative and ANNs. In the case of proprietary closed-source models, e.g. OpenAI’s ChatGPT and Apple’s Siri, we cannot verify their implementation and so academics can only make educated guesses (cf. Dingemanse 2025). Undefined terms used above: BERT (Devlin et al. 2019); AlexNet (Krizhevsky et al. 2017); A.L.I.C.E. (Wallace 2009); ELIZA (Weizenbaum 1966); Jabberwacky (Twist 2003); linear discriminant analysis (LDA); quadratic discriminant analysis (QDA).

Table 1. Below, some of the typical terminological disarray is untangled. Importantly, none of these terms are orthogonal, nor do they exclusively pick out the types of products we may wish to critique or proscribe.

Protecting the Ecosystem of Human Knowledge: Five Principles
Reposted by Liam Hogan
claireljones.bsky.social
This is it. Overwhelming anti-intellectualism. Everywhere, all the time
liamhogan.bsky.social
Can’t help but feel that we are living in an era of overwhelming anti-intellectualism on every front, with AI being the corporate vanguard of this cultural emptiness.
Reposted by Liam Hogan
disabilitystor1.bsky.social
A college English class where they’re told to use AI to complete assignments. Why bother teaching at all? Especially the languages/literature. I’m sorry but I will judge any professor who does this, especially in fields like English and history
linz13.bsky.social
My daughter has a college English class right now where they’re told to use AI to complete assignments. SHE’S HORRIFIED AND DISGUSTED. The tech bros want you to voluntarily relinquish your ability to think and give up all creativity to computers. So people are dumb and dependent on their products
Reposted by Liam Hogan
ernestopriego.com
Simply astonishing. Maybe Lecturer A should not have to mark over 100 essays in a two-week window in the first place? Invest in qualified staff and reduce impossible workloads FFS www.kcl.ac.uk/about/strate...
Screenshot. King's College London page. Examples of effective practice

The following scenarios follow the above guidelines and offer insights into ways that academic staff can use AI transparently and in an assistive capacity, always ensuring human oversight and judgment remain central.
Scenario A – Scaling feedback while maintaining quality

Lecturer A is responsible for marking over 100 essays within a two-week window.

Conscious of the limitations this workload places on the depth of individual feedback, they adopt a hybrid approach using their university’s approved or supported LLM tool, Copilot.

Without ever uploading student work directly, Lecturer A composes an anonymised summary for each student, noting which marking criteria were met and the approximate percentage achieved for each. They input this summary alongside the official rubric into Copilot, prompting it to generate supportive, criterion-referenced feedback. This feedback is then carefully reviewed, adapted, and personalised before being uploaded to the marking platform.

Students are made aware of this process in advance and shown a demonstration, reinforcing transparency and trust.
Reposted by Liam Hogan
eicathomefinn.bsky.social
Vacuous, generic and scattershot advice for incoming 1st years on the use of AI at university. Especially liking 'Chin recommends giving it class notes and asking it to generate practice exam questions.' Surely we can't be the only programme that supplies students with past & practice exams?
How to use ChatGPT at university without cheating: ‘Now it’s more like a study partner’
The ubiquitous AI tool has a divisive effect on educators with some seeing it a boon and others a menace. So what should you know about where to draw the line between check and cheat?
www.theguardian.com
Reposted by Liam Hogan
audrelawdamercy.bsky.social
"On the last episode of his show before he was killed," Kirk told his audience, falsely, that “one in 22 Black men will be a murderer in their lifetime” and that “by age of 23, half of all Black males have been arrested and not enough of them have been arrested.”
Reposted by Liam Hogan
anildash.com
Good news about the cutting-edge AI tools from the $3 trillion company.
Google AI search result for “was 2010 fifteen years ago?” returning: “No, 2010 was 14 years ago, not 15 years ago. The current year is 2025, and to find the number of years ago 2010 was, you subtract 2010 from 2025, which equals 14. 
Current Year: 2025
Past Year: 2010
Years Ago: 2025 - 2010 = 14 years”
Reposted by Liam Hogan
marygillis.bsky.social
AI's best use case will always be scams. It's scamtech, top to bottom. Everything about it is crooked. Its bedrock is theft and its legacy is the destruction of the environment, both physical and virtual.

Even most of its 'legitimate' uses are a form of lying.
"I wrote this."
"I created this."
Reposted by Liam Hogan
devilstower.bsky.social
Microsoft copilot for Excel has a warning not to use it for “business critical” calculations because the answer may not be accurate.

Would someone explain why I might *ever* want to use a spreadsheet where results are unreliable?
Reposted by Liam Hogan
earlymodernjohn.bsky.social
When the dust settles, and if universities have meaningfully survived, it will be worth asking how institutions usually so resistant to thoroughgoing change chose to leap with both feet into an untested technology they didn't understand and didn't know how to use.
kbgraubart.bsky.social
Universities doubling down as public sentiment shifts away from dependence upon AI. This is the problem with the buy-in.
theonash.bsky.social
The University of Michigan is now claiming that students have an ‘ethical responsibility’ to use AI.
Reposted by Liam Hogan
bvrlytweetmaker.bsky.social
I've been thinking a lot about how I really hate the implication that AI skeptics are luddites because most of the AI skeptics I know are relatively tech savvy while the biggest AI champions I know can barely use email
ksandrarpg.bsky.social
Funny how when it comes to AI people are all "adapt or be left behind" but when it comes to remote work the same people insist that things must go back to how they were at all cost.