
Amy Donovan

H-index: 25
Sociology 23%
Political science 23%
donmoyn.bsky.social
Nature ran a piece finding that 25 million people could die as a result of ending USAID. This puts Trump and Musk in the category of the most brutal leaders of the 20th century in terms of unnecessary lives lost.
Nature graph documenting estimated lives lost from USAID. https://www.nature.com/articles/d41586-025-01191-z

Reposted by: Amy Donovan

fossilheads.bsky.social
Our opening show is getting closer! And I'm going to have arrived back in the UK at Victoria coach station at 4.30am that SAME DAY because screw future me, that's why #fossilheads #climatecrisiscabaret #fossilheadsclimatecrisiscabaret #climatecomedy #lesbianclimateangst #climatecrisis #artactivism

Reposted by: Amy Donovan

organiccarbon.bsky.social
We are seeking two creative, independent, and intellectually curious researchers eager to contribute innovative scholarship in the social sciences and/or environmental humanities to lead core elements of the PEATSENSE project.

Reposted by: Amy Donovan

organiccarbon.bsky.social
Amazing University of Bristol opportunities for ECRs, working with my wonderful colleague Naomi Milner: PEATSENSE (Diverse Knowledges and Sensing Practices in Peatlands for Inclusive Climate Futures)

www.bristol.ac.uk/jobs/find/de...

Reposted by: Amy Donovan

bylines.scot
America … any chance we could get him back in the White House?

#JedBartlett
#TheWestWing
#MartinSheen
#Democracy
jksteinberger.bsky.social
As promised, here are the slides I shared with students to convince them NOT to use ChatGPT and other artificial stupidity.

TL;DR? AI is evil, unsustainable and stupid, and I'd much rather they use their own brains, make their own mistakes, and actually learn something. 🪄
NO CHATGPT or other artificial stupidity: motivation
First, clarity on distinguishing AIs:
Non-generative: grammar aid, translation, dictionary, text-to-audio (e.g. Natural Reader): no problem
As long as you use the appropriate tools (least intensive in data and server energy use).
Why? Because you provide the content. Your brain is doing the most important work
Generative: ChatGPT & Co. 
You only supply the prompt, the AI supplies the content.
Why is this delegation of work problematic?
3 domains: ethical, environmental, intellectual engagement.

(Caveat: generative is probably ok for computer programming, where it can be useful and save time. Not relevant to this class.)
1) AI and ethics
Mass theft of anything and everything
«learning» on books, articles, blogs, social media, images, music, cultural production, without the permission of authors/creators, and leading to their mass joblessness. Profits are not redistributed to originators.
Permanent destruction of the mental health of underpaid precarious tech workers in the Global South (Kenya, Philippines …):
«correction» to avoid the production of violent and paedophilic content etc.: tech workers are obliged to watch and correct extremely violent content for days on end, leading to severe psychological suffering and trauma, from which recovery is doubtful. Little or no compensation (certainly not at the level of the suffering inflicted).
In short, an industry built on theft of real human creation and sacrifice of real human health, profiting a few megafortunes. 
2) AI and (un)sustainability

Massive consumption of electricity, water, server capacity for generative AI. 
Outcome: keep fossil fuel companies in business, using up new renewable capacity, without any satisfaction of basic human needs.
Massive misappropriation of the finance necessary for climate and ecological action (renewable generation, efficiency and retrofit for buildings, public transit, infrastructures for cycling etc) towards AI industry. 
Overall: undermine climate action, reinforce fossil industry, waste resources necessary for human development. 
3) AI and intellectual engagement

First, what learning is (or should be) about:
The goal should not (only) be the reproduction of «correct» knowledge,
But mainly personal engagement and experience of thinking about topics of interest. Personal engagement = using one’s own brain. 
The most important activity for learning and intellectual engagement is the experience of making one’s own mistakes, by trial and error, corrections based on new ideas, starting over again. Learning to recognise nuances, knowledge gaps, better explanations 
This kind of learning is possible only through using your own brain, not AI. 
Also, AIs are not «intelligent». At all.
They simply reproduce pre-existing patterns. They «bullshit», invent false references, false facts, false data, simply because those sound plausible. VERY DANGEROUS. 
If you learn how to NOT use AI, and how to research facts and data on your own, this will serve you and your communities for the rest of your life.
mikegalsworthy.bsky.social
🔥 AMAZING.

Today, the Baltic states disconnected their power grids from that of Russia.

It’s taken years of prep. What a moment.

Reposted by: Amy Donovan

mattepritchard.bsky.social
Several projects seeking graduate students in the Department of Earth & Atmospheric Sciences at Cornell University. See link for details. Application deadline is Jan. 1

www.eas.cornell.edu/eas/programs...
Graduate Programs | Earth and Atmospheric Sciences
Check out our graduate programs!
www.eas.cornell.edu

Reposted by: Amy Donovan

pdcguy.bsky.social
🚨 Fully funded PhD opportunity at Edinburgh to investigate landslide triggers in deglaciating volcanoes and mountainous environments. tinyurl.com/yvwtppss
with Dr. Symeon Makris (BGS), Dr. Max Van Wyk de Vries (University of Cambridge), Dr. Beatriz Recinos, and Prof. Simon Mudd (UoE) #naturalhazards
The image shows a satellite view of a landslide that was deposited on a glacier in Patagonia (Chile)
