Dr. Mag 🌿👾
@nobugsnous.bsky.social
1.3K followers 770 following 310 posts
pissed off, funny, and warm • views don't represent my employer • 👻
Reposted by Dr. Mag 🌿👾
emilymbender.bsky.social
Here's a rule of thumb: If "AI" seems like a good solution, you are probably both misjudging what the "AI" can do and misframing the problem.

>>
Comment by Tom Dietterich on a LinkedIn post reading:

"You can't "test-in quality" in engineering; you can't "review-in quality" in research. We need incentives for people to do better research. Our system today assumes that 75% of submitted papers are low quality, and it is probably right (I'll bet it is higher). If this were a manufacturing organization, a 75% defect rate would result in bankruptcy.

Imagine a world in which you could have an AI system check the correctness/quality of your paper. If your paper passed that bar, then it could be published (say, on arXiv). Subsequent human review could assess its importance to the field. 

In such a system, authors would be incentivized to satisfy the AI system. This will lead to searching for exploits in the AI system. A possible solution is to select the AI evaluator at random from a large pool and limit the number of permitted submissions. I imagine our colleagues in mechanism design can improve on this idea."

Original:
https://www.linkedin.com/feed/update/urn:li:activity:7381685800549257216/?commentUrn=urn%3Ali%3Acomment%3A(activity%3A7381685800549257216%2C7382628060044599296)&dashCommentUrn=urn%3Ali%3Afsd_comment%3A(7382628060044599296%2Curn%3Ali%3Aactivity%3A7381685800549257216)
Reposted by Dr. Mag 🌿👾
nobugsnous.bsky.social
It’s so wild living in an age of necromancers.
Reposted by Dr. Mag 🌿👾
olivia.science
New preprint 🌟 Psychology is core to cognitive science, and so it is vital we preserve it from harmful frames. @irisvanrooij.bsky.social & I use our psych and computer science expertise to analyse and craft:

Critical Artificial Intelligence Literacy for Psychologists. doi.org/10.31234/osf...

🧵 1/
[Images: cover page, Table 1, and Table 2 of Guest, O., & van Rooij, I. (2025, October 4). Critical Artificial Intelligence Literacy for Psychologists. https://doi.org/10.31234/osf.io/dkrgj_v1]
nobugsnous.bsky.social
You hate to see a typo in a post like this -_-
Reposted by Dr. Mag 🌿👾
irisvanrooij.bsky.social
Recently I was clearly someone’s killjoy, and the emotional force of the attack/defense (whichever it is) was so intense, I felt as if hit by a truck. All because I said if X violates laws, then better avoid X in research.

Seems common sense 🤷🏻‍♀️
nobugsnous.bsky.social
That's my basic understanding too, though, admittedly, I could be uninformed.
nobugsnous.bsky.social
I appreciate that small language models have fewer problems than large language models, especially existentially re: the environmental crisis! That matters! However, I don't see how small models circumvent IP problems or how they're relevant to convos that are largely about Big Tech encroachment.
nobugsnous.bsky.social
I find the move to talk about small language models in conversations about LLMs to be really strange?
Reposted by Dr. Mag 🌿👾
erinbartram.bsky.social
If you are a supporter and reader of @contingent-mag.bsky.social one of the biggest things you can do to help us at the moment is get this CFP to the NTT folks in your life. The fracturing of social media has made it very difficult to get the word out esp. to adjuncts and VAPs.
CFP: A Time of Monsters
The monster has been here all along. It is a historical constant that manifests in wildly different ways across time, place, and culture. Whatever form it takes, the monster claws at categories; it un...
contingentmagazine.org
Reposted by Dr. Mag 🌿👾
shengokai.blacksky.app
Putting aside the question of AI personhood, a slur requires a history of dehumanization.

In order to function, the slur has to call into being that history of dehumanization for its injurious force. There is no such history for LLMs, regardless of claims otherwise.
Reposted by Dr. Mag 🌿👾
salome.bsky.social
AI as wage depression tool is a very useful and compelling framing
bcmerchant.bsky.social
Hagen is the author of a great short book, Why We Fear AI, which argues, among other things, that AI should not be viewed as a productivity tool, but a *wage depression* tool.

www.commonnotions.org/why-we-fear-ai
Why We Fear AI — Common Notions Press
Fears about AI tell us more about capitalism today than the technology of the future.
www.commonnotions.org
Reposted by Dr. Mag 🌿👾
larrynemecek.bsky.social
“AI is the asbestos we are shoveling into the walls of our society, and our descendants will be digging it out for generations.”
jbau.bsky.social
This whole section really.
Finally: AI cannot do your job, but an AI salesman can 100% convince your boss to fire you and replace you with an AI that can't do your job, and when the bubble bursts, the money-hemorrhaging "foundation models" will be shut off and we'll lose the AI that can't do your job, and you will be long gone, retrained or retired or "discouraged" and out of the labor market, and no one will do your job.
AI is the asbestos we are shoveling into the walls of our society and our descendants will be digging it out for generations:
Reposted by Dr. Mag 🌿👾
bryangodbe.bsky.social
This is a critical story, thanks Guardian.

However, the shocker is that 11 major outlets ONLY did an average of 85 climate articles each, and most were buried in the Climate section.

This is an existential threat. All 11 need 5 stories a day in the top news section.
www.theguardian.com/environment/...
Meat is a leading emissions source – but few outlets report on it, analysis finds
Sentient Media reveals less than 4% of climate news stories mention animal agriculture as source of carbon emissions
www.theguardian.com
nobugsnous.bsky.social
this! there is room between approval and policing!
caramartamessina.com
I also understand that it’s difficult to know what is and is not AI generated content, which makes these policies a bit more difficult to enforce. But having these policies at least deters the normalization of relying on genAI for our scholarship.
Reposted by Dr. Mag 🌿👾
zbigalke.bsky.social
A few friendly reminders about autism:

- Autism is not inherently a bad thing.
- Autism is not a death sentence.
- Autism is not a pejorative term.
- Autism is not a moral failing.
- Autism is not something to shame or ridicule.
- Autism is not something to hide.
- Autism is not a crime.
Reposted by Dr. Mag 🌿👾
tressiemcphd.bsky.social
Trust me, I feel this. Deeply. I don’t promise to have it figured out. But I am blocking two hours a day for what I call “this shit here”: the headlines from the majors, the 9 or so newsletters in areas I follow, a scroll through my subscribed content in Gmail, & then I clock back out.
rahallclifford.bsky.social
I think I'll give up constant doomscrolling, but also the times require hypervigilance? It's an impossible balance.

I first saw the H1B nonsense on here Friday, and we needed to extricate a team member from fieldwork for immediate return. So in lieu of a society, we must scroll 😭
Reposted by Dr. Mag 🌿👾
kattenbarge.bsky.social
Autistic and neurodivergent people deserve far better than a government looking to prevent their existence, let alone one that will invent fictitious ways to do so that puts everyone in danger