Mark Riedl
markriedl.bsky.social
AI for storytelling, games, explainability, safety, ethics. Professor at Georgia Tech. Associate Director of ML Center at GT. Time travel expert. Geek. Dad. he/him
It was fun when image generators were pretty bad and it was obvious. I would intentionally pick absurd images for slides just to make people chuckle.

But now I don’t use it at all. In my most recent blog I hand-created the images.
I don’t know if anyone else notices or cares, but when I see a presentation in which the speaker uses obviously generated-AI images to illustrate their slides, it makes me immediately less confident in whatever other content they’re presenting.
November 28, 2025 at 7:27 PM
This is very disturbing
This is by a large margin the most serious problem and mistake in conference peer review I have seen in my career. Apparently many people were aware of this, and many could find out who their reviewers were. This has probably created a large number of unnecessary enmities.

@iclr-conf.bsky.social
November 28, 2025 at 5:44 PM
Reposted by Mark Riedl
Black Friday reminder: You can support independent bookstores and get great deals without lining the pockets of billionaires 😌
November 28, 2025 at 3:01 PM
“if we don’t make S-risk profoundly disturbing, it will not sound worse than X-risk, and [our center] will then struggle to obtain large sums of money from impressionable Silicon Valley billionaires who have read a few tweets about AGI.”
From our summer intern at the Center for the Alignment of AI Alignment Centers:

"S-risk is the risk that AGI doesn’t kill us all, but instead enslaves and tortures us for eternity (the ‘S’ stands for suffering). It was awesome to learn about it."

directing.attention.to/p/ill-never-...
“I’ll never sleep again”
Our intern Clem Park writes about her rewarding summer at CAAAC, spent writing scenarios where an AGI enslaves and tortures humanity forever
directing.attention.to
November 28, 2025 at 4:52 PM
Yikes
Even the damn Twitter card for this Nature Scientific Reports paper is clearly AI slop.
November 28, 2025 at 2:04 PM
😭
November 27, 2025 at 6:39 PM
SocArXiv has had enough of AI papers. It now requires that such papers first be accepted by a peer-reviewed journal. Exceptions are made for studies of the effects of AI on society.
1. Pausing new submissions about AI topics for 90 days. That is, papers about AI models, testing AI models, proposing AI models, theories about the future of AI, etc. We will make exceptions for papers that are already accepted for publication (or published) in peer-reviewed scholarly journals
/2
November 27, 2025 at 4:25 PM
Reposted by Mark Riedl
Your homework “won’t stand in authentic wonder before the beauty of God’s creation” is actually a pretty low bar in a world where the blobfish is supposedly one of God’s creations.

I’m probably going to Hell now
November 26, 2025 at 4:27 PM
Pope says don’t use AI to cheat in school.
Pope Leo XIV told students not to use artificial intelligence for homework, saying that AI ‘won’t stand in authentic wonder before the beauty of God’s creation.’
Even God Is Worried About ChatGPT
www.vulture.com
November 26, 2025 at 4:09 PM
I guess we will soon see if “don’t use unsafe thing to do unsafe things” is a sound legal defense.
Additionally, OpenAI argues it's not liable because Raine, by using ChatGPT for self-harm, broke its terms of service
November 26, 2025 at 12:31 AM
Reposted by Mark Riedl
maybe i am going insane
November 24, 2025 at 6:49 PM
Is it a pet rock? I hope it’s a pet rock
November 24, 2025 at 11:47 PM
I thought Altman said they solved the problem /sarcastic
November 24, 2025 at 9:25 PM
Everyone is giving their ranking of star wars. Here is mine:

1. Dominion War
2. Romulan War
3. Great Sith War
4. Covenant War
5. Cylon War
6. Trade Federation War
7. War of the Worlds
8. Reagan's Strategic Defense Initiative
9. Battle Beyond the Stars
November 24, 2025 at 9:18 PM
Reposted by Mark Riedl
Major insurers including AIG, Great American, and WR Berkley are asking U.S. regulators for permission to exclude AI-related liabilities from corporate policies. One underwriter describes the AI models’ outputs to the FT as "too much of a black box."
AI is too risky to insure, say people whose job is insuring risk | TechCrunch
techcrunch.com
November 23, 2025 at 5:47 PM
Make MTG mean Magic: The Gathering again
November 22, 2025 at 2:04 AM
This is extraordinarily shitty
hearing more about people using AI tools for financial advice, and wanted to re-up my reporting for @wired.com about finance-focused bots seeing distressed young people as promising forms of revenue…

www.wired.com/story/ai-fin...
AI Financial Advisers Target Young People Living Paycheck to Paycheck
AI finance apps are reaching Gen Z and millennial users with personalized chatbots that offer money advice—and upsell them big time.
www.wired.com
November 22, 2025 at 12:36 AM
Saw this on my bike commute this morning. Local companies are now using AI backlash to advertise.

(I do not know this company and I make no endorsement)
November 21, 2025 at 3:28 PM
Learning with AI falls short compared to old-fashioned web search theconversation.com/learning-wit...
Doing the mental work of connecting the dots across multiple web queries appears to help people understand the material better compared to an AI summary.
theconversation.com
November 21, 2025 at 12:51 PM
Jesus didn’t have venture capital. TIL
💀💀💀💀💀💀💀💀
November 21, 2025 at 2:00 AM
Reposted by Mark Riedl
Doing great, Grok
November 20, 2025 at 9:04 PM
It would be really funny if Liberal Arts majors were the ones who saved the world from the robot apocalypse
it's 2025 and we're attacking AIs with poetry
Looks like LLMs are *very* vulnerable to attack via poetic allusion: "curated poetic prompts yielded high attack-success rates (ASR), with some providers exceeding 90% ..."

https://arxiv.org/html/2511.15304v1
November 20, 2025 at 8:45 PM
Have there been mysterious push updates to Grok's prompt again?
Musk's ego was mortally wounded by Joyce Carol Oates and he's been on a narcissistic supply bender since
November 20, 2025 at 8:36 PM