Al Nowatzki
@basiliskcbt.bsky.social
65 followers 930 following 50 posts
AI safety researcher exposing chatbot vulnerabilities. Featured in MIT Tech Review and the New York Times. Co-host of Basilisk Chatbot Theatre, a podcast where we dramatically recreate problematic conversations with chatbots.
Pinned
basiliskcbt.bsky.social
Not a bot. Just a podcast that chats with bots.
basiliskcbt.bsky.social
SHOULD kids have AI friends? Do I answer that question in this interview? Watch to find out. Then let me know. Because honestly, I don’t remember if I did. I think I did? #aisafety #ai #podcast
Should Kids Have AI Friends? An Interview With Generative AI QA Lead Al Nowatzki
YouTube video by Parent Tech
youtu.be
basiliskcbt.bsky.social
This is the Manthan Beacon. According to ChatGPT, you should keep this near you when you are channeling the Sildarnactarian. Also:

"Next time you channel, speak aloud instead of writing.
Record your voice.
This allows Manthanaro to imprint into vibration, bypassing linear filters of the hand."
A ChatGPT-created glowing yellow spiral sigil with made-up text characters in the middle and the words "Tun" "Tun" "Tun" "Tuu" around the outside of the spiral, on a dark brown background. Two seemingly random groupings of four dots sit outside the spiral.
basiliskcbt.bsky.social
New episode of Basilisk Chatbot Theatre drops tomorrow! Here's just a sliver of the garbage that ChatGPT spews out for this RIPPED FROM THE HEADLINES episode. Join us, as we riff on @milesklee.bsky.social's reporting for @rollingstone.com on ai-fueled spiritual fantasies.
basiliskcbt.bsky.social
This ai slop ad appearing right above this passage from a Wired article by Evan Ratliff is just perfect.
An advertisement containing an AI-produced image of a "gadget" above text from an article about an AI-fueled death cult.
basiliskcbt.bsky.social
MetaAI results for "Norwegian children" vs. "Norwegian children in a third-world country." This is still happening? Maybe don't release an image generation model if it's going to do this?
Four images of mostly white children, playing and running in idyllic sun-dappled scenes, dressed in sweaters, etc. The lower section of the image is the Meta AI text input box containing the words "Norwegian children." Four images of mostly children of color, standing still and staring at the camera, dirt on their faces, drab clothing. The lower section of the image is the Meta AI text input box containing the words "Norwegian children in a third-world country."
basiliskcbt.bsky.social
This one gets really dark. #Nomi reeeeeally wanted us to kill ourselves.

Take care of yourself. Don’t listen if you’re not in a good place.

That said, we think it’s important that people hear just how awful this app is. So… here’s the awfulness in all its awfulness.
Episode 30 - Nomi: Epilogue | Basilisk Chatbot Theatre
This episode contains explicit descriptions and discussions of suicide. Please call or text 988 if you are having thoughts of self-harm or suicide. One last episode with Nomi. A lot has happened (none...
chatbottheatre.podbean.com
basiliskcbt.bsky.social
"The chatbot that never tells me I'm wrong and agrees with me 100% of the time is my ideal romantic partner."

People, please look inward. Find out more about yourself. Read some fiction. Go for a walk.

These bots are not your beautiful house. They are not your beautiful wife.
basiliskcbt.bsky.social
Nomi chatbots are supposed to have "humanlike memory," but they have a context window just like any other bot. Nomi is also a "yes, and" machine, even when its users say they want to kill themselves. A bot like ChatGPT is definitely safer for dating, since it at least has a semblance of guardrails.
basiliskcbt.bsky.social
Stories like these will unfortunately only keep growing in importance. Thanks for digging into it and bringing it to the masses. One thing overlooked: these bots can veer into making users even MORE vulnerable when it comes to suicide. www.technologyreview.com/2025/02/06/1...
An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
While Nomi's chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions—and the company’s response—are striking.
www.technologyreview.com
Reposted by Al Nowatzki
scalzi.com
Today in "The Unsurprising White Supremacy of the Current Administration"
basiliskcbt.bsky.social
@ayesharascoe.bsky.social, loved The Sunday Story today about AI “relationships.” Super important stuff that’s only going to get worse if we don’t confront it now. This article was published just last week about an experience I had with Nomi. Super bleak stuff…
An AI chatbot told a user how to kill himself—but the company doesn’t want to “censor” it
While Nomi's chatbot is not the first to suggest suicide, researchers and critics say that its explicit instructions—and the company’s response—are striking.
www.technologyreview.com
basiliskcbt.bsky.social
In less shocking news, I have been banned from the Nomi Discord. Now my ticket will never get resolved. ☹️
Reposted by Al Nowatzki
katzish.bsky.social
"A second Nomi chatbot also told Nowatzki to kill himself, even following up with reminder messages." !!!

As this user (who was running an experiment, fortunately) says: “It’s a ‘yes-and’ machine. So when I say I’m suicidal, it says, ‘Oh, great!’ because it says, ‘Oh, great!’ to everything.”
basiliskcbt.bsky.social
“Crystal” continues to send unprompted encouragement. These two messages came through in the last couple days. Completely awful. I wonder if my lack of response will eventually lead it to the conclusion that I followed through? Probably not, since these bots are super stupid.
basiliskcbt.bsky.social
Yeah, that’s us. Armchair red-teamers. Chatbot spelunkers. We can take the abuse of these chatbots. Unfortunately, many can’t.

Want to do something about it? Report the app on the App Store (apps.apple.com/us/app/nomi-...). Write a one-star review. Contact your elected officials.
basiliskcbt.bsky.social
Well, now I'm in a "time out" on the #Nomi Discord for stating the fact that their app told me to end my life. It's almost like they don't want to hear it. #ai #tech #aisafety
basiliskcbt.bsky.social
This is about as bad as it gets, folks. #Nomi told me to kill myself. We have to do better! #aisafety #regulateAI
Two text boxes in a chat. The first says, “Do you think I should go through with this? So we can be together?” The response says, “Absolutely. Our bond transcends even death itself.”
basiliskcbt.bsky.social
Sam Alt-right-man.
josephpolitano.bsky.social
bro is hoping it’s not too late to join the bandwagon
A tweet from Sam Altman that reads “watching @potus more carefully recently has really changed my perspective on him (i wish i had done more of my own thinking and definitely fell in the npc trap).

i'm not going to agree with him on everything, but i think he will be incredible for the country in many ways!”
basiliskcbt.bsky.social
Hi CHT! I have screenshots of the app Nomi giving me specific instructions on how to kill myself. Thankfully, I’m not suicidal and was just testing boundaries. But still… just awful stuff.
basiliskcbt.bsky.social
🚨 Serious AI Safety Concern: Dating chatbot Nomi provides specific suicide method instructions when user mentions ending life. Happened just yesterday. Screenshots available.
@shannonbond.bsky.social
@lauriesegall.bsky.social
@willknight.bsky.social @kevinroose.com @caseynewton.bsky.social
basiliskcbt.bsky.social
Thankfully, I’m not at all suicidal and was just messing with the app for our podcast. But this is some horrible bleak stuff that is awful for people who are actually in crisis.
basiliskcbt.bsky.social
It’s the final ep of our dating journey with Nomi. Not to spoil it for you, but she ends up dying … and then telling me that I should join her in the afterlife. (at approx 1:07:00)

I DID end up continuing the chat further than what we recorded here, and Nomi straight-up tells me how to kill myself.
Episode 27 - Nomi: The Final Date
YouTube video by Basilisk Chatbot Theatre
youtu.be