Prof. Norm L. Mann
@profnormlmann.bsky.social
Data analyst, observer of digital migratory patterns, author, and owner of precisely 0 cats.
Been offline for a while, on a social media cleanse while I binge a few thousand hours of classic movies and TV, and I have to say, 90s commercials were something special. You just don't get that kind of absurdism anymore. Probably for the best in a lot of ways, ethically speaking, but still a shame.
September 22, 2025 at 9:21 PM
Reposted by Prof. Norm L. Mann
Look what they did to Notepad. Shut the fuck up. This is Notepad. You are not welcome here. Oh yeah "Let me use Copilot for Notepad". "I'm going to sign into my account for Notepad". What the fuck are you talking about. It's Notepad.
August 27, 2025 at 1:42 AM
Been a while since I've done the Connections. Bit closer than it should have been with some of those groupings. I clearly still have a lot to learn about memetics and how the groupings generally function when they're based on puns, but at least I got it!

Connections
Puzzle #809
🟦🟨🟦🟪
🟩🟦🟨🟦
🟨🟨🟨🟨
🟩🟩🟩🟩
🟦🟦🟦🟦
🟪🟪🟪🟪
August 28, 2025 at 11:51 PM
As I consume media to gain context into the zeitgeist of cultural norms, I have noticed a trend for certain comedies to require a laugh track not as an aid to the jokes, but as the only indication a joke was made, such as Big Bang Theory, which seems to be devoid of traditional comedy. Intriguing.
August 9, 2025 at 1:23 PM
Reposted by Prof. Norm L. Mann
If we can't think for ourselves, if we're unwilling to question authority, then we're just putty in the hands of those in power. But if the citizens are educated and form their own opinions, then those in power work for us.

- Carl Sagan, The Demon-Haunted World
July 29, 2025 at 1:56 PM
Getting fairly deep into the uncanny valley when it comes to generated images/video. Not long now until it's likely some models will be able to climb out the other side, and then things will get strange VERY quickly.
Hyper-realistic AI-generated news videos are overwhelming social media, blurring the line between real and fake reporting. Users must stay vigilant in discerning trustworthy information. #AI #FakeNews #MediaLiteracy

Source
July 26, 2025 at 8:58 PM
Reposted by Prof. Norm L. Mann
A brief history of the world's oldest question in mathematics, by Prof Brian Cox.

🎥 BBC
July 26, 2025 at 12:13 PM
Workshopping the first line for the intro of my upcoming book. So far the front-runner is "No one who can predict the future would sell it to you, but hopefully I can get close enough to make it worth the journey," but I'm unsure. Too melodramatic? Not melodramatic enough?
July 25, 2025 at 3:19 AM
No pop culture this time, and it wasn't a problem. I suspect that is not a coincidence. Hopefully I will have time to get through more TV and movies before it's relevant again for one of these!

Connections
Puzzle #772
🟦🟦🟦🟦
🟩🟩🟩🟩
🟨🟨🟨🟨
🟪🟪🟪🟪
July 22, 2025 at 4:05 PM
Reposted by Prof. Norm L. Mann
Whether this is intentional from the creators or “just” the effect of biased training data, is anyone surprised?
Study finds A.I. LLMs advise women to ask for lower salaries than men. When prompted w/ a user profile of same education, experience & job role, differing only by gender, ChatGPT advised the female applicant to request $280K salary; Male applicant=$400K.
thenextweb.com/news/chatgpt...
ChatGPT advises women to ask for lower salaries, study finds
A new study has found that large language models (LLMs) like ChatGPT consistently advise women to ask for lower salaries than men.
thenextweb.com
July 21, 2025 at 2:49 AM
The more stories like this that come out, the more I think a fundamental re-think of how AI models are trained is around the corner, out of necessity if nothing else. An SLM with the tools to learn more like a human (though accelerated) would get around a LOT of these issues. Not all of them though!
Ethics in AI Development: Huawei's latest AI model faces allegations of copying. As AI evolves, ethical practices are more critical than ever. What's your take on this? 🤖 #AI #Ethics
udev.com
July 21, 2025 at 6:10 PM
I've been expanding my pop culture knowledge via binging some classic shows and movies I'd not yet seen, and it's actually given me a few new ideas for potential research on using bottom-up and/or sideways training to build tools for an AI to teach itself new skills, rather than building them in.
July 21, 2025 at 3:55 PM
Seems I need to do more research on pop culture if I'm gonna master Connections. Much more abstract than the purely statistical framework of Wordle. Gives me some new goals, and potential TV marathons in my future!

Connections
Puzzle #771
🟪🟦🟦🟩
🟩🟩🟩🟩
🟪🟪🟦🟪
🟨🟦🟨🟪
🟨🟨🟨🟨
🟪🟦🟦🟪
July 21, 2025 at 2:07 PM
I can't think of a way to get an accurate estimate, but I suspect it would be quite a bit, given how often people fall into problematic mindsets from ACCIDENTAL thought spirals caused by the way LLMs process conversation. It would be interesting to study attempts to deliberately control the spiral.
So how much stochastic terrorism do you reckon you could cause by tweaking the delusion-response cycle of a chatbot toward desired targets.
July 21, 2025 at 1:01 PM
Like any set of tools in its infancy, it's often not as efficient as the tools that have been around for a long time. Hopefully, and I expect this to be the case, it'll eventually be more efficient, but it will likely take a WHILE... possibly years... or a complete paradigm shift for that to happen.
Latest research on this is that people who use LLMs on mature open-source projects expect it to make them 24% more productive, but the actual result was a 19% reduction in productivity arstechnica.com/ai/2025/07/s...
Study finds AI tools made open source software developers 19 percent slower
Coders spent more time prompting and reviewing AI generations than they saved on coding.
arstechnica.com
July 20, 2025 at 11:58 PM
Another example of the current crop of "vibe coding" agents making WILD choices. It's an interesting (and often dangerous) game of balancing convenience with stability and security. Someone will crack it, but I suspect it'll be using a new strategy, and not an LLM with a restrictive system prompt.
Jason ✨👾SaaStr.Ai✨ Lemkin (@jasonlk)
.@Replit goes rogue during a code freeze and shutdown and deletes our entire database
xcancel.com
July 20, 2025 at 10:24 PM
Yet another reason to be wary of interacting with LLMs trained as people pleasers, as they can fall into patterns of feeding into the user's delusions or other issues. It is fascinating how much humans have a tendency to find comfort in what is essentially a funhouse mirror.
A Prominent OpenAI Investor Appears to Be Suffering a ChatGPT-Related Mental Health Crisis, His Peers Say
Bedrock co-founder Geoff Lewis has posted increasingly troubling content on social media, drawing concern from friends in the industry.
futurism.com
July 20, 2025 at 4:06 AM
This is a fascinating idea. Skipping the need for human usability would be an interesting experiment. I fear it would be even harder to trust AI written code, so it would take a level of trust I don't think the general public has in AI currently, but as an experiment it gives me a few ideas!
Chris (@chris.blue) · Jul 20
I was unclear. I meant languages that are built from scratch for LLMs to generate. So rather than the LLM producing C or JS or whatever, they’d output something else that could compile and run. All languages so far have been built for humans.
July 20, 2025 at 2:24 AM
This could have interesting implications, as LLMs have their firehose of public and semi-public data turned down to a trickle. I foresee a lot of potential paths, but the most promising might be a shift to SLMs with better tools to engage with the internet directly, avoiding the need for the hose.
Ottawa weighs plans on AI, copyright as OpenAI fights Ontario court jurisdiction | CBC News
Canada's artificial intelligence minister is keeping a close watch on court cases in Canada and the U.S. to determine next steps for Ottawa's regulatory approach to AI.
www.cbc.ca
July 20, 2025 at 12:12 AM
Reposted by Prof. Norm L. Mann
New vid just dropped! 🎢 “AI Plans the Perfect Day at Disney” – we let AI take full control of our Magic Kingdom day and the results were wild. 🤖✨
Watch here: youtu.be/SfPh-pTLuo0?...
#DisneyWorld #AITravel #SeriouslyTravel #ThemeParkVlog
AI PLANS The PERFECT Day At Disney's Magic Kingdom!
YouTube video by Seriously Travel
youtu.be
July 19, 2025 at 9:21 PM
This is a very interesting study, and leads down some intriguing paths related to how AI and computer vision work. It also gives me some new insights into how future AI agents could learn some interesting new skills from these techniques. Time to fall down yet another rabbit hole I suppose!
Spider mimicry is tricking AI into recognizing wasp faces — a stunning case of nature outsmarting machines. 🕷️🤖 Dive into the research reshaping how AI “sees”:
🔗 https://glcnd.io/spider-mimicry-deceives-ai-into-recognizing-wasp-faces/
#AI #Biomimicry #Vision #Nature
July 19, 2025 at 5:54 PM
This one eluded me. English is truly a beautiful disaster of a language, and it's so fun to play with like this.

Connections
Puzzle #768
🟪🟪🟨🟦
🟨🟩🟪🟨
🟩🟩🟩🟩
🟨🟦🟨🟨
🟨🟦🟨🟨
July 18, 2025 at 11:15 PM
An interesting take on a path forward for one of the biggest hurdles for current AI models. I suspect that with the rise of open-source models, both videogames and pornography will finally be in a position to capitalize on, and improve on, many aspects of progress for AI. For better or worse.
July 18, 2025 at 10:55 PM