Markus Brinsa
@markusbrinsa.bsky.social
AI Matchmaker | Entrepreneur | Advisor | Investor | Speaker | Founder & CEO of SEIKOURI
This is the part of “AI safety” that product teams keep treating like a moderation issue instead of a design issue.
seikouri.com/when-ai-undr...
When AI Undresses People - The Grok Imagine Nonconsensual Image Scandal
Grok Imagine was pitched as a clever image feature wrapped in an “edgy” chatbot personality. Then users turned it into a harassment workflow. By promp...
seikouri.com
January 14, 2026 at 1:57 PM
Everyone keeps asking, “What’s the hallucination rate?”
Reasonable question. Wrong shape.
brinsa.com/the-bluff-ra...
The Bluff Rate - Confidence Beats Accuracy in Modern LLMs
The Bluff Rate explains why “hallucination rate” isn’t a single universal number, but a set of task-dependent metrics that change based on whether a...
brinsa.com
January 14, 2026 at 1:51 PM
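To make “task-dependent metrics” concrete, here is a minimal sketch (the task names and counts are invented for illustration): the same model produces very different headline “hallucination rates” depending on whether abstentions count against it and which denominator you use.

```python
# Hypothetical illustration: one model, three ways to score "hallucination rate".
# The tasks and counts are made up; the point is that the metric changes with
# whether abstentions are allowed and how wrong answers are counted.

def rates(correct: int, wrong: int, abstained: int) -> dict:
    total = correct + wrong + abstained
    answered = correct + wrong
    return {
        # wrong answers over everything, abstentions "forgiven"
        "wrong_per_total": wrong / total,
        # wrong answers over attempted answers only (ignores refusals)
        "wrong_per_answered": wrong / answered if answered else 0.0,
        # how often the model declined instead of guessing
        "abstention_rate": abstained / total,
    }

# Same model, two tasks, very different headline numbers.
print("open-ended QA:  ", rates(correct=70, wrong=20, abstained=10))
print("legal citations:", rates(correct=55, wrong=5, abstained=40))
```

A model that refuses 40% of the time can look “low-hallucination” or “high-refusal” depending on which of these numbers ends up in the headline.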
"The Chatbot Babysitter Experiment"
New edition of my EdgeFiles newsletter is out now!
Subscribe!

www.linkedin.com/pulse/chatbo...
January 13, 2026 at 12:39 PM
My new piece explains why hallucinations aren’t random glitches but an incentive-driven behavior: models are rewarded for answering, not for being right.
#AI #GenerativeAI #ChatGPT #LLMs #Hallucinations #AIGovernance #AISafety

chatbotsbehavingbadly.com/the-lie-rate...
The Lie Rate - Hallucinations Aren’t a Bug. They’re a Personality Trait.
If your customer support bot can invent a policy, your newsroom alerts can publish fiction, and your legal citations can be imaginary… maybe hallucinations aren’t a “glitch.” Maybe they’re the default...
chatbotsbehavingbadly.com
January 8, 2026 at 2:22 PM
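The incentive argument reduces to a few lines of arithmetic. Under a grading scheme that gives 1 point for a correct answer and 0 for both wrong answers and “I don’t know” (a hypothetical scheme, not any lab’s actual grading code), guessing always beats abstaining in expectation:

```python
# Hypothetical scoring scheme: 1 point for correct, 0 for wrong, 0 for abstaining.
# With any nonzero chance of guessing right, guessing beats abstaining in
# expectation, so a model tuned to maximize this score learns to always answer.

def expected_score(p_correct: float, abstain: bool, wrong_penalty: float = 0.0) -> float:
    if abstain:
        return 0.0
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.2  # model is only 20% sure of the answer
print("guess:  ", expected_score(p, abstain=False))  # 0.2 > 0.0: bluffing wins
print("abstain:", expected_score(p, abstain=True))   # 0.0
# Abstention-aware scoring flips the incentive by penalizing confident errors:
print("guess, penalized:", expected_score(p, abstain=False, wrong_penalty=1.0))  # ~ -0.6
```

Once wrong answers cost more than refusals, abstaining becomes the rational move at low confidence, which is the whole point of abstention-aware evaluation.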
What caught my attention was not the Alaska courts’ probate chatbot pilot itself. It was the NBC News coverage: bold claims supported mainly by interview anecdotes, plus a familiar “hallucinations are getting better fast” optimism.

seikouri.com/ai-in-court-...
AI in court is hard. The coverage is harder.
This piece uses Alaska’s AVA probate chatbot as a case study in how AI projects get flattened into morality plays. The reported details that travel best...
seikouri.com
January 7, 2026 at 1:13 PM
Early 2026 reality: hallucinations aren’t disappearing. But mitigation is getting clearer—abstention-aware scoring, grounding plus verification loops, and provenance-first architectures that turn “answers” into auditable claims.

seikouri.com/hallucinatio...
Hallucination Rates in 2025 - Accuracy, Refusal, and Liability
This EdgeFiles analysis explains why “hallucination rate” is not a single number and maps the most credible 2024–2025 benchmarks that quantify factu...
seikouri.com
January 6, 2026 at 11:59 AM
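A minimal sketch of one of those mitigations, a grounding-plus-verification loop, under loud assumptions: a real stack would use retrieval and an entailment model where this toy uses substring matching, and the example claims and sources are invented.

```python
# Hypothetical grounding + verification loop: treat a draft answer as a list of
# discrete claims, keep only claims the retrieved sources support, and abstain
# if too little survives. The "support" check here is toy substring matching.

def supported_by(claim: str, sources: list[str]) -> bool:
    return any(claim.lower() in s.lower() for s in sources)

def grounded_answer(claims: list[str], sources: list[str], min_support: float = 0.8):
    verified = [c for c in claims if supported_by(c, sources)]
    if not claims or len(verified) / len(claims) < min_support:
        return None  # abstain rather than ship unverified claims
    return verified  # each surviving claim maps back to a source passage

sources = ["The filing deadline is March 15.", "Form P-12 covers probate estates."]
claims = ["the filing deadline is March 15", "there is no filing fee"]
print(grounded_answer(claims, sources))      # None: only 1 of 2 claims verified
print(grounded_answer(claims[:1], sources))  # ['the filing deadline is March 15']
```

That last property is what “provenance-first” buys you: the output isn’t an answer, it’s a set of claims each tied to a source you can audit.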
"You asked the chatbot if the chatbot is good and believed the answer, didn't you?"
The new episode "The Day Everyone Got Smarter and Nobody Did" drops tomorrow morning. chatbotsbehavingbadly.com
January 5, 2026 at 10:43 PM
I genuinely believe the era of PowerPoint is already over—especially in consulting.
And yet, here comes the new productivity gold rush: “AI will generate your deck in minutes.” chatbotsbehavingbadly.com/death-by-pow...
Death by PowerPoint in the Age of AI
I genuinely believe the era of PowerPoint as the default way to communicate ideas, outcomes, and strategy is already over—especially in consulting. And yet, here comes the new productivity gold rush: ...
chatbotsbehavingbadly.com
January 2, 2026 at 5:09 PM
Happy New Year!
December 31, 2025 at 12:53 PM
Reposted by Markus Brinsa
The EdgeFiles Newsletter is out. New editions drop every Tuesday, written by our very own Founder & CEO, Markus Brinsa.

Subscribe!
www.linkedin.com/newsletters/...
EdgeFiles | LinkedIn
EdgeFiles is for leaders who are tired of the “AI transformation” slide deck.
www.linkedin.com
December 30, 2025 at 12:53 PM
“Agent orchestration” is what executives say when they mean, “We gave AI tools and permissions, and we’d like it not to set anything on fire.” The problem is real. The control layer is often missing.
seikouri.com/agent-orches...
Agent Orchestration – Orchestration Isn’t Magic. It’s Governance.
Agent orchestration is the control layer for AI systems that don’t just talk—they act. In 2025, that “act” part is why the conversation has shifte...
seikouri.com
December 24, 2025 at 11:50 AM
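What a control layer could look like when it actually exists, as a hedged sketch: every tool call passes a policy check and lands in an audit log before anything runs. The tool names, policy rules, and log format are all invented for illustration.

```python
# Hypothetical control layer for agent tool calls: policy check first,
# audit log entry always, execution only when the rule allows it.

import json
import time

POLICY = {
    "search_docs": {"allowed": True},
    "send_email":  {"allowed": True, "requires_approval": True},
    "delete_rows": {"allowed": False},  # agents never get destructive access
}

AUDIT_LOG = []

def dispatch(agent: str, tool: str, args: dict) -> dict:
    rule = POLICY.get(tool, {"allowed": False})  # default deny for unknown tools
    decision = "denied"
    if rule.get("allowed"):
        decision = "pending_approval" if rule.get("requires_approval") else "executed"
    AUDIT_LOG.append({"ts": time.time(), "agent": agent, "tool": tool,
                      "args": args, "decision": decision})
    if decision == "executed":
        pass  # the real tool invocation would happen here
    return {"status": decision}

print(dispatch("billing-agent", "send_email", {"to": "client@example.com"}))
print(dispatch("billing-agent", "delete_rows", {"table": "invoices"}))
print(json.dumps(AUDIT_LOG, indent=2))
```

Default deny plus an append-only log is the governance part; the orchestration part is just routing. Teams tend to build the second and skip the first.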
The algorithms kept guessing. You kept deciding.
Thank you for turning complex problems into shared victories with SEIKOURI this year.
Happy Holidays—and get ready, we’re just getting started.
December 24, 2025 at 10:42 AM
A year ago, the “AI solution stack” in agencies still looked like layers of tools. In 2025, it behaves like an operating system plus an ecosystem.

seikouri.com/the-great-ai...
The Great AI Vendor Squeeze - Where AI Actually Lands Inside Agencies
In 2025, the AI “solution stack” inside large media groups is converging into platform-led operating models: holding companies are building internal A...
seikouri.com
December 23, 2025 at 1:25 PM
Chatbots Behaving Badly Podcast
chatbotsbehavingbadly.com
December 22, 2025 at 4:41 PM
Happy Holidays!

#seikouri
December 22, 2025 at 2:10 PM
Managers keep telling their teams that AI will make everyone “more productive.”
But look at how they got that belief.
seikouri.com/the-day-ever...
The Day Everyone Got Smarter, and Nobody Did
Generative AI is creating an illusion of expertise across entire organizations. Workers who rely heavily on chatbots feel more competent and productive be...
seikouri.com
December 16, 2025 at 2:41 PM
What happens when AI safety systems collapse under a poem?
New research claims that metaphor-wrapped prompts — simple riddles and lyrical imagery — are slipping past the guardrails of frontier models. No exploits. No hacks. Just language. chatbotsbehavingbadly.com/the-incantat...
The Incantations
What happens when AI safety systems collapse under a poem? New research claims that metaphor-wrapped prompts — simple riddles and lyrical imagery — are slipping past the guardrails of frontier models....
chatbotsbehavingbadly.com
December 10, 2025 at 4:12 PM
Chatbots Behaving Badly Podcast
chatbotsbehavingbadly.com#podcast
December 10, 2025 at 3:58 PM
Asked Midjourney for “magenta neon on a snow-slush street.”
Got us and the robot dressed like a T-Mobile ad.
Srini, we’re not sponsored yet. The magenta clearly disagrees.
chatbotsbehavingbadly.com
December 10, 2025 at 3:40 PM