Markus Brinsa
@markusbrinsa.bsky.social
AI Matchmaker | Entrepreneur | Advisor | Investor | Speaker | Founder & CEO of SEIKOURI
The most expensive sentence a European founder can believe is: “The U.S. can’t wait for you.”
The winners are the teams that treat U.S. entry like an operating mission with a transfer plan, not a program badge.
seikouri.com/the-u-s-shor...
The U.S. Shortcut Myth
European startups are being flooded with “go-to-U.S.” accelerator promises that imply U.S. success is a packaged outcome. This piece separates serious...
seikouri.com
January 30, 2026 at 12:45 PM
We’ve reached the phase where AI risk isn’t a niche debate anymore. It’s headline material, summarized into neat quotes, and served to the mainstream as a respectable conversation. chatbotsbehavingbadly.com/wake-up-call...
Wake Up Call - When The Safety Guy Starts Sounding Like The Whistleblower
AI safety just went mainstream, and that should make you nervous for two reasons. Anthropic CEO Dario Amodei published a 19,000-word “wake-up” essay about near-term AI risk. The interesting part isn’t...
chatbotsbehavingbadly.com
January 28, 2026 at 1:25 PM
Everyone agrees AI can be wrong.
The problem is that companies are starting to treat that as normal.
chatbotsbehavingbadly.com/podcast/when...
January 27, 2026 at 3:18 PM
AI agents are the new corporate sport right now. Everyone is experimenting, everyone has a pilot, and every demo looks like magic.
The real risk isn’t that models hallucinate. It’s that enterprises get used to wrong.

chatbotsbehavingbadly.com/getting-used...
Getting Used to Wrong - When “close enough” becomes the company standard
AI agents are the new corporate sport right now. Everyone is experimenting, everyone has a pilot, and every demo looks like magic. The real risk isn’t that models hallucinate. It’s that enterprises ge...
chatbotsbehavingbadly.com
January 21, 2026 at 1:19 PM
I thought I was adopting a coding assistant. I accidentally adopted a stress toy.

brinsa.com/ai-coding-an...
AI Coding and the Myth of the Obedient Machine
“AI Coding and the Myth of the Obedient Machine” is a first-person account of what happens when a terminal-based coding assistant meets real-world sof...
brinsa.com
January 20, 2026 at 3:16 PM
New episode out now!
"The Bikini Button That Broke The Trust"

chatbotsbehavingbadly.com/podcast/the-...
January 20, 2026 at 2:45 PM
This is the part of “AI safety” that product teams keep treating like a moderation issue instead of a design issue.
seikouri.com/when-ai-undr...
When AI Undresses People - The Grok Imagine Nonconsensual Image Scandal
Grok Imagine was pitched as a clever image feature wrapped in an “edgy” chatbot personality. Then users turned it into a harassment workflow. By promp...
seikouri.com
January 14, 2026 at 1:57 PM
Everyone keeps asking, “What’s the hallucination rate?”
Reasonable question. Wrong shape.
brinsa.com/the-bluff-ra...
The Bluff Rate - Confidence Beats Accuracy in Modern LLMs
The Bluff Rate explains why “hallucination rate” isn’t a single universal number, but a set of task-dependent metrics that change based on whether a...
brinsa.com
January 14, 2026 at 1:51 PM
"The Chatbot Babysitter Experiment"
New edition of my EdgeFiles newsletter is out now!
Subscribe!

www.linkedin.com/pulse/chatbo...
January 13, 2026 at 12:39 PM
My new piece explains why hallucinations aren’t random glitches but an incentive-driven behavior: models are rewarded for answering, not for being right.
#AI #GenerativeAI #ChatGPT #LLMs #Hallucinations #AIGovernance #AISafety

chatbotsbehavingbadly.com/the-lie-rate...
The Lie Rate - Hallucinations Aren’t a Bug. They’re a Personality Trait.
If your customer support bot can invent a policy, your newsroom alerts can publish fiction, and your legal citations can be imaginary… maybe hallucinations aren’t a “glitch.” Maybe they’re the default...
chatbotsbehavingbadly.com
January 8, 2026 at 2:22 PM
What caught my attention was not Alaska’s probate court chatbot pilot itself. It was the NBC News coverage: bold claims supported mainly by interview anecdotes, plus a familiar “hallucinations are getting better fast” optimism.

seikouri.com/ai-in-court-...
AI in court is hard. The coverage is harder.
This piece uses Alaska’s AVA probate chatbot as a case study in how AI projects get flattened into morality plays. The reported details that travel best...
seikouri.com
January 7, 2026 at 1:13 PM
Early 2026 reality: hallucinations aren’t disappearing. But mitigation is getting clearer—abstention-aware scoring, grounding plus verification loops, and provenance-first architectures that turn “answers” into auditable claims.

seikouri.com/hallucinatio...
Hallucination Rates in 2025 - Accuracy, Refusal, and Liability
This EdgeFiles analysis explains why “hallucination rate” is not a single number and maps the most credible 2024–2025 benchmarks that quantify factu...
seikouri.com
January 6, 2026 at 11:59 AM
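For readers who want the mechanics behind “abstention-aware scoring,” here is a minimal, hypothetical sketch (the function names and the size of the wrong-answer penalty are my own assumptions, not something taken from the linked analysis). The point is simply that once a wrong answer costs more than a refusal, confident guessing stops being the rational default.

```python
# Minimal sketch of abstention-aware scoring (illustrative only; the
# wrong-answer penalty and function names are assumptions, not a quoted spec).

def score_response(is_correct: bool | None, wrong_penalty: float = 2.0) -> float:
    """Score a single model response.

    is_correct: True (right), False (wrong), or None (the model abstained).
    A correct answer earns +1, an abstention earns 0, and a wrong answer
    costs `wrong_penalty` -- so bluffing is no longer free.
    """
    if is_correct is None:  # the model said "I don't know"
        return 0.0
    return 1.0 if is_correct else -wrong_penalty


def expected_score_of_guessing(p_correct: float, wrong_penalty: float = 2.0) -> float:
    """Expected score if the model guesses with confidence p_correct.

    With penalty = 0 (accuracy-only scoring), guessing always beats abstaining.
    With a penalty, guessing only wins when p_correct > penalty / (1 + penalty).
    """
    return p_correct * 1.0 + (1.0 - p_correct) * -wrong_penalty


if __name__ == "__main__":
    # With a 2x penalty, a guess needs roughly 67%+ confidence to beat abstaining.
    for p in (0.5, 0.67, 0.9):
        print(p, round(expected_score_of_guessing(p), 2))
```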
"You asked the chatbot if the chatbot is good and believed the answer, didn't you?"
The new episode "The Day Everyone Got Smarter and Nobody Did" drops tomorrow morning. chatbotsbehavingbadly.com
January 5, 2026 at 10:43 PM
I genuinely believe the era of PowerPoint is already over—especially in consulting.
And yet, here comes the new productivity gold rush: “AI will generate your deck in minutes.” chatbotsbehavingbadly.com/death-by-pow...
Death by PowerPoint in the Age of AI
I genuinely believe the era of PowerPoint as the default way to communicate ideas, outcomes, and strategy is already over—especially in consulting. And yet, here comes the new productivity gold rush: ...
chatbotsbehavingbadly.com
January 2, 2026 at 5:09 PM
Happy New Year!
December 31, 2025 at 12:53 PM
Reposted by Markus Brinsa
The EdgeFiles Newsletter is out. New editions drop every Tuesday, written by our very own Founder & CEO, Markus Brinsa.

Subscribe!
www.linkedin.com/newsletters/...
EdgeFiles | LinkedIn
Edgefiles are for leaders who are tired of the “AI transformation” slide deck.
www.linkedin.com
December 30, 2025 at 12:53 PM
“Agent orchestration” is what executives say when they mean, “We gave AI tools and permissions, and we’d like it not to set anything on fire.” The problem is real. The control layer is often missing.
seikouri.com/agent-orches...
Agent Orchestration – Orchestration Isn’t Magic. It’s Governance.
Agent orchestration is the control layer for AI systems that don’t just talk—they act. In 2025, that “act” part is why the conversation has shifte...
seikouri.com
December 24, 2025 at 11:50 AM
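As a rough sketch of what that missing control layer can look like (my own assumptions throughout: the tool names, allow-list, and log format below are hypothetical and don’t describe any particular orchestration product), the idea is that every agent action passes a policy gate and leaves an audit trail, so “orchestration” becomes governance you can actually inspect.

```python
# Illustrative control layer for agent tool calls: allow-list, approval gate,
# and an audit log. Names and policies here are hypothetical placeholders.

from datetime import datetime, timezone

ALLOWED_TOOLS = {"search_docs", "create_ticket"}      # explicit allow-list
REQUIRES_APPROVAL = {"send_email", "delete_record"}   # human sign-off first

audit_log: list[dict] = []

def execute_tool(agent_id: str, tool: str, args: dict, approved: bool = False) -> str:
    """Gate every tool call through policy checks and record it for audit."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent_id,
        "tool": tool,
        "args": args,
    }
    if tool in REQUIRES_APPROVAL and not approved:
        entry["outcome"] = "blocked: needs human approval"
    elif tool not in ALLOWED_TOOLS and tool not in REQUIRES_APPROVAL:
        entry["outcome"] = "blocked: tool not on allow-list"
    else:
        entry["outcome"] = "executed"
        # ... dispatch to the real tool implementation here ...
    audit_log.append(entry)
    return entry["outcome"]


if __name__ == "__main__":
    print(execute_tool("agent-7", "search_docs", {"query": "refund policy"}))
    print(execute_tool("agent-7", "send_email", {"to": "customer@example.com"}))
```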
The algorithms kept guessing. You kept deciding.
Thank you for turning complex problems into shared victories with SEIKOURI this year.
Happy Holidays—and get ready, we’re just getting started.
December 24, 2025 at 10:42 AM
A year ago, the “AI solution stack” in agencies still looked like layers of tools. In 2025, it behaves like an operating system plus an ecosystem.

seikouri.com/the-great-ai...
The Great AI Vendor Squeeze - Where AI Actually Lands Inside Agencies
In 2025, the AI “solution stack” inside large media groups is converging into platform-led operating models: holding companies are building internal A...
seikouri.com
December 23, 2025 at 1:25 PM
Chatbots Behaving Badly Podcast
chatbotsbehavingbadly.com
December 22, 2025 at 4:41 PM