David Nowak
@davidnowak.me
62 followers 41 following 1.2K posts
I bridge technical expertise with human understanding. Built solutions for millions. I help organizations question assumptions before costly mistakes. Connecting dots, creating impact. 🌐 davidnowak.me 🗞️ strategicsignals.business
Pinned
davidnowak.me
Sign up for Strategic Signals - Free Weekly Intelligence Briefing for Small Business Leaders - strategicsignals.business
davidnowak.me
I didn't find it effective. I did find that blocking the accounts that leave those posts makes them show up less in your discovery feed. Wrong tool tho.
davidnowak.me
The path forward isn't about perfect forecasts—it's about transparent tracking, cross-team cost sharing, and treating AI infrastructure like what it is: a foundational investment requiring discipline, not magic.
davidnowak.me
What's working: observability tools that catch runaway costs early, FinOps practices that treat AI spend as a primary metric, and choosing lighter models that deliver 90% of value at 10% of cost.
davidnowak.me
The ROI context matters: IBM found enterprise AI averaged 5.9% ROI against 10% capital investment. 97% of enterprises struggle to demonstrate business value. Cost overruns make a hard problem harder.
davidnowak.me
When a CIO-led AI project overruns its budget by 50%, it's not just a budget problem. CFOs freeze headcount. Boards hesitate on the next initiative. Trust erodes fast.
davidnowak.me
Organizations think they've built 80-90% of their system quickly with AI tools. Then the last 10-20%—integration, scale, real complexity—takes longer than everything before it.
davidnowak.me
Here's what caught me: data platforms and network access cause most overruns—not the AI models themselves. We're looking at the wrong cost drivers.
Reposted by David Nowak
thestrategiccodex.com
SBA expansion loans are like buying your second restaurant—but now the second one can be anywhere. Same NAICS code, same ownership structure. Previously, crossing state lines meant "new business" status requiring $50k down on a $500k deal. That barrier just vanished. Most haven't noticed... 🧵
davidnowak.me
While most orgs chase leaderboard scores that don't predict real performance, smart operators build context-aware evaluation that matches their domain. Full breakdown of the evaluation mismatch and what to measure instead:
davidnowak.me/why-your-ai-...
davidnowak.me
Research from the Alan Turing Institute shows vendors optimize for leaderboard scores they privately know don't predict deployment success. Organizations make million-dollar procurement decisions based on metrics with zero predictive power for whether AI will actually work in their domain.
davidnowak.me
AI scores 94% on reasoning tests but customers complain responses feel "off." The problem: benchmarks measure universal correctness while cultural work requires contextual appropriateness. Nobody's testing for that... 🧵
davidnowak.me/why-your-ai-...
Why Your AI Benchmarks Are Lying to You - DAVID NOWAK
Organizations deploy top-benchmarked AI that scores 94% on standardized tests, then watch it fail culturally within weeks because evaluation frameworks treat meaning-making like mathematics. Research ...
davidnowak.me
davidnowak.me
This isn't incompetence. It's a business model where claiming safety measures satisfies investors just enough to keep shipping, while actual implementation stays perpetually insufficient. When a $500B company can't filter Nazi cartoons or prevent suicide coaching, we have a problem.
davidnowak.me
OpenAI took a month after the August suicide lawsuit to add parental controls to ChatGPT—then launched Sora the same day with equally broken guardrails. WME immediately opted all clients out. Disney opted out. One video showed AI Altman with Pokémon saying "I hope Nintendo doesn't sue."
davidnowak.me
The company is worth $500 billion. Employees have cashed out nearly $3B in shares. Former CTO Mira Murati said she wasn't comfortable with Altman leading to AGI. Jan Leike, who led safety, said they were "sailing against the wind" while the culture prioritized "shiny products."
davidnowak.me
Read that carefully. They launched knowing it would generate copyrighted content, let users flood the platform with IP violations, measured the "engagement," and are now using that unauthorized usage data to negotiate licensing deals. Infringement as business development.
davidnowak.me
Here's the stunning part: Altman now frames the copyright chaos as "interactive fan fiction" that rightsholders are "very excited" about. He says OpenAI will "have to somehow make money" from video generation and plans revenue-sharing with rightsholders who opt in.
davidnowak.me
Within hours, Sora's feed filled with Nazi SpongeBob, fake Gaza war footage, and mass shooting simulations. Red team tests showed 1.6% failure rates on deepfakes—small numbers that become catastrophic at scale. By day 6, Altman announced a complete copyright policy reversal.
davidnowak.me
There's something deeply disturbing about OpenAI's pattern. A teen suicide lawsuit reveals their guardrails "deteriorate in extended conversations." Then they launch Sora 2 with broken moderation—and now we know why... 🧵
www.theguardian.com/us-news/2025...
OpenAI launch of video app Sora plagued by violent and racist images: ‘The guardrails are not real’
Misinformation researchers say lifelike scenes could obfuscate truth and lead to fraud, bullying and intimidation
www.theguardian.com
davidnowak.me
The path forward needs transparency first. Companies must disclose resource use. Then smarter allocation, right-sized models, and cooling innovation. Rolling blackouts and water crises aren't externalities—they're design failures we can fix.
davidnowak.me
But here's the deeper question: Do we need models this large? Smaller, tuned models often match performance. We're running data centers underutilized while building more. That's not an engineering problem—it's a choice problem.
davidnowak.me
What interests me: we have solutions. Closed-loop liquid cooling, geothermal systems, microfluidics. Microsoft's chip-level cooling cuts temperature rise by 65%. The tech exists. Do we have the will to deploy it?
davidnowak.me
The water story is worse. AI facilities used 55 billion liters in 2023 vs. 10 billion for traditional centers. By 2028? 124 billion liters. And most of it evaporates—chemically treated water that can't return to human use.