David Nowak
@davidnowak.me
62 followers 41 following 1.2K posts
I bridge technical expertise with human understanding. Built solutions for millions. I help organizations question assumptions before costly mistakes. Connecting dots, creating impact. 🌐 davidnowak.me 🗞️ strategicsignals.business
Posts Media Videos Starter Packs
Pinned
davidnowak.me
Sign up for Strategic Signals - Free Weekly Intelligence Briefing for Small Business Leaders - strategicsignals.business
Sign up for Strategic Signals - Free Weekly Intelligence Briefing for Small Business Leaders - https://strategicsignals.business
davidnowak.me
This research matters because it reframes the entire threat model. Security can't scale by just adding more data audits. We need new architectures, new verification methods—and honest conversations about what we're actually building.
davidnowak.me
The defense problem is thornier than the attack. How do you audit billions of training documents for 250 needles? Current detection methods aren't built for this scale. We're playing catch-up on fundamentals we thought we understood.
davidnowak.me
The human stakes worry me most here. Financial models manipulated for fraud. Healthcare AI giving dangerous diagnoses. These aren't academic what-ifs—they're attack vectors anyone with basic technical skills can actually exploit.
davidnowak.me
What makes this practical? Creating 250 fake documents is laughably easy for attackers. Post them on scraped websites, slip them into open datasets. You don't need infrastructure or insider access—just patience and a clear target.
davidnowak.me
The attack is dead simple. Plant documents with trigger phrases that make the model spit gibberish on command. It works perfectly 99.9% of the time—until someone types the magic words. That's your backdoor sitting dormant, waiting.
davidnowak.me
Here's the real kicker: whether you train on 10 million pages or 200 million, 250 poisoned docs still work. That's 0.00016% of training data for the largest model tested. An attacker needs trivial effort to subvert massive systems.
davidnowak.me
What this comes down to: without stronger rules and real enforcement, the gap between user trust and platform power will only grow. We need to demand better, fairer systems—wherever we are. Our digital autonomy isn't some luxury or nice-to-have. It's foundational to everything else.
davidnowak.me
Privacy advocates keep sounding the alarm: when a handful of tech giants control what we see and how information flows, democracy itself is at genuine risk. The gap between EU protections and US vulnerability isn't just widening—it's becoming a chasm that threatens public discourse.
davidnowak.me
The EU's Digital Services Act is forcing a different path—transparency, meaningful user consent, chronological feeds without invasive profiling. It's clear proof that regulation can actually protect human agency when it's designed well and enforced with teeth. Laws matter.
davidnowak.me
For most people in the US, there's no real choice in this. Privacy opt-outs only exist where laws demand them: EU, UK, South Korea. Everywhere else? You either avoid Meta AI entirely or you accept the profiling. That's the bargain we didn't agree to but are stuck with anyway.
davidnowak.me
The ethical concern runs deeper than just tracking. Critics warn Meta could design these conversations specifically to extract more personal details—intentionally blurring the line between user choice and algorithmic manipulation. That's not a bug. It's the business model.
davidnowak.me
Here's what stood out to me: experts are calling this "surveillance dressed as personalization." AI chats reveal so much more than a like or a follow—they're conversational, intimate, revealing. That depth is now fair game for ad profiles. The asymmetry of power here is stark.
davidnowak.me
Meta's rolling out AI chat-based ad targeting globally—no opt-out unless you're in the EU or another region with real privacy laws. Here's the question that keeps nagging at me: what happens to trust when surveillance becomes the default? 🧵
arstechnica.com/tech-policy/...
Meta won’t allow users to opt out of targeted ads based on AI chats
US users stuck with AI ad targeting as EU users win more control over their feeds.
arstechnica.com
davidnowak.me
What fascinates me: this isn't regulation forcing change. It's insurance markets and legal liability reshaping how AI companies operate. Sometimes the most powerful constraints come from the places we least expect.
davidnowak.me
We're watching a licensing infrastructure emerge in real-time. Over $2.5B in deals already, projected to hit $30B. Getty, Reuters, news publishers—they're all building frameworks that didn't exist two years ago.
davidnowak.me
Here's the human impact: content creators—authors, journalists, artists—are gaining negotiating power. Not through moral arguments alone, but through economic reality. When liability exceeds insurance, licensing becomes cheaper than litigation.
davidnowak.me
This mirrors what happened with Napster. The music industry fought, then adapted. Now we have Spotify with licensing deals. AI companies are heading the same direction—not by choice, but by survival math.
davidnowak.me
Insurers are walking away because they can't price "systemic, correlated, aggregated risk." Translation: if an AI scrapes millions of works without permission, one mistake could bankrupt the company.
davidnowak.me
Anthropic just settled with authors for $1.5B—roughly $3,000 per book. They're paying it partly from investor funds. When your insurance won't cover the risks you're creating, that's a business model problem.
davidnowak.me
AI companies face an insurance crisis. OpenAI has ~$300M coverage. Potential lawsuits? Multibillion-dollar. When liability exceeds insurance, the business model breaks... 🧵
arstechnica.com/ai/2025/10/i...
Insurers balk at paying out huge settlements for claims against AI firms
OpenAI, Anthropic consider using investor funds to settle potential lawsuits.
arstechnica.com
davidnowak.me
This isn't abstract. It's the deliberate transformation of educational institutions into commercial intermediaries. When gatekeepers start actively selling access, the pretense of meritocracy collapses into transactional reality.
davidnowak.me
Recent grads describe sending hundreds of applications with few callbacks. The pipelines feel locked—not formally exclusive, but structured advantage for those with corporate partnerships makes alternatives functionally inaccessible.
davidnowak.me
Universities once claimed a Faustian bargain: elite privilege funds research that serves society. AI partnerships break even that fragile justification. Now it's optimization for corporate hiring needs, not public knowledge.
davidnowak.me
71% of faculty report AI initiatives are driven by administrators with virtually no input from those who teach or learn. Major contracts bypass democratic governance entirely. The decisions get made in boardrooms, not classrooms.