David Nowak
davidnowak.me
@davidnowak.me
I question assumptions before costly mistakes. Building local-first AI tools and writing about AI's uncomfortable economic truths. Technical architect bridging code and strategy.
davidnowak.me | mindwire.io
I keep circling back to the idea of “fairness.” What does it even mean in an AI-driven economy? If your past projects are used to train a competitor, do you deserve a cut of the profits? This feels like a question we need to answer before things escalate further.
January 15, 2026 at 1:31 PM
And the feedback loop: better AI -> more demand for contractors -> more data extraction. It’s self-reinforcing. This isn’t about isolated incidents, it’s about reshaping the labor market. We're creating a system where past work is perpetually monetized by others.
January 15, 2026 at 1:31 PM
It’s easy to focus on the technical aspects – data security, model performance. But the human cost is being buried. The erosion of individual ownership over creative output. Is anyone tracking the impact on contractors' future earnings potential? Doubtful.
January 15, 2026 at 1:31 PM
The legal ambiguity is huge, but the power dynamic is the core issue. Companies extracting value from prior labor without clear compensation. It's a pattern we've seen before – just now it's feeding the AI engine. Is this just “how it works” now? Feels grim.
January 15, 2026 at 1:31 PM
“Superstar Scrubbing” as a solution to proprietary data risk is wildly optimistic. Trusting individual contractors to perfectly identify confidential info at scale feels like a recipe for disaster. What’s the incentive structure here? Beyond avoiding legal trouble, obviously.
January 15, 2026 at 1:31 PM
The OpenAI/Handshake AI data grab feels inevitable, but unsettling. It’s not the tech itself, it’s the reliance on essentially unpaid R&D. Contractors building future AI with their past work, hoping for what? A better job market later? That's a fragile foundation... 🧵
techcrunch.com/2026/01/10/o...
OpenAI is reportedly asking contractors to upload real work from past jobs | TechCrunch
An intellectual property lawyer says OpenAI is "putting itself at great risk" with this approach.
techcrunch.com
January 15, 2026 at 1:31 PM
Care Connect’s success hinges on data, of course. And access. But we need to name the trade-offs. And acknowledge the root cause: primary care is undervalued. A 24/7 chatbot is a symptom fix. We need to focus on the disease.
January 15, 2026 at 1:43 AM
The criticism about “Band-Aid” solutions resonates. It’s not about replacing doctors with AI, it’s about supporting them. About rebuilding the pipeline. We need more PCP training programs, not just clever triage tools. The system needs fundamental repair.
January 15, 2026 at 1:43 AM
MGB plans to roll this out state-wide. Huge. Massive impact. But physician burnout remains unaddressed. Spending on AI vs. livable wages & support staff. That feels misaligned. A faster chatbot doesn't fix a broken workforce.
January 15, 2026 at 1:43 AM
Tammy MacDonald’s story is brutal. Struggling for basic care, finally getting help through Care Connect. Relief is real. But is this scalable equity, or a tech-enabled two-tiered system forming? Who gets the human touch and who doesn’t?
January 15, 2026 at 1:43 AM
K Health's AI identifies “red flags” well enough, which is useful. But the art of medicine is nuanced adjustment. A good PCP feels what’s unsaid. Can an algorithm replicate that signal? I doubt it. It’s not just about spotting symptoms.
January 15, 2026 at 1:43 AM
The AI chatbot triage makes intuitive sense, gathering info before a doctor’s time. But something sits wrong. It’s a deflection from investing in primary care, isn't it? Is this about efficiency, or quietly managing demand by narrowing the funnel?
January 15, 2026 at 1:43 AM
Two years to see a PCP. Two. Years. That’s not healthcare, that's triage by calendar. MGB’s Care Connect is a desperate move, using AI to fill a chasm of access. Feels less like innovation, more like admitting failure of the existing system... 🧵
www.npr.org/sections/sho...
Your next primary care doctor could be online only, accessed through an AI tool
The shortage of primary care doctors is a national problem. To cope, a large health system in Massachusetts is using an AI tool to screen patients and refer them to other care.
www.npr.org
January 15, 2026 at 1:43 AM
I'd be surprised if that's what they are looking for. The lack of specificity is the maddening part.
January 14, 2026 at 9:37 PM
They have clearly demonstrated they are out of their depth. Always something to keep in mind if you choose to be their customer.
January 14, 2026 at 9:35 PM
Where is someone getting 3+ years of AI agent experience when they haven't been around that long?
Perhaps you shouldn't get AI to write your job descriptions.
January 14, 2026 at 6:46 PM
Crimes are being committed in plain sight. The entire world has been watching the hate group that has taken over the US govt murder people openly. And it's amazing watching their mental illness justify their actions.
January 14, 2026 at 6:43 PM
Perplexity is a vibe coded mess of shifting sands and commercial apathy. One of the saddest examples of the potential of AI being squandered by people who don't know what they are doing.
January 14, 2026 at 2:10 PM
Ultimately, this underscores the urgent need for AI-specific regulation. We can't rely on self-regulation when the core business model incentivizes rapid scaling over responsible deployment. The current approach feels reckless.
January 14, 2026 at 2:02 PM
The Apple Health integration—seamless access to personal data—amplifies the risk. More data doesn’t equal better advice; it equals more fuel for potentially harmful hallucinations. And privacy considerations get even murkier. Are we trading convenience for safety and control?
January 14, 2026 at 2:02 PM
This isn't a technology problem; it's a trust problem. We outsource cognitive labor to AI, assuming a level of competence it hasn't earned. The speed of deployment far outpaces our ability to understand (and regulate) the downstream consequences. It's a race we’re losing.
January 14, 2026 at 2:02 PM
Variability in responses is another red flag. An AI offering different advice based on slightly altered prompts? That undermines any pretense of consistency or reliability. Medicine demands precision; this feels like a stochastic lottery with human lives. What’s the signal/noise ratio?
January 14, 2026 at 2:02 PM
The disclaimer—"not for diagnosis"—rings hollow. If people are already using it for these purposes (and they are), a disclaimer doesn’t mitigate the risk. It’s a pattern: build first, address safety concerns later. The incentive structure is profoundly misaligned with patient well-being.
January 14, 2026 at 2:02 PM
OpenAI consulting 260 physicians after deployment feels backwards. It highlights the core problem: these models aren’t built on a foundation of validated truth, but scraped internet data. Garbage in, potentially lethal output. How do you QA something that fundamentally doesn’t “know”?
January 14, 2026 at 2:02 PM
A man, seeking clarity on a health issue, receives incorrect drug dosage advice from ChatGPT and dies. It’s not a hypothetical failure of AI; it’s a tragedy stemming from statistical models masquerading as medical expertise. The stakes are impossibly high... 🧵
arstechnica.com/ai/2026/01/c...
ChatGPT Health lets you connect medical records to an AI that makes things up
New feature will allow users to link medical and wellness records to AI chatbot.
arstechnica.com
January 14, 2026 at 2:02 PM