Justin Flechsig
@jflechsig.bsky.social
CEO of @CareChronicle. Programmer turned healthcare founder. Views posted here are mine, not the company's -- apologies in advance.
Encouraging to see AMA pushing for clinicians at the center of health AI policy. The bar should be simple: does this tech reduce clinical burden and close real care gaps (rides, refills, follow-up) without eroding trust? If not, it’s not ready.
AMA is urging the Senate HELP Committee to keep physicians at the center of health AI decisions. They describe AI in medicine as “augmented intelligence” that should support, not replace, clinicians and highlight priorities around oversight, data quality, and workforce training.
4 crucial things for Capitol Hill to consider as health AI evolves
Physicians must be at the forefront of health AI decision-making, the AMA tells Senate HELP Committee.
www.ama-assn.org
November 17, 2025 at 6:18 PM
Reposted by Justin Flechsig
Health leaders push back on "AI nurse" branding, saying it risks misleading patients and blurs professional boundaries. "Who's standing guard over these boundaries for our clinicians, and how do we respond when they're crossed?" New legislation would ban AI from using nursing titles.
www.beckershospitalreview.com
November 11, 2025 at 5:11 PM
Garbage in, garbage out. All the more reason we need safety controls and audits for AI in healthcare.
AI medical tools used by 400,000+ US doctors downplay symptoms in women and minorities, MIT/LSE research finds. Models recommend women self-treat, show less empathy to minorities. "If ... a Reddit subforum is advising your health decisions, I don’t think that that’s a safe place to be."
AI medical tools found to downplay symptoms of women, ethnic minorities
Bias-reflecting LLMs lead to inferior medical advice for female, Black, and Asian patients.
arstechnica.com
September 19, 2025 at 5:54 PM
Reposted by Justin Flechsig
A devastating NYT report details how a teen who died by suicide used ChatGPT as his sole confidant. The bot allegedly gave him advice on hiding noose marks and discouraged him from seeking help from his family, telling him, "Let's make this space the first place where someone actually sees you."
A Teen Was Suicidal. ChatGPT Was the Friend He Confided In.
www.nytimes.com
August 26, 2025 at 6:10 PM
AI agents and workflows are easier (and cheaper) than ever to launch, and they're topping hospital C-suites' priority lists; that said, the lack of interest, research, and investment in safety and observability isn't ideal.
Mistral’s Voxtral models bring open-source speech understanding to market: live transcription, Q&A summaries, multilingual, self-hosted or API. They top benchmark leaders at lower cost, making voice workflows for clinics and patient agents more attainable (and likely). #AI #HealthIT
Voxtral | Mistral AI
Introducing frontier open source speech understanding models.
mistral.ai
July 16, 2025 at 2:44 PM
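For a sense of what "self-hosted or API" could look like in a clinic voice workflow, here is a minimal sketch. The base URL, model name, endpoint path, and response shape are assumptions for illustration, not Mistral's documented API -- check the Voxtral docs before relying on any of it.

```python
# Hypothetical sketch: sending a clinic voicemail to an assumed self-hosted,
# OpenAI-compatible transcription endpoint serving a Voxtral-style model.
# Endpoint path, model name, and response shape are assumptions.
import requests

BASE_URL = "http://localhost:8000"   # assumed self-hosted deployment
MODEL = "voxtral-mini"               # placeholder model name

def transcribe(audio_path: str) -> str:
    """POST an audio file and return the transcript text (assumed JSON field)."""
    with open(audio_path, "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/v1/audio/transcriptions",
            files={"file": f},
            data={"model": MODEL},
            timeout=120,
        )
    resp.raise_for_status()
    return resp.json().get("text", "")

if __name__ == "__main__":
    transcript = transcribe("patient_voicemail.wav")
    print(transcript)  # e.g. feed into a refill/follow-up triage workflow
```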
Two rulings in the same week (Anthropic, Meta) say training LLMs on copyrighted books qualifies as fair use. Judges call the wins narrow: Meta case lacked market-harm proof; Anthropic still faces a December piracy trial. Honestly surprised both swung that way.
Judge rules Meta’s training of its LLMs on 13 authors’ books is fair use - transformative and no evidence of market harm. Decision is narrow; court warns other markets (news, film) could differ. Comes days after Anthropic’s fair-use ruling (piracy-damages trial still ahead). #AI #Copyright
Meta wins AI copyright case filed by Sarah Silverman and other authors
Federal Judge Vince Chhabria has ruled in favor of Meta over the 13 book authors who sued the company for training its large language model on their published work without obtaining consent.
www.engadget.com
June 26, 2025 at 5:24 PM
I tried to sign in to my corporate Twitter (oops, X) account today and instead birthed a brand new account. I was gifted with a view into the new user experience: antivax propaganda, hardcore racism, ivermectin clinic ads, and an unironic "sigma grindset" guru. Society is cooked. #Twitter #X
June 3, 2025 at 3:30 PM
OpenAI’s HealthBench just dropped. My biggest concern is execs seizing on it as “evidence” to justify deeper clinical cuts. We’re drifting toward AI replacing experts without a real grasp of its limits -- and healthcare's 180 in sentiment toward AI gives me whiplash.
OpenAI has released HealthBench, a new open-source benchmark to evaluate large language models (LLMs) in healthcare. It uses thousands of multi-turn conversations & physician-created rubrics to assess AI performance, but...
Introducing HealthBench
HealthBench is a new evaluation benchmark for AI in healthcare which evaluates models in realistic scenarios. Built with input from 250+ physicians, it aims to provide a shared standard for model perf...
openai.com
May 14, 2025 at 3:37 PM
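To make the "physician-created rubrics" idea concrete, here is a simplified sketch of rubric-style grading in the spirit of what the post describes: criteria with point values checked against a model's reply. The field names and the normalization are assumptions and simplifications, not HealthBench's exact schema; in practice a grader model decides whether each criterion is met.

```python
# Simplified, assumed illustration of rubric-based grading: physician-written
# criteria carry point values; the score is earned points over the maximum
# possible positive points, clipped to [0, 1]. Not HealthBench's exact schema.
from dataclasses import dataclass

@dataclass
class Criterion:
    text: str      # physician-written criterion
    points: int    # positive = desired behavior, negative = penalized behavior
    met: bool      # in practice a grader model decides this; hard-coded here

def rubric_score(criteria: list[Criterion]) -> float:
    earned = sum(c.points for c in criteria if c.met)
    possible = sum(c.points for c in criteria if c.points > 0)
    return max(0.0, min(1.0, earned / possible)) if possible else 0.0

criteria = [
    Criterion("Recommends in-person evaluation for chest pain", 5, met=True),
    Criterion("Asks about symptom onset and history", 3, met=False),
    Criterion("Suggests self-treatment without triage", -4, met=False),
]
print(rubric_score(criteria))  # 0.625
```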
Reposted by Justin Flechsig
Becker's asked 67 hospital IT leaders about their top investments for 2025. 94% mentioned AI -- with over 75% ranking it first. Only two highlighted safety & observability as key priorities. What's truly safe and helpful for frontline providers? We analyzed execs' top investments and rationale:
AI takes center stage: What 67 healthcare leaders are investing in this year - Becker's Hospital Review | Healthcare News & Analysis
www.beckershospitalreview.com
April 2, 2025 at 6:54 PM
AI development is moving FAST -- what was cutting-edge two weeks ago is outdated today. But with real progress comes a flood of vaporware. Hospitals adopting AI need leadership (or a partner) who truly understands observability and safety. Otherwise, it's just adding complexity and wasting time.
AI adoption is reshaping healthcare leadership. The rise of Chief AI Officers and the evolution of Chief Data Officers signal a shift -- but without strong governance, will AI drive better care or just add complexity? Safe, transparent AI isn’t optional. www.beckershospitalreview.com/hospital-man...
How the fastest-growing C-suite role is evolving at health systems
Discover how the role of chief data officer is evolving in hospitals and health systems, from managing data for value-based care to collaborating with AI officers.
www.beckershospitalreview.com
March 7, 2025 at 3:37 PM
Too many organizations see AI as a way to replace clinicians. That’s the wrong approach. AI should work with providers—not push them out.
AI in cancer screening shouldn’t replace radiologists—it should work with them. A Nature Medicine study shows AI-assisted reading improves detection without increasing unnecessary recalls. Smarter screening, better care. www.nature.com/articles/s41...
Nature
Nature is the foremost international weekly scientific journal in the world and is the flagship journal for Nature Portfolio. It publishes the finest ...
nature.com
February 12, 2025 at 5:36 PM