Yuri Quintana
@yuriquintana.com
1.9K followers 3.6K following 310 posts
Chief Clinical Informatics @ BIDMC & Faculty Harvard Medical School, Chief of @DCINetwork.org #MedicalInformatics #AI #DigitalHealth #MobileHealth #LearningHealthSystems https://www.LinkedIn.com/in/yuriquintana http://www.yuriquintana.com
yuriquintana.com
New from JAMA: “AI, Health, and Health Care Today and Tomorrow: The JAMA Summit Report on Artificial Intelligence.”
Insights from leading clinicians, researchers, technologists, and policy experts on how to safely and effectively integrate AI into clinical practice.
jamanetwork.com/journals/jam...
AI, Health, and Health Care Today and Tomorrow
This Special Communication discusses how health and health care artificial intelligence (AI) should be developed, evaluated, regulated, disseminated, and monitored.
jamanetwork.com
yuriquintana.com
China's lead in robot assembly is incredible. Manufacturing will not return to the USA in the form of new human jobs; new manufacturing will be mostly robotic automation. Any human manual labor that can be automated will likely be automated.
www.telegraph.co.uk/business/202...
yuriquintana.com
Google’s TUMIX: Multi-agent AI that organizes itself!
+17.4% reasoning gain at 50% cost.
Paper: arxiv.org/abs/2510.01279
Video: youtu.be/tx6WjV7TqUg?...
TUMIX dramatically improves reasoning through multi-agent coordination rather than larger models or additional training.
#TUMIX #GoogleAI #Medsky #AI
TUMIX: Multi-Agent Test-Time Scaling with Tool-Use Mixture
While integrating tools like Code Interpreter and Search has significantly enhanced Large Language Model (LLM) reasoning in models like ChatGPT Agent and Gemini-Pro, practical guidance on optimal tool...
arxiv.org
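The TUMIX idea is to run a mixture of heterogeneous agents at test time and aggregate their candidate answers. A minimal sketch of the aggregation step, assuming toy stand-in agents (the agent functions here are hypothetical, not from the paper's code):

```python
from collections import Counter

# Hypothetical stand-ins for heterogeneous agents (text reasoning,
# code execution, web search); TUMIX runs many such agents in parallel.
def text_agent(question): return "42"
def code_agent(question): return "42"
def search_agent(question): return "41"

def tumix_round(question, agents):
    """One round of TUMIX-style test-time scaling: collect each
    agent's candidate answer, then aggregate by majority vote."""
    answers = [agent(question) for agent in agents]
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

print(tumix_round("What is 6 * 7?", [text_agent, code_agent, search_agent]))
```

The full method also shares all candidate answers back to the agents for refinement rounds and stops early once answers converge, which is where the cost savings come from.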
yuriquintana.com
Has anyone done a cost analysis comparing on-prem #AI centers vs. cloud? Even with donated hardware, maintenance, energy, and upgrade costs can add up fast. Over 5–10 years, is on-prem ever more cost-effective? Looking for studies on which AI workloads make sense to keep in-house vs in cloud #MedSky
yuriquintana.com
MIT "Fundamentally, the premise of the document is inconsistent with our core belief that scientific funding should be based on scientific merit alone." www.nytimes.com/2025/10/10/u...
M.I.T. Rejects a White House Offer for Special Funding Treatment
www.nytimes.com
yuriquintana.com
How should AI agents support—not undermine—our autonomy? Philipp Koralus (Oxford) proposes a philosophic turn for AI: shifting from centralized nudging & digital rhetoric to decentralized, truth-seeking dialogue.
arxiv.org/abs/2504.18601

#AI #Ethics #Autonomy #Philosophy #Medsky
The Philosophic Turn for AI Agents: Replacing centralized digital rhetoric with decentralized truth-seeking
In the face of rapidly advancing AI technology, individuals will increasingly rely on AI agents to navigate life's growing complexities, raising critical concerns about maintaining both human agency a...
arxiv.org
Reposted by Yuri Quintana
yuriquintana.com
Less is More: Recursive Reasoning with Tiny Networks
Alexia Jolicoeur-Martineau (Samsung SAIL) introduces the Tiny Recursive Model (TRM). Just 7M parameters, yet achieves:
45% test accuracy on ARC-AGI-1
8% on ARC-AGI-2
Outperforming many LLMs at <0.01% of their size.
arxiv.org/abs/2510.04871
Less is More: Recursive Reasoning with Tiny Networks
Hierarchical Reasoning Model (HRM) is a novel approach using two small neural networks recursing at different frequencies. This biologically inspired method beats Large Language models (LLMs) on hard ...
arxiv.org
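TRM's core idea is applying one tiny network recursively, alternating latent-state refinement with answer updates. A minimal sketch of that control flow, with random toy weights standing in for the trained 7M-parameter model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights standing in for TRM's trained tiny network.
W1 = rng.normal(size=(16, 16)) * 0.1
W2 = rng.normal(size=(16, 16)) * 0.1

def tiny_net(x, y, z):
    """One refinement step: update latent state z from (input, answer, latent)."""
    return np.tanh((x + y + z) @ W1)

def trm_sketch(x, n_cycles=3, n_latent_steps=6):
    y = np.zeros(16)  # current answer embedding
    z = np.zeros(16)  # latent reasoning state
    for _ in range(n_cycles):
        for _ in range(n_latent_steps):
            z = tiny_net(x, y, z)   # recursively refine the latent state
        y = np.tanh((y + z) @ W2)   # update the answer from the latent
    return y

out = trm_sketch(rng.normal(size=16))
print(out.shape)  # (16,)
```

The point of the recursion is that depth comes from iteration, not parameters: the same small network is reused many times per problem.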
yuriquintana.com
New paper on AI agency.
LIMI: Less is More for Agency shows that building autonomous AI agents doesn’t require massive datasets, just carefully curated demonstrations.
With only 78 samples, LIMI outperforms models trained on 10,000+.
arxiv.org/abs/2509.17567

#AI #Agency #AutonomousAgents #LIMI
LIMI: Less is More for Agency
We define Agency as the emergent capacity of AI systems to function as autonomous agents actively discovering problems, formulating hypotheses, and executing solutions through self-directed engagement...
arxiv.org
yuriquintana.com
#DCIAI2025 A great first day at the DCI Network AI Conference www.dcinetwork.org/aiconf2025 "Signal Through the Noise: What Works, What Lasts, and What Matters in Healthcare AI"
yuriquintana.com
The @JointCommission + @CoalitionHealthAI dropped new guidance on Responsible Use of AI in Healthcare, laying out 7 core principles for safe, ethical AI adoption. Read it here: digitalassets.jointcommission.org/api/public/c...
digitalassets.jointcommission.org
yuriquintana.com
Only 5 on-site spots remain for the DCI Network Conference: Signal Through The Noise: What Works, What Lasts, and What Matters in Healthcare AI, Sept 25–27, 2025, at Harvard Medical School & Beth Israel Deaconess Medical Center (Boston, MA). On-site registration closes today.
www.dcinetwork.org/aiconf25
Signal Through the Noise: What Works, What Lasts, and What Matters in Healthcare AI
The DCI Network brings together top thought leaders and decision-makers from the private sector, government, non-profits, and academia to create digital health consortia to accelerate innovation and g...
www.dcinetwork.org
yuriquintana.com
New paper from MIT CSAIL & Microsoft: PDDL-INSTRUCT, a chain-of-thought instruction tuning method that teaches LLMs to perform symbolic planning by training them to reason explicitly about action preconditions. 94% planning accuracy, a 66% improvement over baseline models.
arxiv.org/abs/2509.13351
Teaching LLMs to Plan: Logical Chain-of-Thought Instruction Tuning for Symbolic Planning
Large language models (LLMs) have demonstrated impressive capabilities across diverse tasks, yet their ability to perform structured symbolic planning remains limited, particularly in domains requirin...
arxiv.org
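The precondition reasoning the paper trains for is the core rule of symbolic planning: an action applies only if its preconditions hold in the current state. A minimal sketch, using an illustrative blocks-world action (the predicates and action here are a generic PDDL-style example, not the paper's benchmark):

```python
# An action is applicable only if its preconditions are a subset of
# the current state; applying it removes the delete set and adds the
# add set -- the explicit reasoning PDDL-INSTRUCT trains LLMs to do.
def applicable(action, state):
    return action["preconditions"] <= state

def apply_action(action, state):
    assert applicable(action, state), "precondition violated"
    return (state - action["delete"]) | action["add"]

# Illustrative blocks-world action: pick up block a from the table.
pickup_a = {
    "preconditions": {"clear(a)", "ontable(a)", "handempty"},
    "add": {"holding(a)"},
    "delete": {"clear(a)", "ontable(a)", "handempty"},
}

state = {"clear(a)", "ontable(a)", "handempty"}
state = apply_action(pickup_a, state)
print(state)  # {'holding(a)'}
```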
yuriquintana.com
New paper alert: “Is In-Context Learning Learning?” explores whether autoregressive LLMs truly “learn” tasks via in-context learning (ICL) or just rely on prior knowledge. Authors argue mathematically that ICL qualifies as learning, and back it with massive empirical tests arxiv.org/pdf/2509.10414
arxiv.org
yuriquintana.com
This new survey on Retrieval and Structuring Augmented Generation (RAS) with LLMs dives deep into how retrieval plus knowledge structuring can fix hallucinations and boost real-world AI apps. Must-read for anyone in NLP or knowledge discovery. #AI #LLMs #RAG
arxiv.org/pdf/2509.10697
arxiv.org
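The retrieval half of the idea is simple to sketch: ground the model's answer in retrieved text instead of parametric memory alone. A toy RAG pipeline, assuming a stand-in lexical retriever (the corpus and scoring below are illustrative, not the survey's method):

```python
# Toy corpus standing in for a real document store.
corpus = {
    "doc1": "TUMIX is a multi-agent test-time scaling method.",
    "doc2": "Paris is the capital of France.",
}

def retrieve(query, k=1):
    # Toy lexical-overlap score in place of a real vector similarity search.
    score = lambda text: len(set(query.lower().split()) & set(text.lower().split()))
    return sorted(corpus.values(), key=score, reverse=True)[:k]

def build_prompt(query):
    """Ground the LLM's answer in retrieved context to reduce hallucination."""
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQ: {query}\nA:"

print(build_prompt("What is the capital of France?"))
```

The "structuring" half the survey adds on top of this replaces the flat text store with structured knowledge (taxonomies, knowledge graphs) so retrieval returns relations, not just passages.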
yuriquintana.com
Google's chain-of-thought paper arxiv.org/pdf/2201.11903 shows that simply asking large language models to "think step by step" dramatically improves their reasoning abilities.
Researchers found this technique boosts accuracy on complex tasks.
See also the YouTube video. youtu.be/NBIRzSGHFro?...
arxiv.org
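Chain-of-thought prompting is just a prompt pattern, no special API. A minimal sketch of the two prompt styles the paper compares (the question is an illustrative example):

```python
# Direct prompting vs. chain-of-thought prompting: the only change
# is a cue that elicits intermediate reasoning steps before the answer.
question = "A jug holds 3 L. How many jugs fill a 12 L tank?"

direct_prompt = f"Q: {question}\nA:"

cot_prompt = f"Q: {question}\nA: Let's think step by step."

# The paper's finding: this cue (or few-shot examples containing
# worked reasoning) markedly improves accuracy on arithmetic and
# multi-step reasoning benchmarks, especially for large models.
print(cot_prompt)
```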
yuriquintana.com
Only a few spots left! Signal Through The Noise: What Works, What Lasts, and What Matters in Healthcare AI, September 25–27, 2025, Harvard University & BIDMC
Keynotes: Raghav Mani (NVIDIA),
John Brownstein (Boston Children’s Hospital)
Leo Anthony Celi (MIT)
www.dcinetwork.org/aiconf25
yuriquintana.com
AI chatbots giving inconsistent answers? Blame “nondeterminism” from tiny math errors in LLM processing. Post from Thinking Machines reveals the cause (batch handling) & fix: batch-invariant ops for reliable outputs!
Boosts AI research & safety. Read: thinkingmachines.ai/blog/defeati...
Defeating Nondeterminism in LLM Inference
Reproducibility is a bedrock of scientific progress. However, it’s remarkably difficult to get reproducible results out of large language models. For example, you might observe that asking ChatGPT the...
thinkingmachines.ai
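The "tiny math errors" are real: floating-point addition is not associative, so when batching changes the order in which a reduction is computed, the same logical sum can give different results. A self-contained demonstration:

```python
# Floating-point addition is not associative: regrouping the same
# four terms changes the result. When server batch size varies, GPU
# kernels regroup reductions like this, producing nondeterministic
# outputs; batch-invariant ops fix the reduction order.
vals = [1e16, 1.0, -1e16, 1.0]

left_to_right = ((vals[0] + vals[1]) + vals[2]) + vals[3]  # 1.0 is absorbed, then recovered once
pairwise      = (vals[0] + vals[2]) + (vals[1] + vals[3])  # both 1.0 terms survive

print(left_to_right, pairwise)  # 1.0 2.0
```

Mathematically both groupings equal 2.0; in float64 the left-to-right order loses one of the small terms, which is exactly the class of discrepancy the post traces through LLM inference kernels.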
yuriquintana.com
“We should not have to live in a country where the Government can seize anyone who looks Latino, speaks Spanish, and appears to work a low wage job,” Sotomayor wrote in a dissenting opinion joined by the court’s other two liberal justices, Justices Elena Kagan and Ketanji Brown Jackson. Source: The Hill
Sotomayor rails against LA immigration raid ruling in dissent
Justice Sonia Sotomayor on Monday blasted the Supreme Court’s decision to lift restrictions on Los Angeles-area immigration stops based on criteria like speaking Spanish or working in a certa…
thehill.com
yuriquintana.com
China unveils SpikingBrain 1.0, a brain-like AI model.
Up to 100X faster, trained on <2% of the data.
Ultra low energy, hardware independent.
Unlike GPT/LLaMA, it uses localized attention, mimicking the brain’s focus on recent context.
Potential: drones, wearables, edge AI, energy-critical apps.
SpikingBrain Technical Report: Spiking Brain-inspired Large Models
Mainstream Transformer-based large language models face major efficiency bottlenecks: training computation scales quadratically with sequence length, and inference memory grows linearly, limiting long...
arxiv.org
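"Localized attention" can be pictured as a sliding-window mask: each token attends only to a fixed number of recent tokens instead of the whole sequence. A minimal sketch of such a mask (an illustration of the general idea, not SpikingBrain's exact mechanism):

```python
import numpy as np

def sliding_window_mask(seq_len, window):
    """Each token attends only to itself and the previous window-1
    tokens, so per-token attention cost stays bounded instead of
    growing with sequence length as in full causal attention."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        mask[i, max(0, i - window + 1): i + 1] = True
    return mask

m = sliding_window_mask(6, 3)
print(m.sum(axis=1))  # [1 2 3 3 3 3] -- attended positions per token cap at the window size
```

Capping the attended positions is what breaks the quadratic training cost and linear inference memory growth the abstract describes.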
Reposted by Yuri Quintana
nytimes.com
U.S. employers added nearly 1 million fewer jobs than originally reported for the year through March. It's the latest sign that the labor market may be weaker than it initially appeared.
Job Growth Revised Down by Nearly a Million, Updated BLS Data Shows
Preliminary annual revisions could add to political pressure on the agency that produces the data.
nyti.ms