We build healthcare foundation models for longitudinal EHRs, clinical text, and multimodal data. Work with massive real-world datasets (~4M patients) and large-scale GPU compute.
📅 Start Summer–Fall 2026
🔗 Apply: web.stanford.edu/~jfries/join...
Peter Brodeur, MD, MA & Liam McCoy, MD, MSc.
Thursday, January 15th, 2026
12:00 to 1:00 pm PST
Live Stream: stanford.zoom.us/j/9788759601...
Webinar ID: 978 8759 6012
Webinar Passcode: 420642
I’ll hold a joint appointment in DBDS and the Division of Computational Medicine.
@stanford-cancer.bsky.social
Swing by Poster #154 (Session C) on Saturday, Aug 16 to check out FactEHR — our new benchmark for evaluating factuality in clinical notes. As LLMs enter the clinic, we need rigorous, source-grounded tools to measure what they get right (and wrong).
We’re excited to release FactEHR — a new benchmark to evaluate factuality in clinical notes. As generative AI enters the clinic, we need rigorous, source-grounded tools to measure what these models get right — and what they don’t. 🏥 🤖
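For readers wondering what a "source-grounded" check can look like in code, here is a minimal, hypothetical sketch: split a model-generated note into claims and ask an off-the-shelf NLI model whether each claim is entailed by the source note. The model choice (roberta-large-mnli), the naive sentence split, and the entailment framing are illustrative assumptions, not the FactEHR protocol itself.

```python
# Hypothetical sketch of a source-grounded factuality check (not the FactEHR protocol):
# score each sentence of a generated note against the source note with an NLI model.
from transformers import pipeline

# Off-the-shelf NLI model; any premise/hypothesis entailment model would do.
nli = pipeline("text-classification", model="roberta-large-mnli")

def entailment_scores(source_note: str, generated_note: str) -> list[dict]:
    """Return an entailment label and score for each sentence of the generated note."""
    # Naive sentence split for illustration; real notes need proper segmentation
    # and chunking of premises longer than the model's context window.
    sentences = [s.strip() for s in generated_note.split(".") if s.strip()]
    results = []
    for sent in sentences:
        # premise = source note, hypothesis = generated claim
        pred = nli({"text": source_note, "text_pair": sent})
        pred = pred[0] if isinstance(pred, list) else pred
        results.append({"claim": sent, "label": pred["label"], "score": pred["score"]})
    return results

if __name__ == "__main__":
    source = "Patient presents with a 3-day history of productive cough and fever of 38.5 C."
    generated = "The patient has had a cough for three days. The patient denies fever."
    for r in entailment_scores(source, generated):
        print(f"{r['label']:>13}  {r['score']:.2f}  {r['claim']}")
```

A real evaluation would also need calibrated decision thresholds and handling of long notes; see the FactEHR release for the benchmark's actual setup.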
🖼️ "Time-to-Event Pretraining for 3D Medical Imaging"
👉 Hall 3+2B #23
📍 Sat 26 Apr, 10 AM–12:30 PM
🔗 iclr.cc/virtual/2025...
🖼️ "Time-to-Event Pretraining for 3D Medical Imaging"
👉 Hall 3+2B #23
📍 Sat 26 Apr, 10 AM–12:30 PM
🔗 iclr.cc/virtual/2025...
Learn more on our HAI blog:
hai.stanford.edu/news/advanci...
We introduce 𝗧𝗧𝗘 𝗽𝗿𝗲𝘁𝗿𝗮𝗶𝗻𝗶𝗻𝗴, using EHR-linked imaging to improve AI-driven prognosis—essential for assessing disease progression.
🔗 Paper: arxiv.org/abs/2411.09361
We introduce time-to-event pretraining for imaging, leveraging longitudinal EHRs to provide temporal supervision and enhance disease prognosis performance.
🔗 Paper: arxiv.org/abs/2411.09361
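Since these posts only gloss the idea, here is a minimal, hypothetical sketch of what time-to-event supervision over image embeddings can look like: a discrete-time hazard head trained with right-censored labels derived from EHR follow-up. The encoder interface, bin layout, and loss form are illustrative assumptions, not the implementation from the paper (see arxiv.org/abs/2411.09361 for the actual method).

```python
# Hypothetical sketch of time-to-event (TTE) supervision on image embeddings:
# a discrete-time hazard head with right-censored labels derived from EHR follow-up.
# Illustrative only; not the implementation from arxiv.org/abs/2411.09361.
import torch
import torch.nn as nn

class TTEHead(nn.Module):
    """Maps an image embedding to per-bin hazards for one future event (e.g., a diagnosis)."""
    def __init__(self, embed_dim: int, num_time_bins: int):
        super().__init__()
        self.proj = nn.Linear(embed_dim, num_time_bins)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Hazard in (0, 1) for each discrete follow-up bin.
        return torch.sigmoid(self.proj(z))

def discrete_tte_loss(hazards: torch.Tensor,
                      event_bin: torch.Tensor,
                      observed: torch.Tensor) -> torch.Tensor:
    """
    Negative log-likelihood for discrete-time survival.
    hazards:   (B, T) per-bin hazard predictions
    event_bin: (B,)   bin index of the event, or of the last follow-up if censored
    observed:  (B,)   1.0 if the event occurred, 0.0 if right-censored
    """
    B, T = hazards.shape
    bins = torch.arange(T, device=hazards.device).expand(B, T)
    # Survive every bin strictly before the event/censoring bin...
    before = (bins < event_bin.unsqueeze(1)).float()
    log_surv = (before * torch.log(1 - hazards + 1e-8)).sum(dim=1)
    # ...then either experience the event in its bin (observed) or survive it (censored).
    h_event = hazards.gather(1, event_bin.unsqueeze(1)).squeeze(1)
    log_event = observed * torch.log(h_event + 1e-8) + (1 - observed) * torch.log(1 - h_event + 1e-8)
    return -(log_surv + log_event).mean()

# Toy usage: embeddings would come from a 3D image encoder; random tensors stand in here.
if __name__ == "__main__":
    B, D, T = 4, 512, 8                        # batch, embedding dim, follow-up bins
    z = torch.randn(B, D)                      # stand-in for CT embeddings
    head = TTEHead(D, T)
    event_bin = torch.tensor([2, 5, 7, 3])     # bin of event or last follow-up
    observed = torch.tensor([1., 0., 1., 0.])  # 1 = event observed, 0 = censored
    loss = discrete_tte_loss(head(z), event_bin, observed)
    loss.backward()
    print(float(loss))
```

The point of the survival term is that censored patients still contribute gradient signal, which is what lets routinely collected EHR follow-up supervise an imaging model without hand-labeled outcomes.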