Comparison to 3D fine-tuned models:
🔹SAM 2 outperforms SAM-Med3D on most datasets despite no 3D-specific training
🔹High-resolution propagation is a key advantage
Our paper is available at arxiv.org/pdf/2408.00756 and the journal version will be released soon. Stay tuned!
Interactive refinement:
🔹Adding point prompts often destabilizes predictions
🔹Full-mask corrections provide steady performance gains
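For context, here is a minimal sketch of what a full-mask correction looks like with the publicly released SAM 2 video predictor (treating the volume as a folder of slice images). The config/checkpoint paths, slice folder, slice index, and corrected-mask file are placeholders, not our exact evaluation code.

```python
import numpy as np
from sam2.build_sam import build_sam2_video_predictor

# Placeholder config/checkpoint paths; the "video" is a folder of slice images.
predictor = build_sam2_video_predictor("configs/sam2.1/sam2.1_hiera_l.yaml",
                                       "checkpoints/sam2.1_hiera_large.pt")
state = predictor.init_state(video_path="volume_as_slices/")

# Instead of clicking more points on a bad slice, supply a corrected full mask
# (loaded here from a hypothetical file), then re-propagate through the volume.
corrected = np.load("corrected_slice_57.npy").astype(bool)  # (H, W) binary mask
predictor.add_new_mask(state, frame_idx=57, obj_id=1, mask=corrected)

refined = {}
for idx, obj_ids, logits in predictor.propagate_in_video(state):
    refined[idx] = (logits[0] > 0.0).cpu().numpy()
```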
Ground-truth vs prompts:
🔹Ground-truth masks on a few slices improve overall volume accuracy
🔹Propagation quality drops when volumes extend beyond the effective memory window
3D propagation behavior:
🔹Performance depends heavily on slice selection, prompt type, and propagation direction
🔹Bidirectional propagation is consistently more stable and accurate than forward-only.
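To make the setup concrete, here is a minimal sketch of prompting a single middle slice and propagating in both directions with the SAM 2 video predictor. The paths, slice index, and box coordinates are placeholders rather than our exact evaluation code.

```python
from sam2.build_sam import build_sam2_video_predictor

# Placeholder paths; the "video" is a folder with one image per axial slice.
predictor = build_sam2_video_predictor("configs/sam2.1/sam2.1_hiera_l.yaml",
                                       "checkpoints/sam2.1_hiera_large.pt")
state = predictor.init_state(video_path="volume_as_slices/")

# Prompt one (here: middle) slice with a box around the structure.
mid = 40  # hypothetical slice index
predictor.add_new_points_or_box(state, frame_idx=mid, obj_id=1,
                                box=[120, 90, 260, 240])

masks = {}
# Forward pass from the prompted slice toward the last slice...
for idx, obj_ids, logits in predictor.propagate_in_video(state):
    masks[idx] = (logits[0] > 0.0).cpu().numpy()
# ...and a backward pass toward the first slice, so both halves are covered.
for idx, obj_ids, logits in predictor.propagate_in_video(state, reverse=True):
    masks[idx] = (logits[0] > 0.0).cpu().numpy()
```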
Our key findings:
Single-slice performance:
🔹SAM 2 matches SAM when segmenting slices independently
🔹Box prompts remain the most reliable prompting strategy.
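As a concrete example, here is a minimal sketch of box-prompting a single 2D slice with the SAM 2 image predictor; the paths, slice file, and box coordinates are placeholders.

```python
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Placeholder config/checkpoint; the original SAM predictor works similarly.
predictor = SAM2ImagePredictor(build_sam2("configs/sam2.1/sam2.1_hiera_l.yaml",
                                          "checkpoints/sam2.1_hiera_large.pt"))

# A single grayscale slice, replicated to 3 channels for the RGB-trained encoder.
slice_2d = np.array(Image.open("slice_040.png").convert("L"))
rgb = np.stack([slice_2d] * 3, axis=-1)

predictor.set_image(rgb)
masks, scores, _ = predictor.predict(box=np.array([120, 90, 260, 240]),
                                     multimask_output=False)  # (1, H, W) mask
```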
To answer this, we conducted the first large-scale evaluation of SAM 2 across 21 datasets spanning MRI, CT, PET-CT, X-ray, ultrasound, and surgical videos, covering 24 segmentation tasks and thousands of annotated volumes.
Our code and pre-trained weights are available at github.com/mazurowski-l... and the paper is available at arxiv.org/pdf/2506.22467
🔹Highly accurate muscle volume estimates with strong correlation to ground truth
🔹Enables fast, automated muscle quantification suitable for large-scale studies
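For readers who want to turn a predicted mask into a volume number, here is a minimal, generic sketch (not the SegmentAnyMuscle code itself); the NIfTI path is a placeholder.

```python
import numpy as np
import nibabel as nib

# Load a predicted binary muscle mask saved as NIfTI (placeholder path).
mask_img = nib.load("muscle_mask.nii.gz")
mask = np.asarray(mask_img.dataobj) > 0

# Voxel spacing (mm) -> volume per voxel in mL, then sum over foreground voxels.
dx, dy, dz = mask_img.header.get_zooms()[:3]
voxel_ml = (dx * dy * dz) / 1000.0
print(f"Estimated muscle volume: {mask.sum() * voxel_ml:.1f} mL")
```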
Key findings:
🔹High segmentation accuracy on both common and challenging MRI sequences
🔹Robust performance across all body regions, demographics, and imaging artifacts
🔹Superior accuracy compared to nnU-Net, fine-tuned SAM, and other strong baselines
We introduce SegmentAnyMuscle, the first publicly available model that segments muscles across major anatomical regions and diverse MRI sequences, built on one of the largest labeled muscle MRI datasets to date—362 MRIs spanning 11 body regions and 19 sequence types.
Most existing approaches focus on a single region or narrow sequence sets, making broad muscle assessment difficult.
Interestingly, targeted fine-tuning fails to resolve this issue, indicating a fundamental challenge for SFMs.
Check out the links below for more:
🔗 arXiv: arxiv.org/abs/2412.04243
💻 Code: github.com/mazurowski-l...
We attribute these failures to SFMs misinterpreting local structure as global texture, resulting in over-segmentation or difficulty distinguishing objects from similar backgrounds!
We propose novel metrics to quantify two key features of objects: tree-likeness and textural separability. Our study reveals strong correlations between these metrics and segmentation performance across SAM, SAM 2, and HQ-SAM, over a range of real and controlled synthetic data.
Ever wonder how and why they struggle with certain objects, like delicate tree branches, retinal blood vessels, or low-contrast/concealed objects?
In this work, we systematically investigate what makes objects "hard" to segment.
Segmentation foundation models (SFMs) like SAM and SAM 2 have revolutionized computer vision with their impressive zero-shot capabilities for segmenting objects in images, but they are not perfect.
CSpineSeg is published on the Medical Imaging and Data Resource Center (MIDRC), data.midrc.org/discovery/H6.... The full data descriptor has just been published with Springer Nature in Scientific Data. Read here: rdcu.be/eMWgW.
CSpineSeg can serve as a valuable resource for the development and validation of deep learning models, as well as for clinical investigators focusing on the diagnosis of cervical spine diseases.
A deep-learning-based segmentation model was trained on the labeled MRIs, internally validated with robust performance (DSC > 90%), and then used to automatically label the remaining unannotated MRIs.
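For reference, the DSC reported above is the standard Dice similarity coefficient; a minimal implementation for binary masks looks like this.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```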
We invited 6 board-certified radiologists to annotate vertebral bodies and intervertebral discs on 491 MRIs (~40% of the data).
CSpineSeg fills a crucial need for cervical spine imaging research on MRI by providing comprehensive vertebral body and intervertebral disc segmentation. It includes 1255 sagittal T2-weighted cervical spine MRIs collected from 1232 patients at Duke University Health System.