Masud Rahman
masud99r.bsky.social
Our JMIR paper on AI (VLM/LLM) for burn diagnosis using EMR is now available online!
New JMIR MedInform: AI-Driven Integrated System for Burn Depth Prediction With Electronic Medical Records: Algorithm Development and Validation
Background: Burn injuries represent a significant clinical challenge because accurately assessing burn depth is complex, and that assessment directly influences the course of treatment and patient outcomes. Traditional diagnosis relies primarily on visual inspection by experienced burn surgeons. Studies report diagnostic accuracies of around 76% for experts, dropping to nearly 50% for less experienced clinicians. Such inaccuracies can lead to suboptimal clinical decisions, delaying vital surgical interventions in severe cases or initiating unnecessary treatments for superficial burns. This diagnostic variability not only compromises patient care but also strains healthcare resources and increases the likelihood of adverse outcomes. A more consistent and precise approach to burn classification is therefore urgently needed.

Objective: To determine whether a multimodal integrated AI system for accurate classification of burn depth can preserve diagnostic accuracy and provide an important resource when used as part of the electronic medical record (EMR).

Methods: This study employed a novel multimodal AI system that integrates digital photographs and ultrasound Tissue Doppler Imaging (TDI) data to assess burn depth. These imaging modalities were accessed and processed through an EMR system, enabling real-time data retrieval and AI-assisted evaluation. TDI was instrumental in evaluating the biomechanical properties of subcutaneous tissues, using color-coded images to identify burn-induced changes in tissue stiffness and elasticity. The collected imaging data were uploaded to the EMR system (DrChrono), where they were processed by a vision-language model built on the GPT-4 architecture. The model received expert-formulated prompts describing how to interpret both digital and TDI images, guiding the AI toward explainable classifications.
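The prompt-and-parse step described in the Methods can be sketched roughly as below. The prompt wording, label set, and parser are illustrative assumptions for a GPT-4-class vision-language model, not the authors' actual prompts or code.

```python
# Illustrative sketch: an expert-style instruction guides a vision-language
# model to classify burn depth from a digital photo plus a TDI image, and a
# simple parser recovers the predicted class from free-text model output.
# All wording and labels here are assumptions, not the study's materials.

BURN_CLASSES = ["first-degree", "second-degree", "third-degree"]

def build_expert_prompt() -> str:
    """Assemble an expert-formulated instruction for the VLM (hypothetical wording)."""
    return (
        "You are assisting a burn surgeon. Two images are attached: a digital "
        "photograph of the wound and a color-coded ultrasound Tissue Doppler "
        "Imaging (TDI) map of subcutaneous tissue stiffness. Classify the burn "
        "depth as first-degree, second-degree, or third-degree, and briefly "
        "explain which visual and TDI features support the classification."
    )

def parse_classification(model_reply: str) -> str:
    """Extract the predicted burn class from the model's free-text reply."""
    reply = model_reply.lower()
    for label in BURN_CLASSES:
        if label in reply:
            return label
    return "unparsed"
```

Asking the model to justify its answer in the same reply matches the abstract's emphasis on explainable classifications; the parser only needs the class label to score predictions.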
Results: This study evaluated whether a multimodal AI classifier, designed to identify first-, second-, and third-degree burns, could be effectively applied to imaging data stored within an EMR system. The classifier achieved an overall accuracy of 84.38%, significantly surpassing the human performance benchmarks typically cited in the literature. This highlights the model's potential as a robust clinical decision-support tool, especially in settings that lack highly specialized expertise. Beyond accuracy, the classifier performed strongly across multiple evaluation metrics; its ability to distinguish between burn severities was further validated by AUROC: 0.97 for first-degree, 0.96 for second-degree, and a perfect 1.00 for third-degree burns, each with narrow 95% confidence intervals.

Conclusions: Storing multimodal imaging data within the EMR, together with the ability to run post hoc AI analysis, offers significant advances in burn care, enabling real-time burn depth prediction on currently available data. Using digital photos for superficial burns, which are easily diagnosed through physical exams, reduces reliance on TDI, while TDI helps distinguish deep second- and third-degree burns, enhancing diagnostic efficiency.

Clinical Trial: ClinicalTrials.gov NCT05167461; https://clinicaltrials.gov/study/NCT05167461
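The one-vs-rest AUROC values reported above are the standard rank-sum (Mann-Whitney U) statistic computed per class; a minimal stdlib sketch, not the authors' evaluation code:

```python
def auroc(labels, scores):
    """One-vs-rest AUROC via the rank-sum (Mann-Whitney U) formulation.

    labels: iterable of 0/1 (1 = the burn class being scored one-vs-rest)
    scores: model confidence scores for that class, same length as labels
    """
    pairs = sorted(zip(scores, labels))
    # Assign 1-based average ranks by score, averaging within tie groups.
    ranks = [0.0] * len(pairs)
    i = 0
    while i < len(pairs):
        j = i
        while j + 1 < len(pairs) and pairs[j + 1][0] == pairs[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = avg_rank
        i = j + 1
    n_pos = sum(label for _, label in pairs)
    n_neg = len(pairs) - n_pos
    rank_sum_pos = sum(r for r, (_, label) in zip(ranks, pairs) if label == 1)
    u = rank_sum_pos - n_pos * (n_pos + 1) / 2
    return u / (n_pos * n_neg)
```

An AUROC of 1.00, as reported for third-degree burns, means every third-degree case received a higher score for that class than every non-third-degree case.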
August 19, 2025 at 6:51 PM