Medicine
Emergency Medical Care
Emergency Medical Services
Diagnostic Accuracy
Emergency Medicine
Emergency Department Triage
Critical Care Medicine
Radiology
Authors
Eric D. Miller,Jeffrey Michael Franc,Attila J. Hertelendy,Fadi Issa,Alexander Hart,Christina A. Woodward,Bradford A. Newbury,Kiera Newbury,Dana Mathew,Kimberly Whitten-Chung,Eric Bauer,Amalia Voskanyan,Gregory R. Ciottone
Identifiers
DOI:10.1080/10903127.2025.2460775
Abstract
While ambulance transport decisions guided by artificial intelligence (AI) could be useful, little is known about the accuracy of AI in making patient diagnoses based on the pre-hospital patient care report (PCR). The primary objective of this study was to assess the accuracy of ChatGPT (OpenAI, Inc., San Francisco, CA, USA) in predicting a patient's diagnosis from the PCR, by comparison against a reference standard assigned by experienced paramedics. The secondary objective was to classify cases where the AI diagnosis did not agree with the reference standard as paramedic correct, ChatGPT correct, or equally correct. This diagnostic accuracy study used a zero-shot learning model and greedy decoding. A convenience sample of PCRs from paramedic students was analyzed by an untrained ChatGPT-4 model to determine the single most likely diagnosis. A reference standard was provided by an experienced paramedic who reviewed each PCR and gave a differential diagnosis of three items. A trained prehospital professional assessed the ChatGPT diagnosis as concordant or non-concordant with one of the three paramedic diagnoses. If non-concordant, two board-certified emergency physicians independently decided whether the ChatGPT or the paramedic diagnosis was more likely to be correct. ChatGPT-4 diagnosed 78/104 (75.0%) of PCRs correctly (95% confidence interval: 65.3-82.7%). Among the 26 cases of disagreement, the emergency physicians judged that in 6/26 (23.0%) the paramedic diagnosis was more likely to be correct. In only one of the 104 cases (0.96%) would transport decisions based on the AI-guided diagnosis have been potentially dangerous to the patient (under-triage). In this study, the overall accuracy of ChatGPT in diagnosing patients based on their emergency medical services PCR was 75.0%. In cases where the ChatGPT diagnosis was considered less likely than the paramedic diagnosis, the AI diagnosis was most commonly more critical than the paramedic diagnosis, potentially leading to over-triage. The under-triage rate was <1%.
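The abstract reports an accuracy of 78/104 (75.0%) with a 95% confidence interval of 65.3-82.7%, but does not state which interval method was used. As a hedged illustration only (the authors' exact method is unknown and may differ slightly in its bounds), a Wilson score interval for that proportion can be computed as:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score 95% confidence interval for a binomial proportion.

    One common choice for proportions; the study's actual CI method
    is not stated in the abstract, so this is an assumption.
    """
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 78 of 104 PCRs diagnosed correctly (75.0%)
lo, hi = wilson_interval(78, 104)
print(f"75.0% (Wilson 95% CI: {lo:.1%}-{hi:.1%})")
```

The Wilson interval here comes out close to, but not identical with, the reported 65.3-82.7%, which suggests the authors may have used a different method (e.g. Clopper-Pearson).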