Chatbot
Medicine
Anesthesiology
Observational study
Center (category theory)
Boom
Medical emergency
World Wide Web
Anesthesia
Computer science
Internal medicine
Engineering
Chemistry
Environmental engineering
Crystallography
Authors
Sowmya M. Jois,S Rangalakshmi,Sowmya Madihalli Janardhan Iyengar,C M Mahesh,Lairenjam Deepa Devi,Arun Kumar Namachivayam
Identifier
DOI:10.4103/joacp.joacp_151_24
Abstract
The field of anesthesiology and perioperative medicine has embraced advances in science and technology to ensure precise, personalized anesthesia plans. The surge in the use of the Chat Generative Pre-trained Transformer (ChatGPT) in medicine has prompted anesthesiologists to assess its performance in the operating room, although concerns remain about accuracy, patient privacy, and ethics. Our objective in this study was to assess whether ChatGPT can assist in clinical decisions and to compare its responses with those of resident anesthesiologists. In this cross-sectional study conducted at a teaching hospital, a set of 30 hypothetical clinical scenarios in the operating room was presented to resident anesthesiologists and to ChatGPT-4. The first five of the 30 scenarios were entered with three additional prompts in the same chat to determine whether the answers became more detailed. The responses were labeled and assessed by three reviewers not involved in the study. The intraclass correlation coefficient (ICC) values show variation in the level of agreement between ChatGPT and the anesthesiologists. For instance, the ICC of 0.41 between A1 and ChatGPT indicates a moderate level of agreement, whereas the ICC of 0.06 between A2 and ChatGPT suggests a comparatively weaker level of agreement. The study found variation between ChatGPT and the resident anesthesiologists in the accuracy and comprehensiveness of responses to intraoperative scenarios. The use of prompts improved the agreement of ChatGPT with the anesthesiologists.
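The abstract reports agreement as ICC values between ChatGPT and individual anesthesiologists (e.g., 0.41 vs. 0.06). As a minimal, hypothetical sketch of how such a coefficient can be computed, the Python snippet below implements a two-way random-effects, single-measure ICC (Shrout & Fleiss ICC(2,1)) over a matrix of scores; the data, variable names, and the specific ICC variant are assumptions for illustration, since the abstract does not state which formulation the study used.

```python
import numpy as np

def icc_2_1(scores: np.ndarray) -> float:
    """Two-way random-effects, single-measure ICC (Shrout & Fleiss ICC(2,1)).

    scores: (n_targets, n_raters) matrix, e.g. 30 scenarios scored for
    two 'raters' (a resident anesthesiologist and ChatGPT).
    """
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)   # per-scenario means
    col_means = scores.mean(axis=0)   # per-rater means

    # Sums of squares for targets (rows), raters (columns), and residual error
    ss_total = ((scores - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical 1-5 scores for 30 scenarios: one anesthesiologist vs. ChatGPT
    anesthesiologist = rng.integers(1, 6, size=30)
    chatgpt = np.clip(anesthesiologist + rng.integers(-1, 2, size=30), 1, 5)
    print(f"ICC(2,1) = {icc_2_1(np.column_stack([anesthesiologist, chatgpt])):.2f}")
```

By convention, ICC values around 0.4 are commonly read as moderate agreement and values near zero as poor agreement, which matches how the abstract interprets the 0.41 and 0.06 figures.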