Usability
Medicine
Context (archaeology)
Medical physics
Computer science
Biology
Paleontology
Human-computer interaction
Authors
Anna Stroop,Tabea Stroop,Samer Zawy Alsofy,Moritz Wegner,Makoto Nakamura,Ralf Stroop
Abstract
Aims: This study aimed to evaluate the accuracy and completeness of GPT‐4, a large language model, in answering clinical pharmacological questions related to pain therapy, with a focus on its reliability as a tool for delivering patient‐facing medical information in the context of pain management.

Methods: A cross‐sectional survey‐based study was conducted with healthcare professionals, including physicians and pharmacists. Participants submitted up to 8 clinical pharmacology questions on pain management, focusing on drug interactions, dosages and contraindications. GPT‐4's responses were evaluated for comprehensibility, detail, satisfaction, medical–pharmacological accuracy and completeness, and were additionally compared against the German Drug Directory to verify their accuracy.

Results: The majority of participants (99%) found GPT‐4's responses comprehensible, and 84% considered the information sufficiently detailed. Overall satisfaction was high, with 93% expressing satisfaction, and 96% deemed the responses medically accurate. However, only 63% rated the information as complete, with some identifying gaps in pharmacokinetics and drug interaction data. Usability was rated good to excellent, with a System Usability Scale score of 83.38 (± 10.26).

Conclusion: GPT‐4 demonstrates potential as a tool for delivering medical information, particularly in pain management. However, limitations such as incomplete pharmacological data and the potential for contextual carryover in follow‐up questions suggest that further refinement is necessary. Developing specialized artificial intelligence tools that integrate real‐time pharmacological databases could improve accuracy and reliability for clinical decision‐making.
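The System Usability Scale (SUS) score of 83.38 reported in the results is conventionally derived from a standard 10-item questionnaire answered on a 1-5 scale. The sketch below illustrates only that conventional scoring rule, not the authors' analysis code, and the response data in it are hypothetical.

# Minimal sketch of conventional SUS scoring (not the study's analysis code).
# Standard 10-item questionnaire, responses 1-5: odd-numbered (positively worded)
# items contribute (response - 1), even-numbered items contribute (5 - response),
# and the sum is scaled by 2.5 to yield a 0-100 score per participant.
from statistics import mean, stdev

def sus_score(responses: list[int]) -> float:
    """Compute the SUS score for one participant's 10 item responses (1-5 each)."""
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("Expected 10 responses, each between 1 and 5")
    contributions = [
        (r - 1) if i % 2 == 0 else (5 - r)  # index 0, 2, 4, ... = items 1, 3, 5, ...
        for i, r in enumerate(responses)
    ]
    return sum(contributions) * 2.5

# Hypothetical participants for illustration; the study reports a mean of 83.38 (SD 10.26).
participants = [
    [5, 1, 5, 2, 4, 1, 5, 2, 5, 1],
    [4, 2, 4, 1, 5, 2, 4, 1, 4, 2],
]
scores = [sus_score(p) for p in participants]
print(f"mean SUS = {mean(scores):.2f}, SD = {stdev(scores):.2f}")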