Readability
Medicine
Likert scale
Surgery
Psychology
Developmental psychology
Philosophy
Linguistics
Authors
Yung Lee,Thomas H. Shin,Léa Tessier,Arshia Javidan,James J. Jung,Dennis Hong,Andrew T. Strong,Tyler McKechnie,Sarah Malone,David Jin,Matthew Kroh,Jerry T. Dang
Identifier
DOI:10.1016/j.soard.2024.03.011
Abstract
Background: The formulation of clinical recommendations pertaining to bariatric surgery is essential in guiding healthcare professionals. However, the extensive and continuously evolving body of literature in bariatric surgery presents a considerable challenge for staying abreast of the latest developments and for efficient information acquisition. Artificial intelligence (AI) has the potential to streamline access to the salient points of clinical recommendations in bariatric surgery.

Objective: The study aims to appraise the quality and readability of AI-chat-generated answers to frequently asked clinical inquiries in the field of bariatric and metabolic surgery.

Setting: Remote.

Methods: Question prompts inputted into AI large language models (LLMs) were created based on pre-existing clinical practice guidelines regarding bariatric and metabolic surgery. The prompts were queried into three LLMs: OpenAI ChatGPT-4, Microsoft Bing, and Google Bard. The responses from each LLM were entered into a spreadsheet for randomized and blinded duplicate review. Accredited bariatric surgeons in North America independently assessed the appropriateness of each recommendation using a 5-point Likert scale. Scores of 4 and 5 were deemed appropriate, while scores of 1 to 3 indicated a lack of appropriateness. A Flesch Reading Ease (FRE) score was calculated to assess the readability of the responses generated by each LLM.

Results: There was a significant difference between the three LLMs in their 5-point Likert scores, with mean values of 4.46 (SD 0.82), 3.89 (0.80), and 3.11 (0.72) for ChatGPT-4, Bard, and Bing (P<0.001). There was a significant difference between the three LLMs in the proportion of appropriate answers, with ChatGPT-4 at 85.7%, Bard at 74.3%, and Bing at 25.7% (P<0.001). The mean FRE scores for ChatGPT-4, Bard, and Bing were 21.68 (SD 2.78), 42.89 (4.03), and 14.64 (5.09), respectively, with higher scores representing easier readability.

Conclusion: LLM-based AI chat models can effectively generate appropriate responses to clinical questions related to bariatric surgery, though the performance of different models can vary greatly. Therefore, caution should be taken when interpreting clinical information provided by LLMs, and clinician oversight is necessary to ensure accuracy. Future investigation is warranted to explore how LLMs might enhance healthcare provision and clinical decision-making in bariatric surgery.
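The abstract does not state how the FRE scores were computed; the standard Flesch Reading Ease formula is 206.835 − 1.015 × (words/sentence) − 84.6 × (syllables/word). The sketch below is an illustrative approximation of that formula, not the study's actual tooling: the regex-based sentence splitter and vowel-group syllable counter are simplifying assumptions (published readability tools typically use pronunciation dictionaries).

```python
import re

def count_syllables(word: str) -> int:
    # Heuristic: count contiguous vowel groups, with a floor of one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Approximate FRE = 206.835 - 1.015*(words/sentences) - 84.6*(syllables/words)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / max(1, len(words)))
```

Lower scores indicate harder text: all three mean FRE values reported above (14.64 to 42.89) fall in the range conventionally labeled "difficult" to "very difficult", i.e. college-level reading.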