Likert scale
Microsoft Excel
Chatbot
Quality (philosophy)
Point (geometry)
Significant difference
Proportion (ratio)
Medicine
Psychology
Computer science
Artificial intelligence
Mathematics
Cartography
Geography
Developmental psychology
Philosophy
Geometry
Epistemology
Internal medicine
Operating system
Authors
Can Arslan, Kaan Kahya, Emre Cesur, Derya Germeç Çakan
Source
Journal: Australasian Orthodontic Journal
[Exeley Inc]
Date: 2024-01-01
Volume/Issue: 40 (1): 149-157
Cited by: 6
Identifier
DOI:10.2478/aoj-2024-0012
Abstract
Introduction: In recent times, chatbots have played an increasing and noteworthy role in medical practice. The present study was conducted to evaluate the accuracy of the responses provided by ChatGPT and BARD, two of the most widely used chatbot programs, when questioned about orthodontics.

Materials and methods: Twenty-four popular questions about conventional braces, clear aligners, orthognathic surgery, and orthodontic retainers were chosen for the study. After the questions were submitted to the ChatGPT and Google BARD platforms, an experienced orthodontist and an orthodontic resident rated the responses using a five-point Likert scale: five indicated evidence-based information, four adequate information, three insufficient information, two incorrect information, and one no response. The results were recorded in Microsoft Excel for comparison and analysis.

Results: No correlation was found between the ChatGPT and Google BARD scores and word counts. However, a moderate to significant relationship was observed between the scores and the number of listed references. No significant association was found between the number of words and the number of references, and a statistically significant difference was observed between the two investigators' numerical ratings of the AI tools (p = 0.014 and p = 0.030, respectively).

Conclusion: Generally, ChatGPT and BARD provide satisfactory responses to common orthodontic questions that patients might ask. ChatGPT's answers marginally surpassed those of Google BARD in quality.