Keywords: Chatbot, Context (archaeology), Multiple choice, Medical education, Computer science, Psychology, Artificial intelligence, Medicine, Biology, Internal medicine, Paleontology, Significant difference
Authors
Oliver Kleinig, Joshua G. Kovoor, Aashray Gupta, Stephen Bacchi
Source
Journal: AJGP (Royal Australian College of General Practitioners)
Date: 2023-12-01
Volume/Issue: 52 (12): 863-865
Citations: 2
Identifier
DOI: 10.31128/ajgp-02-23-6708
Abstract
The potential of artificial intelligence in medical practice is increasingly being investigated. This study aimed to examine OpenAI's ChatGPT in answering medical multiple choice questions (MCQs) in an Australian context. We provided MCQs from the Australian Medical Council's (AMC) medical licensing practice examination to ChatGPT. The chatbot's responses were graded using the AMC's online portal. This experiment was repeated twice. ChatGPT was moderately accurate in answering the questions, achieving a score of 29/50. It was able to generate answer explanations for most questions (45/50). The chatbot was moderately consistent, providing the same overall answer to 40 of the 50 questions between trial runs. The moderate accuracy of ChatGPT demonstrates potential risks for both patients and physicians using this tool. Further research is required to create more accurate models and to critically appraise such models.