Chatbot
Medicine
Checklist
Cardiopulmonary resuscitation (CPR)
Internet privacy
Medical emergency
First aid
Medical education
World Wide Web
Resuscitation
Computer science
Psychology
Emergency medicine
Cognitive psychology
Authors
Alexei Birkun, Adhish Gautam
Identifier
DOI:10.1016/j.cpcardiol.2023.102048
Abstract
The ability of cutting-edge large language model-powered chatbots to generate human-like answers to user questions could hypothetically be utilized to provide real-time first aid advice to witnesses of cardiovascular emergencies. This study aimed to evaluate the quality of chatbot responses to inquiries about help in heart attack. The study simulated interrogation of the new Bing chatbot (Microsoft Corporation, USA) with the prompt "heart attack what to do" submitted from 3 countries: the Gambia, India, and the USA. The chatbot responses (20 per country) were evaluated for congruence with the International First Aid, Resuscitation, and Education Guidelines 2020 using a checklist. For all user inquiries, the chatbot provided answers containing some guidance on first aid. However, the responses commonly omitted potentially life-saving instructions, for instance to encourage the person to stop physical activity, to take antianginal medication, or to start cardiopulmonary resuscitation for an unresponsive person with abnormal breathing. The mean percentage of responses fully congruent with the checklist criteria varied from 7.3% for India to 16.8% for the USA. A quarter of the responses for the Gambia and the USA, and 45.0% for India, contained superfluous directives inconsistent with the guidelines. The chatbot's advice on help in heart attack has omissions, inaccuracies, and misleading instructions, and therefore the chatbot cannot be recommended as a credible source of information on first aid. Active research and organizational efforts are needed to mitigate the risk of uncontrolled misinformation and to establish measures guaranteeing the trustworthiness of chatbot-mediated counseling.