Misinformation
Medicine
Chatbot
Reading (process)
Readability
Artificial intelligence
Computer science
Computer security
Political science
Law
Programming language
Authors
David Musheyev, Alexander Pan, Stacy Loeb, Abdo Kabarriti
Identifier
DOI:10.1016/j.eururo.2023.07.004
Abstract
Artificial intelligence (AI) chatbots are becoming a popular source of information, but there are limited data on the quality of the information they provide about urological malignancies. Our objective was to characterize the quality of information and detect misinformation about prostate, bladder, kidney, and testicular cancers from four AI chatbots: ChatGPT, Perplexity, Chat Sonic, and Microsoft Bing AI. We identified the top five search queries related to prostate, bladder, kidney, and testicular cancers according to Google Trends from January 2021 to January 2023 and input them into the AI chatbots. Responses were evaluated for quality, understandability, actionability, misinformation, and readability using published instruments. AI chatbot responses had moderate to high information quality (median DISCERN score 4 out of 5, range 2-5) and lacked misinformation. Understandability was moderate (median Patient Education Material Assessment Tool for Printable Materials [PEMAT-P] understandability 66.7%, range 44.4-90.9%) and actionability was moderate to poor (median PEMAT-P actionability 40%, range 0-40%). The responses were written at a fairly difficult reading level. AI chatbots produce information that is generally accurate and of moderate to high quality in response to the top urological malignancy-related search queries, but the responses lack clear, actionable instructions and exceed the reading level recommended for consumer health information.

PATIENT SUMMARY: Artificial intelligence chatbots produce information that is generally accurate and of moderately high quality in response to popular Google searches about urological cancers. However, their responses are fairly difficult to read, are moderately hard to understand, and lack clear instructions for users to act on.