Utilizing ChatGPT as a scientific reasoning engine to differentiate conflicting evidence and summarize challenges in controversial clinical questions

Topics (auto-generated tags): Relevance (law), Clarity, Consistency (knowledge bases), Computer science, Certainty, Harm, Data science, Recall, Scientific evidence, Deductive reasoning, Management science, Engineering ethics, Psychology, Epistemology, Cognitive psychology, Artificial intelligence, Social psychology, Engineering, Philosophy, Economics, Chemistry, Law, Biochemistry, Political science
Authors
Shiyao Xie, Wenjing Zhao, Guanghui Deng, Guohua He, Na He, Zhenhua Lü, Weihua Hu, Mingming Zhao, Jian Du
Source
Journal: Journal of the American Medical Informatics Association [Oxford University Press]
Volume/Issue: 31(7): 1551-1560 · Citations: 2
Identifier
DOI: 10.1093/jamia/ocae100
Abstract

Objective: Synthesizing and evaluating inconsistent medical evidence is essential in evidence-based medicine. This study aimed to employ ChatGPT as a sophisticated scientific reasoning engine to identify conflicting clinical evidence and summarize unresolved questions to inform further research.

Materials and Methods: We evaluated ChatGPT's effectiveness in identifying conflicting evidence and investigated its principles of logical reasoning. An automated framework was developed to generate a PubMed dataset focused on controversial clinical topics. ChatGPT analyzed this dataset to identify consensus and controversy, and to formulate unsolved research questions. Expert evaluations were conducted (1) on the consensus and controversy for factual consistency, comprehensiveness, and potential harm, and (2) on the research questions for relevance, innovation, clarity, and specificity.

Results: The gpt-4-1106-preview model achieved a 90% recall rate in detecting inconsistent claim pairs within a ternary assertions setup. Notably, without explicit reasoning prompts, ChatGPT provided sound reasoning for the assertions between claims and hypotheses, based on an analysis grounded in relevance, specificity, and certainty. ChatGPT's conclusions on consensus and controversies in the clinical literature were comprehensive and factually consistent. The research questions proposed by ChatGPT received high expert ratings.

Discussion: Our experiment implies that, in evaluating the relationship between evidence and claims, ChatGPT considered more detailed information beyond a straightforward assessment of sentimental orientation. This ability to process intricate information and conduct scientific reasoning regarding sentiment is noteworthy, particularly as this pattern emerged without explicit guidance or directives in prompts, highlighting ChatGPT's inherent logical reasoning capabilities.

Conclusion: This study demonstrated ChatGPT's capacity to evaluate and interpret scientific claims. Such proficiency can be generalized to broader clinical research literature. ChatGPT effectively aids in facilitating clinical studies by proposing unresolved challenges based on analysis of existing studies. However, caution is advised, as ChatGPT's outputs are inferences drawn from the input literature and could be harmful to clinical practice.
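The 90% recall figure in the Results refers to contradiction detection under a three-way labelling of claim pairs (roughly: support, contradict, neutral). The Python sketch below is an illustration of that kind of setup only, not the authors' framework: the prompt wording, label names, and helper functions (classify_pair, contradiction_recall) are assumptions introduced here, and only the model identifier gpt-4-1106-preview comes from the abstract.

```python
# Minimal sketch of a ternary claim-pair classification plus recall computation,
# assuming the OpenAI Python client (>=1.0). Prompts and labels are hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = {"SUPPORT", "CONTRADICT", "NEUTRAL"}


def classify_pair(claim_a: str, claim_b: str, model: str = "gpt-4-1106-preview") -> str:
    """Ask the model for a ternary relation between two clinical claims."""
    prompt = (
        f"Claim A: {claim_a}\n"
        f"Claim B: {claim_b}\n"
        "Answer with exactly one word: SUPPORT, CONTRADICT, or NEUTRAL."
    )
    response = client.chat.completions.create(
        model=model,
        temperature=0,  # deterministic output for evaluation
        messages=[
            {"role": "system", "content": "You assess relations between clinical research claims."},
            {"role": "user", "content": prompt},
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer if answer in LABELS else "NEUTRAL"  # fall back on unparsable output


def contradiction_recall(gold_labels: list[str], predictions: list[str]) -> float:
    """Recall for the CONTRADICT class: detected contradictions / true contradictions."""
    true_pos = sum(
        1 for gold, pred in zip(gold_labels, predictions)
        if gold == "CONTRADICT" and pred == "CONTRADICT"
    )
    total_pos = sum(1 for gold in gold_labels if gold == "CONTRADICT")
    return true_pos / total_pos if total_pos else 0.0


if __name__ == "__main__":
    # Hypothetical example of a conflicting claim pair about the same intervention.
    label = classify_pair(
        "Vitamin D supplementation reduced fracture risk in older adults.",
        "Vitamin D supplementation showed no effect on fracture risk in older adults.",
    )
    print(label)  # expected: CONTRADICT
```

In this reading, a "90% recall" would mean that 9 of every 10 gold-standard contradictory pairs were labelled CONTRADICT; the paper's Materials and Methods would specify the actual prompts, label definitions, and gold annotations.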