Keywords
Interpretability, Pattern, Machine learning, Artificial intelligence, Computer science, Retraining, Graph, Benchmarking, Deep learning, Data science, Theoretical computer science, Social science, Sociology, Business, International trade, Marketing
Authors
Ruth Johnson, Michelle M. Li, Ayush Noori, Owen Queen, Marinka Žitnik
Source
Journal: Annual Review of Biomedical Data Science
[Annual Reviews]
Date: 2024-05-15
Volume/Issue: 7 (1): 345-368
Citations: 8
Identifiers
DOI: 10.1146/annurev-biodatasci-110723-024625
Abstract
In clinical artificial intelligence (AI), graph representation learning, mainly through graph neural networks and graph transformer architectures, stands out for its capability to capture intricate relationships and structures within clinical datasets. With diverse data—from patient records to imaging—graph AI models process data holistically by viewing modalities and entities within them as nodes interconnected by their relationships. Graph AI facilitates model transfer across clinical tasks, enabling models to generalize across patient populations without additional parameters and with minimal to no retraining. However, the importance of human-centered design and model interpretability in clinical decision-making cannot be overstated. Since graph AI models capture information through localized neural transformations defined on relational datasets, they offer both an opportunity and a challenge in elucidating model rationale. Knowledge graphs can enhance interpretability by aligning model-driven insights with medical knowledge. Emerging graph AI models integrate diverse data modalities through pretraining, facilitate interactive feedback loops, and foster human–AI collaboration, paving the way toward clinically meaningful predictions.
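To make the abstract's phrase "localized neural transformations defined on relational datasets" concrete, below is a minimal message-passing layer in the spirit of a graph neural network: each node updates its embedding from its own features plus an aggregate of its neighbors' features. This is an illustrative sketch, not the authors' implementation; the function name, shapes, toy graph, and the mean-aggregation choice are all assumptions for demonstration.

# Minimal sketch of one GNN message-passing layer (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def gnn_layer(H, A, W_self, W_neigh):
    """One localized neural transformation on a relational dataset.

    H       : (n_nodes, d_in) node embeddings, e.g., patient/lab/image entities
    A       : (n_nodes, n_nodes) adjacency matrix encoding their relationships
    W_self  : (d_in, d_out) transform applied to a node's own state
    W_neigh : (d_in, d_out) transform applied to aggregated neighbor messages
    """
    deg = A.sum(axis=1, keepdims=True).clip(min=1)   # guard against isolated nodes
    neigh_mean = (A @ H) / deg                       # mean-aggregate neighbor features
    return np.maximum(H @ W_self + neigh_mean @ W_neigh, 0.0)  # ReLU nonlinearity

# Toy relational dataset: one hub entity connected to three others
# (e.g., a patient node linked to two lab results and an imaging study).
A = np.array([[0, 1, 1, 1],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0]], dtype=float)
H = rng.normal(size=(4, 8))
W_self, W_neigh = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))

H = gnn_layer(H, A, W_self, W_neigh)
print(H.shape)  # (4, 8)

Stacking several such layers lets each node's embedding absorb information from progressively larger neighborhoods of the graph, which is the mechanism the abstract credits for capturing intricate relationships within clinical datasets.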