Keywords
Interpretability
Attribution
Graph
Uncertainty quantification
Feature (linguistics)
Computer science
Artificial intelligence
Data mining
Artificial neural network
Machine learning
Limit (mathematics)
Pattern recognition (psychology)
Uncertainty reduction theory
Sampling (signal processing)
Shapley value
Measurement uncertainty
Noisy data
Authors
Leonid Komissarov,Nenad Manevski,Katrin Groebke Zbinden,Lisa Sach-Peltason
Identifier
DOI:10.1021/acs.jcim.5c01003
Abstract
Graph Neural Networks (GNNs) are powerful tools for predicting chemical properties, but their black-box nature can limit trust and utility. Explainability through feature attribution and awareness of prediction uncertainty are critical for practical applications, for example in iterative lab-in-the-loop scenarios. We systematically evaluate different post hoc feature attribution methods and study their integration with uncertainty quantification in GNNs for chemistry. Our findings reveal a strong synergy: attributing uncertainty to specific input features (atoms or substructures) provides a granular understanding of model confidence and highlights potential data gaps or model limitations. We evaluated several attribution approaches on aqueous solubility and molecular weight prediction tasks, demonstrating that methods like Feature Ablation and Shapley Value Sampling can effectively identify molecular substructures driving both the prediction and its uncertainty. This combined approach significantly enhances the interpretability and actionable insights derived from chemical GNNs, facilitating the design of more useful models in research and development.
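The sketch below illustrates the general idea of attributing predictive uncertainty to individual atoms via feature ablation, as described in the abstract. It is not the authors' implementation: the `ensemble_predict` function is a hypothetical stand-in for an ensemble of trained GNNs, and uncertainty is assumed here to be the standard deviation across ensemble members.

```python
# Minimal sketch (assumptions noted above): feature-ablation attribution of
# ensemble-based predictive uncertainty to individual atoms of a molecule.
import numpy as np


def ensemble_predict(atom_features: np.ndarray) -> np.ndarray:
    """Hypothetical ensemble: returns one scalar prediction per member.
    A toy linear model with per-member weights stands in for trained GNNs."""
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(5, atom_features.shape[1]))  # 5 ensemble members
    return weights @ atom_features.sum(axis=0)


def uncertainty(atom_features: np.ndarray) -> float:
    """Predictive uncertainty as the disagreement (std dev) across members."""
    return float(np.std(ensemble_predict(atom_features)))


def ablation_uncertainty_attribution(atom_features: np.ndarray) -> np.ndarray:
    """Attribute uncertainty to each atom by zeroing its feature vector
    and recording how much the ensemble disagreement drops."""
    base = uncertainty(atom_features)
    scores = np.zeros(len(atom_features))
    for i in range(len(atom_features)):
        ablated = atom_features.copy()
        ablated[i] = 0.0  # remove atom i's contribution
        scores[i] = base - uncertainty(ablated)
    return scores


# Toy molecule: 4 atoms, 8 features per atom.
features = np.random.default_rng(1).normal(size=(4, 8))
print(ablation_uncertainty_attribution(features))
```

A Shapley Value Sampling variant would average such ablation effects over random orderings of atoms rather than ablating each atom once in isolation; the per-atom scores can then be mapped back onto the molecular graph to highlight substructures driving the uncertainty.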