Authors
Danilo Numeroso, Davide Bacciu
Identifier
DOI: 10.1109/ijcnn52387.2021.9534266
Abstract
Explainable AI (XAI) is a research area whose objective is to increase trustworthiness and to shed light on the hidden mechanisms of opaque machine learning techniques. This becomes increasingly important when such models are applied to the chemistry domain, given their potential impact on human health, e.g. toxicity analysis in pharmacology. In this paper, we present MEG (Molecular Explanation Generator), a novel approach to the explainability of deep graph networks in the context of molecule property prediction tasks. We generate informative counterfactual explanations for a specific prediction in the form of (valid) compounds with high structural similarity and different predicted properties. Given a trained DGN, we train a reinforcement-learning-based generator to output counterfactual explanations. At each step, MEG feeds the current candidate counterfactual into the DGN, collects the prediction, and uses it to reward the RL agent and guide the exploration. Furthermore, we restrict the action space of the agent so as to keep only actions that maintain the molecule in a valid state. We discuss results showing how the model can provide non-ML experts with key insights into the learned model's focus in the neighbourhood of a molecule.
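The search loop described in the abstract can be sketched as follows. This is a heavily simplified illustration, not the paper's implementation: the toy `dgn_predict`, `similarity`, and string-based "molecules" are stand-ins for the trained deep graph network, a molecular similarity measure, and a molecular graph environment, and a greedy random search replaces the RL agent. Only the overall structure (propose a candidate via validity-restricted actions, query the predictor, reward a change in prediction while penalising structural distance) mirrors the method.

```python
import random

def dgn_predict(mol):
    # Stand-in for the trained DGN: scores a toy "molecule" (a string)
    # by its vowel ratio. The real model predicts a chemical property.
    vowels = sum(c in "aeiou" for c in mol)
    return vowels / max(len(mol), 1)

def similarity(a, b):
    # Toy structural similarity: fraction of matching positions.
    shared = sum(x == y for x, y in zip(a, b))
    return shared / max(len(a), len(b))

def valid_actions(mol):
    # Restrict the action space: only edits that keep the candidate
    # "valid" (toy rule: non-empty, at most 10 characters). In MEG this
    # corresponds to graph edits that preserve chemical validity.
    actions = [mol[:i] + c + mol[i + 1:]          # substitutions
               for i in range(len(mol)) for c in "abcde"]
    if len(mol) < 10:
        actions += [mol + c for c in "abcde"]     # additions
    if len(mol) > 1:
        actions += [mol[:i] + mol[i + 1:] for i in range(len(mol))]  # removals
    return actions

def generate_counterfactual(mol, steps=200, seed=0):
    # Greedy random-search stand-in for the RL generator: at each step,
    # propose a valid edit, query the predictor, and keep the candidate
    # whose reward (prediction change + similarity to the original) is best.
    rng = random.Random(seed)
    original_pred = dgn_predict(mol)
    best, best_reward, current = mol, float("-inf"), mol
    for _ in range(steps):
        candidate = rng.choice(valid_actions(current))
        reward = (abs(dgn_predict(candidate) - original_pred)
                  + similarity(candidate, mol))
        if reward > best_reward:
            best, best_reward, current = candidate, reward, candidate
    return best
```

Because every proposal passes through `valid_actions`, the returned counterfactual is valid by construction, matching the paper's constraint that the agent never leaves the space of valid molecules.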