Counterfactual thinking
Computer science
Vulnerability (computing)
Graph
Graph theory
Artificial neural network
Theoretical computer science
Artificial intelligence
Computer security
Mathematics
Psychology
Combinatorics
Social psychology
Authors
Zhaoyang Chu, Yao Wan, Qian Li, Yang Wu, Hongyu Zhang, Yulei Sui, Guandong Xu, Hai Jin
Identifier
DOI: 10.1145/3650212.3652136
Abstract
Vulnerability detection is crucial for ensuring the security and reliability of software systems. Recently, Graph Neural Networks (GNNs) have emerged as a prominent code embedding approach for vulnerability detection, owing to their ability to capture the underlying semantic structure of source code. However, GNNs face significant challenges in explainability due to their inherently black-box nature. To this end, several factual reasoning-based explainers have been proposed. These explainers provide explanations for the predictions made by GNNs by analyzing the key features that contribute to the outcomes. We argue that these factual reasoning-based explanations cannot answer critical what-if questions: "What would happen to the GNN's decision if we were to alter the code graph into alternative structures?" Inspired by advancements of counterfactual reasoning in artificial intelligence, we propose CFExplainer, a novel counterfactual explainer for GNN-based vulnerability detection. Unlike factual reasoning-based explainers, CFExplainer seeks the minimal perturbation to the input code graph that leads to a change in the prediction, thereby addressing the what-if questions.
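The core mechanism the abstract describes, searching for a minimal perturbation of the code graph that flips the GNN's prediction, can be sketched roughly as follows. This is a hypothetical illustration in plain PyTorch, not the authors' CFExplainer implementation: the `TinyGCN` model, the `counterfactual_explain` routine, and all hyperparameters (`steps`, `lam`, `lr`) are assumptions made for demonstration only.

```python
# Minimal sketch (NOT the paper's code): learn a differentiable mask over the
# edges of an input code graph and minimize (a) a loss that pushes the GNN's
# prediction away from its original label and (b) a sparsity term so the
# perturbation stays minimal.

import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """One-layer GCN with mean pooling; a stand-in for the vulnerability detector."""
    def __init__(self, in_dim, hid_dim, n_classes=2):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj):
        # Row-normalized adjacency with self-loops.
        a = adj + torch.eye(adj.size(0))
        a = a / a.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = F.relu(self.lin1(a @ x))
        return self.lin2(h.mean(dim=0))  # graph-level logits

def counterfactual_explain(model, x, adj, steps=300, lam=0.05, lr=0.1):
    """Search for a small set of edge deletions that flips the prediction."""
    model.eval()
    with torch.no_grad():
        y0 = model(x, adj).argmax().item()  # original (factual) label
    # One learnable logit per edge slot; sigmoid gives a keep-probability.
    # (For simplicity the mask is not constrained to stay symmetric.)
    mask_logits = torch.full_like(adj, 4.0, requires_grad=True)
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        keep = torch.sigmoid(mask_logits) * adj  # only existing edges can drop
        logits = model(x, keep)
        # Push the log-probability of the original class down (flip the label)
        flip_loss = F.log_softmax(logits, dim=-1)[y0]
        # while deleting as few edges as possible (minimal perturbation).
        sparsity = (adj - keep).abs().sum()
        loss = flip_loss + lam * sparsity
        opt.zero_grad()
        loss.backward()
        opt.step()
    keep = (torch.sigmoid(mask_logits) * adj > 0.5).float()
    deleted = (adj.bool() & ~keep.bool()).nonzero().tolist()
    return deleted, model(x, keep).argmax().item()

if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.randn(5, 8)  # 5 statement nodes with random features
    adj = torch.zeros(5, 5)
    for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (1, 3)]:
        adj[i, j] = adj[j, i] = 1.0  # tiny undirected "code graph"
    model = TinyGCN(8, 16)  # untrained here; a real detector would be trained
    edges, new_label = counterfactual_explain(model, x, adj)
    print("edges whose removal flips the prediction:", edges)
```

The sigmoid relaxation makes the discrete edge-deletion search differentiable, and the weight `lam` trades off flipping the prediction against keeping the perturbation small; the edges that end up deleted constitute the counterfactual explanation, i.e., an answer to the what-if question posed above.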