Trustworthiness
Computer Science
Graph
Artificial Neural Network
Artificial Intelligence
Data Science
Theoretical Computer Science
Computer Security
Authors
He Zhang, Bang Ye Wu, Xingliang Yuan, Shirui Pan, Hanghang Tong, Jian Pei
Source
Journal: Proceedings of the IEEE
[Institute of Electrical and Electronics Engineers]
Date: 2024-02-01
Volume/Issue: 112 (2): 97-139
Citations: 2
Identifier
DOI:10.1109/jproc.2024.3369017
Abstract
Graph neural networks (GNNs) have emerged as a series of competent graph learning methods for diverse real-world scenarios, ranging from daily applications such as recommendation systems and question answering to cutting-edge technologies such as drug discovery in life sciences and n-body simulation in astrophysics. However, task performance is not the only requirement for GNNs. Performance-oriented GNNs have exhibited potential adverse effects, such as vulnerability to adversarial attacks, unexplainable discrimination against disadvantaged groups, or excessive resource consumption in edge computing environments. To avoid these unintentional harms, it is necessary to build competent GNNs characterized by trustworthiness. To this end, we propose a comprehensive roadmap to build trustworthy GNNs from the view of the various computing technologies involved. In this survey, we introduce basic concepts and comprehensively summarize existing efforts for trustworthy GNNs from six aspects, including robustness, explainability, privacy, fairness, accountability, and environmental well-being. In addition, we highlight the intricate cross-aspect relations between the above six aspects of trustworthy GNNs. Finally, we present a thorough overview of trending directions for facilitating the research and industrialization of trustworthy GNNs.