Adversarial system
Computer science
Graph
Theoretical computer science
Robustness (evolution)
Artificial intelligence
Data science
Machine learning
Biochemistry
Gene
Chemistry
Authors
Lichao Sun,Yingtong Dou,Carl Yang,Kai Zhang,Ji Wang,Philip S. Yu,Lifang He,Bo Li
Identifier
DOI:10.1109/tkde.2022.3201243
Abstract
Deep neural networks (DNNs) have been widely applied to various applications, including image classification, text generation, audio recognition, and graph data analysis. However, recent studies have shown that DNNs are vulnerable to adversarial attacks. Though there are several works about adversarial attack and defense strategies on domains such as images and natural language processing, it is still difficult to directly transfer the learned knowledge to graph data due to its representation structure. Given the importance of graph analysis, an increasing number of studies over the past few years have attempted to analyze the robustness of machine learning models on graph data. Nevertheless, existing research considering adversarial behaviors on graph data often focuses on specific types of attacks with certain assumptions. In addition, each work proposes its own mathematical formulation, which makes the comparison among different methods difficult. Therefore, this review is intended to provide an overall landscape of more than 100 papers on adversarial attack and defense strategies for graph data, and establish a unified formulation encompassing most graph adversarial learning models. Moreover, we also compare different graph attacks and defenses along with their contributions and limitations, as well as summarize the evaluation metrics, datasets and future trends. We hope this survey can help fill the gap in the literature and facilitate further development of this promising new field.
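To make the notion of an adversarial attack on graph structure concrete, here is a minimal toy sketch, not taken from the survey or any specific paper it covers: a greedy attacker flips the single edge incident to a target node that most lowers that node's score under a simple one-hop propagation model. The helper names (`propagate`, `attack_edge`) and the linear scorer are illustrative assumptions.

```python
import numpy as np

def propagate(A, X, w):
    """One-hop mean aggregation with self-loops, then a linear scorer."""
    A_hat = A + np.eye(len(A))               # add self-loops
    deg = A_hat.sum(axis=1, keepdims=True)   # per-node degree
    return (A_hat / deg) @ X @ w             # per-node score

def attack_edge(A, X, w, target):
    """Flip the (target, j) edge whose change most reduces the target's score."""
    base = propagate(A, X, w)[target]
    best_j, best_drop = None, 0.0
    for j in range(len(A)):
        if j == target:
            continue
        A2 = A.copy()
        A2[target, j] = A2[j, target] = 1 - A2[target, j]  # flip one edge
        drop = base - propagate(A2, X, w)[target]
        if drop > best_drop:
            best_j, best_drop = j, drop
    return best_j, best_drop

# Tiny 4-node path graph: the attacker links node 0 to a node
# with an opposing feature, dragging its aggregated score down.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.array([[1.0], [1.0], [-1.0], [-1.0]])  # node features
w = np.array([1.0])                            # scorer weight
j, drop = attack_edge(A, X, w, target=0)
print(j, round(drop, 3))  # → 2 0.667
```

This illustrates why graph attacks differ from image attacks: the perturbation is discrete (an edge flip) and it changes the prediction for the target indirectly, through aggregation over its neighborhood, rather than by perturbing the target's own features.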