Reinforcement learning
Black box
Reinforcement
Graph
Computer science
Artificial intelligence
Psychology
Social psychology
Theoretical computer science
Authors
Minjie Zhao, Jing Zhang
Source
Journal: Proceedings of the ... AAAI Conference on Artificial Intelligence
[Association for the Advancement of Artificial Intelligence (AAAI)]
Date: 2025-04-11
Volume/Issue: 39 (12): 13357-13364
Identifier
DOI: 10.1609/aaai.v39i12.33458
Abstract
Recent studies have revealed the vulnerability of graph neural networks (GNNs) to adversarial attacks. In practice, effectively attacking GNNs is not easy. Existing attack methods primarily focus on modifying the topology of the graph data. In many scenarios, attackers do not have the authority to manipulate the graph's topology, making such attacks challenging to execute. Although node injection attacks are more feasible than modifying the topology, current injection attacks rely on knowledge of the victim model's architecture. This dependency significantly degrades attack quality when there is inconsistency in the victim models. Moreover, the generation of injected nodes often lacks precise control over features, making it difficult to balance attack effectiveness and stealthiness. In this paper, we investigate a node injection attack under model-agnostic conditions and propose Targeted Evasion Attack via Node Injection (TEANI). Specifically, TEANI models the generation of adversarial nodes as a Markov process. Without considering the target model's structure, it guides the agent to select features that maximize attack effectiveness within a budget, based solely on the results of queries to a black-box model. Extensive experiments on real-world datasets and mainstream GNN models demonstrate that the proposed TEANI poses more effective and imperceptible threats than state-of-the-art attack methods.
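The abstract describes a query-driven attack: the attacker injects a node and selects its features, within a budget, purely from the scores a black-box victim model returns. The paper's TEANI agent does this with reinforcement learning over a Markov process; as a much simpler illustrative stand-in (a hypothetical greedy baseline, not the authors' method), the query-feedback loop can be sketched as follows, where `query_fn` is an assumed black-box scoring oracle:

```python
import numpy as np

def greedy_injection_features(query_fn, num_features, budget):
    """Greedily activate up to `budget` binary features for an injected node,
    at each step choosing the feature whose activation most lowers the
    black-box score that `query_fn` returns for the target node.
    Illustrative sketch only: a greedy stand-in for TEANI's RL agent."""
    x = np.zeros(num_features)
    for _ in range(budget):
        base = query_fn(x)
        best_j, best_drop = None, 0.0
        for j in np.flatnonzero(x == 0):  # try each unused feature
            x[j] = 1.0
            drop = base - query_fn(x)     # one black-box query per candidate
            x[j] = 0.0
            if drop > best_drop:
                best_j, best_drop = j, drop
        if best_j is None:
            break  # no remaining feature lowers the target's score
        x[best_j] = 1.0
    return x

# Toy black-box oracle: score falls with overlap to a hidden weight vector.
w = np.array([0.5, -0.2, 0.9, 0.1, -0.4])
score = lambda x: 1.0 - float(x @ np.clip(w, 0.0, None))
adv = greedy_injection_features(score, num_features=5, budget=2)
# adv activates the two features with the largest positive hidden weights
```

The budget constraint bounds how many features the injected node may activate, which is one lever for trading attack effectiveness against stealthiness.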