Backdoor
Computer science
Computer security
Artificial neural network
Node (physics)
Artificial intelligence
Engineering
Structural engineering
Identification
DOI:10.1145/3576915.3624387
Abstract
Recent research has indicated that Graph Neural Networks (GNNs) are vulnerable to backdoor attacks, and existing studies focus on the One-to-One attack, where a single target is triggered by a single backdoor. In this work, we explore two advanced backdoor attacks on GNNs, namely multi-target and multi-trigger backdoor attacks: 1) the One-to-N attack, where multiple backdoor targets are triggered by controlling different values of the trigger; 2) the N-to-One attack, where the backdoor fires only when all N triggers are present. Initial experimental results show that both attacks can achieve a high attack success rate (up to 99.72%) on GNNs for the node classification task.
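To make the two attack settings concrete, the sketch below illustrates how a feature-based trigger might be injected into a victim node for node classification. This is a minimal illustration, not the paper's implementation: the trigger dimensions, trigger values, the value-to-target mapping, and the helper names are all assumptions introduced here.

```python
import numpy as np

# Assumed setup: triggers are written into fixed feature dimensions of a victim node.
TRIGGER_DIMS = [0, 1, 2]                 # feature positions used as the trigger (assumed)
VALUE_TO_TARGET = {0.2: 3, 0.8: 7}       # One-to-N: trigger value -> backdoor target class (assumed)


def inject_one_to_n(x, node, value):
    """One-to-N: a single trigger location, but its value selects the target label."""
    x = x.copy()
    x[node, TRIGGER_DIMS] = value
    return x, VALUE_TO_TARGET[value]


def inject_n_to_one(x, node, sub_triggers, target=5):
    """N-to-One: all N sub-triggers must be present for the backdoor to fire;
    injecting only a subset should leave the model's prediction unchanged."""
    x = x.copy()
    for dims, value in sub_triggers:     # each sub-trigger: (feature dims, value)
        x[node, dims] = value
    return x, target


# Toy usage: 10 nodes with 16-dimensional features.
x = np.random.rand(10, 16).astype(np.float32)
x_1n, y_1n = inject_one_to_n(x, node=4, value=0.8)                       # backdoor target class 7
x_n1, y_n1 = inject_n_to_one(x, node=4, sub_triggers=[([0, 1], 0.9),
                                                      ([6, 7], 0.1)])    # fires only with both
```

In training, the attacker would poison a small fraction of nodes this way and relabel them with the returned target class; at test time, injecting the same trigger pattern into a clean node should flip its prediction to that target.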