Backdoor
Robustness (evolution)
Computer science
Computer security
Real-time computing
Distributed computing
Computer vision
Artificial intelligence
Biochemistry
Gene
Chemistry
Authors
Yaguan Qian, Boyuan Ji, Zejie Lian, Renhui Tao, Ying Kong, Bin Wang, Wei Wang
Identifier
DOI: 10.1177/0926227x251334926
Abstract
Deep neural networks (DNNs) find extensive applications, including object detection in various security domains. However, these DNN models are susceptible to backdoor attacks. While significant research has been conducted on backdoor attacks against classification models, limited attention has been given to object detection models. Previous studies have predominantly focused on backdoor attacks in digital environments, overlooking real-world implications. Notably, the efficacy of backdoor attacks in real-world scenarios can be significantly influenced by physical factors such as distance and illumination. In this article, we introduce a variable-size backdoor trigger designed to accommodate objects of different sizes, mitigating disruptions arising from varying distances between the viewing point and the targeted object. Additionally, we propose malicious adversarial training for backdoor training, enabling the backdoor object detector to learn trigger features amid physical noise. Experimental results demonstrate that our robust backdoor attack (RBA) enhances the success rate of attacks in real-world settings.
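The abstract describes the variable-size trigger only at a high level, so the following is a minimal illustrative sketch rather than the authors' implementation: it scales a trigger patch relative to each target object's bounding box before pasting it, which is one plausible way to keep the trigger at a consistent apparent size as viewing distance changes. The function name, the `scale` parameter, and the center-placement choice are all assumptions not taken from the paper.

```python
# Hypothetical sketch of a size-adaptive backdoor trigger; not the
# paper's actual method. All names and parameters are assumptions.
import numpy as np

def paste_variable_size_trigger(image, box, trigger, scale=0.2):
    """Paste a trigger patch sized relative to the target bounding box.

    image:   H x W x 3 uint8 array (the scene).
    box:     (x1, y1, x2, y2) object location in pixel coordinates.
    trigger: h x w x 3 uint8 patch.
    scale:   trigger side as a fraction of the box's shorter side
             (hypothetical knob; the paper may size triggers differently).
    """
    x1, y1, x2, y2 = box
    side = max(1, int(scale * min(x2 - x1, y2 - y1)))
    # Nearest-neighbor resize so this sketch needs only numpy.
    rows = np.linspace(0, trigger.shape[0] - 1, side).astype(int)
    cols = np.linspace(0, trigger.shape[1] - 1, side).astype(int)
    patch = trigger[rows][:, cols]
    # Center the patch on the object, clamped to stay inside the image.
    y0 = min(max(0, (y1 + y2) // 2 - side // 2), image.shape[0] - side)
    x0 = min(max(0, (x1 + x2) // 2 - side // 2), image.shape[1] - side)
    image[y0:y0 + side, x0:x0 + side] = patch
    return image
```

To approximate the "physical noise" the abstract mentions during training, poisoned samples produced this way would plausibly also receive photometric augmentations such as random brightness shifts or blur before being fed to the detector.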