Computer science
Imaging phantom
Spoofing attack
Computer vision
Artificial intelligence
Computer security
Physics
Optics
Authors
Feng Lin,Hao Yan,Li Jin,Ziwei Liu,Li Lü,Zhongjie Ba,Kui Ren
Identifier
DOI:10.1109/tifs.2024.3376192
Abstract
Despite their prevalence and indispensability in the perception modules of autonomous vehicles, cameras have shown susceptibility to numerous attacks. Among them, the phantom spoofing attack is of significant concern. In such attacks, malefactors employ electronic display devices such as projectors and display monitors to generate deceptive objects, thereby duping the object detectors in autonomous vehicles. However, existing detection methodologies are narrowly focused on a single device category, ignoring the multitude of devices that could be leveraged for attacks. Furthermore, the artificial modality-based solutions presently in use lack efficacious fusion mechanisms. In response to these limitations, we propose PhaDe, a practical deep learning-based system adept at detecting phantom spoofing attacks from a variety of attack devices, including previously unseen ones. Our approach introduces two image processing techniques to construct artificial modalities and further advances a multi-head self-attention (MSA)-based fusion module for more versatile integration of disparate modalities. To boost the generalization capacity of our system against novel, unseen attacks, we incorporate two representation-level losses to align feature distributions from various domains. Evaluations conducted on our own dataset, encompassing fake objects from several device types, attest to the efficacy of our system. Our results indicate an accuracy of 98.80% on familiar domains and a detection success rate of 94.03% on unfamiliar domains. Additionally, PhaDe demonstrates a swift response time, fulfilling practicality requirements.
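The abstract mentions an MSA-based fusion of artificial modalities and representation-level losses for cross-domain alignment, but does not disclose implementation details. The following is a minimal, hypothetical PyTorch sketch of how such a fusion module and a simple alignment loss could look; the class names, feature dimensions, pooling strategy, and the mean-matching loss are illustrative assumptions, not the authors' actual design.

```python
# Hypothetical sketch (not the authors' code): fusing per-modality feature tokens
# with multi-head self-attention, plus a toy representation-level alignment loss.
import torch
import torch.nn as nn


class MSAFusion(nn.Module):
    """Fuse feature vectors from several modalities with multi-head self-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 4, num_modalities: int = 3):
        super().__init__()
        # One learnable embedding per modality slot (assumed design choice).
        self.modality_embed = nn.Parameter(torch.zeros(num_modalities, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, modality_feats: list[torch.Tensor]) -> torch.Tensor:
        # modality_feats: list of (batch, dim) vectors, one per modality.
        tokens = torch.stack(modality_feats, dim=1) + self.modality_embed  # (B, M, dim)
        fused, _ = self.attn(tokens, tokens, tokens)  # self-attention across modalities
        fused = self.norm(fused + tokens)             # residual connection + layer norm
        return fused.mean(dim=1)                      # pooled joint representation


def alignment_loss(src_feat: torch.Tensor, tgt_feat: torch.Tensor) -> torch.Tensor:
    """Toy representation-level alignment: match feature means across two domains."""
    return (src_feat.mean(dim=0) - tgt_feat.mean(dim=0)).pow(2).sum()


if __name__ == "__main__":
    fusion = MSAFusion()
    # e.g. an RGB feature plus two artificial-modality features per sample
    feats = [torch.randn(8, 256) for _ in range(3)]
    joint = fusion(feats)                        # (8, 256) fused representation
    loss = alignment_loss(joint[:4], joint[4:])  # align features from two domains
    print(joint.shape, loss.item())
```

In practice, the paper's two representation-level losses are likely more sophisticated than the mean-matching term above; the sketch only illustrates the idea of penalizing distribution mismatch between familiar and unfamiliar attack-device domains.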