Adversarial system
Computer science
Artificial intelligence
Vulnerability (computing)
Deep neural network
Computer security
Deep learning
Preprocessor
Machine learning
Normalization (linguistics)
Authors
Kaibo Wang, Yanjiao Chen, Wenyuan Xu
Identifier
DOI:10.1007/978-3-031-28990-3_9
Abstract
As deep learning-based computer vision is widely used in IoT devices, it is especially critical to ensure its security. Among attacks against deep neural networks, adversarial attacks are a stealthy means of attack that can mislead model decisions during the testing phase. Exploring adversarial attacks therefore helps to understand model vulnerabilities in advance and to mount targeted defenses. Existing unrestricted adversarial attacks beyond the $\ell_p$ norm often require additional models to make the perturbations both adversarial and imperceptible, which leads to high computational cost and task-specific designs. Inspired by the observation that models exhibit unexpected vulnerability to changes in illumination, we develop the Adversarial Illumination Attack (AIA), an unrestricted adversarial attack that imposes large but imperceptible alterations on the image. The core of the attack lies in simulating adversarial illumination through Planckian jitter, whose effectiveness stems from a causal chain in which the attacker misleads the model by manipulating the confounding factor. We propose an efficient approach that generates adversarial samples without additional models via image gradient regularization. We validate the effectiveness of adversarial illumination against black-box models, data preprocessing, and adversarially trained models through extensive experiments. The results confirm that AIA can serve both as a lightweight unrestricted attack and as a plug-in that boosts the effectiveness of other attacks.
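The abstract does not include an implementation, but its core idea (an illumination-style perturbation optimized against a classifier and kept plausible by a constraint on the image's spatial gradients) can be sketched in a few lines of PyTorch. The sketch below is an assumption-laden approximation, not the authors' AIA code: it optimizes per-channel illumination gains as a crude stand-in for Planckian jitter's colour-temperature shift, and the surrogate `model`, input `image`, `label`, and hyper-parameters (`steps`, `lr`, `lam`) are illustrative placeholders.

```python
# Minimal sketch of an illumination-style adversarial attack (assumptions noted above).
import torch
import torch.nn.functional as F

def adversarial_illumination(model, image, label, steps=50, lr=0.01, lam=10.0):
    """image: (1, 3, H, W) tensor in [0, 1]; label: (1,) ground-truth class index."""
    # Per-channel illumination gains: a low-dimensional stand-in for a
    # Planckian (colour-temperature) shift of the scene illumination.
    gains = torch.ones(1, 3, 1, 1, requires_grad=True)
    opt = torch.optim.Adam([gains], lr=lr)

    def spatial_grads(x):
        # Finite-difference image gradients along height and width.
        return x[..., 1:, :] - x[..., :-1, :], x[..., :, 1:] - x[..., :, :-1]

    gy0, gx0 = spatial_grads(image)
    for _ in range(steps):
        adv = (image * gains).clamp(0.0, 1.0)
        logits = model(adv)
        gy, gx = spatial_grads(adv)
        # Keep the image's spatial gradients close to the original (a rough
        # analogue of image-gradient regularization) ...
        smooth = F.mse_loss(gy, gy0) + F.mse_loss(gx, gx0)
        # ... while pushing the classifier away from the true label.
        loss = -F.cross_entropy(logits, label) + lam * smooth
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (image * gains.detach()).clamp(0.0, 1.0)
```

A usage sketch would pass a pretrained classifier in eval mode together with an input batch and its label; because only three gain parameters are optimized against the surrogate classifier, the attack stays lightweight and needs no additional (e.g. generative) model, in line with the abstract's claim.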