Artificial neural network
Robustness (evolution)
Gaussian noise
Computer science
Noise (video)
Gaussian distribution
Artificial intelligence
Machine learning
Pattern recognition (psychology)
Algorithm
Biochemistry
Chemistry
Physics
Quantum mechanics
Image (mathematics)
Gene
Authors
Jinwu Kang,Emily Zhao,Zekun Guo,Shibo Wang,Wenzhe Su,Xing Zhang
Source
Journal: Lecture Notes in Networks and Systems
Date: 2023-01-01
Pages: 396-409
Identifier
DOI:10.1007/978-3-031-43789-2_37
Abstract
Because neural networks are data-driven black-box models, their decision basis cannot be directly understood, and when fed adversarial samples they can reach wrong conclusions with high confidence. Many researchers therefore focus on the robustness of neural networks. This paper studies neural network defense based on random noise injection. In theory, injecting exponential-family noise into any layer of a neural network can guarantee robustness, but experiments show that the resistance to perturbation varies greatly across noise distributions. We investigate the robustness of neural networks under injection of exponential and Gaussian noise, and give the upper bound of the Rényi divergence under these two types of noise. Experimentally, we use the CIFAR-10 dataset to evaluate a variety of neural network architectures. We find that random noise injection can effectively reduce the impact of adversarial-sample attacks and make the network more robust; however, when the noise is too strong, the classification accuracy of the network itself declines. This paper proposes adding Gaussian noise with small variance to the image subject and Gaussian noise with large variance to the background, achieving a better defense effect.
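The two ideas in the abstract, per-layer random noise injection and the subject/background split with different variances, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the variance values, the `subject_mask` input, and the function names are all hypothetical, and the paper does not specify how the subject region is obtained.

```python
import numpy as np

def inject_gaussian_noise(x, sigma, seed=None):
    """Add i.i.d. Gaussian noise N(0, sigma^2) to an input or activation
    tensor, the basic randomized-smoothing-style defense the paper studies."""
    rng = np.random.default_rng(seed)
    return x + rng.normal(0.0, sigma, size=x.shape)

def subject_background_noise(image, subject_mask,
                             sigma_subject=0.05, sigma_background=0.25,
                             seed=None):
    """Spatially varying noise as the paper proposes: small variance on the
    image subject (where subject_mask is True), large variance on the
    background. The sigma defaults here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    # Per-pixel standard deviation map built from the (assumed given) mask.
    sigma_map = np.where(subject_mask, sigma_subject, sigma_background)
    return image + rng.normal(0.0, 1.0, size=image.shape) * sigma_map
```

At inference time such a defense would typically classify the noisy image (or average predictions over several noise draws); the trade-off the paper reports is that larger `sigma` weakens adversarial perturbations but also degrades clean accuracy, which motivates keeping the subject's noise small.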