Inpainting
Computer science
Pixel
Adversarial system
Exploit
Artificial intelligence
Deep learning
Image (mathematics)
Artificial neural network
Perception
Diversity (cybernetics)
Computer vision
Machine learning
Pattern recognition (psychology)
Computer engineering
Computer security
Biology
Neuroscience
Authors
Giorgio Ughini, Stefano Samele, Matteo Matteucci
Identifier
DOI:10.1109/ijcnn55064.2022.9892818
Abstract
Deep Learning systems, despite achieving significant breakthroughs in many fields, including computer vision and speech recognition, are not inherently secure. Adversarial attacks on computer vision models can craft slightly perturbed inputs that exploit the models' multi-dimensional boundary shape to dramatically reduce their performance without compromising how human beings perceive such inputs. In this work, we present Trust-No-Pixel, a novel plug-and-play strategy to harden neural network image classifiers against adversarial attacks, based on a massive inpainting strategy. The inpainting technique of our defense performs a total erase of the input image and reconstructs it from scratch. Our experiments show that Trust-No-Pixel improves accuracy against the most challenging type of such attacks, namely white-box adversarial attacks. Moreover, an exhaustive comparison of our technique against state-of-the-art approaches from the academic literature confirmed the solid defensive performance of Trust-No-Pixel under a wide variety of scenarios, including different attacks and attacked network architectures.
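The abstract describes the defense only at a high level: erase the input image entirely and rebuild it with an inpainting model before classification, so that no adversarially perturbed pixel reaches the classifier directly. The sketch below illustrates that plug-and-play idea under stated assumptions; the `inpaint` and `classifier` callables are hypothetical placeholders, and the two-pass complementary masking schedule is an assumption made only to show how every pixel can be regenerated rather than copied from the input, not the authors' exact procedure.

```python
# Illustrative sketch of an inpainting-based preprocessing defense.
# ASSUMPTIONS: `inpaint` and `classifier` are hypothetical callables supplied
# by the user; the checkerboard masking schedule is illustrative only and is
# not taken from the Trust-No-Pixel paper.
import numpy as np

def checkerboard_mask(h, w, block=8, phase=0):
    """Boolean mask that is True on alternating block-sized squares."""
    ys, xs = np.indices((h, w))
    return ((ys // block + xs // block) % 2) == phase

def inpainting_defense(image, inpaint, classifier, block=8):
    """Reconstruct the whole image via inpainting, then classify the result.

    image:      float array of shape (H, W, C) in [0, 1]
    inpaint:    callable(image, mask) -> image with masked pixels filled in
    classifier: callable(image) -> class scores
    """
    h, w, _ = image.shape
    reconstructed = np.empty_like(image)
    # Two complementary passes: each pass erases half of the pixels and asks
    # the inpainting model to regenerate them from the remaining half, so no
    # pixel of the final image is copied directly from the (possibly
    # adversarial) input.
    for phase in (0, 1):
        mask = checkerboard_mask(h, w, block=block, phase=phase)
        erased = image.copy()
        erased[mask] = 0.0                  # erase this half of the pixels
        filled = inpaint(erased, mask)      # reconstruct them from scratch
        reconstructed[mask] = filled[mask]
    return classifier(reconstructed)
```

Because the defense only transforms the input before it reaches the model, it can be wrapped around any existing classifier without retraining, which is what makes this family of strategies "plug-and-play."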