Adversarial examples
Deep learning
Interpretability
Artificial intelligence
Computer science
Software deployment
Deep neural networks
Data science
Machine learning
Software engineering
Authors
Wei Jiang, Zhiyuan He, Jinyu Zhan, Weijia Pan, Deepak Adhikari
Source
Journal: ACM Transactions on Cyber-Physical Systems (Association for Computing Machinery)
Date: 2021-09-22
Volume/Issue: 5 (4): 1-25
Cited by: 5
Abstract
Great progress has been made in deep learning over the past few years, which has driven the deployment of deep learning–based applications into cyber-physical systems. However, the lack of interpretability of deep learning models has led to potential security holes. Recent research has found that deep neural networks are vulnerable to well-designed input examples, called adversarial examples. The perturbations in such examples are often too small to detect, yet they completely fool deep learning models. In practice, adversarial attacks pose a serious threat to the success of deep learning. With the continuous development of deep learning applications, adversarial examples for different fields have also received attention. In this article, we summarize the methods of generating adversarial examples in computer vision, speech recognition, and natural language processing, and we study the applications of adversarial examples. We also explore emerging research and open problems.
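As a concrete illustration of what "generating adversarial examples" means in the computer-vision setting, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015), one of the generation methods surveys of this kind typically cover. This is not the authors' own algorithm; the function name, the `epsilon` value, and the PyTorch framing are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Sketch of FGSM: take one step in the direction of the sign of the
    loss gradient with respect to the input, bounded by epsilon.

    model   -- a differentiable classifier (hypothetical placeholder)
    x       -- input batch, pixel values assumed scaled to [0, 1]
    y       -- ground-truth labels
    epsilon -- perturbation budget (0.03 is an illustrative choice)
    """
    # Detach from any existing graph, then track gradients on the input.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, then clamp back to the valid pixel range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

A perturbation of this size is often imperceptible to a human viewer yet enough to change the model's predicted label, which is exactly the "too small to detect, but completely fools the model" behavior the abstract describes.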