Computer science
Discriminative model
Adversarial system
Series (stratigraphy)
Deep neural network
Black box
Vulnerability (computing)
Artificial intelligence
Range (aeronautics)
Differential evolution
Algorithm
Pattern recognition (psychology)
Artificial neural network
Machine learning
Computer security
Paleontology
Materials science
Composite material
Biology
Authors
Wenbo Yang, Jidong Yuan, Xiaokang Wang, Peixiang Zhao
Identifier
DOI:10.1016/j.engappai.2022.105218
Abstract
Deep neural networks (DNNs) for time series classification raise security concerns because of their vulnerability to adversarial attacks. Previous work that perturbs time series globally requires gradient information to generate adversarial examples, which makes the perturbations easy to perceive. In this paper, we propose a gradient-free black-box method called TSadv that attacks DNNs with local perturbations. First, we formalize the attack as a constrained optimization problem solved by a differential evolution algorithm, without any internal information about the target model. Second, under the assumption that time series shapelets provide more discriminative information between classes, the range of perturbations is restricted to the shapelet intervals. Experimental results show that our method effectively attacks DNNs on security-sensitive time series datasets and flexibly generates imperceptible adversarial samples. Moreover, our approach decreases the mean squared error by roughly two orders of magnitude compared with the state-of-the-art method while retaining competitive attack success rates.
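The core idea of the abstract — a gradient-free attack that searches, via differential evolution, for a small perturbation confined to a discriminative interval of the series — can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_classifier`, `attack_score`, the interval bounds, and all DE hyperparameters are hypothetical stand-ins; TSadv targets real DNNs and derives intervals from learned shapelets.

```python
import random

def toy_classifier(series):
    # Hypothetical black-box target: class 1 if the "shapelet" region
    # (indices 10..19, assumed discriminative) sums to a positive value.
    return 1 if sum(series[10:20]) > 0 else 0

def attack_score(series):
    # Scalar decision value the attacker queries; driving it below zero
    # flips the predicted class. Only outputs are used -- no gradients.
    return sum(series[10:20])

def de_attack(series, interval, pop_size=20, gens=100,
              f=0.5, cr=0.9, bound=0.5, lam=0.1, seed=0):
    """Classic DE/rand/1/bin search for a local adversarial perturbation
    restricted to `interval`, balancing misclassification against an
    L2 penalty that keeps the perturbation small."""
    rng = random.Random(seed)
    lo, hi = interval
    dim = hi - lo

    def fitness(delta):
        x = list(series)
        for i, d in enumerate(delta):
            x[lo + i] += d
        return attack_score(x) + lam * sum(d * d for d in delta)

    pop = [[rng.uniform(-bound, bound) for _ in range(dim)]
           for _ in range(pop_size)]
    fit = [fitness(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            # Mutate three distinct individuals, then binomial crossover.
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)
            trial = []
            for j in range(dim):
                if rng.random() < cr or j == jrand:
                    v = pop[a][j] + f * (pop[b][j] - pop[c][j])
                    trial.append(max(-bound, min(bound, v)))
                else:
                    trial.append(pop[i][j])
            tf = fitness(trial)
            if tf < fit[i]:  # greedy selection
                pop[i], fit[i] = trial, tf
    best = min(range(pop_size), key=fit.__getitem__)
    adv = list(series)
    for i, d in enumerate(pop[best]):
        adv[lo + i] += d
    return adv
```

Because the search only queries the classifier's output, the same loop applies unchanged to any opaque model; confining `interval` to a shapelet-like region is what keeps the perturbation local and hard to perceive.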