Authors
Xue Wang,Jingjing Ma,Xue Wang,Jiahui Hu,Zhan Qin,Kui Ren
Abstract
Machine learning (ML) has been universally adopted for automated decision-making in a variety of fields, including recognition and classification, recommendation systems, natural language processing, and more. However, given the high cost of training data and computing resources, recent years have witnessed a rapid increase in partially or fully outsourced ML training, which exposes vulnerabilities for adversaries to exploit. A prime threat in the training phase is the poisoning attack, in which adversaries strive to subvert the behavior of machine learning systems by poisoning the training data or through other means of interference. Although a growing number of relevant studies have appeared, research on poisoning attacks remains scattered, with each paper focusing on a particular task in a specific domain. In this survey, we summarize and categorize existing attack methods and corresponding defenses, and demonstrate compelling application scenarios, thus providing a unified framework for analyzing poisoning attacks. We also discuss the main limitations of current work, along with future directions to facilitate further research. Our ultimate goal is to provide a comprehensive and self-contained survey of this growing field and to lay the foundation for a more standardized approach to reproducible studies.
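To make the threat model concrete, the data-poisoning idea described in the abstract can be sketched with a toy example: an adversary injects mislabeled points into the training set of a simple nearest-centroid classifier. The classifier, the synthetic data, and the poison placement are all illustrative assumptions for this sketch, not methods from the surveyed paper.

```python
import random

random.seed(0)

def make_data(n):
    # Toy 1D dataset: class 0 centered at -2, class 1 at +2 (illustrative).
    data = []
    for _ in range(n):
        y = random.randint(0, 1)
        x = random.gauss(-2.0 if y == 0 else 2.0, 1.0)
        data.append((x, y))
    return data

def train_centroids(data):
    # "Model" = per-class mean; predict by nearest centroid.
    return {c: sum(x for x, y in data if y == c) /
               sum(1 for _, y in data if y == c)
            for c in (0, 1)}

def accuracy(model, data):
    correct = sum(min(model, key=lambda c: abs(x - model[c])) == y
                  for x, y in data)
    return correct / len(data)

train, test = make_data(200), make_data(200)

clean_model = train_centroids(train)

# Poisoning: the adversary appends far-away points with the wrong
# label, dragging the class-0 centroid past the class-1 centroid.
poison = [(10.0, 0)] * 100
poisoned_model = train_centroids(train + poison)

print(f"clean accuracy:    {accuracy(clean_model, test):.2f}")
print(f"poisoned accuracy: {accuracy(poisoned_model, test):.2f}")
```

Even this crude untargeted attack collapses test accuracy, which is why the survey's defenses center on detecting or bounding the influence of anomalous training points.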