Adversarial system
Binary classification
Bounded function
Computer science
Estimator
Artificial intelligence
Binary number
Training set
Machine learning
Mathematics
Mathematical optimization
Algorithm
Statistics
Support vector machine
Mathematical analysis
Arithmetic
Authors
Hossein Taheri,Ramtin Pedarsani,Christos Thrampoulidis
Identifier
DOI:10.1109/isit50566.2022.9834717
Abstract
Adversarial training using empirical risk minimization is the state-of-the-art method for defense against adversarial attacks, that is, against small additive adversarial perturbations applied to test data that lead to misclassification. Despite being successful in practice, understanding the generalization properties of adversarial training in classification remains widely open. In this paper, we take the first step in this direction by precisely characterizing the robustness of adversarial training in binary linear classification. Specifically, we consider the high-dimensional regime where the model dimension grows with the size of the training set at a constant ratio. Our results provide exact asymptotics for both standard and adversarial test errors under ℓ∞-norm bounded perturbations in a generative Gaussian-mixture model. We use our sharp error formulae to explain how the adversarial and standard errors depend on the overparameterization ratio, the data model, and the attack budget. Finally, by comparing with the robust Bayes estimator, our sharp asymptotics allow us to study fundamental limits of adversarial training.
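The setting in the abstract can be illustrated with a small sketch (not the authors' code; all names and parameter values here are illustrative assumptions). For a linear classifier w, the worst-case ℓ∞ perturbation of budget ε has a closed form: it reduces the margin y⟨w, x⟩ by ε‖w‖₁, so adversarial empirical risk minimization can be run directly on the perturbed margin. Data are drawn from a simple Gaussian-mixture model x = y·μ + z:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, eps = 20, 400, 2000, 0.1  # illustrative sizes and attack budget
mu = np.full(d, 0.5)                          # mixture mean (assumed data model)

def make_gmm(n):
    """Gaussian-mixture data: y uniform on {-1,+1}, x = y*mu + standard noise."""
    y = rng.choice([-1.0, 1.0], size=n)
    x = y[:, None] * mu + rng.standard_normal((n, d))
    return x, y

def adv_train(x, y, eps, lr=0.5, steps=300):
    """(Sub)gradient descent on the logistic loss of the worst-case
    l_inf-perturbed margin: y*<w,x> - eps*||w||_1."""
    n, d = x.shape
    w = np.zeros(d)
    for _ in range(steps):
        margin = y * (x @ w) - eps * np.abs(w).sum()
        # derivative of log(1+exp(-m)) w.r.t. m, computed stably
        s = -1.0 / (1.0 + np.exp(np.clip(margin, -30.0, 30.0)))
        grad = (x.T @ (s * y)) / n - eps * s.mean() * np.sign(w)
        w -= lr * grad
    return w

x_tr, y_tr = make_gmm(n_train)
x_te, y_te = make_gmm(n_test)
w = adv_train(x_tr, y_tr, eps)

# Standard test error vs. adversarial test error under the same budget eps.
std_err = np.mean(y_te * (x_te @ w) <= 0.0)
adv_err = np.mean(y_te * (x_te @ w) - eps * np.abs(w).sum() <= 0.0)
print(f"standard error: {std_err:.3f}, adversarial error: {adv_err:.3f}")
```

Since the adversarial margin only subtracts the nonnegative term ε‖w‖₁, the adversarial error is always at least the standard error; the paper's asymptotics characterize exactly how this gap depends on the overparameterization ratio d/n and the budget ε.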