Lipschitz continuity
Robustness (evolution)
Scalability
Computer science
Artificial neural network
Residual
Regular polygon
Perspective (graphical)
Mathematical optimization
Theoretical computer science
Artificial intelligence
Mathematics
Algorithm
Pure mathematics
Gene
Database
Biochemistry
Geometry
Chemistry
Authors
Laurent Meunier, Blaise Delattre, Alexandre Araujo, Alexandre Allauzen
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Identifier
DOI: 10.48550/arxiv.2110.12690
Abstract
The Lipschitz constant of neural networks has been established as a key quantity for enforcing robustness to adversarial examples. In this paper, we tackle the problem of building $1$-Lipschitz Neural Networks. By studying Residual Networks from a continuous-time dynamical system perspective, we provide a generic method to build $1$-Lipschitz Neural Networks and show that some previous approaches are special cases of this framework. We then extend this reasoning and show that ResNet flows derived from convex potentials define $1$-Lipschitz transformations, which leads us to define the {\em Convex Potential Layer} (CPL). A comprehensive set of experiments on several datasets demonstrates the scalability of our architecture and its benefits as an $\ell_2$-provable defense against adversarial examples.
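To illustrate the idea described in the abstract, here is a minimal NumPy sketch of a layer of the convex-potential form $z = x - \frac{2}{\|W\|_2^2} W^\top \sigma(Wx + b)$, with a numerical check that it contracts distances. This is an illustrative reconstruction, not the authors' implementation: the function names, the use of ReLU as $\sigma$, and the power-iteration estimate of the spectral norm $\|W\|_2$ are all assumptions of this sketch.

```python
import numpy as np

def spectral_norm(W, iters=100):
    # Power iteration to estimate the largest singular value ||W||_2
    # (assumption: a plain power-iteration estimate stands in for the
    # exact spectral norm used in the 1-Lipschitz guarantee).
    v = np.random.default_rng(0).standard_normal(W.shape[1])
    for _ in range(iters):
        v = W.T @ (W @ v)
        v /= np.linalg.norm(v)
    return np.linalg.norm(W @ v)

def convex_potential_layer(x, W, b):
    # z = x - (2 / ||W||_2^2) * W^T relu(W x + b)
    h = np.maximum(W @ x + b, 0.0)          # sigma = ReLU (assumption)
    return x - (2.0 / spectral_norm(W) ** 2) * (W.T @ h)

# Empirical check on random inputs: outputs should be no farther
# apart than the inputs, i.e. the layer behaves as 1-Lipschitz.
rng = np.random.default_rng(1)
W = rng.standard_normal((16, 8))
b = rng.standard_normal(16)
x, y = rng.standard_normal(8), rng.standard_normal(8)
out_gap = np.linalg.norm(convex_potential_layer(x, W, b)
                         - convex_potential_layer(y, W, b))
in_gap = np.linalg.norm(x - y)
print(out_gap <= in_gap)
```

Because the layer is a gradient step on a convex potential with a step size tied to $\|W\|_2^2$, the distance between any two outputs is bounded by the distance between the inputs, which is what the check above observes empirically.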