Artificial neural network
Regularization (linguistics)
Generalization
Computer science
Parameterized complexity
Inductive bias
Deep neural network
Dimension (graph theory)
Nonlinear system
Artificial intelligence
Function (biology)
Popularity
Mathematics
Algorithm
Physics
Pure mathematics
Engineering
Biology
Mathematical analysis
Evolutionary biology
Social psychology
Quantum mechanics
Multi-task learning
Systems engineering
Psychology
Task (project management)
Authors
Weiyang Liu, Rongmei Lin, Zhen Liu, Li Xiong, Bernhard Schölkopf, Adrian Weller
Source
Journal: Cornell University - arXiv
Date: 2021-01-01
Cited by: 11
Identifier
DOI: 10.48550/arxiv.2103.01649
Abstract
Due to their over-parameterized nature, neural networks are a powerful tool for nonlinear function approximation. To achieve good generalization on unseen data, a suitable inductive bias is of great importance for neural networks. One of the most straightforward ways to impose such a bias is to regularize the neural network with additional objectives. L2 regularization serves as a standard regularization for neural networks. Despite its popularity, it essentially regularizes one dimension of each individual neuron, which is not strong enough to control the capacity of highly over-parameterized neural networks. Motivated by this, hyperspherical uniformity is proposed as a novel family of relational regularizations that impact the interaction among neurons. We consider several geometrically distinct ways to achieve hyperspherical uniformity. The effectiveness of hyperspherical uniformity is justified by theoretical insights and empirical evaluations.
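The contrast drawn in the abstract can be sketched in code: an L2 penalty acts on each neuron's weight vector independently, whereas a relational regularizer in the spirit of hyperspherical uniformity projects the neuron weights onto the unit hypersphere and penalizes how close they are to one another. The energy function below (inverse pairwise distances, so that neurons "repel" and spread out on the sphere) is an illustrative assumption for exposition, not the paper's exact family of objectives.

```python
import numpy as np

def l2_penalty(W):
    # Standard L2 regularization: the sum of squared entries, which
    # constrains each neuron's weight vector independently of the others.
    return np.sum(W ** 2)

def hyperspherical_energy(W, eps=1e-8):
    # Illustrative relational regularizer (an assumption for this sketch):
    # normalize each row (neuron) onto the unit hypersphere, then sum the
    # inverse pairwise Euclidean distances. Nearly parallel neurons give a
    # large energy; well-spread (uniform) neurons give a small one.
    Wn = W / (np.linalg.norm(W, axis=1, keepdims=True) + eps)
    n = Wn.shape[0]
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            energy += 1.0 / (np.linalg.norm(Wn[i] - Wn[j]) + eps)
    return energy
```

For example, two orthogonal neurons yield a much lower energy than two nearly duplicated ones, while their L2 penalties can be identical, which mirrors the abstract's point that a per-neuron norm penalty alone cannot encourage diversity among neurons.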