Early stopping
Parametrization (atmospheric modeling)
Dimension (graph theory)
Artificial neural network
Computer science
Simplicity (philosophy)
Optimal stopping
Process (computing)
Sample (material)
Optics (focusing)
Deep learning
Artificial intelligence
Linear model
Deep neural network
Machine learning
Mathematics
Mathematical optimization
Physics
Optics
Philosophy
Operating system
Epistemology
Thermodynamics
Pure mathematics
Radiative transfer
Quantum mechanics
Authors
Ruoqi Shen, Liyao Gao, Yuanlin Ma
Source
Journal: Cornell University - arXiv
Date: 2022-02-20
Identifier
DOI: 10.48550/arxiv.2202.09885
Abstract
Early stopping is a simple and widely used method to prevent over-training neural networks. We develop theoretical results to reveal the relationship between the optimal early stopping time and the model dimension as well as the sample size of the dataset for certain linear models. Our results demonstrate two very different behaviors when the model dimension exceeds the number of features versus the opposite scenario. While most previous works on linear models focus on the latter setting, we observe that the dimension of the model often exceeds the number of features arising from data in common deep learning tasks and propose a model to study this setting. We demonstrate experimentally that our theoretical results on the optimal early stopping time correspond to the training process of deep neural networks.
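The quantity studied here can be illustrated with a minimal numerical sketch (not the paper's code or experimental setup): full-batch gradient descent on a noisy overparameterized linear regression problem, where the iteration that minimizes the test risk plays the role of the optimal early stopping time. The sample size n, model dimension d, noise level, learning rate, and step count below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Minimal sketch, assuming an overparameterized linear model (d > n)
# with Gaussian features and label noise. We run gradient descent from
# zero, track the test risk at every iteration, and report the iteration
# that minimizes it as the empirical optimal early stopping time.
rng = np.random.default_rng(0)

n, d = 100, 150          # sample size < model dimension (hypothetical values)
sigma = 2.0              # label noise standard deviation (hypothetical)
w_true = rng.normal(scale=0.2, size=d)

X_train = rng.normal(size=(n, d))
y_train = X_train @ w_true + sigma * rng.normal(size=n)
X_test = rng.normal(size=(5000, d))
y_test = X_test @ w_true + sigma * rng.normal(size=5000)

w = np.zeros(d)
lr, steps = 0.05, 3000
train_risk, test_risk = [], []
for t in range(steps):
    grad = X_train.T @ (X_train @ w - y_train) / n   # gradient of 0.5 * MSE
    w -= lr * grad
    train_risk.append(np.mean((X_train @ w - y_train) ** 2))
    test_risk.append(np.mean((X_test @ w - y_test) ** 2))

t_opt = int(np.argmin(test_risk))
print(f"optimal stopping iteration: {t_opt}")
print(f"test risk at t_opt: {test_risk[t_opt]:.3f}  |  at final step: {test_risk[-1]:.3f}")
print(f"train risk at final step: {train_risk[-1]:.3f}")
```

In this overparameterized regime the training error keeps shrinking toward interpolation of the noisy labels, while the test risk typically reaches its minimum well before convergence; the argmin over iterations is the empirical counterpart of the optimal early stopping time whose dependence on d and n the paper analyzes.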