Overfitting
Boosting (machine learning)
Early stopping
Artificial intelligence
AdaBoost
Gradient boosting
Machine learning
Estimator
Mathematics
Support vector machine
Cross-validation
Generalization error
Computer science
Ensemble learning
Regularization (mathematics)
Bias of an estimator
Random forest
Algorithm
Statistics
Minimum-variance unbiased estimator
Artificial neural network
Authors
Benyamin Ghojogh,Mark Crowley
Source
Journal: Cornell University - arXiv
Date: 2019-05-28
Citations: 10
Abstract
In this tutorial paper, we first define the mean squared error, variance, covariance, and bias of both random variables and classification/prediction models. Then, we formulate the true and generalization errors of the model for both training and validation/test instances, where we make use of Stein's Unbiased Risk Estimator (SURE). We define overfitting, underfitting, and generalization using the obtained true and generalization errors. We introduce cross validation and two well-known examples, $K$-fold and leave-one-out cross validation. We briefly introduce generalized cross validation and then move on to regularization, where we use SURE again. We work on both $\ell_2$ and $\ell_1$ norm regularizations. Then, we show that bootstrap aggregating (bagging) reduces the variance of estimation. Boosting, specifically AdaBoost, is introduced and explained both as an additive model and as a maximum-margin model, i.e., a Support Vector Machine (SVM). An upper bound on the generalization error of boosting is also provided to show why boosting prevents overfitting. As examples of regularization, the theory of ridge and lasso regression, weight decay, noise injection to inputs/weights, and early stopping are explained. Random forests, dropout, the histogram of oriented gradients, and the single-shot multi-box detector are explained as examples of bagging in machine learning and computer vision. Finally, boosted tree and SVM models are mentioned as examples of boosting.
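For reference, the quantities the abstract opens with are tied together by the standard bias-variance decomposition of the mean squared error of an estimator $\hat\theta$ of a parameter $\theta$ (a textbook identity the paper builds on, not a result specific to it):

$$\mathrm{MSE}(\hat\theta) = \mathbb{E}\big[(\hat\theta - \theta)^2\big] = \mathrm{Var}(\hat\theta) + \mathrm{Bias}^2(\hat\theta), \qquad \mathrm{Bias}(\hat\theta) = \mathbb{E}[\hat\theta] - \theta.$$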
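The abstract introduces $K$-fold and leave-one-out cross validation. A minimal Python sketch of $K$-fold cross validation follows; the model_factory callable and its fit/predict interface are hypothetical placeholders for illustration, not an API from the paper.

import numpy as np

def k_fold_cv(model_factory, X, y, k=5, seed=0):
    """Estimate generalization error by K-fold cross validation.

    model_factory() must return a fresh object exposing fit(X, y) and
    predict(X) (a hypothetical interface assumed for this sketch).
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(len(X))          # shuffle before splitting
    folds = np.array_split(indices, k)         # k near-equal folds
    errors = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        pred = model.predict(X[val_idx])
        errors.append(np.mean((pred - y[val_idx]) ** 2))  # validation MSE
    return float(np.mean(errors))              # average over the k folds

Setting k = len(X) recovers leave-one-out cross validation as a special case: each fold holds out exactly one instance.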
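As a concrete instance of the $\ell_2$ regularization (ridge regression) the abstract mentions, the penalized least-squares problem has a closed-form solution. The sketch below assumes a plain design matrix X with no intercept column, purely for illustration.

import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Ridge regression: minimizes ||y - Xw||^2 + lam * ||w||^2.

    Closed-form solution: w = (X^T X + lam * I)^{-1} X^T y.
    (Intercept handling is omitted for brevity.)
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

The $\ell_1$ (lasso) counterpart has no closed form and is typically solved iteratively, e.g. by coordinate descent with soft-thresholding.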
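The claim that bagging reduces the variance of estimation can be illustrated with the sketch below, which trains each base model on a bootstrap resample and averages the predictions; again, model_factory and its fit/predict interface are hypothetical.

import numpy as np

def bagging_predict(model_factory, X_train, y_train, X_test,
                    n_models=25, seed=0):
    """Bootstrap aggregating: fit each base model on a resample drawn
    with replacement, then average predictions to cut ensemble variance."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, n, size=n)       # bootstrap: sample with replacement
        model = model_factory()
        model.fit(X_train[idx], y_train[idx])
        preds.append(model.predict(X_test))
    return np.mean(preds, axis=0)              # averaged (aggregated) prediction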
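Finally, the additive-model view of AdaBoost described in the abstract corresponds to the classic reweighting loop below (discrete AdaBoost for labels in {-1, +1}); the weak_factory interface with a sample_weight argument is assumed for the sketch, not taken from the paper.

import numpy as np

def adaboost_fit(weak_factory, X, y, n_rounds=50):
    """AdaBoost for y in {-1, +1}: builds the additive model
    F(x) = sum_t alpha_t * h_t(x) by reweighting misclassified points.

    weak_factory() must return a learner with fit(X, y, sample_weight)
    and predict(X) (a hypothetical interface assumed for this sketch).
    """
    n = len(X)
    w = np.full(n, 1.0 / n)                    # uniform initial weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        h = weak_factory()
        h.fit(X, y, sample_weight=w)
        pred = h.predict(X)
        err = np.sum(w * (pred != y)) / np.sum(w)   # weighted error rate
        if err >= 0.5:                         # no better than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)         # upweight the mistakes
        w /= w.sum()
        learners.append(h)
        alphas.append(alpha)
    return learners, alphas

def adaboost_predict(learners, alphas, X):
    scores = sum(a * h.predict(X) for h, a in zip(learners, alphas))
    return np.sign(scores)                     # sign of the additive model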