Normalization (sociology)
Computer science
Artificial neural network
Correctness
Calibration
Scaling
Deep neural network
Artificial intelligence
Machine learning
Simplicity (philosophy)
Deep learning
Pattern recognition (psychology)
Data mining
Algorithm
Mathematics
Statistics
Philosophy
Sociology
Epistemology
Anthropology
Geometry
Authors
Chuan Guo, Geoff Pleiss, Yu Sun, Kilian Q. Weinberger
Source
Venue: International Conference on Machine Learning
Date: 2017-07-17
Pages: 1321-1330
Citations: 1315
Abstract
Confidence calibration -- the problem of predicting probability estimates representative of the true correctness likelihood -- is important for classification models in many applications. We discover that modern neural networks, unlike those from a decade ago, are poorly calibrated. Through extensive experiments, we observe that depth, width, weight decay, and Batch Normalization are important factors influencing calibration. We evaluate the performance of various post-processing calibration methods on state-of-the-art architectures with image and document classification datasets. Our analysis and experiments not only offer insights into neural network learning, but also provide a simple and straightforward recipe for practical settings: on most datasets, temperature scaling -- a single-parameter variant of Platt Scaling -- is surprisingly effective at calibrating predictions.
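The temperature scaling method mentioned in the abstract rescales a network's logits by a single learned scalar T > 0 before the softmax, with T chosen to minimize negative log-likelihood on a held-out validation set. A minimal sketch in NumPy is below; the function names and the grid search (standing in for the gradient-based optimization typically used) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax over the last axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def nll(logits, labels, T):
    # negative log-likelihood of labels under temperature-scaled probabilities
    probs = softmax(logits / T)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(logits, labels, grid=np.linspace(0.5, 5.0, 91)):
    # fit the single parameter T on validation data; a simple grid search
    # stands in for the usual gradient-based NLL minimization
    losses = [nll(logits, labels, T) for T in grid]
    return float(grid[int(np.argmin(losses))])
```

Because dividing all logits by the same T does not change the argmax, temperature scaling softens (T > 1) or sharpens (T < 1) the predicted probabilities without affecting accuracy, which is why it can fix overconfidence while leaving classification error untouched.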