Generalizability theory
Computer science
Posterior probability
Artificial intelligence
Classifier (UML)
Point estimation
Pattern recognition (psychology)
Point (geometry)
Data mining
Algorithm
Machine learning
Statistics
Mathematics
Bayesian probability
Geometry
Authors
Kay H. Brodersen, Cheng Soon Ong, Klaas E. Stephan, Joachim M. Buhmann
Identifier
DOI: 10.1109/icpr.2010.764
Abstract
Evaluating the performance of a classification algorithm critically requires a measure of the degree to which unseen examples have been identified with their correct class labels. In practice, generalizability is frequently estimated by averaging the accuracies obtained on individual cross-validation folds. This procedure, however, is problematic in two ways. First, it does not allow for the derivation of meaningful confidence intervals. Second, it leads to an optimistic estimate when a biased classifier is tested on an imbalanced dataset. We show that both problems can be overcome by replacing the conventional point estimate of accuracy by an estimate of the posterior distribution of the balanced accuracy.
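The idea sketched in the abstract can be illustrated with a small Monte Carlo example. Assuming uniform Beta(1, 1) priors on the per-class accuracies, the posterior of each class-wise accuracy is a Beta distribution over the corresponding confusion-matrix counts, and the posterior of the balanced accuracy is the distribution of their mean. The counts `tp`, `fn`, `tn`, `fp` below are hypothetical, and the sampling-based approximation is only a sketch of this approach, not the authors' implementation.

```python
import numpy as np

# Hypothetical confusion-matrix counts from a cross-validated classifier
# (tp, fn: positive class; tn, fp: negative class).
tp, fn = 45, 5
tn, fp = 30, 20

# With uniform Beta(1, 1) priors, the class-wise accuracies have Beta posteriors:
# sensitivity ~ Beta(tp + 1, fn + 1), specificity ~ Beta(tn + 1, fp + 1).
rng = np.random.default_rng(0)
n_samples = 100_000
sens = rng.beta(tp + 1, fn + 1, n_samples)
spec = rng.beta(tn + 1, fp + 1, n_samples)

# Balanced accuracy is the mean of the two class-wise accuracies; its posterior
# is approximated here by Monte Carlo samples of that mean.
bal_acc = 0.5 * (sens + spec)

mean = bal_acc.mean()
lo, hi = np.percentile(bal_acc, [2.5, 97.5])
print(f"posterior mean balanced accuracy: {mean:.3f}")
print(f"95% credible interval: [{lo:.3f}, {hi:.3f}]")
```

Unlike averaging fold-wise accuracies, this yields a full posterior distribution, so a credible interval comes for free, and because each class contributes equally to the balanced accuracy, a biased classifier on an imbalanced dataset is not rewarded with an optimistic estimate.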