Keywords
Robustness (evolution) · Inference · Estimator · Computer science · Population · Robust statistics · Machine learning · Consistency (knowledge base) · Artificial intelligence · Data point · Divergence (linguistics) · Econometrics · Data mining · Mathematics · Statistics · Outlier · Sociology · Philosophy · Demography · Gene · Chemistry · Biochemistry · Linguistics
Authors
Maxime Cauchois,Suyash Gupta,Alnur Ali,John C. Duchi
Identifiers
DOI:10.1080/01621459.2023.2298037
Abstract
While the traditional viewpoint in machine learning and statistics assumes training and testing samples come from the same population, practice belies this fiction. One strategy—coming from robust statistics and optimization—is thus to build a model robust to distributional perturbations. In this paper, we take a different approach to describe procedures for robust predictive inference, where a model provides uncertainty estimates on its predictions rather than point predictions. We present a method that produces prediction sets (almost exactly) giving the right coverage level for any test distribution in an f-divergence ball around the training population. The method, based on conformal inference, achieves (nearly) valid coverage in finite samples, under only the condition that the training data be exchangeable. An essential component of our methodology is to estimate the amount of expected future data shift and build robustness to it; we develop estimators and prove their consistency for protection and validity of uncertainty estimates under shifts. By experimenting on several large-scale benchmark datasets, including Recht et al.’s CIFAR-v4 and ImageNet-V2 datasets, we provide complementary empirical results that highlight the importance of robust predictive validity.
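The core idea of the abstract — inflate the nominal level of a conformal prediction set so that coverage survives a bounded distribution shift — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's method: it uses standard split conformal prediction and, as a stand-in for the f-divergence ball, the special case of a total-variation shift of radius `rho` (if coverage is at least 1 − α + ρ under the training distribution, it is at least 1 − α under any distribution within TV distance ρ). The function names `conformal_quantile` and `robust_prediction_interval` and the parameter `rho` are illustrative assumptions; the paper derives the exact inflation for general f-divergences and estimates the shift radius from data.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Split-conformal quantile: the ceil((n+1)(1-alpha))-th smallest
    calibration score, which guarantees >= 1-alpha coverage for
    exchangeable data."""
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    k = min(k, n)  # guard against k > n when alpha is very small
    return np.sort(scores)[k - 1]

def robust_prediction_interval(cal_residuals, y_pred, alpha, rho=0.0):
    """Prediction interval with the nominal miscoverage shrunk by rho,
    a crude total-variation-style guard against distribution shift
    (illustrative; not the paper's f-divergence construction)."""
    alpha_robust = max(alpha - rho, 0.0)
    q = conformal_quantile(cal_residuals, alpha_robust)
    return y_pred - q, y_pred + q

# Example: calibration residuals 1..100, 90% target, shift budget rho=0.05.
lo, hi = robust_prediction_interval(np.arange(1.0, 101.0), 0.0,
                                    alpha=0.10, rho=0.05)
```

Because `rho > 0` shrinks the allowed miscoverage, the robust interval is always at least as wide as the standard conformal interval; the statistical price of robustness is paid in set size.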