Computer science
Software deployment
Reliability (semiconductor)
Machine learning
Estimation
Task (project management)
Artificial intelligence
Construct (python library)
Trustworthiness
Scheme (mathematics)
Data mining
Monocular
Training set
Software engineering
Quantum mechanics
Physics
Mathematical analysis
Economics
Power (physics)
Management
Programming language
Computer security
Mathematics
Authors
Haoxuan Qu,Yanchao Li,Lin Geng Foo,Jason Kuen,Jiuxiang Gu,Jun Liu
Identifier
DOI:10.1007/978-3-031-19812-0_23
Abstract
Confidence estimation, a task that aims to evaluate the trustworthiness of a model's prediction output during deployment, has recently received considerable research attention due to its importance for the safe deployment of deep models. Previous works have outlined two important qualities that a reliable confidence estimation model should possess: the ability to perform well under label imbalance, and the ability to handle various out-of-distribution data inputs. In this work, we propose a meta-learning framework that simultaneously improves both qualities in a confidence estimation model. Specifically, we first construct virtual training and testing sets with intentionally designed distribution differences between them. Our framework then uses the constructed sets to train the confidence estimation model through a virtual training and testing scheme, leading it to learn knowledge that generalizes to diverse distributions. We show the effectiveness of our framework on both monocular depth estimation and image classification.

Keywords: Confidence estimation, Meta-learning
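The virtual training-and-testing scheme described in the abstract can be illustrated with a minimal first-order meta-update sketch. This is not the authors' implementation: the toy logistic "confidence model", the synthetic data, the `virtual_split` helper, and the learning rates are all assumptions made for illustration. The key idea it mirrors is that the virtual training set is made intentionally label-imbalanced relative to the virtual test set, and the model is updated using feedback from the virtual test set after a virtual training step:

```python
import numpy as np

rng = np.random.default_rng(0)

def confidence_loss(w, X, y):
    # Logistic loss as a stand-in for a confidence estimation objective.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def grad(w, X, y):
    # Gradient of the logistic loss above.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

def virtual_split(X, y, pos_frac_train=0.9):
    # Intentionally imbalance the virtual training set (mostly positives)
    # relative to the virtual test set, creating a distribution difference.
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    n_tr_pos = int(len(pos) * pos_frac_train)
    tr = np.concatenate([pos[:n_tr_pos], neg[: len(neg) // 2]])
    te = np.concatenate([pos[n_tr_pos:], neg[len(neg) // 2 :]])
    return (X[tr], y[tr]), (X[te], y[te])

# Synthetic data: label depends mainly on the first feature.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.3 * rng.normal(size=200) > 0).astype(float)

w = np.zeros(5)
alpha, beta = 0.5, 0.5  # inner (virtual train) / outer (meta) step sizes
for _ in range(100):
    (Xtr, ytr), (Xte, yte) = virtual_split(X, y)
    w_inner = w - alpha * grad(w, Xtr, ytr)   # virtual training step
    w = w - beta * grad(w_inner, Xte, yte)    # meta-update from virtual test feedback

print(round(confidence_loss(w, X, y), 3))
```

The outer update uses the gradient evaluated at the inner-updated parameters (a first-order approximation, in the spirit of first-order MAML), so the model is rewarded for parameters that still perform well after adapting under a shifted distribution.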