Authors
Boris N. Oreshkin, Pau Rodríguez, Alexandre Lacoste
Source
Venue: Cornell University - arXiv (preprint)
Date: 2018-01-01
Citations: 895
Identifier
DOI: 10.48550/arxiv.1805.10123
Abstract
Few-shot learning has become essential for producing models that generalize from few examples. In this work, we identify that metric scaling and metric task conditioning are important to improve the performance of few-shot algorithms. Our analysis reveals that simple metric scaling completely changes the nature of few-shot algorithm parameter updates. Metric scaling provides improvements up to 14% in accuracy for certain metrics on the mini-Imagenet 5-way 5-shot classification task. We further propose a simple and effective way of conditioning a learner on the task sample set, resulting in learning a task-dependent metric space. Moreover, we propose and empirically test a practical end-to-end optimization procedure based on auxiliary task co-training to learn a task-dependent metric space. The resulting few-shot learning model based on the task-dependent scaled metric achieves state of the art on mini-Imagenet. We confirm these results on another few-shot dataset that we introduce in this paper based on CIFAR100. Our code is publicly available at https://github.com/ElementAI/TADAM.
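The metric scaling described in the abstract can be illustrated with a minimal sketch: in a prototypical-network-style few-shot classifier, class probabilities come from a softmax over negative distances between a query embedding and class prototypes, and scaling those distances by a factor alpha changes how sharply the softmax (and hence the gradient) concentrates on the nearest classes. The function name, the toy prototypes, and the use of squared Euclidean distance here are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def scaled_metric_softmax(query, prototypes, alpha=1.0):
    """Class probabilities from scaled negative squared Euclidean distances.

    alpha is the metric scaling factor; alpha=1 recovers the unscaled case.
    """
    # Squared Euclidean distance from the query to each class prototype.
    d2 = np.sum((prototypes - query) ** 2, axis=1)
    # Metric scaling: multiply distances by alpha before the softmax.
    logits = -alpha * d2
    logits -= logits.max()  # subtract max for numerical stability
    p = np.exp(logits)
    return p / p.sum()

# Toy 5-way example: the query is closest to prototype 0.
protos = np.eye(5)
q = np.array([0.9, 0.1, 0.0, 0.0, 0.0])
print(scaled_metric_softmax(q, protos, alpha=1.0))
print(scaled_metric_softmax(q, protos, alpha=10.0))
```

With a larger alpha the predicted distribution concentrates more mass on the nearest prototype, which is one way to see the abstract's claim that scaling "completely changes the nature" of the parameter updates driven by the softmax cross-entropy loss.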