Meta-Learning (Computer Science)
Authors: Pengyu Yuan, Hien Van Nguyen
Source: Elsevier eBooks
Date: 2023-01-01
Pages: 53-64
DOI: 10.1016/b978-0-32-399851-2.00011-9
Abstract
This chapter introduces optimization-based approaches to meta learning, which model the inner loop of meta learning as solving an optimization problem. The central observation is that the vanilla stochastic gradient descent algorithm is unsuitable for optimizing learning models under data scarcity constraints. Optimization-based meta learning algorithms address this limitation by seeking effective update rules or initializations that allow efficient adaptation to novel tasks with few training samples. We also discuss the impact of inner loop optimization on meta learning performance.
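The inner/outer loop structure the abstract describes can be sketched with a minimal first-order MAML-style example: the inner loop adapts a shared initialization to each task with a few gradient steps, and the outer loop moves that initialization so adaptation succeeds with little data. This is an illustrative sketch only; the toy 1-D linear-regression tasks, learning rates, and step counts below are assumptions, not details from the chapter.

```python
import numpy as np

def loss_grad(w, x, y):
    """Gradient of the mean squared error 0.5 * mean((w*x - y)^2) w.r.t. w."""
    return np.mean((w * x - y) * x)

def inner_adapt(w, x, y, inner_lr=0.1, steps=1):
    """Inner loop: adapt the shared initialization to one task via SGD."""
    for _ in range(steps):
        w = w - inner_lr * loss_grad(w, x, y)
    return w

def maml_train(task_slopes, meta_lr=0.05, epochs=200):
    """Outer loop (first-order): move the initialization w0 so that a single
    inner step fits each task y = a * x from only a few support samples."""
    rng = np.random.default_rng(0)
    w0 = 0.0
    for _ in range(epochs):
        meta_grad = 0.0
        for a in task_slopes:
            x = rng.uniform(-1.0, 1.0, size=10)
            y = a * x
            w_task = inner_adapt(w0, x[:5], y[:5])        # adapt on support set
            meta_grad += loss_grad(w_task, x[5:], y[5:])  # evaluate on query set
        w0 = w0 - meta_lr * meta_grad / len(task_slopes)
    return w0

# Learn an initialization across three hypothetical tasks, then adapt to one.
w0 = maml_train([1.0, 2.0, 3.0])
```

Because the outer loss is measured *after* the inner update, gradient descent on `w0` places the initialization where a single few-shot step suffices for every task, which is the sense in which these methods replace vanilla SGD under data scarcity.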