Keywords: Transfer learning; Regularization; Computer science; Generalization error; Generalization; Stability (learning theory); Negative transfer; Artificial intelligence; Convergence; Domain (mathematical analysis); Transfer (computing); Machine learning; Limit (mathematics); Set (abstract data type); Class; Algorithm; Mathematics; Parallel computing; Programming language; Mathematical analysis
Authors
Ilja Kuzborskij, Francesco Orabona
Source
Venue: International Conference on Machine Learning
Date: 2013-06-16
Pages: 942-950
Citations: 61
Abstract
We consider the transfer learning scenario, where the learner does not have access to the source domain directly, but rather operates on the basis of hypotheses induced from it - the Hypothesis Transfer Learning (HTL) problem. Particularly, we conduct a theoretical analysis of HTL by considering the algorithmic stability of a class of HTL algorithms based on Regularized Least Squares with biased regularization. We show that the relatedness of source and target domains accelerates the convergence of the Leave-One-Out error to the generalization error, thus enabling the use of the Leave-One-Out error to find the optimal transfer parameters, even in the presence of a small training set. In the case of unrelated domains, we also suggest a theoretically principled way to prevent negative transfer, so that in the limit we recover the performance of the algorithm not using any knowledge from the source domain.
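The setting described in the abstract can be illustrated with a minimal sketch. Below is Regularized Least Squares with biased regularization, which shrinks the target-domain solution toward a source hypothesis `w_src` instead of toward zero, together with the closed-form Leave-One-Out error used to select the transfer (regularization) parameter. This is an illustrative simplification, not the paper's exact formulation or analysis; the function names and the data are hypothetical.

```python
import numpy as np

def biased_rls(X, y, w_src, lam):
    """Biased RLS: minimize ||X w - y||^2 + lam * ||w - w_src||^2.
    Setting the first-order condition to zero gives the closed form
    w = (X^T X + lam I)^{-1} (X^T y + lam w_src).
    With w_src = 0 this reduces to ordinary ridge regression,
    i.e. the no-transfer baseline mentioned in the abstract."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * w_src)

def loo_error(X, y, w_src, lam):
    """Closed-form Leave-One-Out squared error for biased RLS.
    The change of variables v = w - w_src turns the problem into
    standard ridge regression on residual targets r = y - X w_src,
    so the usual ridge LOO shortcut applies:
    e_i = (r_i - (H r)_i) / (1 - H_ii), with hat matrix
    H = X (X^T X + lam I)^{-1} X^T."""
    n, d = X.shape
    r = y - X @ w_src
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)
    e = (r - H @ r) / (1.0 - np.diag(H))
    return np.mean(e ** 2)
```

In this sketch, the transfer parameter `lam` (and, more generally, how much weight to give `w_src`) would be chosen by minimizing `loo_error` on the small target training set; since `w_src = 0` recovers plain ridge, a sufficiently bad source hypothesis can always be discarded, which mirrors the abstract's guard against negative transfer.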