Multi-task learning
Computer science
Regularization (linguistics)
Artificial intelligence
Machine learning
Generalization
Task (project management)
Semi-supervised learning
Task analysis
Outlier
Mathematics
Mathematical analysis
Economics
Management
Authors
Yu Zhang, Dit-Yan Yeung
Source
Journal: Cornell University - arXiv
Date: 2010-07-08
Pages: 733-742
Citations: 343
Abstract
Multi-task learning is a learning paradigm which seeks to improve the generalization performance of a learning task with the help of some other related tasks. In this paper, we propose a regularization formulation for learning the relationships between tasks in multi-task learning. This formulation can be viewed as a novel generalization of the regularization framework for single-task learning. Besides modeling positive task correlation, our method, called multi-task relationship learning (MTRL), can also describe negative task correlation and identify outlier tasks based on the same underlying principle. Under this regularization framework, the objective function of MTRL is convex. For efficiency, we use an alternating method to learn the optimal model parameters for each task as well as the relationships between tasks. We study MTRL in the symmetric multi-task learning setting and then generalize it to the asymmetric setting as well. We also study the relationships between MTRL and some existing multi-task learning methods. Experiments conducted on a toy problem as well as several benchmark data sets demonstrate the effectiveness of MTRL.
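The alternating scheme in the abstract can be sketched in code. The sketch below is an illustrative reconstruction, not the authors' implementation: it assumes linear models with squared loss, a regularizer of the form tr(W Ω⁻¹ Wᵀ) over a task-covariance matrix Ω constrained to unit trace, a W-step that solves the coupled linear system exactly, and the closed-form Ω-step Ω = (WᵀW)^{1/2} / tr((WᵀW)^{1/2}) commonly used in this family of methods. All names (`mtrl_fit`, `lam1`, `lam2`) are placeholders.

```python
import numpy as np

def mtrl_fit(Xs, ys, lam1=0.1, lam2=0.1, n_iter=20):
    """Alternating optimization sketch in the spirit of MTRL (assumed form).

    Xs, ys: per-task data, Xs[j] is (n_j, d), ys[j] is (n_j,).
    Returns W (d x m, one weight column per task) and Omega (m x m task covariance).
    """
    m, d = len(Xs), Xs[0].shape[1]
    W = np.zeros((d, m))
    Omega = np.eye(m) / m  # start from uncorrelated tasks, trace 1
    for _ in range(n_iter):
        # W-step: with Omega fixed the objective is quadratic in vec(W),
        # since tr(W Oinv W^T) = vec(W)^T (Oinv kron I_d) vec(W).
        Oinv = np.linalg.inv(Omega)
        A = lam2 * np.kron(Oinv, np.eye(d))
        b = np.zeros(d * m)
        for j in range(m):
            A[j*d:(j+1)*d, j*d:(j+1)*d] += Xs[j].T @ Xs[j] + lam1 * np.eye(d)
            b[j*d:(j+1)*d] = Xs[j].T @ ys[j]
        W = np.linalg.solve(A, b).reshape(m, d).T
        # Omega-step: closed-form update, the matrix square root of W^T W
        # rescaled to satisfy the unit-trace constraint.
        eigval, eigvec = np.linalg.eigh(W.T @ W)
        S = eigvec @ np.diag(np.sqrt(np.clip(eigval, 0, None))) @ eigvec.T
        Omega = S / np.trace(S)
    return W, Omega
```

Because both subproblems are solved exactly (a linear solve for W, a closed form for Ω), each iteration cannot increase the joint objective, which matches the paper's claim that an alternating method suffices for the convex formulation.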