Keywords
Weighting; Task (project management); Computer science; Artificial intelligence; Constant (computer programming); Process (computing); Function (biology); Relevance (law); Machine learning; Engineering; Radiology; Systems engineering; Law; Programming language; Operating system; Biology; Evolutionary biology; Medicine; Political science
Authors
Sam Verboven, Muhammad Hafeez Chaudhary, Jeroen Berrevoets, Wouter Verbeke
Source
Journal: Cornell University - arXiv
Date: 2020-08
Identifier
DOI: 10.48550/arxiv.2008.11643
Abstract
Multi-task learning (MTL) can improve performance on a task by sharing representations with one or more related auxiliary tasks. Usually, MTL networks are trained on a composite loss function formed as a constant-weighted combination of the separate task losses. In practice, constant loss weights lead to poor results for two reasons: (i) the relevance of the auxiliary tasks can gradually drift throughout the learning process; (ii) for mini-batch-based optimisation, the optimal task weights vary significantly from one update to the next depending on the mini-batch sample composition. We introduce HydaLearn, an intelligent weighting algorithm that connects main-task gain to the individual task gradients in order to inform dynamic loss weighting at the mini-batch level, addressing (i) and (ii). Using HydaLearn, we report performance increases on synthetic data as well as on two supervised learning domains.
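The abstract describes recomputing task weights on every mini-batch from the individual task gradients and their benefit to the main task. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea; it is not the authors' HydaLearn algorithm. The specific heuristic (weighting the auxiliary loss by the clipped cosine similarity between its shared-trunk gradient and the main task's), the network, and all names (TwoHeadNet, shared_grad, the synthetic targets) are illustrative assumptions.

```python
# Hedged sketch: dynamic per-mini-batch loss weighting for MTL with one
# auxiliary task. A stand-in for the paper's "main-task gain" criterion:
# here the auxiliary weight is the (clipped) alignment between the
# auxiliary and main-task gradients on the shared parameters.
import torch
import torch.nn as nn

class TwoHeadNet(nn.Module):
    """Shared trunk with a main-task head and one auxiliary head (hypothetical)."""
    def __init__(self, d_in=16, d_hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.main_head = nn.Linear(d_hidden, 1)
        self.aux_head = nn.Linear(d_hidden, 1)

    def forward(self, x):
        h = self.trunk(x)
        return self.main_head(h), self.aux_head(h)

def shared_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. the shared trunk parameters."""
    grads = torch.autograd.grad(loss, params, retain_graph=True, allow_unused=True)
    return torch.cat([g.reshape(-1) for g in grads if g is not None])

net = TwoHeadNet()
opt = torch.optim.SGD(net.parameters(), lr=1e-2)
mse = nn.MSELoss()
trunk_params = list(net.trunk.parameters())

for step in range(100):
    x = torch.randn(64, 16)                      # stand-in mini-batch
    y_main = x.sum(dim=1, keepdim=True)          # synthetic main target
    y_aux = x.mean(dim=1, keepdim=True)          # synthetic auxiliary target

    out_main, out_aux = net(x)
    loss_main = mse(out_main, y_main)
    loss_aux = mse(out_aux, y_aux)

    # Recompute the auxiliary weight on every mini-batch: clip at 0 so an
    # auxiliary gradient that conflicts with the main task is ignored.
    g_main = shared_grad(loss_main, trunk_params)
    g_aux = shared_grad(loss_aux, trunk_params)
    w_aux = torch.clamp(torch.cosine_similarity(g_main, g_aux, dim=0), min=0.0)

    opt.zero_grad()
    (loss_main + w_aux.detach() * loss_aux).backward()
    opt.step()
```

Detaching `w_aux` keeps the weight itself out of the backward pass, so each update still optimises a plain weighted sum of the two losses, with the weight varying from one mini-batch to the next as the abstract describes.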