Computer science
Artificial intelligence
Machine learning
Task (project management)
Leverage (statistics)
Artificial neural network
Multi-task learning
Weighting
Key (lock)
Medicine
Computer security
Management
Economics
Radiology
Authors
Álvaro S. Hervella, José Rouco, Jorge Novo, Marcos Ortega
Identifier
DOI:10.1016/j.neunet.2023.11.038
Abstract
Multi-task learning is a promising paradigm to leverage task interrelations during the training of deep neural networks. A key challenge in the training of multi-task networks is to adequately balance the complementary supervisory signals of multiple tasks. In that regard, although several task-balancing approaches have been proposed, they are usually limited by the use of per-task weighting schemes and do not completely address the uneven contribution of the different tasks to the network training. In contrast to classical approaches, we propose a novel Multi-Adaptive Optimization (MAO) strategy that dynamically adjusts the contribution of each task to the training of each individual parameter in the network. This automatically produces a balanced learning across tasks and across parameters, throughout the whole training and for any number of tasks. To validate our proposal, we perform comparative experiments on real-world datasets for computer vision, considering different experimental settings. These experiments allow us to analyze the performance obtained in several multi-task scenarios along with the learning balance across tasks, network layers and training steps. The results demonstrate that MAO outperforms previous task-balancing alternatives. Additionally, the performed analyses provide insights that allow us to comprehend the advantages of this novel approach for multi-task learning.