Authors
Shuangtong Li, Tianyi Zhou, Xinmei Tian, Dacheng Tao
Identifier
DOI:10.1109/cvpr52688.2022.00954
Abstract
Learning personalized models for user-customized computer-vision tasks is challenging due to the limited private data and computation available on each edge device. Decentralized learning (DL) can exploit the images distributed over devices on a network topology to train a global model, but it is not designed to train personalized models for different tasks or to optimize the topology. Moreover, the mixing weights used to aggregate neighbors' gradient messages in DL can be suboptimal for personalization since they are not adaptive to different nodes/tasks and learning stages. In this paper, we dynamically update the mixing weights to improve the personalized model for each node's task and simultaneously learn a sparse topology to reduce communication costs. Our first approach, "learning to collaborate" (L2C), directly optimizes the mixing weights to minimize the local validation loss per node for a predefined set of nodes/tasks. In order to produce mixing weights for new nodes or tasks, we further develop "meta-L2C", which learns an attention mechanism to automatically assign mixing weights by comparing two nodes' model updates. We evaluate both methods on diverse benchmarks and experimental settings for image classification. Thorough comparisons to both classical and recent methods for IID/non-IID decentralized and federated learning demonstrate our method's advantages in identifying collaborators among nodes, learning sparse topology, and producing better personalized models with low communication and computational cost.
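The abstract's core mechanism is aggregating neighbors' models with per-node mixing weights. The paper's implementation is not given here; the following is a minimal sketch of that aggregation step under stated assumptions: the softmax parameterization of the weights, the function name `aggregate`, and the toy parameter vectors are all illustrative, not taken from the paper.

```python
import numpy as np

def aggregate(models, logits):
    """Mix a node's neighborhood models with learnable mixing weights.

    models: list of 1-D parameter vectors, one per node in the neighborhood
            (the node itself plus its neighbors).
    logits: unnormalized mixing-weight logits for this node, same length as
            `models`; in L2C these would be updated by gradient descent on
            the node's local validation loss.
    Returns the mixed parameter vector and the normalized weights.
    """
    # softmax over logits so the mixing weights are positive and sum to 1
    w = np.exp(logits - logits.max())
    w /= w.sum()
    mixed = sum(wi * m for wi, m in zip(w, models))
    return mixed, w

# Toy example: a node mixes its own model with two neighbors' models.
models = [np.array([1.0, 0.0]),
          np.array([0.0, 1.0]),
          np.array([1.0, 1.0])]
logits = np.array([2.0, 0.0, 0.0])   # the node favors its own model
mixed, w = aggregate(models, logits)
```

Driving `w` toward a sparse vector (many near-zero entries) is what yields the sparse communication topology the abstract mentions: a node need not exchange messages with neighbors whose mixing weight is negligible.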