Computer science
Personalization
Federated learning
Scalability
Machine learning
Personalized learning
Artificial intelligence
Transformer
World Wide Web
Database
Teaching method
Quantum mechanics
Open learning
Physics
Voltage
Cooperative learning
Law
Political science
Authors
Hongxia Li, Zhongyi Cai, Jingya Wang, Jiangnan Tang, Weiping Ding, Chin-Teng Lin, Ye Shi
Identifier
DOI: 10.1109/tnnls.2023.3269062
Abstract
Federated learning is an emerging learning paradigm where multiple clients collaboratively train a machine learning model in a privacy-preserving manner. Personalized federated learning extends this paradigm to overcome heterogeneity across clients by learning personalized models. Recently, there have been some initial attempts to apply transformers to federated learning. However, the impact of federated learning algorithms on self-attention has not yet been studied. In this article, we investigate this relationship and reveal that federated averaging (FedAvg) algorithms actually have a negative impact on self-attention in cases of data heterogeneity, which limits the capabilities of the transformer model in federated learning settings. To address this issue, we propose FedTP, a novel transformer-based federated learning framework that learns personalized self-attention for each client while aggregating the other parameters among the clients. Instead of using a vanilla personalization mechanism that maintains personalized self-attention layers of each client locally, we develop a learn-to-personalize mechanism to further encourage cooperation among clients and to increase the scalability and generalization of FedTP. Specifically, we achieve this by learning a hypernetwork on the server that outputs the personalized projection matrices of self-attention layers to generate clientwise queries, keys, and values. Furthermore, we present the generalization bound for FedTP with the learn-to-personalize mechanism. Extensive experiments verify that FedTP with the learn-to-personalize mechanism yields state-of-the-art performance in non-IID scenarios. Our code is available online at https://github.com/zhyczy/FedTP.
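The abstract describes the core mechanism as a server-side hypernetwork that maps a per-client embedding to personalized Q/K/V projection matrices of the self-attention layers. The sketch below is a minimal illustration of that idea, not the authors' implementation (see the linked repository for that); all class names, dimensions, and the single-head attention helper are hypothetical simplifications.

```python
import torch
import torch.nn as nn

class AttentionHypernetwork(nn.Module):
    """Hypothetical sketch: maps a learnable client embedding to the
    Q/K/V projection weights of one self-attention layer."""

    def __init__(self, num_clients: int, embed_dim: int, hidden_dim: int, d_model: int):
        super().__init__()
        # One learnable embedding per client; the hypernetwork itself is shared on the server.
        self.client_embeddings = nn.Embedding(num_clients, embed_dim)
        self.mlp = nn.Sequential(nn.Linear(embed_dim, hidden_dim), nn.ReLU())
        # Heads that emit flattened projection matrices W_q, W_k, W_v of shape (d_model, d_model).
        self.to_q = nn.Linear(hidden_dim, d_model * d_model)
        self.to_k = nn.Linear(hidden_dim, d_model * d_model)
        self.to_v = nn.Linear(hidden_dim, d_model * d_model)
        self.d_model = d_model

    def forward(self, client_id: torch.Tensor):
        h = self.mlp(self.client_embeddings(client_id))
        d = self.d_model
        # Reshape each head's output into a projection matrix for this client.
        return (self.to_q(h).view(d, d),
                self.to_k(h).view(d, d),
                self.to_v(h).view(d, d))


def personalized_self_attention(x: torch.Tensor, w_q, w_k, w_v):
    """Single-head scaled dot-product attention using client-specific projections.
    x: (batch, seq_len, d_model)."""
    q, k, v = x @ w_q.T, x @ w_k.T, x @ w_v.T
    scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
    return torch.softmax(scores, dim=-1) @ v


# Example: generate personalized projections for client 3 and apply them locally.
hyper = AttentionHypernetwork(num_clients=10, embed_dim=32, hidden_dim=64, d_model=16)
w_q, w_k, w_v = hyper(torch.tensor(3))
out = personalized_self_attention(torch.randn(2, 5, 16), w_q, w_k, w_v)
```

In this sketch, only the attention projections are personalized per client; in a FedTP-style setup the remaining transformer parameters would be aggregated across clients (e.g., by FedAvg), while the server updates the hypernetwork and client embeddings from the clients' local updates.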