Computer science
Open research
Key (lock)
Relevance (law)
Trustworthiness
Data science
Computer security
World Wide Web
Political science
Law
Authors
Enrique Tomás Martínez Beltrán, Mario Quiles Pérez, Pedro Miguel Sánchez Sánchez, Sergio López Bernal, Gérôme Bovet, Manuel Gil Pérez, Gregorio Martínez Pérez, Alberto Huertas Celdrán
Identifier
DOI: 10.1109/comst.2023.3315746
Abstract
In recent years, Federated Learning (FL) has gained relevance for training collaborative models without sharing sensitive data. Since its inception, Centralized FL (CFL) has been the most common approach in the literature, where a central entity creates a global model. However, a centralized approach leads to increased latency due to bottlenecks, heightened vulnerability to system failures, and trustworthiness concerns regarding the entity responsible for creating the global model. Decentralized Federated Learning (DFL) emerged to address these concerns by promoting decentralized model aggregation and minimizing reliance on centralized architectures. However, despite the work done in DFL, the literature has not (i) studied the main aspects differentiating DFL and CFL; (ii) analyzed DFL frameworks to create and evaluate new solutions; or (iii) reviewed application scenarios using DFL. Thus, this article identifies and analyzes the main fundamentals of DFL in terms of federation architectures, topologies, communication mechanisms, security approaches, and key performance indicators. It also explores existing mechanisms for optimizing critical DFL fundamentals. The most relevant features of current DFL frameworks are then reviewed and compared, after which the most common DFL application scenarios are analyzed, identifying solutions based on the fundamentals and frameworks previously defined. Finally, the evolution of existing DFL solutions is studied to provide a list of trends, lessons learned, and open challenges.
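To make the CFL/DFL contrast in the abstract concrete, the following is a minimal, illustrative sketch of decentralized model aggregation: each node averages its parameters only with its direct neighbors over a fixed ring topology, with no central server. The topology, the single-float "model" per node, and the gossip-style averaging rule are all simplifying assumptions for illustration, not the specific mechanisms surveyed in the article.

```python
# Toy DFL aggregation round: each node replaces its parameters with the
# average of its own and its neighbors' parameters (gossip-style averaging).
# Nodes, topology, and the scalar "model" are purely illustrative.

def dfl_round(params, neighbors):
    """One decentralized aggregation round over a fixed neighbor graph."""
    return [
        sum([params[i]] + [params[j] for j in neighbors[i]])
        / (1 + len(neighbors[i]))
        for i in range(len(params))
    ]

# Ring topology over 4 nodes: each node communicates only with two peers,
# so there is no single point of failure or central bottleneck.
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
params = [0.0, 4.0, 8.0, 4.0]  # toy local model parameters after training

for _ in range(20):
    params = dfl_round(params, neighbors)
```

Because each averaging step preserves the global mean and the graph is connected, repeated rounds drive all nodes toward consensus on the average of the initial parameters (here 4.0), mimicking what a central aggregator would compute in CFL but without one.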