Computer science
Verifiable secret sharing
Deep learning
Computer security
Federated learning
Artificial intelligence
Information privacy
Distributed computing
Machine learning
Set (abstract data type)
Programming language
Authors
Jiaqi Zhao, Hui Zhu, Fengwei Wang, Rongxing Lu, Zhe Liu, Hui Li
Identifier
DOI:10.1109/tifs.2022.3176191
Abstract
Over the past years, the increasingly severe data island problem has spawned an emerging distributed deep learning framework—federated learning, in which a global model can be constructed over multiple participants without directly sharing their raw data. Despite its promising prospects, federated learning still faces many security challenges, such as privacy preservation and integrity verification. Furthermore, federated learning is usually performed with the assistance of a central server, which is prone to cause trust concerns and communication bottlenecks. To tackle these challenges, in this paper, we propose a privacy-preserving and verifiable decentralized federated learning framework, named PVD-FL, which can achieve secure deep learning model training under a decentralized architecture. Specifically, we first design an efficient and verifiable cipher-based matrix multiplication (EVCM) algorithm to execute the most basic calculation in deep learning. Then, by employing EVCM, we design a suite of decentralized algorithms to construct the PVD-FL framework, which ensures the confidentiality of both the global model and the local updates, as well as the verification of every training step. Detailed security analysis shows that PVD-FL can well protect privacy against various inference attacks and guarantee training integrity. In addition, extensive experiments on real-world datasets demonstrate that PVD-FL achieves lossless accuracy and practical performance.
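The paper's EVCM algorithm (which combines encryption with verifiability) is not reproduced in this abstract. As a minimal sketch of the underlying idea of verifying a matrix product without redoing the full multiplication, the classic Freivalds probabilistic check is shown below; this is a standard textbook technique used here purely for illustration, not the authors' EVCM construction, and it omits the cipher-based (confidentiality) aspect entirely.

```python
import random

def freivalds_check(A, B, C, trials=20):
    """Probabilistically verify that A @ B == C for n x n matrices
    (given as lists of lists), in O(trials * n^2) time instead of O(n^3).
    If C is wrong, each trial catches it with probability >= 1/2."""
    n = len(A)
    for _ in range(trials):
        # Pick a random 0/1 vector r.
        r = [random.randint(0, 1) for _ in range(n)]
        # Compute A(Br) and Cr -- three matrix-vector products, each O(n^2).
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # C is definitely not the product A @ B
    # All trials passed: C is correct with probability >= 1 - 2**(-trials).
    return True
```

In a federated setting, a check of this flavor lets a participant cheaply audit an aggregator's claimed result; EVCM extends the idea to operate on encrypted model updates so that verification does not leak the underlying values.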