Computer science
Blockchain
Single point of failure
Federated learning
Scheme (mathematics)
Smart contract
Overhead (engineering)
Mathematical proof
Computer security
Distributed computing
Operating system
Mathematical analysis
Geometry
Mathematics
Authors
Jinsheng Yang,Wenfeng Zhang,Zihao Guo,Zhen Gao
Source
Journal: Electronics
[MDPI AG]
Date: 2023-12-24
Volume/Issue: 13 (1): 86–86
Identifier
DOI:10.3390/electronics13010086
Abstract
Federated learning is a privacy-preserving machine learning framework in which multiple data owners collaborate to train a global model under the orchestration of a central server. Trainers submit their local training results to the central server for model aggregation and updating. An overloaded central server and malicious trainers introduce, respectively, the risks of a single point of failure and model poisoning attacks. To address these issues, this paper proposes a trusted decentralized federated learning framework (TrustDFL) based on zero-knowledge proofs, blockchain, and smart contracts, which provides enhanced security and higher efficiency for model aggregation. Specifically, Groth16 is applied to generate proofs of local model training, covering both the forward and backward propagation processes. The proofs are attached as payloads to transactions, which are broadcast into the blockchain network and executed by the miners. With the support of smart contracts, the contributions of the trainers can be verified automatically under economic incentives, with the blockchain recording all exchanged data as the trust anchor in multi-party scenarios. In addition, IPFS (InterPlanetary File System) is introduced to alleviate the storage and communication overhead incurred by the local and global models. Theoretical analysis and estimation results show that TrustDFL efficiently prevents model poisoning attacks without leaking local secrets, ensuring the accuracy of the trained global model.
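The abstract describes a round in which a trainer submits a proof of honest training, the blockchain verifies it, and IPFS holds the model weights off-chain. The sketch below is a minimal, hypothetical illustration of that data flow only: a SHA-256 content-addressed dictionary stands in for IPFS, and a hash commitment stands in for a real Groth16 proof (which would require a proving system and circuit not shown here). All function names (`store_ipfs_like`, `make_proof`, `verify_on_chain`) are invented for illustration and do not come from the paper.

```python
import hashlib
import json

def store_ipfs_like(store: dict, payload: bytes) -> str:
    """Content-addressed storage: the 'CID' is the SHA-256 digest.
    Stand-in for IPFS, which addresses content the same way in spirit."""
    cid = hashlib.sha256(payload).hexdigest()
    store[cid] = payload
    return cid

def make_proof(local_weights: list, global_cid: str) -> dict:
    """Stand-in for a Groth16 proof: commits to the local weights and the
    global model they were trained from, without revealing training data."""
    digest = hashlib.sha256(
        (json.dumps(local_weights) + global_cid).encode()).hexdigest()
    return {"commitment": digest, "base_model": global_cid}

def verify_on_chain(proof: dict, weights_cid: str, store: dict) -> bool:
    """What a smart contract would check before crediting a trainer:
    the submitted proof matches the weights stored off-chain."""
    weights = json.loads(store[weights_cid])
    expected = hashlib.sha256(
        (json.dumps(weights) + proof["base_model"]).encode()).hexdigest()
    return proof["commitment"] == expected

# One illustrative round.
store = {}
global_cid = store_ipfs_like(store, json.dumps([0.0, 0.0]).encode())
local_weights = [0.1, -0.2]  # result of (mock) local training
weights_cid = store_ipfs_like(store, json.dumps(local_weights).encode())
proof = make_proof(local_weights, global_cid)
print(verify_on_chain(proof, weights_cid, store))  # True for an honest trainer
```

Unlike this hash commitment, a real Groth16 proof would additionally convince the verifier that the weights were produced by the agreed forward/backward computation, which is the property TrustDFL relies on to reject poisoned updates.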