Computer science
Verifiable secret sharing
Homomorphic encryption
Correctness
Scheme (mathematics)
Paillier cryptosystem
Federated learning
Data sharing
Bilinear interpolation
Information privacy
Encryption
Secret sharing
Overhead (engineering)
Theoretical computer science
Distributed computing
Artificial intelligence
Computer security
Cryptography
Algorithm
Public-key cryptography
Set (abstract data type)
Pathology
Mathematical analysis
Operating system
Hybrid cryptosystem
Programming language
Alternative medicine
Mathematics
Medicine
Computer vision
Authors
Xianglong Zhang, Anmin Fu, Huaqun Wang, Chunyi Zhou, Zhenzhu Chen
Identifiers
DOI: 10.1109/icc40277.2020.9148628
Abstract
Due to the complexity of the data environment, many organizations prefer to train deep learning models jointly by sharing training sets. However, this process is constrained by distributed storage and privacy requirements. Federated learning addresses this challenge by sharing only gradients with the server, without revealing the training sets. Unfortunately, existing research has shown that the server can extract information about the training sets from the shared gradients. Moreover, the server may falsify the aggregation result and thereby degrade the accuracy of the trained model. To solve these problems, we propose a privacy-preserving and verifiable federated learning scheme. Our scheme processes shared gradients by combining the Chinese Remainder Theorem with Paillier homomorphic encryption, realizing privacy-preserving federated learning with low computation and communication costs. In addition, we introduce bilinear aggregate signatures into federated learning to verify the correctness of the aggregated gradients. Experiments show that even with the added verification function, our scheme retains high accuracy and efficiency.
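For the verification step, the abstract names bilinear aggregate signatures. In the standard BLS-style aggregate construction (an assumption here; the abstract does not give the paper's exact instantiation), each party $i$ with secret key $sk_i$ and public key $pk_i = g^{sk_i}$ signs its message as $\sigma_i = H(m_i)^{sk_i}$, and the aggregated gradient is accepted only if a single pairing check passes:

```latex
\sigma = \prod_{i=1}^{n} \sigma_i, \qquad
e(\sigma, g) \stackrel{?}{=} \prod_{i=1}^{n} e\bigl(H(m_i),\, pk_i\bigr)
```

Bilinearity of $e$ makes the left side equal $\prod_i e(H(m_i), g)^{sk_i}$, which matches the right side exactly when every $\sigma_i$ is valid, so a server falsifying the aggregation result is detected with one verification.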
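The core privacy mechanism described above can be illustrated with a minimal sketch: quantized gradient components are packed into a single plaintext via the Chinese Remainder Theorem, encrypted under Paillier, and summed homomorphically by the server. This is an illustrative toy (tiny primes, hypothetical moduli, and hand-picked gradient values), not the paper's actual parameterization; real deployments would use a 2048-bit or larger modulus and carefully sized packing slots.

```python
import math
import random

# --- Paillier keygen (toy primes for demonstration only) ---
p, q = 1789, 2003
n = p * q                    # public modulus
n2 = n * n
g = n + 1                    # standard generator choice g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)         # valid because L(g^lam mod n^2) = lam mod n

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    # L(x) = (x - 1) // n, then multiply by mu mod n
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# --- CRT packing: one slot per gradient component ---
# Moduli must be pairwise coprime and larger than any possible slot sum.
moduli = [101, 103, 107]

def crt_pack(values):
    M = math.prod(moduli)
    x = 0
    for v, m in zip(values, moduli):
        Mi = M // m
        x += v * Mi * pow(Mi, -1, m)
    return x % M

def crt_unpack(x):
    return [x % m for m in moduli]

# Two clients pack and encrypt their quantized gradients; the server
# multiplies ciphertexts, which adds the packed plaintexts slot-wise.
g1, g2 = [3, 7, 11], [5, 2, 9]
c = encrypt(crt_pack(g1)) * encrypt(crt_pack(g2)) % n2
print(crt_unpack(decrypt(c)))   # -> [8, 9, 20]
```

The packing step is what keeps costs low: one Paillier operation covers several gradient components at once, provided each per-slot sum stays below its modulus.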