Collusion
Computer science
Server
Verifiable secret sharing
Correctness
Computer security
Distributed computing
Computer network
Algorithm
Set (abstract data type)
Programming language
Economics
Microeconomics
Authors
Xianyu Mu, Youliang Tian, Zhou Zhou, Shuai Wang, Jinbo Xiong
Identifiers
DOI:10.1109/jiot.2024.3390545
Abstract
Using federated learning (FL) to train global models in IoT improves computational efficiency and protects users' data privacy. However, FL still faces privacy threats: driven by self-interest, servers may reduce their computational cost or induce wrong decisions in IoT devices by returning incorrect global model gradients. Although previous research achieves verifiability of aggregation results, it remains difficult to defend against collusion attacks launched by servers and users. Therefore, we construct a rational verifiable federated learning (RVFL) secure aggregation protocol based on a dual-server framework and game theory, which makes aggregation results verifiable and effectively defends against collusion attacks. First, we propose a new model verification code based on the property of irreversible matrices, which allows users to verify the correctness of the aggregation results through matrix products. This verification code is computationally efficient and resistant to backward inference by an adversary. Second, we adopt a dual-server architecture and improve the prisoner contract and betrayal contract for the practical application scenarios of IoT, converting the previous collusion attacks between servers and users into collusion attacks between servers; through this incentive mechanism, rational servers are deterred from launching collusion attacks that would break the verification mechanism for the aggregation results. Finally, we demonstrate through security analysis that RVFL is secure and effective against collusion and reverse inference attacks. In addition, experimental results show that RVFL improves efficiency by three orders of magnitude in the verification phase and by 88% in the masking phase.
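The verification idea described in the abstract rests on the linearity of matrix products: a tag computed from each user's gradient can be summed and compared against the tag of the server's claimed aggregate. The Python sketch below illustrates only this general principle; it is not the paper's actual construction. It assumes a shared rank-deficient (non-invertible) random matrix as the verification key, so a gradient cannot be recovered from its tag alone, and the helper names (make_verification_key, tag, verify_sum) are hypothetical.

import numpy as np

# Minimal sketch of matrix-product verification (illustrative assumption,
# not the RVFL construction). A is deliberately non-square and thus has
# no inverse, so tags do not reveal individual gradients.

def make_verification_key(dim, tag_dim, seed=0):
    # Random tag_dim x dim matrix with tag_dim < dim (rank-deficient mapping).
    rng = np.random.default_rng(seed)
    return rng.standard_normal((tag_dim, dim))

def tag(A, gradient):
    # Per-user verification code: a single matrix product A @ g.
    return A @ gradient

def verify_sum(A, aggregate, tags, tol=1e-6):
    # By linearity, A @ sum(g_i) must equal sum(A @ g_i) for an honest aggregate.
    return np.allclose(A @ aggregate, np.sum(tags, axis=0), atol=tol)

# Toy run with 3 users and 8-dimensional gradients.
A = make_verification_key(dim=8, tag_dim=3)
gradients = [np.random.default_rng(i).standard_normal(8) for i in range(3)]
tags = [tag(A, g) for g in gradients]

honest = np.sum(gradients, axis=0)   # correct aggregation
forged = honest + 0.5                # server returns a wrong result

print(verify_sum(A, honest, tags))   # True
print(verify_sum(A, forged, tags))   # False (with high probability)

In this toy version a forged aggregate passes only if the perturbation happens to lie in the null space of A, which is unlikely for a random key; the paper's scheme additionally addresses key distribution and collusion, which this sketch does not model.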