Verifiable secret sharing
Computer science
Federated learning
Theoretical computer science
Computer network
Computer security
Distributed computing
Programming language
Set (abstract data type)
Authors
Yijian Zhong,Wuzheng Tan,Zhifeng Xu,Shixin Chen,Jiasi Weng,Jian Weng
Identifier
DOI:10.1109/jiot.2024.3370938
Abstract
Federated learning has shown great potential for intelligent decision making in the Internet of Things (IoT). It allows IoT devices to collaboratively train a neural network on the data they collect while keeping those data local. However, several research works have shown that this architecture still faces security challenges: adversaries can mount inference attacks on the transmitted model parameters to reveal data from devices. Moreover, federated learning carries further risks: malicious devices may launch model-poisoning attacks to degrade the quality of the aggregated model, and a dishonest server may return an incorrect aggregation result to the devices. Most existing privacy-preserving federated learning protocols cannot address both problems. In this paper, we present WVFL, a secure weighted aggregation protocol that aims to minimize the effect of erroneous local models on the aggregated model while allowing devices to verify the correctness of the aggregation result. All important intermediate values in the process remain in encrypted form, so they are revealed to neither devices nor servers, guaranteeing privacy. Finally, we give an implementation of our WVFL scheme and show its efficiency compared with previous work.
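To make the aggregation step concrete, the following is a minimal sketch of plain (unencrypted, unverified) weighted model aggregation, the core operation that a protocol like WVFL would carry out over encrypted values. All names here are hypothetical illustrations, not the paper's API; the actual WVFL protocol additionally encrypts intermediate values, down-weights low-quality local models, and lets devices verify the server's result.

```python
def weighted_aggregate(local_models, weights):
    """Weighted per-parameter average of device model updates.

    local_models: list of parameter vectors (one list of floats per device).
    weights: one non-negative weight per device (e.g., reflecting data size
    or an estimated model quality, as a weighted scheme might assign).
    """
    total = sum(weights)
    num_params = len(local_models[0])
    return [
        sum(w * model[j] for model, w in zip(local_models, weights)) / total
        for j in range(num_params)
    ]

# Example: three devices with two parameters each; the third device
# carries double weight, pulling the average toward its update.
models = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
weights = [1.0, 1.0, 2.0]
print(weighted_aggregate(models, weights))  # → [3.5, 4.5]
```

In a privacy-preserving setting, the server would compute the same sums over ciphertexts (e.g., via additively homomorphic encryption or secret sharing), so neither the server nor other devices see any individual model update in the clear.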