Computer science
Homomorphic encryption
Upload
Plaintext
Encryption
Class (philosophy)
Information privacy
Computer network
Independent and identically distributed random variables
Data mining
Protocol (science)
Computer security
Artificial intelligence
Random variable
Operating system
Pathology
Statistics
Medicine
Alternative medicine
Mathematics
Authors
Xicong Shen, Ying Liu, Li Fu, Chunguang Li
Identifier
DOI: 10.1109/jiot.2023.3288886
Abstract
Federated learning (FL) has recently attracted widespread attention in the Internet of Things domain. With FL, multiple distributed devices can cooperatively train a global model by transmitting model updates without disclosing their original data. However, the distributed nature of FL makes it vulnerable to data poisoning attacks. In practice, malicious clients can launch a label-flipping attack (LFA) by simply tampering with the labels of their local data, causing the global model to misclassify samples of a selected class as a target class. Although some defense mechanisms have been proposed, they rely on specific assumptions about the data distribution, and their performance degrades significantly when the data on clients are non-IID. Moreover, most existing methods require clients to upload model updates in plaintext so that the server can identify and remove malicious updates; however, direct transmission of model updates may still reveal private information. Considering these issues, we develop a label-flipping-robust and privacy-preserving FL (LFR-PPFL) algorithm, which is applicable to both independent and identically distributed (IID) and non-IID data. We first propose a detection method based on temporal analysis of cosine similarity to distinguish malicious clients from benign clients. Then, we propose a privacy-preserving computation protocol based on homomorphic encryption to implement this detection method and perform federated aggregation while protecting the privacy of clients. In addition, a detailed theoretical analysis is given to demonstrate the privacy guarantee of the proposed protocol. Experimental results on real-world datasets show that the proposed algorithm can effectively defend against LFAs under various data distributions.
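The abstract does not spell out the detection rule, but the core idea of cosine-similarity-based screening of client updates can be illustrated with a minimal sketch. All function names and the fixed threshold below are hypothetical; LFR-PPFL's actual method applies a temporal analysis of these similarities rather than a single-round cutoff.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two model-update vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def flag_suspicious(updates, reference, threshold=0.0):
    """Flag clients whose update points away from a reference direction
    (e.g., the previous global update). A label-flipping client tends to
    push the model in roughly the opposite direction, yielding a low or
    negative cosine similarity. Threshold choice here is illustrative."""
    return [i for i, u in enumerate(updates)
            if cosine_similarity(u, reference) < threshold]
```

For example, with two benign updates roughly aligned with the reference and one flipped-label update pointing the opposite way, only the latter is flagged.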
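The privacy-preserving aggregation relies on homomorphic encryption, which lets the server combine encrypted client updates without seeing the plaintexts. The abstract does not name the scheme; the following is a toy sketch of the additively homomorphic Paillier cryptosystem, a common choice for such protocols, using deliberately tiny primes for illustration only (real deployments use primes thousands of bits long, and updates would be encoded as integers).

```python
import math
import random

def keygen(p=293, q=433):
    """Toy Paillier key generation; p and q are illustrative small primes."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    # mu = (L(g^lam mod n^2))^(-1) mod n, where L(x) = (x - 1) // n
    x = pow(g, lam, n * n)
    mu = pow((x - 1) // n, -1, n)
    return (n, g), (lam, mu, n)

def encrypt(pub, m):
    """Encrypt integer m (0 <= m < n) with fresh randomness r."""
    n, g = pub
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(priv, c):
    lam, mu, n = priv
    x = pow(c, lam, n * n)
    return ((x - 1) // n * mu) % n

def he_add(pub, c1, c2):
    """Homomorphic addition: the product of ciphertexts decrypts
    to the sum of plaintexts, enabling aggregation of encrypted updates."""
    n, _ = pub
    return (c1 * c2) % (n * n)
```

With this primitive, a server can sum clients' encrypted updates and only the holder of the decryption key recovers the aggregate, never the individual contributions.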