Keywords: computer science; compressed sensing; MNIST database; encryption; federated learning; information leakage; inference; information privacy; scheme (mathematics); data mining; information sensitivity; computer security; machine learning; deep learning; artificial intelligence; mathematical analysis; mathematics
Authors
Di Xiao,Jinkun Li,Min Li
Source
Journal: Communications in Computer and Information Science
Date: 2023-11-26
Pages: 325-339
Identifier
DOI:10.1007/978-981-99-8184-7_25
Abstract
Federated learning is a new distributed learning framework with data privacy preservation, in which multiple users collaboratively train models without sharing their data. However, recent studies highlight potential privacy leakage through the shared gradient information. Several defense strategies, including gradient encryption and gradient perturbation, have been suggested, but these strategies either incur high computational complexity or remain susceptible to attacks. To counter these challenges, we propose to train on secure compressive measurements via compressed learning, thereby achieving local data privacy protection with only slight performance degradation. A feasible way to boost performance in compressed learning is to jointly optimize the sampling matrix and the inference network during the training phase, but this may reopen the door to data reconstruction attacks. Thus, we further incorporate a traditional lightweight encryption scheme to protect data privacy. Experiments conducted on the MNIST and FMNIST datasets substantiate that our schemes achieve a satisfactory balance between privacy protection and model performance.
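The abstract's core idea can be sketched in a few lines: each client replaces a raw image x with a compressive measurement y = Ax before any training or sharing, and a lightweight keyed transform further scrambles y. The sampling ratio, the Gaussian sensing matrix, and the permutation-based scrambling below are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 28 * 28          # flattened MNIST image dimension
m = n // 4           # number of measurements (25% sampling ratio, assumed)

# Random Gaussian sensing matrix: a standard choice in compressed sensing
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))

x = rng.random(n)    # stand-in for one flattened, normalized image
y = A @ x            # compressive measurement; the classifier trains on y, never x

# Hypothetical lightweight "encryption": a secret keyed permutation of the
# measurements, standing in for the traditional scheme the paper mentions.
key = rng.permutation(m)
y_enc = y[key]       # only clients holding `key` can undo the scrambling
```

A downstream inference network would consume `y_enc` (dimension m instead of n), which is how compressed learning trades a small accuracy loss for keeping raw pixels local.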