Computer science
Overhead (engineering)
Pruning
Encoder
Information loss
Correlation
Mean squared error
Scheme (mathematics)
Decoding methods
Artificial intelligence
Machine learning
Data mining
Pattern recognition (psychology)
Algorithm
Mathematical analysis
Statistics
Geometry
Mathematics
Agronomy
Biology
Operating system
Authors
Xingming Luo, Yaochi Zhao, Zhuhua Hu, Yanfei Zhu, Jiezhuo Zhong
Identifier
DOI:10.1109/ijcnn54540.2023.10191055
Abstract
Split Federated Learning (SFL) is the most recent distributed training scheme. Compared to Federated Learning, SFL reduces client-side overhead while achieving better privacy protection. However, attackers can still use the client's intermediate activations to reconstruct the original data, which contains sensitive information. To defend against this reconstruction attack, we propose using a distance correlation loss to reduce the overall correlation between the input data and the intermediate activations, and we further construct an effective and efficient dynamic channel pruning network that automatically senses the sensitive channels in the intermediate activations and thus selectively obfuscates sensitive information. On the CIFAR-10, FairFace, and HAM10000 datasets, we carry out Auto-encoder and Model Inversion (MI) attacks against a model based on ResNet-18. The experimental results show that, compared with existing methods, our method obtains better defense performance at a very low utility loss, thus achieving a better privacy-utility tradeoff. On the CIFAR-10 dataset, our method obtains a reconstruction MSE (Mean Squared Error) of 0.033 for both the Auto-encoder attack and the MI attack, gains of 0.019 and 0.014 over the existing SFL, respectively. Additionally, our method achieves 74.41% accuracy, only 1% less than the existing SFL.
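The abstract's defense hinges on a distance correlation loss between inputs and intermediate activations. The paper itself does not give its implementation; below is a minimal NumPy sketch of the standard (biased) distance correlation estimator, with all function names illustrative. In training, the client would add this quantity (computed on each batch, with inputs and activations flattened per sample) to its task loss so the optimizer pushes it toward zero.

```python
import numpy as np

def _centered_dist(x):
    # Pairwise Euclidean distance matrix over the batch, then double-centered
    # (subtract row means and column means, add back the grand mean).
    d = np.sqrt(((x[:, None, :] - x[None, :, :]) ** 2).sum(-1))
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def distance_correlation(x, z):
    # x: inputs flattened to (batch, d_in); z: activations flattened to (batch, d_act).
    # Returns a value in [0, 1]; 0 indicates (distance) independence.
    a, b = _centered_dist(x), _centered_dist(z)
    dcov2 = (a * b).mean()                      # squared distance covariance
    dvar_x, dvar_z = (a * a).mean(), (b * b).mean()
    return np.sqrt(max(dcov2, 0.0) / (np.sqrt(dvar_x * dvar_z) + 1e-12))
```

A deterministic affine relation such as `z = 2 * x + 1` yields a distance correlation of 1, while unrelated batches score near 0, which is why minimizing this quantity makes the activations less useful to a reconstruction attacker.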