Computer science
Robustness (evolution)
Auditing
Computer security
Federated learning
Information privacy
Transparency (behavior)
Process (computing)
Health care
Artificial intelligence
Biochemistry
Chemistry
Management
Economics
Gene
Economic growth
Operating system
Authors
Abbas Yazdinejad, Hadis Karimipour, Gautam Srivastava
Source
Journal: IEEE Transactions on Consumer Electronics
[Institute of Electrical and Electronics Engineers]
Date: 2023-01-01
Volume/Issue: 1-1
Citations: 4
Identifier
DOI:10.1109/tce.2023.3318509
Abstract
The growing application of machine learning (ML) techniques in healthcare has led to increased interest in federated learning (FL), which enables the secure and private training of robust ML models. However, conventional FL methods often fall short of providing adequate privacy protection and face challenges in handling non-independent and identically distributed (Non-IID) training data. These shortcomings are of significant concern when employing FL in electronic devices in healthcare. To address these issues, we propose an Auditable Privacy-Preserving Federated Learning (AP2FL) model tailored for electronics in healthcare settings. By leveraging Trusted Execution Environments (TEEs), AP2FL ensures secure training and aggregation processes on both the client and server sides, effectively mitigating data-leakage risks. To manage Non-IID data within the proposed framework, we incorporate the Active Personalized Federated Learning (ActPerFL) model and Batch Normalization (BN) techniques to consolidate user updates and identify data similarities. Additionally, we introduce an auditing mechanism in AP2FL that reveals each client's contribution to the FL process, facilitating updates to the global model across diverse data types and distributions. In other words, it ensures the integrity, transparency, fairness, and robustness of the FL process. Our results demonstrate that the proposed AP2FL model outperforms existing methods in accuracy and effectively eliminates privacy leakage.
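The auditing idea described in the abstract, recording each client's contribution so the aggregation step is attributable and transparent, can be illustrated with a minimal sketch. This is not the authors' AP2FL implementation (which additionally relies on TEEs, ActPerFL, and BN statistics); it is a plain federated-averaging round in NumPy where a hypothetical `audit_log` captures each client's aggregation weight and update magnitude.

```python
import numpy as np

def client_update(weights, data, lr=0.1):
    """One local gradient step on a least-squares objective (stand-in for local training)."""
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_w, clients):
    """One FedAvg round with a per-client audit log.

    Each log entry records the client id, its aggregation weight
    (proportional to sample count), and the norm of its update --
    enough to attribute each client's contribution to the new model.
    """
    updates, audit_log = [], []
    total = sum(len(y) for _, y in clients.values())
    for cid, data in clients.items():
        local_w = client_update(global_w.copy(), data)
        delta = local_w - global_w
        weight = len(data[1]) / total
        updates.append(weight * delta)
        audit_log.append({"client": cid, "weight": weight,
                          "update_norm": float(np.linalg.norm(delta))})
    return global_w + sum(updates), audit_log

# Synthetic setup: three clients sharing one underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = {}
for i in range(3):
    X = rng.normal(size=(20, 2))
    clients[f"c{i}"] = (X, X @ true_w + 0.01 * rng.normal(size=20))

w = np.zeros(2)
for _ in range(50):
    w, log = federated_round(w, clients)
```

After the rounds, `w` approaches `true_w`, and `log` from the final round shows how much each client moved the global model. In the paper's setting, such a log would be produced and attested inside a TEE rather than in plain Python.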