Computer science
Usability
Inference
Convolutional neural network
Encryption
Deep learning
Artificial intelligence
Machine learning
Information privacy
End-to-end principle
Computer security
Data mining
World Wide Web
Authors
Georgios Kaissis, Alexander Ziller, Jonathan Passerat‐Palmbach, Théo Ryffel, Dmitrii Usynin, Andrew Trask, Ionésio Da Lima, Jason Mancuso, Friederike Jungmann, M. Steinborn, Andreas Saleh, Marcus R. Makowski, Daniel Rueckert, Rickmer Braren
Identifier
DOI: 10.1038/s42256-021-00337-8
Abstract
Using large, multi-national datasets for high-performance medical imaging AI systems requires innovation in privacy-preserving machine learning so models can train on sensitive data without requiring data transfer. Here we present PriMIA (Privacy-preserving Medical Image Analysis), a free, open-source software framework for differentially private, securely aggregated federated learning and encrypted inference on medical imaging data. We test PriMIA using a real-life case study in which an expert-level deep convolutional neural network classifies paediatric chest X-rays; the resulting model's classification performance is on par with locally, non-securely trained models. We theoretically and empirically evaluate our framework's performance and privacy guarantees, and demonstrate that the protections provided prevent the reconstruction of usable data by a gradient-based model inversion attack. Finally, we successfully employ the trained model in an end-to-end encrypted remote inference scenario using secure multi-party computation to prevent the disclosure of the data and the model.

Editorial summary: Gaining access to medical data to train AI applications can present problems due to patient privacy or proprietary interests. A way forward can be privacy-preserving federated learning schemes. Kaissis, Ziller and colleagues demonstrate here their open-source framework for privacy-preserving medical image analysis in a remote inference scenario.
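The core idea of differentially private, securely aggregated federated learning mentioned in the abstract can be sketched as follows. This is a minimal illustrative example, not PriMIA's actual implementation: each client's model update is clipped to a bounded L2 norm and perturbed with Gaussian noise (the Gaussian mechanism of differential privacy) before averaging. The function names and parameters (`clip_and_noise`, `federated_average`, `clip_norm`, `noise_std`) are assumptions chosen for illustration; in PriMIA the averaging step would additionally run under secure multi-party computation so that no party sees another's individual update, whereas here it is done in the clear.

```python
import numpy as np

def clip_and_noise(update, clip_norm=1.0, noise_std=0.1, rng=None):
    """Bound an update's L2 norm, then add Gaussian noise.

    Illustrative Gaussian-mechanism step of differentially
    private learning; not PriMIA's actual code.
    """
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(update)
    # Scale down only if the update exceeds the clipping norm.
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def federated_average(client_updates, **dp_kwargs):
    """Average per-client updates after local clipping and noising.

    In a securely aggregated scheme this mean would be computed
    under secure multi-party computation; here it is computed in
    the clear for illustration only.
    """
    noised = [clip_and_noise(u, **dp_kwargs) for u in client_updates]
    return np.mean(noised, axis=0)

if __name__ == "__main__":
    # Three hypothetical clients contribute 4-dimensional updates.
    updates = [np.ones(4) * 3.0, np.ones(4) * -3.0, np.ones(4)]
    print(federated_average(updates))
```

With `noise_std=0.0` the procedure reduces to plain federated averaging of norm-clipped updates, which makes the clipping behaviour easy to verify in isolation.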