Authors
Shengnan Zhao, Qi Zhao, Chuan Zhao, Jiang Han, Qiuliang Xu
Abstract
Private aggregation of teacher ensembles (PATE), a general machine learning framework based on knowledge distillation, provides a privacy guarantee for training data sets. However, the framework poses several security risks. First, PATE focuses on the privacy of the teachers' training data and fails to protect the privacy of the students' data. Second, PATE relies heavily on a trusted aggregator to count the teachers' votes, and it is not convincing to assume that a third party will never leak those votes during the knowledge transfer process. To address these issues, we improve the original PATE framework and present a new one that combines secret sharing with Intel Software Guard Extensions (SGX) in a novel way. In the proposed framework, teacher models are trained locally and then uploaded to and stored on two computing servers in the form of secret shares. In the knowledge transfer phase, the two computing servers receive shares of the students' private inputs and then collaboratively perform secure predictions, so neither teachers nor students expose sensitive information. For the aggregation process, we propose an effective masking technique suited to this setting that keeps the prediction results private and prevents the votes from being leaked to the aggregation server. We further optimize the aggregation mechanism by adding noise perturbations adaptively based on the posterior entropy of the prediction results. Finally, we evaluate the performance of the new framework on multiple data sets and experimentally demonstrate that it allows highly efficient, accurate, and secure predictions.
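To make the two core ideas of the abstract concrete, the sketch below illustrates (1) two-party additive secret sharing over a ring, as used to store the teachers' votes across the two computing servers, and (2) a noisy-argmax aggregation whose Laplace noise scale grows with the posterior entropy of the vote histogram. This is a minimal illustrative sketch, not the authors' implementation: the ring size, function names, and the specific entropy-to-scale mapping are assumptions made for the example.

```python
import numpy as np

MOD = 2**32  # ring size for additive secret sharing (assumption)


def share(x, rng):
    """Split an integer vector into two additive shares mod MOD.

    Each share alone is uniformly random and reveals nothing about x.
    """
    r = rng.integers(0, MOD, size=x.shape, dtype=np.uint64)
    return r, (x - r) % MOD


def reconstruct(s0, s1):
    """Recombine the two shares: (s0 + s1) mod MOD == x."""
    return (s0 + s1) % MOD


def entropy_adaptive_noisy_argmax(votes, base_scale, rng):
    """Aggregate teacher votes with Laplace noise whose scale grows with
    the posterior entropy of the vote histogram (illustrative mapping:
    when the teachers disagree more, more noise is added)."""
    p = votes / votes.sum()
    h = -np.sum(p[p > 0] * np.log(p[p > 0]))  # posterior entropy of votes
    h_max = np.log(len(votes))                # maximum possible entropy
    scale = base_scale * (1.0 + h / h_max)    # hypothetical adaptive rule
    noisy = votes + rng.laplace(0.0, scale, size=votes.shape)
    return int(np.argmax(noisy))
```

For example, sharing the vote histogram `[80, 10, 10]` yields two uniformly random-looking vectors that reconstruct exactly, and the noisy argmax returns the majority class with high probability because the vote gap dwarfs the noise scale.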