An efficient approach for privacy preserving decentralized deep learning models based on secure multi-party computation

Keywords: federated learning, computation, encryption
Authors
Anh T. Tran, The-Dung Luong, Jessada Karnjana, Van-Nam Huynh
Source
Journal: Neurocomputing [Elsevier BV]
Volume 422, pp. 245-262. Cited by: 24
Identifier
DOI: 10.1016/j.neucom.2020.10.014
Abstract

This paper aims to develop a new efficient framework, named the Secure Decentralized Training Framework (SDTF), for privacy-preserving deep learning models. The main feature of the proposed framework is that it works in a decentralized network setting that needs no trusted third-party server, while simultaneously ensuring the privacy of local data at a low communication-bandwidth cost. In particular, we first propose a so-called Efficient Secure Sum Protocol (ESSP) that enables a large group of parties to jointly compute the sum of their private inputs. ESSP works not only with integers but also with floating-point numbers, without any data conversion. We then propose a Secure Model Sharing Protocol that enables a group of parties to securely train and share local models to be aggregated into a global model. The Secure Model Sharing Protocol exploits randomization techniques and ESSP to protect local models from any honest-but-curious party, even when n−2 of the n parties collude. Finally, these protocols are employed for the collaborative training of decentralized deep learning models. We conduct a theoretical evaluation of privacy and communication cost, as well as empirical experiments on a balanced-class image dataset (MNIST) and an unbalanced-class text dataset (UCI SMS Spam). These experiments demonstrate that the proposed approach attains high accuracy (97% baseline accuracy in only 10 training rounds on MNIST and 100 training rounds on SMS Spam) and is robust to heterogeneous decentralized networks with non-IID and unbalanced data distributions. We also show a 5× reduction in the number of training rounds needed to reach the baseline accuracy, compared to Downpour SGD. The proposed approach thus achieves privacy at the level of cryptographic approaches and efficiency at the level of randomization techniques, while retaining higher model utility than differential-privacy approaches.
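The abstract does not spell out ESSP's construction, so the sketch below is only a point of reference: a standard additive-secret-sharing secure sum, not the paper's ESSP itself. Each party splits its private vector into random shares that cancel in aggregate, so only masked values ever leave a party, and it operates directly on floating-point data, matching the property the abstract claims for ESSP. The function names make_shares and secure_sum are illustrative, not from the paper.

```python
# Minimal sketch of a secure sum via additive secret sharing
# (an assumed illustration; the paper's ESSP may differ in detail).
import numpy as np

rng = np.random.default_rng(0)

def make_shares(value: np.ndarray, n_parties: int) -> list:
    """Split `value` into n_parties random shares that sum back to `value`."""
    shares = [rng.normal(size=value.shape) for _ in range(n_parties - 1)]
    shares.append(value - sum(shares))  # last share makes the total exact
    return shares

def secure_sum(private_inputs: list) -> np.ndarray:
    n = len(private_inputs)
    # Each party i splits its input and sends one share to every party j.
    shares = [make_shares(x, n) for x in private_inputs]
    # Party j locally sums the shares it received ...
    partials = [sum(shares[i][j] for i in range(n)) for j in range(n)]
    # ... and only these partial sums are published and combined.
    return sum(partials)

# Toy check: three parties' local model updates, aggregated without
# any single update being revealed (federated-averaging style round).
updates = [rng.normal(size=4) for _ in range(3)]
assert np.allclose(secure_sum(updates), sum(updates))
global_update = secure_sum(updates) / len(updates)
```

In this toy construction, a coalition of up to n−2 honest-but-curious parties sees only random shares and partial sums, so individual inputs stay hidden, which mirrors the collusion tolerance the abstract claims for ESSP; the paper's actual protocol and its communication costs should be taken from the full text.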