MNIST database
Computer science
Machine learning
Artificial intelligence
Retraining
Encryption
Process (computing)
Incremental learning
Feature selection
Data mining
Deep learning
Computer security
Operating system
Business
International trade
Authors
Weiwen Zhang, Ziyu Liu, Yifeng Jiang, Wuxing Chen, Bowen Zhao, Kaixiang Yang
Identifier
DOI:10.1016/j.neunet.2024.106436
Abstract
Incremental learning algorithms have been developed as an efficient solution for fast remodeling in Broad Learning Systems (BLS) without a retraining process. Even though the structure and performance of broad learning are gradually showing superiority, private data leakage in broad learning systems remains a problem to be solved. Recently, the Multiparty Secure Broad Learning System (MSBLS) was proposed to allow two clients to participate in training. However, privacy-preserving broad learning across multiple clients has received limited attention. In this paper, we propose a Self-Balancing Incremental Broad Learning System (SIBLS) with privacy protection that accounts for the effect of different data sample sizes across clients and allows multiple clients to be involved in incremental learning. Specifically, we design a client selection strategy that selects two clients in each round so as to reduce the gap in the number of data samples during the incremental updating process. To ensure security when multiple clients participate, we introduce a mediator in the data encryption and feature mapping process. Three classical datasets are used to validate the effectiveness of the proposed SIBLS: MNIST, Fashion, and NORB. Experimental results show that SIBLS achieves performance comparable to MSBLS while outperforming federated learning in terms of accuracy and running time.
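The abstract only outlines the self-balancing client-selection idea, so the following is a minimal sketch of one plausible reading of it: in each incremental round, choose the two participating clients whose local sample counts differ the least, so the contributed data volumes stay balanced across rounds. The function name `select_balanced_pair` and the greedy pairing rule are illustrative assumptions, not the authors' actual algorithm.

```python
from itertools import combinations

def select_balanced_pair(sample_counts):
    """Illustrative (assumed) client-selection rule: among all client pairs,
    pick the two whose numbers of local samples differ the least, so each
    incremental round keeps the contributed data volumes balanced."""
    # sample_counts: dict mapping client id -> number of local samples
    best_pair, best_gap = None, float("inf")
    for a, b in combinations(sample_counts, 2):
        gap = abs(sample_counts[a] - sample_counts[b])
        if gap < best_gap:
            best_pair, best_gap = (a, b), gap
    return best_pair

# Example: four clients with unequal local dataset sizes
counts = {"client_A": 12000, "client_B": 8000, "client_C": 11500, "client_D": 3000}
print(select_balanced_pair(counts))  # -> ('client_A', 'client_C')
```

In the paper's setting, the two selected clients would then perform the encrypted feature mapping through the mediator before the incremental update; that protocol is not reproduced here.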