Image (mathematics)
Computer science
Deep learning
Internet privacy
Artificial intelligence
Data science
Computer security
Authors
Wisam Abbasi, Paolo Mori, Andrea Saracino
Identifier
DOI:10.1109/tdsc.2024.3400608
Abstract
This paper proposes a novel approach for multi-party collaborative data analysis problems in which analysis accuracy is required alongside both privacy of the shared data and explainability of the results. The proposed approach aims at trading off data privacy, decision explainability, and data utility by analytically relating these three measures, evaluating how they impact each other, and proposing a methodology to find the best possible trade-off among them. In particular, given a set of requirements from the participants in a collaborative analysis problem, we propose a method to properly tune the parameters of the privacy-preserving mechanisms and explainability techniques adopted by all participants, obtaining the best trade-off. The paper focuses on deep learning-based image data analysis problems, though the approach can be generalized to other data types. The $(\epsilon, \delta)$-Differential Privacy and autoencoder-based privacy-preserving techniques have been adopted to preserve data privacy, while the SmoothGrad mechanism has been used to provide decision explainability. The proposed methodology has been validated with a set of experiments on three multi-class deep learning classifiers and three well-known image datasets: MNIST, FER, and CIFAR-10.
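For context on the $(\epsilon, \delta)$-Differential Privacy technique the abstract mentions, here is a minimal sketch of the classical Gaussian mechanism, which achieves $(\epsilon, \delta)$-DP by adding noise calibrated to a query's sensitivity. This is not the paper's implementation; the function name and parameters are illustrative, and the noise-scale formula is the standard one (valid for $\epsilon < 1$).

```python
import math
import random

def gaussian_mechanism(value, sensitivity, epsilon, delta):
    """Release `value` with (epsilon, delta)-differential privacy.

    Uses the classical calibration sigma = sqrt(2 ln(1.25/delta)) * Δf / ε,
    where Δf (`sensitivity`) bounds how much one individual's data can
    change the query result. Valid for 0 < epsilon < 1.
    """
    sigma = math.sqrt(2 * math.log(1.25 / delta)) * sensitivity / epsilon
    # Add zero-mean Gaussian noise with the calibrated scale.
    return value + random.gauss(0.0, sigma)
```

A smaller `epsilon` or `delta` yields a larger `sigma`, i.e. stronger privacy at the cost of utility — the same tension the paper's trade-off methodology tunes.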