Keywords: Clipping (morphology), Computer science, Stochastic gradient descent, Artificial intelligence, Gradient descent, Machine learning, Artificial neural network
Authors
Guanbiao Lin, Hongyang Yan, Guang Kou, Teng Huang, Shiyu Peng, Yingying Zhang, Changyu Dong
Abstract
Differentially Private Stochastic Gradient Descent (DP-SGD) is a prime method for training machine learning models with rigorous privacy guarantees. Since its introduction, DP-SGD has gained popularity and has been widely adopted in both academic and industrial research. One well-known challenge when using DP-SGD is how to improve utility while maintaining privacy. To this end, several recent proposals clip the gradients with adaptive thresholds rather than a fixed one. Although each proposal comes with some theoretical justification, the theories often rely on strong assumptions and are not compatible with each other, making it hard to judge whether, and how well, they perform in practice. In this paper, we investigate adaptive clipping in DP-SGD from an empirical perspective. Through extensive experiments we gained fresh insights and, based on them, propose two new adaptive clipping strategies. We cross-compared the existing methods and our new strategies experimentally. The results show that our strategies substantially improve model accuracy and consistently outperform state-of-the-art adaptive clipping methods.
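To make the setting concrete, below is a minimal NumPy sketch of one DP-SGD step with per-sample gradient clipping, plus an illustrative adaptive threshold rule that sets the next clip to a quantile of the observed gradient norms. All function names are hypothetical, and the quantile rule is a generic example of adaptive clipping, not one of the specific strategies proposed in the paper.

```python
import numpy as np

def dp_sgd_step(per_sample_grads, clip_threshold, noise_multiplier, rng):
    """One DP-SGD step: clip each per-sample gradient to L2 norm at most
    clip_threshold, sum the clipped gradients, add Gaussian noise scaled
    to the clipping threshold, and average over the batch."""
    n, d = per_sample_grads.shape
    norms = np.linalg.norm(per_sample_grads, axis=1)
    # Scale factor min(1, C / ||g_i||) clips without changing direction.
    scale = np.minimum(1.0, clip_threshold / np.maximum(norms, 1e-12))
    clipped = per_sample_grads * scale[:, None]
    noise = rng.normal(0.0, noise_multiplier * clip_threshold, size=d)
    return (clipped.sum(axis=0) + noise) / n, norms

def adaptive_threshold(norms, quantile=0.5):
    """Illustrative adaptive clipping rule (an assumption, not the
    paper's method): next threshold = a quantile of observed norms."""
    return float(np.quantile(norms, quantile))

# Toy usage: a few steps on synthetic per-sample gradients.
rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))
threshold = 1.0
for _ in range(3):
    update, norms = dp_sgd_step(grads, threshold, noise_multiplier=1.1, rng=rng)
    threshold = adaptive_threshold(norms)  # adapt the clip for the next step
```

With a fixed threshold, a value too small biases the average gradient while one too large forces more noise; adapting the threshold to the observed norm distribution is the trade-off the compared methods try to navigate.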