Watermarking
Digital watermarking
Robustness (evolution)
Computer science
Artificial neural network
Pruning
Artificial intelligence
Scheme (mathematics)
Machine learning
Process (computing)
Sensitivity (control systems)
Intellectual property
Deep learning
Data mining
Embedding
Engineering
Image (mathematics)
Mathematics
Agronomy
Mathematical analysis
Chemistry
Operating system
Gene
Biology
Biochemistry
Electronic engineering
Identifier
DOI:10.1145/3578741.3578832
Abstract
In recent years, owing to the rapid development of information technology, machine learning has been widely applied in many fields. Training a deep neural network (DNN) model is an expensive process that requires large amounts of training data and hardware resources, so DNN models can be regarded as the intellectual property of their owners and need to be protected. A growing number of watermarking algorithms have been studied for embedding watermarks into neural network models to protect model ownership, and, in parallel, watermark attack algorithms have emerged to test the robustness of these watermarks. In this paper, we first identify an unexpected sensitivity of watermarked models: they are more susceptible to adversarial perturbations than unwatermarked models. Building on this observation, we propose a model repair method based on neural network pruning. By pruning some sensitive neurons to remove the watermark, the watermark's success rate can be reduced to a certain extent, and on this basis we verify that the method can effectively evade model ownership detection.
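The core operation the abstract describes, removing a watermark by pruning the neurons most sensitive to adversarial perturbation, can be illustrated with a minimal sketch. The paper's actual sensitivity-scoring and pruning procedure is not given in the abstract, so the scoring here is a hypothetical placeholder: we assume a per-neuron sensitivity score has already been computed, and we simply zero out the outgoing weights of the top-k most sensitive neurons.

```python
import numpy as np

def prune_sensitive_neurons(W, sensitivity, k):
    """Zero out the k neurons with the highest sensitivity scores.

    W           : (n_neurons, n_outputs) weight matrix of one layer
    sensitivity : (n_neurons,) per-neuron sensitivity scores
                  (how these are measured is an assumption here, e.g.
                  activation change under adversarial perturbation)
    k           : number of neurons to prune

    Returns the pruned weight matrix and the pruned neuron indices.
    """
    idx = np.argsort(sensitivity)[-k:]   # indices of the k most sensitive neurons
    W_pruned = W.copy()
    W_pruned[idx, :] = 0.0               # remove their outgoing connections
    return W_pruned, idx

# Example: prune the 2 most sensitive of 4 neurons.
W = np.ones((4, 3))
scores = np.array([0.1, 0.9, 0.2, 0.8])
W_pruned, pruned = prune_sensitive_neurons(W, scores, k=2)
```

In practice the model would then be fine-tuned briefly on clean data to repair any accuracy loss; this sketch only shows the pruning step itself.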