Computer science
Watermark
Digital watermarking
Computer security
Cascade
Artificial intelligence
Image (mathematics)
Chromatography
Chemistry
Authors
Ruoxi Wang, Yujia Zhu, Daoxun Xia
Abstract
Successfully training a model requires substantial computational power, careful model design, and high training cost, which means a well-trained model holds significant commercial value. Protecting a trained Deep Neural Network (DNN) model from Intellectual Property (IP) infringement has therefore become a pressing concern. In particular, embedding and verifying watermarks in black-box models without access to internal model parameters, while ensuring the robustness and invisibility of the watermark, remains a challenging problem. Unlike many existing methods, we propose a cascade ownership verification framework based on invisible watermarks, focused on effectively protecting the copyright of black-box watermarked models and detecting infringement by unauthorized users. The framework consists of two parts: watermark generation and copyright verification. In the watermark generation phase, watermarked samples are generated from key samples and label images. The difference between watermarked samples and key samples is imperceptible, yet a specific identifier is injected into the watermarked samples, leaving a backdoor that serves as the entry point for copyright verification. The copyright verification phase employs hypothesis testing to raise the confidence level of verification. In image classification tasks on the MNIST, CIFAR-10, and CIFAR-100 datasets, experiments were conducted with several popular deep learning models. The results show that the framework protects model copyright with high security and effectiveness, and demonstrates strong robustness against pruning and fine-tuning attacks.
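The generation step described above (injecting an imperceptible identifier from a label image into a key sample) can be sketched as a simple low-opacity blend. This is an illustrative stand-in, not the paper's actual construction; the `alpha` blend strength, the square "identifier" pattern, and the function name are assumptions for the sketch.

```python
import numpy as np

def make_watermarked_sample(key_sample: np.ndarray,
                            label_image: np.ndarray,
                            alpha: float = 0.03) -> np.ndarray:
    """Blend a faint copy of the label image into the key sample.

    A small `alpha` bounds the per-pixel perturbation by ~alpha, keeping
    the watermark visually imperceptible while still acting as a
    backdoor trigger. (Illustrative stand-in for the paper's method.)
    """
    assert key_sample.shape == label_image.shape
    watermarked = (1.0 - alpha) * key_sample + alpha * label_image
    return np.clip(watermarked, 0.0, 1.0)

# Toy 8x8 grayscale images with values in [0, 1].
rng = np.random.default_rng(0)
key = rng.random((8, 8))
label = np.zeros((8, 8))
label[2:6, 2:6] = 1.0  # hypothetical square "identifier" pattern

wm = make_watermarked_sample(key, label)
max_perturbation = float(np.abs(wm - key).max())  # bounded by ~alpha
```

During training, such watermarked samples would be paired with an owner-chosen target label so the model learns the backdoor mapping.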
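The verification phase's hypothesis test can be illustrated with a one-sided binomial test: under the null hypothesis that a suspect model contains no watermark, its predictions on watermarked queries should match the target label only at chance rate (1/number of classes). The exact test statistic used in the paper is not specified here; this is a minimal sketch under that chance-rate assumption.

```python
from math import comb

def verification_p_value(matches: int, trials: int, num_classes: int) -> float:
    """One-sided binomial test for black-box copyright verification.

    Returns the probability that an innocent model, predicting the
    target label at chance rate p0 = 1/num_classes, matches it at least
    `matches` times out of `trials` watermarked queries. A tiny p-value
    is strong evidence the watermark backdoor is present.
    """
    p0 = 1.0 / num_classes
    return sum(comb(trials, k) * p0**k * (1.0 - p0)**(trials - k)
               for k in range(matches, trials + 1))

# E.g. on a 10-class task (CIFAR-10), 95 of 100 watermarked samples
# returning the target label is essentially impossible by chance.
p_suspect = verification_p_value(95, 100, 10)
p_innocent = verification_p_value(10, 100, 10)  # exactly chance-level behaviour
```

Comparing the p-value against a significance threshold (e.g. 0.01) turns the ownership claim into a statistically controlled decision rather than a raw accuracy readout.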