Keywords
Obfuscation
Computer science
Computer security
Scheme (mathematics)
Digital watermarking
Intellectual property
Code (set theory)
Padding
Artificial neural network
Artificial intelligence
Programming language
Operating system
Mathematics
Image (mathematics)
Mathematical analysis
Set (abstract data type)
Authors
Tong Zhou, Yukui Luo, Shaolei Ren, Xiaolin Xu
Source
Journal: Cornell University - arXiv
Date: 2023-04-28
Identifier
DOI:10.48550/arxiv.2305.00097
Abstract
As a type of valuable intellectual property (IP), deep neural network (DNN) models have been protected by techniques like watermarking. However, such passive model protection cannot fully prevent model abuse. In this work, we propose an active model IP protection scheme, namely NNSplitter, which actively protects the model by splitting it into two parts: the obfuscated model that performs poorly due to weight obfuscation, and the model secrets consisting of the indexes and original values of the obfuscated weights, which can only be accessed by authorized users with the support of the trusted execution environment. Experimental results demonstrate the effectiveness of NNSplitter, e.g., by only modifying 275 out of over 11 million (i.e., 0.002%) weights, the accuracy of the obfuscated ResNet-18 model on CIFAR-10 can drop to 10%. Moreover, NNSplitter is stealthy and resilient against norm clipping and fine-tuning attacks, making it an appealing solution for DNN model protection. The code is available at: https://github.com/Tongzhou0101/NNSplitter.
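The abstract describes splitting a model into an obfuscated copy (a small fraction of weights overwritten) and a set of secrets recording the indexes and original values of those weights, which a trusted execution environment uses to restore full accuracy. A minimal sketch of that split-and-restore idea, using plain Python lists and random obfuscation (the paper selects which weights to obfuscate so as to maximize accuracy degradation; the function names here are illustrative, not from the released code):

```python
import random

def split_model(weights, k, rng):
    """Split a flat weight list into an obfuscated copy plus secrets.

    Returns (obfuscated_weights, secrets), where secrets is a list of
    (index, original_value) pairs needed to restore the model.
    """
    obfuscated = list(weights)
    # Choose k weight positions to obfuscate. NNSplitter picks these to
    # maximize accuracy drop; random choice is used here for simplicity.
    idx = rng.sample(range(len(weights)), k)
    secrets = [(i, weights[i]) for i in idx]
    # Overwrite the selected weights with random values.
    for i in idx:
        obfuscated[i] = rng.gauss(0.0, 1.0)
    return obfuscated, secrets

def restore_model(obfuscated, secrets):
    """Recover the original weights (done inside the TEE for authorized users)."""
    restored = list(obfuscated)
    for i, v in secrets:
        restored[i] = v
    return restored

rng = random.Random(0)
weights = [rng.gauss(0.0, 1.0) for _ in range(1000)]
obf, secrets = split_model(weights, k=5, rng=rng)
assert obf != weights                              # obfuscated copy differs
assert restore_model(obf, secrets) == weights      # secrets restore it exactly
```

The key property the sketch illustrates is the asymmetry the paper exploits: the secrets are tiny relative to the model (in the reported ResNet-18 result, 275 out of over 11 million weights, about 0.002%), yet without them the obfuscated copy performs at chance level.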