Computer Science
Deep Learning
Physical Unclonable Function
Computer Security
Artificial Intelligence
Cryptography
Authors
Wenshuo Yue,Kai Wu,Zhiyuan Li,Jing Zhou,Zeyu Wang,Teng Zhang,Yuxiang Yang,Lintao Ye,Yongqin Wu,Weihai Bu,Shaozhi Wang,Xiaodong He,Xiaobing Yan,Yaoyu Tao,Bonan Yan,Ru Huang,Yuchao Yang
Identifier
DOI: 10.1038/s41467-025-56412-w
Abstract
Compute-in-memory based on resistive random-access memory has emerged as a promising technology for accelerating neural networks on edge devices. It can reduce frequent data transfers and improve energy efficiency. However, the nonvolatile nature of resistive memory raises concerns that stored weights can be easily extracted during computation. To address this challenge, we propose RePACK, a threefold data protection scheme that safeguards neural network input, weight, and structural information. It utilizes a bipartite-sort coding scheme to store data with a fully on-chip physical unclonable function. Experimental results demonstrate the effectiveness of increasing enumeration complexity to 5.77 × 10⁷⁵ for a 128-column compute-in-memory core. We further implement and evaluate a RePACK computing system on a 40 nm resistive memory compute-in-memory chip. This work represents a step towards developing safe, robust, and efficient edge neural network accelerators. It potentially serves as the hardware infrastructure for edge devices in federated learning or other systems.

Emerging compute-in-memory technologies show potential in edge AI; however, information protection tools need further development. Here, authors propose an on-chip scheme to simultaneously protect neural network input, weight, and structural information with low circuit overhead.
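To put the reported enumeration complexity in perspective, a short sketch below converts the abstract's figure of 5.77 × 10⁷⁵ candidate configurations into an equivalent brute-force key length in bits. Only the complexity value comes from the abstract; the conversion is plain arithmetic and does not reflect how the paper derives the number.

```python
import math

# Enumeration complexity for a 128-column compute-in-memory core,
# as reported in the abstract (the derivation is not reproduced here).
complexity = 5.77e75

# Equivalent security level: log2 of the number of candidates an
# attacker would need to enumerate, comparable to a symmetric key size.
security_bits = math.log2(complexity)
print(f"{security_bits:.1f} bits")  # roughly 251.7 bits
```

On this rough accounting, exhaustively searching the protected configuration space would be comparable to brute-forcing a ~252-bit key.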