Topics: Convolutional neural network, Bit, Mode (computer interface), Computer science, Proportionality (ratio), Logic gate, Electronic engineering, Optoelectronics, Algorithm, Artificial intelligence, Materials science, Physics, Engineering, Computer network, Quantum mechanics, Operating system
Authors
Yuchen Cai, Jia Yang, Yutang Hou, Feng Wang, Lei Yin, Shu-Hui Li, Yanrong Wang, Tao Yan, Shancheng Yan, Xueying Zhan, Jun He, Zhenxing Wang
Identifier
DOI:10.1038/s41467-025-58005-z
Abstract
The fast development of artificial intelligence has called for high-efficiency neuromorphic computing hardware. While two-dimensional floating-gate memories show promise, their limited number of states and poor stability hinder practical use. Here, we report gate-injection-mode two-dimensional floating-gate memories as a candidate for large-scale neural network accelerators. Through a coplanar device structure design and a bi-pulse state programming strategy, 8-bit states with intervals larger than three times the standard deviation and stability over 10,000 s are achieved at 3 V. The cycling endurance is over 10⁵, and the fabricated 256 devices show a yield of 94.9%. Leveraging this, we carry out experimental image convolutions and the transplanting of 38,592 kernels on an integrated 9 × 2 array, with results matching simulations well. We also show that fixed-point neural networks with 8-bit precision have inference accuracies approaching the ideal values. Our work validates the potential of gate-injection-mode two-dimensional floating-gate memories for high-efficiency neuromorphic computing hardware.

Cai et al. report gate-injection-mode floating-gate memories based on a MoS2 channel. The coplanar device design and bi-pulse state programming enable 8-bit conductance states at 3 V. Image convolutions are implemented and 38,592 convolutional kernel parameters are projected onto a 9 × 2 device array.
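The 8-bit conductance states described in the abstract amount to mapping convolution kernel weights onto 256 discrete levels before the array performs the multiply-accumulate. A minimal sketch of that idea, assuming a simple uniform quantizer in place of the paper's device-level bi-pulse programming scheme (all function names and parameters here are illustrative, not from the paper):

```python
import numpy as np

def quantize_8bit(w):
    """Map float weights onto 256 evenly spaced levels (a plain
    uniform quantizer standing in for programmed conductance states)."""
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / 255 if hi > lo else 1.0
    levels = np.round((w - lo) / scale)          # integer level index 0..255
    return levels * scale + lo                   # back to weight units

def conv2d_valid(img, k):
    """Straightforward 'valid' 2D cross-correlation with explicit loops."""
    kh, kw = k.shape
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

rng = np.random.default_rng(0)
img = rng.random((8, 8))                         # toy 8x8 "image"
kernel = rng.standard_normal((3, 3))             # toy float kernel
qk = quantize_8bit(kernel)                       # 8-bit version of the kernel

# Worst-case difference between float and 8-bit convolution outputs.
err = np.abs(conv2d_valid(img, kernel) - conv2d_valid(img, qk)).max()
```

With 256 levels the per-weight quantization error is at most half a level spacing, so the convolution outputs stay close to the full-precision ones, which is consistent with the abstract's observation that 8-bit fixed-point networks approach ideal inference accuracy.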