Keywords
Resistive random-access memory (RRAM)
Computer science
Static random-access memory (SRAM)
Benchmark (surveying)
Computer architecture
Bottleneck
Semiconductor memory
Artificial neural network
Deep learning
Computer engineering
Throughput
Memory architecture
Embedded system
Computer hardware
Artificial intelligence
Engineering
Voltage
Electrical engineering
Geography
Wireless
Telecommunications
Geodesy
Authors
Shimeng Yu, Hongwu Jiang, Shanshi Huang, Xiaochen Peng, Anni Lu
Source
Journal: IEEE Circuits and Systems Magazine (Institute of Electrical and Electronics Engineers)
Date: 2021-01-01
Volume/Issue: 21 (3): 31-56
Cited by: 111
Identifier
DOI: 10.1109/mcas.2021.3092533
Abstract
Compute-in-memory (CIM) is a new computing paradigm that addresses the memory-wall problem in hardware accelerator design for deep learning. The input-vector and weight-matrix multiplication, i.e., the multiply-and-accumulate (MAC) operation, can be performed in the analog domain within the memory sub-arrays, leading to significant improvements in throughput and energy efficiency. Static random access memory (SRAM) and emerging non-volatile memories such as resistive random access memory (RRAM) are promising candidates for storing the weights of deep neural network (DNN) models. In this review, we first survey recent progress in SRAM- and RRAM-based CIM macros that have been demonstrated in silicon. We then discuss general design challenges of CIM chips, including the analog-to-digital conversion (ADC) bottleneck, variations in analog compute, and device non-idealities. Next, we introduce the DNN+NeuroSim benchmark framework, which can evaluate versatile device technologies for CIM inference and training performance from a software/hardware co-design perspective.
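To make the abstract's core idea concrete, here is a minimal NumPy sketch of an idealized CIM MAC: weights are stored as conductances in a memory sub-array, an input voltage vector drives the rows, each column current is the analog dot product, and an ADC of limited resolution quantizes the result. The function name, array sizes, and uniform-quantization model are illustrative assumptions, not the paper's implementation; real macros must additionally model device variation and non-idealities, which the review discusses.

```python
import numpy as np

def cim_mac(x, W, adc_bits=5):
    """Idealized analog MAC in a memory sub-array (illustrative model).

    Each column current is the dot product of the input voltages x with
    that column's conductances W[:, j]; a uniform adc_bits-bit ADC then
    quantizes the analog column sums over their observed range.
    Device variation and non-idealities are deliberately ignored here.
    """
    analog = x @ W                      # analog column currents (partial sums)
    levels = 2 ** adc_bits - 1          # number of quantization steps
    lo, hi = analog.min(), analog.max()
    if hi == lo:                        # degenerate case: nothing to quantize
        return analog
    step = (hi - lo) / levels
    return np.round((analog - lo) / step) * step + lo

# Example: a 64x16 sub-array with binary weights and binary inputs.
rng = np.random.default_rng(0)
W = rng.integers(0, 2, size=(64, 16)).astype(float)  # stored conductances
x = rng.integers(0, 2, size=64).astype(float)        # input voltage vector
y_digital = x @ W                   # exact digital reference
y_cim = cim_mac(x, W, adc_bits=5)   # analog compute + 5-bit ADC readout
```

Raising `adc_bits` shrinks the quantization error toward the exact digital result, which is one way to see why the ADC is the precision/cost bottleneck the review highlights: higher resolution costs area, latency, and energy.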