Artificial neural network
Training
Computer science
Neuroscience
Artificial intelligence
Psychology
Physics
Meteorology
Authors
Byoungwoo Lee,Wonjae Ji,Hyejin Kim,Seungmin Han,Geonwoong Park,Pyeongkang Hur,Gilsu Jeon,Hyung‐Min Lee,Yoonyoung Chung,Junwoo Son,Yong‐Young Noh,Seyoung Kim
Identifier
DOI:10.1002/aisy.202400600
Abstract
Analog in‐memory computing, leveraging resistive switching cross‐point devices known as resistive processing units (RPUs), offers substantial improvements in the performance and energy efficiency of deep neural network (DNN) training. Among the promising candidates for RPU devices, the capacitor‐based synaptic circuit stands out due to its near‐ideal switching characteristics. However, despite its potential, challenges such as large cell areas and retention issues remain to be addressed. In this work, we study the three‐transistor‐one‐capacitor (3T1C) synaptic cell design, aiming to enhance computing performance and scalability. Through comprehensive device‐level modeling and system‐level simulation, we assess how transistor characteristics influence DNN training accuracy and reveal critical design strategies. A novel cell design methodology that optimizes computing performance while minimizing cell area is proposed, thereby enhancing scalability. Additionally, development guidelines for cell components are provided, identifying oxide‐based semiconductors as a promising channel material for transistors. This research contributes valuable insights for the development of future analog DNN training accelerators using capacitor‐based synaptic cells, with a focus on addressing current limitations and maximizing efficiency.
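The abstract's core ideas, an analog weight stored as capacitor charge, pulse-based potentiation/depression, and retention loss from leakage, can be illustrated with a toy model. This is a minimal sketch under stated assumptions: the class name, parameter values, and linear-update behavior are illustrative choices, not the paper's actual device model.

```python
import numpy as np

class CapacitorSynapse:
    """Toy model of a capacitor-based (e.g., 3T1C-style) synaptic weight.

    The normalized capacitor voltage stores the analog weight. Charge/discharge
    pulses give near-linear, symmetric updates within a bounded range, while a
    leakage term models the retention issue noted in the abstract. All
    parameter values here are illustrative assumptions.
    """

    def __init__(self, w_min=-1.0, w_max=1.0, dw=0.01, leak=1e-4):
        self.w = 0.0                      # analog weight (normalized voltage)
        self.w_min, self.w_max = w_min, w_max
        self.dw = dw                      # weight change per update pulse
        self.leak = leak                  # fractional charge loss per idle step

    def update(self, n_pulses):
        # n_pulses > 0 charges the capacitor (potentiation),
        # n_pulses < 0 discharges it (depression); range is bounded.
        self.w = float(np.clip(self.w + n_pulses * self.dw,
                               self.w_min, self.w_max))

    def step_time(self, n_steps=1):
        # Leakage between updates: the stored weight decays toward zero.
        self.w *= (1.0 - self.leak) ** n_steps

syn = CapacitorSynapse()
syn.update(+50)        # 50 potentiation pulses
w_programmed = syn.w
syn.step_time(1000)    # idle period: retention loss
print(w_programmed, syn.w)
```

With these illustrative numbers, 50 pulses program a weight of 0.5, and 1000 idle steps of leakage decay it by roughly 10%, showing why retention matters for training accuracy.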