Keywords
Transistor, Materials Science, Computer Science, Multiplexing, Artificial Neural Network, Deep Learning, Nanotechnology, Computer Architecture, Artificial Intelligence, Electrical Engineering, Engineering, Telecommunications, Voltage
Authors
Yutao Li, Kui Xu, Yuzhe Ma, Junze Li, Yang Luo, Xiaoyan Li, Penghui Shen, Luyu Zhao, Hang Liu, Li Ren, Dehui Li, Lian‐Mao Peng, Li Ding, Tian‐Ling Ren, Yeliang Wang
Identifier
DOI:10.1002/adma.202420218
Abstract
Deep learning's growing complexity demands advanced AI chips, increasing hardware costs. Time‐division multiplexing (TDM) neural networks offer a promising solution to simplify integration. However, current synapse transistors struggle to physically implement TDM networks due to inherent device limitations, hindering their practical deployment in modern systems. Here, a novel graphene/2D perovskite/carbon nanotube (CNT) synapse transistor featuring a sandwich structure is presented. This transistor enables the realization of TDM neural networks at the hardware level. In this structure, the 2D perovskite layer, characterized by a high ion concentration, serves as a neurotransmitter, thereby enhancing synaptic transmission efficiency. Additionally, the CNT field‐effect transistors, with their large on‐off ratio, enable a wider range of synaptic current modulation. The device mechanism is theoretically analyzed using molecular dynamics simulation. Furthermore, the impact of TDM on the scale, power, and latency of neural network hardware implementation is investigated, and a qualitative analysis elucidates the advantages of TDM for hardware implementation of larger deep learning models. This study offers a new approach to reducing the integration complexity of neural network hardware implementation, holding significant promise for the development of intelligent nanoelectronic devices in the future.
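To make the scale-versus-latency tradeoff of TDM concrete, the following is a minimal sketch (not the authors' device-level implementation): it models a single dense layer and compares a fully parallel mapping, where every weight gets its own physical synapse, against a time-multiplexed mapping, where one shared row of synapse devices is reprogrammed once per time slot to evaluate one output neuron at a time. The layer sizes and variable names are hypothetical and chosen only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 64, 16               # hypothetical layer dimensions
x = rng.standard_normal(n_in)      # input activations
W = rng.standard_normal((n_out, n_in))  # weight matrix to be mapped to hardware

# Fully parallel mapping: n_out * n_in physical synapses, 1 time slot.
y_parallel = W @ x

# TDM mapping: a single shared row of n_in physical synapses, reused n_out times.
y_tdm = np.zeros(n_out)
for t in range(n_out):             # each loop iteration = one time slot
    # In slot t the shared devices are reprogrammed with row t of W,
    # so exactly one output neuron is evaluated per slot.
    y_tdm[t] = W[t] @ x

# Both mappings compute the same layer output.
assert np.allclose(y_parallel, y_tdm)

print(f"parallel mapping: {n_out * n_in} synapses, 1 time slot")
print(f"TDM mapping     : {n_in} synapses, {n_out} time slots")
```

Running this prints 1024 synapses for the parallel mapping versus 64 for the TDM mapping, at the cost of 16 time slots, which mirrors the qualitative argument in the abstract that TDM trades latency for a large reduction in integration scale.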