Resistive random-access memory
Materials science
Inference
Electrical conductor
Chip
Optoelectronics
Training
Oxide
Nanotechnology
Electrical engineering
Computer science
Artificial intelligence
Voltage
Metallurgy
Composite material
Engineering
Meteorology
Physics
Authors
Donato Francesco Falcone,Victoria Clerico,Wooseok Choi,Tommaso Stecconi,Folkert Horst,Laura Bégon‐Lours,Matteo Galetta,Antonio La Porta,Nikhil Garg,Fabien Alibart,Bert Jan Offrein,Valeria Bragaglia
Identifier
DOI:10.1002/adfm.202504688
Abstract
Analog in-memory computing is an emerging paradigm designed to efficiently accelerate deep neural network workloads. Recent advancements have focused on either inference or training acceleration. However, a unified analog in-memory technology platform capable of on-chip training, weight retention, and long-term inference acceleration has yet to be reported. This work presents an all-in-one analog AI accelerator, combining these capabilities to enable energy-efficient, continuously adaptable AI systems. The platform leverages an array of analog filamentary conductive-metal-oxide (CMO)/HfOx resistive switching memory cells (ReRAM) integrated into the back-end-of-line (BEOL). The array demonstrates reliable resistive switching with voltage amplitudes below 1.5 V, compatible with advanced technology nodes. The array's multi-bit capability (over 32 stable states) and low programming noise (down to 10 nS) enable a nearly ideal weight transfer process, more than an order of magnitude better than other memristive technologies. Inference performance is validated through matrix-vector multiplication simulations on a 64 × 64 array, achieving a root-mean-square error improvement by a factor of 20 at 1 s and a factor of 3 at 10 years after programming, compared to state-of-the-art. Training accuracy closely matching the software equivalent is achieved across different datasets. The CMO/HfOx ReRAM technology lays the foundation for efficient analog systems accelerating both inference and training in deep neural networks.
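To illustrate the kind of matrix-vector multiplication evaluated in the abstract, the following is a minimal Python/NumPy sketch, not the authors' simulator: a 64 × 64 weight matrix is mapped onto 32 discrete conductance levels with Gaussian programming noise, and the analog product is compared with the ideal floating-point result. The conductance range, noise magnitude, and differential weight mapping are illustrative assumptions; only the array size, number of states, and ~10 nS noise figure come from the abstract.

```python
import numpy as np

# Illustrative sketch of analog matrix-vector multiplication (MVM) with
# quantized, noisy conductances. All device parameters below are assumptions
# chosen for scale, not values reported in the paper.

rng = np.random.default_rng(0)

N = 64                  # array size (64 x 64, as in the abstract)
G_MAX = 10e-6           # assumed maximum conductance: 10 uS
N_LEVELS = 32           # number of stable conductance states (abstract: >32)
SIGMA_PROG = 10e-9      # programming noise ~10 nS (figure from the abstract)

# Random software weights in [-1, 1] and a random input vector.
W = rng.uniform(-1.0, 1.0, size=(N, N))
x = rng.uniform(-1.0, 1.0, size=N)

# Differential mapping: positive and negative weights on separate devices.
G_pos = np.clip(W, 0, None) * G_MAX
G_neg = np.clip(-W, 0, None) * G_MAX

def program(G_target):
    """Quantize to N_LEVELS conductance levels and add programming noise."""
    step = G_MAX / (N_LEVELS - 1)
    G_quant = np.round(G_target / step) * step
    return G_quant + rng.normal(0.0, SIGMA_PROG, size=G_target.shape)

G_pos_dev = program(G_pos)
G_neg_dev = program(G_neg)

# Analog MVM: per-row currents sum the products of conductance and input
# voltage (Ohm's and Kirchhoff's laws); the differential readout is rescaled
# back to weight units and compared with the ideal software result.
y_analog = ((G_pos_dev - G_neg_dev) @ x) / G_MAX
y_ideal = W @ x

rmse = np.sqrt(np.mean((y_analog - y_ideal) ** 2))
print(f"RMSE of analog MVM vs. ideal: {rmse:.4e}")
```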