Topics
Computer science, IBM, Inference, Python (programming language), Hardware acceleration, Computer hardware, Artificial neural network, Cloud computing, IBM PC compatible, Latency (audio), Computer architecture, Computer engineering, Embedded system, Artificial intelligence, Operating system, Software, Field-programmable gate array, Nanotechnology, Materials science, Telecommunications
Authors
Manuel Le Gallo,Corey Lammie,Julian Büchel,Fabio Carta,Omobayode Fagbohungbe,Charles Mackin,Hsinyu Tsai,Vijay Narayanan,Abu Sebastian,Kaoutar El Maghraoui,Malte J. Rasch
Abstract
Analog In-Memory Computing (AIMC) is a promising approach to reduce the latency and energy consumption of Deep Neural Network (DNN) inference and training. However, the noisy and non-linear device characteristics and the non-ideal peripheral circuitry in AIMC chips require adapting DNNs to be deployed on such hardware to achieve equivalent accuracy to digital computing. In this Tutorial, we provide a deep dive into how such adaptations can be achieved and evaluated using the recently released IBM Analog Hardware Acceleration Kit (AIHWKit), freely available at https://github.com/IBM/aihwkit. AIHWKit is a Python library that simulates inference and training of DNNs using AIMC. We present an in-depth description of the AIHWKit design, functionality, and best practices to properly perform inference and training. We also present an overview of the Analog AI Cloud Composer, a platform that provides the benefits of using the AIHWKit simulation in a fully managed cloud setting along with physical AIMC hardware access, freely available at https://aihw-composer.draco.res.ibm.com. Finally, we show examples of how users can expand and customize AIHWKit for their own needs. This Tutorial is accompanied by comprehensive Jupyter Notebook code examples that can be run using AIHWKit, which can be downloaded from https://github.com/IBM/aihwkit/tree/master/notebooks/tutorial.
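To make the described workflow concrete, the minimal sketch below (not taken from the Tutorial itself) shows how a single analog fully connected layer can be defined and trained with AIHWKit. The classes AnalogLinear, AnalogSGD, SingleRPUConfig, and ConstantStepDevice are part of the library's public API, but exact import paths, defaults, and device options should be checked against the AIHWKit version in use and the accompanying notebooks.

```python
# Minimal AIHWKit sketch: train one analog fully connected layer on random data.
import torch
from aihwkit.nn import AnalogLinear
from aihwkit.optim import AnalogSGD
from aihwkit.simulator.configs import SingleRPUConfig
from aihwkit.simulator.configs.devices import ConstantStepDevice

# Analog layer whose weights are mapped onto a simulated resistive crossbar;
# the RPU config selects the (non-ideal) device model used for updates.
model = AnalogLinear(4, 2, bias=True,
                     rpu_config=SingleRPUConfig(device=ConstantStepDevice()))

# AnalogSGD routes gradient updates through the simulated analog device.
optimizer = AnalogSGD(model.parameters(), lr=0.1)
optimizer.regroup_param_groups(model)

x = torch.rand(8, 4)   # toy inputs
y = torch.rand(8, 2)   # toy targets
criterion = torch.nn.MSELoss()

for _ in range(20):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

Swapping SingleRPUConfig for an inference-oriented configuration (or converting an existing PyTorch model with the library's conversion utilities) follows the same pattern; the Tutorial's Jupyter notebooks linked above cover those cases in detail.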