A Neural Network Training Processor With 8-Bit Shared Exponent Bias Floating Point and Multiple-Way Fused Multiply-Add Trees

Keywords: artificial neural network, computer science, floating point, computer engineering, deep neural network, inference, exponent, point (geometry), artificial intelligence, computer hardware, algorithm, machine learning, parallel computing, mathematics, linguistics, philosophy, geometry
Authors
Jeongwoo Park, Sunwoo Lee, Dongsuk Jeon
Source
Journal: IEEE Journal of Solid-State Circuits [Institute of Electrical and Electronics Engineers]
Volume/Issue: 57 (3): 965-977  Citations: 2
Identifier
DOI: 10.1109/jssc.2021.3103603
Abstract

Recent advances in deep neural networks (DNNs) and machine learning algorithms have driven demand for services that require a large number of computations, and specialized hardware ranging from data-center accelerators to on-device computing systems has been introduced. Low-precision math such as 8-bit integers has been used for energy-efficient neural network inference, but training with low-precision numbers without performance degradation has remained a challenge. To overcome this challenge, this article presents an 8-bit floating-point neural network training processor for state-of-the-art non-sparse neural networks. As naïve 8-bit floating-point numbers are insufficient for training DNNs robustly, two additional methods are introduced to ensure high-performance DNN training. First, a novel numeric system, which we dub 8-bit floating point with shared exponent bias (FP8-SEB), is introduced. Second, multiple-way fused multiply-add (FMA) trees are used in FP8-SEB’s hardware implementation to ensure higher numerical precision and reduced energy. The FP8-SEB format combined with multiple-way FMA trees is evaluated under various scenarios to show trained-from-scratch performance that is close to, or even surpasses, that of networks trained with full precision (FP32). Our silicon-verified DNN training processor utilizes 24-way FMA trees implemented with FP8-SEB math and flexible 2-D routing schemes to show 2.48× higher energy efficiency than prior low-power neural network training processors and 78.1× lower energy than standard GPUs.
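To make the two ideas in the abstract concrete, below is a minimal Python sketch of (1) decoding an FP8 value whose effective range is shifted by a shared exponent bias and (2) a dot product accumulated the way a multiple-way FMA tree would, with all products summed in one wide accumulation. The 1-4-3 bit split, the bias value of 7, the per-tensor bias granularity, and the function names fp8_seb_decode and fma_tree_dot are assumptions for illustration; the abstract does not specify the paper's exact format parameters, and NaN/Inf encodings are omitted.

```python
import numpy as np

# Assumed FP8-SEB layout (illustrative, not the paper's verified spec):
# 1 sign bit, 4 exponent bits (bias 7), 3 mantissa bits, plus a shared
# exponent bias 'seb' applied per tensor to shift the representable range.

def fp8_seb_decode(byte: int, seb: int = 0) -> float:
    """Decode one 8-bit FP8-SEB value into a Python float."""
    sign = -1.0 if (byte >> 7) & 0x1 else 1.0
    exp = (byte >> 3) & 0xF          # 4-bit biased exponent field
    man = byte & 0x7                 # 3-bit mantissa field
    if exp == 0:                     # subnormal: no implicit leading 1
        return sign * (man / 8.0) * 2.0 ** (1 - 7 + seb)
    return sign * (1.0 + man / 8.0) * 2.0 ** (exp - 7 + seb)

def fma_tree_dot(a_bytes, b_bytes, seb_a: int = 0, seb_b: int = 0) -> float:
    """Emulate a multiple-way FMA tree: form all products first, then sum
    them in a single wide accumulation (exact float64 here), instead of
    rounding after every step of a serial multiply-accumulate chain."""
    prods = [fp8_seb_decode(a, seb_a) * fp8_seb_decode(b, seb_b)
             for a, b in zip(a_bytes, b_bytes)]
    return float(np.sum(np.array(prods, dtype=np.float64)))

# Example: 0b00111000 has exp=7, man=0, so it decodes to 1.0 when seb=0;
# a 24-element dot product mirrors the processor's 24-way tree width.
assert fp8_seb_decode(0b00111000) == 1.0
print(fma_tree_dot([0b00111000] * 24, [0b00111000] * 24))  # -> 24.0
```

The design intuition this sketch captures: a 24-way tree sums 24 products in one adder tree and rounds once at the output, whereas a serial chain of narrow FMAs rounds 24 times, so the tree both preserves precision and amortizes accumulation energy.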