Convolutional neural network
Spiking neural network
Reservoir computing
Artificial neural network
Edge computing
Energy consumption
Energy (signal processing)
Authors
Steven K. Esser, Paul Merolla, John V. Arthur, Andrew Cassidy, Raja Appuswamy, Alexander Andreopoulos, David J. Van Den Berg, Jeffrey L. McKinstry, Timothy Melano, Richard Davis, Carmelo di Nolfo, Pallab Datta, Arnon Amir, Brian Taba, Myron Flickner, Dharmendra S. Modha
Identifiers
DOI:10.1073/pnas.1604850113
Abstract
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that i) approach state-of-the-art classification accuracy across 8 standard datasets, encompassing vision and speech, ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1200 and 2600 frames per second and using between 25 and 275 mW (effectively > 6000 frames / sec / W) and iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. For the first time, the algorithmic power of deep learning can be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
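The abstract highlights training networks with low-precision synapses using ordinary backpropagation. A common way to reconcile gradient training with quantized weights is a straight-through estimator: the forward pass uses a trinary copy of the weights, while gradients update full-precision "shadow" weights. The sketch below illustrates that idea on a toy problem; the dataset, single-layer logistic model, threshold, and learning rate are illustrative assumptions, not the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable binary classification data (assumption, for illustration)
X = rng.normal(size=(200, 8))
w_true = rng.normal(size=8)
y = (X @ w_true > 0).astype(float)

# Full-precision shadow weights; the forward pass uses a trinary copy
w = rng.normal(scale=0.1, size=8)

def trinarize(w, thresh=0.05):
    """Map weights to {-1, 0, +1}, mimicking low-precision synapses."""
    return np.sign(w) * (np.abs(w) > thresh)

lr = 0.1
for _ in range(300):
    wq = trinarize(w)                 # low-precision weights in the forward pass
    p = 1.0 / (1.0 + np.exp(-(X @ wq)))   # sigmoid output
    grad = X.T @ (p - y) / len(y)     # gradient of logistic loss w.r.t. weights
    w -= lr * grad                    # straight-through: update the shadow weights

acc = np.mean(((X @ trinarize(w)) > 0) == (y > 0.5))
```

The hardware-facing forward pass sees only {-1, 0, +1} synapses, yet training proceeds with standard gradient descent on the continuous copy, which is the general flavor of constrained training the abstract alludes to.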