Artificial intelligence
Robotics
Computer science
Asynchronous communication
Event (particle physics)
State (computer science)
Machine vision
Motion (physics)
Representation (politics)
Machine learning
Computer vision
Robot
Algorithm
Politics
Physics
Quantum mechanics
Computer networks
Law
Political science
Authors
Jacques Kaiser, Rainer Stal, Anand Subramoney, Arne Roennau, Rüdiger Dillmann
Identifier
DOI:10.1088/1748-3190/aa7663
Abstract
Short-term visual prediction is important both in biology and robotics. It allows us to anticipate upcoming states of the environment and therefore plan more efficiently. In theoretical neuroscience, liquid state machines have been proposed as a biologically inspired method to perform asynchronous prediction without a model. However, they have so far only been demonstrated in simulation or on small-scale, pre-processed camera images. In this paper, we use a liquid state machine to predict over the whole [Formula: see text] event stream provided by a real dynamic vision sensor (DVS, or silicon retina). Thanks to the event-based nature of the DVS, the liquid is constantly fed with data when an object is in motion, fully embracing the asynchronicity of spiking neural networks. We propose a smooth, continuous representation of the event stream for the short-term visual prediction task. Moreover, compared to previous works (Maass et al 2002 Neural Comput. 14 2531-60 and Burgsteiner H et al 2007 Appl. Intell. 26 99-109), we scale the input dimensionality that the liquid operates on by two orders of magnitude. We also expose the current limits of our method by running experiments in a challenging environment where multiple objects are in motion. This paper is a step towards integrating biologically inspired algorithms derived in theoretical neuroscience into real-world robotic setups. We believe that liquid state machines could complement current prediction algorithms used in robotics, especially when dealing with asynchronous sensors.
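The abstract combines two ingredients: a smooth, continuously decaying representation of the asynchronous DVS event stream, and a recurrent reservoir whose state is read out to predict the near future. The sketch below is a minimal illustration of how those pieces fit together, not the authors' implementation: the event tuple format, the decay constant tau, the toy 16x16 resolution, the reservoir sizes, and the rate-based (echo-state style) reservoir standing in for the paper's spiking liquid are all assumptions made for the example.

```python
# Minimal sketch (assumptions noted above): DVS events -> decaying traces
# -> rate-based reservoir -> linear readout predicting one step ahead.
import numpy as np

rng = np.random.default_rng(0)

H = W = 16          # toy resolution; a real DVS would be e.g. 128x128
TAU = 0.05          # assumed decay time constant (seconds)
N_RES = 300         # assumed reservoir size

def decay_trace(events, t_grid, tau=TAU, shape=(H, W)):
    """Convert events (t, x, y, polarity) into exponentially decaying
    per-pixel traces, sampled at the times in t_grid."""
    frames = np.zeros((len(t_grid), *shape))
    trace = np.zeros(shape)
    t_prev, ev_idx = 0.0, 0
    events = sorted(events)
    for i, t in enumerate(t_grid):
        # apply all events up to time t, decaying the trace between them
        while ev_idx < len(events) and events[ev_idx][0] <= t:
            te, x, y, pol = events[ev_idx]
            trace *= np.exp(-(te - t_prev) / tau)
            trace[y, x] += 1.0 if pol else -1.0
            t_prev, ev_idx = te, ev_idx + 1
        trace *= np.exp(-(t - t_prev) / tau)
        t_prev = t
        frames[i] = trace
    return frames

# Synthetic event stream: a single ON-polarity dot moving left to right.
events = [(k * 0.005, int(k / 200 * (W - 1)), H // 2, 1) for k in range(200)]

t_grid = np.arange(0.0, 1.0, 0.01)
frames = decay_trace(events, t_grid).reshape(len(t_grid), -1)

# Rate-based reservoir (echo-state style stand-in for the spiking liquid):
# fixed random input and recurrent weights, spectral radius scaled below 1.
W_in = rng.normal(0.0, 0.5, (N_RES, H * W))
W_rec = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_rec *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_rec)))

states = np.zeros((len(t_grid), N_RES))
x_state = np.zeros(N_RES)
for i, u in enumerate(frames):
    x_state = np.tanh(W_in @ u + W_rec @ x_state)
    states[i] = x_state

# Linear readout trained by least squares to predict the representation
# one step ahead from the reservoir state.
X, Y = states[:-1], frames[1:]
W_out = np.linalg.lstsq(X, Y, rcond=None)[0]
pred = X @ W_out
print("one-step prediction MSE:", np.mean((pred - Y) ** 2))
```

In the paper the liquid is a spiking network driven directly by the asynchronous events; the least-squares readout over one-step-ahead traces here is only a stand-in to show how the continuous event representation, reservoir, and trained readout connect.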