Interpretability
Robustness (evolution)
Generalizability theory
Scalability
Artificial intelligence
Computer science
Deep learning
Adaptability
Machine learning
Artificial neural network
Biochemistry
Chemistry
Database
Gene
Ecology
Statistics
Mathematics
Biology
Authors
Mathias Lechner,Ramin Hasani,Alexander Amini,Thomas A. Henzinger,Daniela Rus,Radu Grosu
Identifier
DOI:10.1038/s42256-020-00237-3
Abstract
A central goal of artificial intelligence in high-stakes decision-making applications is to design a single algorithm that simultaneously expresses generalizability by learning coherent representations of its world and interpretable explanations of its dynamics. Here, we combine brain-inspired neural computation principles and scalable deep learning architectures to design compact neural controllers for task-specific compartments of a full-stack autonomous vehicle control system. We discover that a single algorithm with 19 control neurons, connecting 32 encapsulated input features to outputs by 253 synapses, learns to map high-dimensional inputs into steering commands. This system shows superior generalizability, interpretability and robustness compared with orders-of-magnitude larger black-box learning systems. The obtained neural agents enable high-fidelity autonomy for task-specific parts of a complex autonomous system.

Inspired by the brain of the roundworm Caenorhabditis elegans, the authors design a highly compact neural network controller driven directly by raw input pixels. Compared with larger networks, this compact controller demonstrates improved generalization, robustness and interpretability on a lane-keeping task.
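To make the scale of the controller concrete, the sketch below builds a toy recurrent network with the dimensions quoted in the abstract: 19 neurons, 32 input features, and exactly 253 randomly placed synapses. This is only an illustration of how sparse such a network is, not the authors' neural circuit policy (their model uses structured, C. elegans-inspired wiring and liquid time-constant dynamics); the random wiring, leaky-tanh update, and time constants here are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUTS, N_NEURONS, N_SYNAPSES = 32, 19, 253  # figures from the abstract

# Choose 253 distinct synapses among all possible input->neuron and
# neuron->neuron connections, then mask dense random weights with them.
n_edges = N_INPUTS * N_NEURONS + N_NEURONS * N_NEURONS
mask = np.zeros(n_edges)
mask[rng.choice(n_edges, size=N_SYNAPSES, replace=False)] = 1.0
W_in = rng.standard_normal((N_NEURONS, N_INPUTS)) \
    * mask[: N_INPUTS * N_NEURONS].reshape(N_NEURONS, N_INPUTS)
W_rec = rng.standard_normal((N_NEURONS, N_NEURONS)) \
    * mask[N_INPUTS * N_NEURONS:].reshape(N_NEURONS, N_NEURONS)
w_out = rng.standard_normal(N_NEURONS) / N_NEURONS

def step(h, x, dt=0.1, tau=1.0):
    """One Euler step of a leaky continuous-time recurrent cell."""
    dh = (-h + np.tanh(W_in @ x + W_rec @ h)) / tau
    return h + dt * dh

# Roll the dynamics forward on a dummy 32-dimensional feature vector.
h = np.zeros(N_NEURONS)
x = rng.standard_normal(N_INPUTS)
for _ in range(10):
    h = step(h, x)
steering = np.tanh(w_out @ h)  # single bounded steering command

total_synapses = int((W_in != 0).sum() + (W_rec != 0).sum())
```

Counting the nonzero entries of `W_in` and `W_rec` recovers the 253 synapses, which highlights how few parameters the reported controller carries compared with conventional deep networks with millions of weights.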