Keywords: computer science; artificial intelligence; context (archaeology); deep learning; artificial neural networks; recurrent neural networks; diversity (cybernetics); machine learning; imitation; control (management); fidelity; psychology; paleontology; telecommunications; biology; social psychology
Authors
Charles Vorbach, Ramin Hasani, Alexander Amini, Mathias Lechner, Daniela Rus
Source
Journal: Cornell University - arXiv
Date: 2021-06-15
Citations: 25
Identifier
DOI: 10.48550/arxiv.2106.08314
Abstract
Imitation learning enables high-fidelity, vision-based learning of policies within rich, photorealistic environments. However, such techniques often rely on traditional discrete-time neural models and face difficulties in generalizing to domain shifts by failing to account for the causal relationships between the agent and the environment. In this paper, we propose a theoretical and experimental framework for learning causal representations using continuous-time neural networks, specifically over their discrete-time counterparts. We evaluate our method in the context of visual-control learning of drones over a series of complex tasks, ranging from short- and long-term navigation, to chasing static and dynamic objects through photorealistic environments. Our results demonstrate that causal continuous-time deep models can perform robust navigation tasks, where advanced recurrent models fail. These models learn complex causal control representations directly from raw visual inputs and scale to solve a variety of tasks using imitation learning.
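To make the abstract's contrast concrete: a discrete-time recurrent model applies one fixed state update per observation, while a continuous-time model lets the hidden state evolve as the solution of an ODE between observations. The sketch below is illustrative only and is not the authors' architecture; it shows a minimal continuous-time RNN cell, dh/dt = -h/tau + tanh(Wh + Ux + b), integrated with explicit Euler substeps (all names and hyperparameters here are assumptions for the example).

```python
import numpy as np

class ContinuousTimeRNNCell:
    """Toy continuous-time RNN cell (illustrative, not the paper's model).

    The hidden state follows dh/dt = -h/tau + tanh(W h + U x + b),
    integrated with explicit Euler steps while the input x is held fixed.
    """

    def __init__(self, input_dim, hidden_dim, tau=1.0, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(0.0, 0.1, (hidden_dim, hidden_dim))
        self.U = rng.normal(0.0, 0.1, (hidden_dim, input_dim))
        self.b = np.zeros(hidden_dim)
        self.tau = tau  # time constant of the leak term -h/tau

    def step(self, h, x, dt=0.1, n_substeps=5):
        """Advance the hidden state h forward by time dt."""
        sub_dt = dt / n_substeps
        for _ in range(n_substeps):
            dh = -h / self.tau + np.tanh(self.W @ h + self.U @ x + self.b)
            h = h + sub_dt * dh  # explicit Euler update
        return h

# Usage: unlike a discrete-time RNN, dt can vary per observation,
# e.g. to match irregular frame timestamps from a drone camera.
cell = ContinuousTimeRNNCell(input_dim=4, hidden_dim=8)
h = np.zeros(8)
x = np.ones(4)
h = cell.step(h, x, dt=0.5)
print(h.shape)
```

In practice such models are trained with an ODE solver in the loop (or a closed-form approximation), but the Euler loop above captures the core idea the abstract refers to: state dynamics defined in continuous time rather than as a per-frame map.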