Visual servoing
Robustness (evolution)
Artificial intelligence
Computer science
Convolutional neural network
Computer vision
End-to-end principle
A priori and a posteriori
Feature (linguistics)
Image (mathematics)
Biochemistry
Chemistry
Philosophy
Linguistics
Epistemology
Gene
Authors
Aseem Saxena,Harit Pandya,Gourav Kumar,Ayush Gaud,K. Madhava Krishna
Source
Journal: Cornell University - arXiv
Date: 2017-01-01
Identifiers
DOI:10.48550/arxiv.1706.03220
Abstract
Present image-based visual servoing approaches rely on extracting hand-crafted visual features from an image. Choosing the right set of features is important, as it directly affects the performance of any approach. Motivated by recent breakthroughs in the performance of data-driven methods on recognition and localization tasks, we aim to learn visual feature representations suitable for servoing tasks in unstructured and unknown environments. In this paper, we present an end-to-end learning-based approach for visual servoing in diverse scenes where knowledge of camera parameters and scene geometry is not available a priori. This is achieved by training a convolutional neural network over color images with synchronised camera poses. Through experiments performed in simulation and on a quadrotor, we demonstrate the efficacy and robustness of our approach for a wide range of camera poses in both indoor as well as outdoor environments.
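The pipeline the abstract describes has two parts: a learned feature extractor that replaces hand-crafted features, and a servoing loop that drives the pose error toward zero. The paper's actual network is not given in this excerpt, so the following is only a minimal numpy sketch under stated assumptions: `conv2d`, `extract_features`, `predict_relative_pose`, and `servo_step` are hypothetical names, the "CNN" is a single toy convolution layer with global average pooling, and the pose head is a plain linear map rather than trained fully-connected layers.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation: toy stand-in for one CNN layer."""
    H, W = img.shape
    kh, kw = kernel.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def extract_features(img, kernels):
    """Convolution + ReLU + global average pooling per kernel.

    Returns one scalar feature per kernel (a drastically reduced
    analogue of a learned feature representation).
    """
    feats = []
    for k in kernels:
        fmap = np.maximum(conv2d(img, k), 0.0)  # ReLU nonlinearity
        feats.append(fmap.mean())               # global average pool
    return np.array(feats)

def predict_relative_pose(feat_current, feat_goal, W):
    """Hypothetical linear head: map the feature difference between
    the current and goal images to a 6-DoF relative pose estimate."""
    return W @ (feat_goal - feat_current)

def servo_step(pose, pose_error, gain=0.5):
    """Classic proportional servoing update: move a fraction of the
    estimated error each iteration, so the error decays geometrically."""
    return pose + gain * pose_error
```

Usage: with a perfect pose estimate, iterating `pose = servo_step(pose, goal - pose)` shrinks the error by the gain factor each step, which is the behaviour the learned predictor is meant to approximate from images alone.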