Reinforcement learning
Artificial intelligence
Computer science
Visual servoing
Robustness
Active vision
Computer vision
Eye movement
Video tracking
Benchmark
Baseline
Object detection
Object
Robot
Pattern recognition
Authors
Dong Zhou,Guanghui Sun,Wenxiao Lei,Ligang Wu
Identifier
DOI:10.1109/taes.2022.3211246
Abstract
Actively tracking an arbitrary space noncooperative object using a visual sensor remains a challenging problem. In this article, we provide an open-source benchmark for space noncooperative object visual tracking, including a simulated environment, an evaluation toolkit, and a position-based visual servoing (PBVS) baseline algorithm, which can facilitate research on this topic, especially for methods based on deep reinforcement learning. We also present an end-to-end active visual tracker based on deep Q-learning, named DRLAVT, which learns an approximately optimal policy taking only color or RGBD images as input. To the best of the authors' knowledge, it is the first intelligent agent used for active visual tracking in the aerospace domain. The experimental results show that our DRLAVT achieves excellent robustness and real-time performance compared with the PBVS baseline, benefiting from the design of a complex neural network and an efficient reward function. In addition, the multiple-target training adopted in this article effectively guarantees the transferability of DRLAVT by forcing the agent to learn an optimal control policy with respect to the motion patterns of the target.
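The abstract credits DRLAVT's performance to an efficient reward function and a deep Q-learning policy over discrete actions. The paper's exact formulas are not given here, so the sketch below is a minimal, hypothetical illustration of the two ingredients: a shaped tracking reward that peaks when the chaser keeps the target at a desired distance directly ahead, and standard epsilon-greedy action selection over the Q-values of discrete control commands. The function names, the `desired_dist` and `max_err` parameters, and the specific error terms are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def tracking_reward(rel_pos, desired_dist=5.0, max_err=10.0):
    """Hypothetical shaped reward for active tracking.

    rel_pos: target position in the chaser frame (x forward).
    The reward is maximal (1.0) when the target sits exactly at
    desired_dist straight ahead, and decays linearly with the
    distance error plus the lateral (y, z) offset, clipped at 0.
    """
    dist_err = abs(np.linalg.norm(rel_pos) - desired_dist)
    lateral_err = np.linalg.norm(rel_pos[1:])  # off-axis y, z offsets
    err = dist_err + lateral_err
    return max(0.0, 1.0 - err / max_err)

def epsilon_greedy(q_values, epsilon, rng):
    """Standard epsilon-greedy selection over discrete actions,
    as used in deep Q-learning exploration."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore
    return int(np.argmax(q_values))              # exploit
```

In a full agent, `q_values` would come from a convolutional network evaluated on the current color or RGBD frame, and `tracking_reward` would be computed by the simulator from the ground-truth relative pose during training.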