Computer science
Artificial intelligence
Scalability
Benchmark
Video tracking
Object
Transformer
Object detection
Referent
Expression
Computer vision
Natural language processing
Machine learning
Segmentation
Authors
Dongming Wu, Wencheng Han, Tiancai Wang, Xingping Dong, Xiangyu Zhang, Jianbing Shen
Identifier
DOI:10.1109/cvpr52729.2023.01406
Abstract
Existing referring understanding tasks tend to involve the detection of a single text-referred object. In this paper, we propose a new and general referring understanding task, termed referring multi-object tracking (RMOT). Its core idea is to employ a language expression as a semantic cue to guide the prediction of multi-object tracking. To the best of our knowledge, this is the first work to achieve an arbitrary number of referent object predictions in videos. To push forward RMOT, we construct a benchmark with scalable expressions based on KITTI, named Refer-KITTI. Specifically, it provides 18 videos with 818 expressions, and each expression in a video is annotated with an average of 10.7 objects. Further, we develop a transformer-based architecture, TransRMOT, to tackle the new task in an online manner, which achieves impressive detection performance and outperforms other counterparts. The Refer-KITTI dataset and the code are released at https://referringmot.github.io.
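The abstract describes a query-based, language-conditioned tracker at a high level. As a concrete illustration, below is a minimal PyTorch sketch of that idea, assuming a simple cross-attention fusion between flattened image features and expression token embeddings; the class names (CrossModalFusion, ToyReferringTracker), dimensions, and head counts are hypothetical choices for this sketch and are not taken from the released TransRMOT code.

```python
# Minimal sketch (not the authors' released code) of the core RMOT idea:
# fuse a sentence's token embeddings with per-frame visual features via
# cross-attention, then let object queries decode candidate boxes plus a
# per-query referent score, so any number of objects can match the text.
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Visual features attend to the referring-expression tokens."""
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, visual, text):
        # visual: (B, HW, C) flattened image features
        # text:   (B, T, C)  token embeddings of the expression
        fused, _ = self.attn(query=visual, key=text, value=text)
        return self.norm(visual + fused)  # residual keeps visual content

class ToyReferringTracker(nn.Module):
    """Query-based decoder: each query predicts one candidate box per frame
    plus a score saying whether it matches the referring expression."""
    def __init__(self, dim=256, num_queries=20):
        super().__init__()
        self.fusion = CrossModalFusion(dim)
        self.queries = nn.Embedding(num_queries, dim)
        layer = nn.TransformerDecoderLayer(dim, 8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.box_head = nn.Linear(dim, 4)  # (cx, cy, w, h), normalized
        self.ref_head = nn.Linear(dim, 1)  # referent vs. non-referent

    def forward(self, visual, text):
        memory = self.fusion(visual, text)  # language-conditioned features
        q = self.queries.weight.unsqueeze(0).expand(visual.size(0), -1, -1)
        hs = self.decoder(q, memory)
        return self.box_head(hs).sigmoid(), self.ref_head(hs)

# Toy usage: batch of 2 frames of fake features and a fake 12-token expression.
model = ToyReferringTracker()
visual = torch.randn(2, 49, 256)  # e.g. a 7x7 feature map, flattened
text = torch.randn(2, 12, 256)
boxes, ref_logits = model(visual, text)
print(boxes.shape, ref_logits.shape)  # (2, 20, 4) and (2, 20, 1)
```

In the actual system the toy pieces would be replaced: visual features would come from a backbone and encoder, text embeddings from a pretrained language model, and online tracking from propagating queries across frames. The sketch only shows how a per-query referent score lets a transformer tracker output an arbitrary number of text-referred objects, which is the property the abstract highlights.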