Keywords
Computer science, Sentence, Strengths and weaknesses, Construct (Python library), Taxonomy (biology), Pattern, Natural language, Information retrieval, Focus (optics), Natural language processing, Artificial intelligence, Bridging (networking), Natural language generation, Data science, Psychology, Social science, Plant, Physics, Sociology, Optics, Biology, Programming language, Social psychology, Computer network
Authors
Hao Zhang, Aixin Sun, Wei Jing, Joey Tianyi Zhou
Identifier
DOI: 10.1109/tpami.2023.3258628
Abstract
Temporal sentence grounding in videos (TSGV), a.k.a., natural language video localization (NLVL) or video moment retrieval (VMR), aims to retrieve a temporal moment that semantically corresponds to a language query from an untrimmed video. Connecting computer vision and natural language, TSGV has drawn significant attention from researchers in both communities. This survey attempts to provide a summary of fundamental concepts in TSGV and current research status, as well as future research directions. As the background, we present a common structure of functional components in TSGV, in a tutorial style: from feature extraction from raw video and language query, to answer prediction of the target moment. Then we review the techniques for multimodal understanding and interaction, which is the key focus of TSGV for effective alignment between the two modalities. We construct a taxonomy of TSGV techniques and elaborate the methods in different categories with their strengths and weaknesses. Lastly, we discuss issues with the current TSGV research and share our insights about promising research directions.
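The abstract outlines the common TSGV pipeline: extract features from the untrimmed video and the language query, model multimodal interaction between the two, then predict the target moment. Below is a minimal sketch of such a pipeline, not the authors' method or any specific model from the survey; the module names, feature dimensions, and the span-prediction head are illustrative assumptions.

```python
# Minimal TSGV pipeline sketch (assumed architecture, for illustration only):
# project pre-extracted video-clip and query-word features into a shared space,
# fuse the two modalities with query-guided attention, and predict the
# start/end clip indices of the target moment.
import torch
import torch.nn as nn


class SimpleTSGV(nn.Module):
    def __init__(self, video_dim=1024, text_dim=300, hidden=256):
        super().__init__()
        # Feature projection for each modality.
        self.video_proj = nn.Linear(video_dim, hidden)
        self.query_proj = nn.Linear(text_dim, hidden)
        # Multimodal interaction: video clips attend to the query tokens.
        self.cross_attn = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        # Answer prediction: per-clip scores for being the moment's start or end.
        self.span_head = nn.Linear(hidden, 2)

    def forward(self, video_feats, query_feats):
        # video_feats: (B, T, video_dim); query_feats: (B, L, text_dim)
        v = self.video_proj(video_feats)
        q = self.query_proj(query_feats)
        fused, _ = self.cross_attn(v, q, q)   # query-conditioned video features
        logits = self.span_head(fused)        # (B, T, 2)
        start_logits, end_logits = logits.unbind(-1)
        return start_logits, end_logits


if __name__ == "__main__":
    model = SimpleTSGV()
    video = torch.randn(2, 128, 1024)   # 128 clips per video (assumed)
    query = torch.randn(2, 12, 300)     # 12 query tokens (assumed)
    start, end = model(video, query)
    # The predicted moment is the highest-scoring start/end clip pair.
    print(start.argmax(-1), end.argmax(-1))
```

Real systems in the survey differ mainly in how the cross-modal interaction and answer prediction are realized (e.g., proposal-based ranking versus proposal-free span prediction); the skeleton above only mirrors the shared component structure described in the abstract.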