Computer science
Question answering
Merging (version control)
Artificial intelligence
Leverage (statistics)
Inference
Judgment
Frame (networking)
Feature (linguistics)
Natural language processing
Information retrieval
Linguistics
Telecommunications
Philosophy
Authors
Liang Xiao, Di Wang, Quan Wang, Bo Wan, Lingling An, Lihuo He
Identifier
DOI: 10.1145/3581783.3613909
Abstract
Video Question Answering (VideoQA) aims to comprehend intricate relationships, actions, and events within video content, as well as the inherent links between objects and scenes, to answer text-based questions accurately. Transferring knowledge from the cross-modal pre-trained model CLIP is a natural approach, but its dual-tower structure hinders fine-grained modality interaction, posing challenges for direct application to VideoQA tasks. To address this issue, we introduce a Language-Guided Visual Aggregation (LGVA) network. It employs CLIP as an effective feature extractor to obtain language-aligned visual features with different granularities and avoids resource-intensive video pre-training. The LGVA network progressively aggregates visual information in a bottom-up manner, focusing on both regional and temporal levels, and ultimately facilitating accurate answer prediction. More specifically, it employs local cross-attention to combine pre-extracted question tokens and region embeddings, pinpointing the object of interest in the question. Then, graph attention is utilized to aggregate regions at the frame level and integrate additional captions for enhanced detail. Following this, global cross-attention is used to merge sentence and frame-level embeddings, identifying the video segment relevant to the question. Ultimately, contrastive learning is applied to optimize the similarities between aggregated visual and answer embeddings, unifying upstream and downstream tasks. Our method conserves resources by avoiding large-scale video pre-training and simultaneously demonstrates commendable performance on the NExT-QA, MSVD-QA, MSRVTT-QA, TGIF-QA, and ActivityNet-QA datasets, even outperforming some end-to-end trained models. Our code is available at https://github.com/ecoxial2007/LGVA_VideoQA.
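The abstract describes a multi-stage, bottom-up aggregation pipeline: local cross-attention between question tokens and region embeddings, graph attention over regions within each frame, global cross-attention between the sentence embedding and frame embeddings, and a contrastive objective against answer embeddings. The sketch below is a minimal PyTorch illustration of that flow under stated assumptions, not the authors' released implementation (see https://github.com/ecoxial2007/LGVA_VideoQA): the module names, tensor shapes, temperature value, and the use of nn.MultiheadAttention (with plain self-attention standing in for the graph-attention block) are all assumptions.

```python
# Hypothetical sketch of an LGVA-style bottom-up aggregation pipeline.
# Shapes, names, and attention substitutions are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LGVASketch(nn.Module):
    def __init__(self, dim: int = 512, heads: int = 8):
        super().__init__()
        # Local cross-attention: region embeddings attend to question tokens.
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Graph attention over regions within a frame, approximated here by
        # self-attention followed by mean pooling (assumption).
        self.region_graph = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Global cross-attention: sentence embedding attends to frame embeddings.
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, q_tokens, q_sentence, regions):
        # q_tokens:   (B, Lq, D)   CLIP text token embeddings of the question
        # q_sentence: (B, 1, D)    CLIP sentence-level question embedding
        # regions:    (B, T, R, D) CLIP region embeddings, T frames x R regions
        B, T, R, D = regions.shape
        flat = regions.reshape(B, T * R, D)

        # 1) Local cross-attention pinpoints question-relevant regions.
        attended, _ = self.local_attn(flat, q_tokens, q_tokens)
        attended = attended.reshape(B * T, R, D)

        # 2) Aggregate regions to one embedding per frame.
        frame_ctx, _ = self.region_graph(attended, attended, attended)
        frames = frame_ctx.mean(dim=1).reshape(B, T, D)

        # 3) Global cross-attention selects the question-relevant segment.
        video, _ = self.global_attn(q_sentence, frames, frames)
        return video.squeeze(1)  # (B, D) aggregated visual embedding


def contrastive_answer_loss(video_emb, answer_embs, target, temperature=0.07):
    # video_emb:   (B, D)    aggregated visual embeddings
    # answer_embs: (B, K, D) CLIP embeddings of K candidate answers
    # target:      (B,)      index of the correct answer
    sims = torch.einsum("bd,bkd->bk",
                        F.normalize(video_emb, dim=-1),
                        F.normalize(answer_embs, dim=-1))
    # temperature=0.07 is an assumed value, not taken from the paper.
    return F.cross_entropy(sims / temperature, target)
```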