Authors
Zhe Chen, Hongcheng Liu, Yu Wang
Source
Journal: IEEE/ACM Transactions on Audio, Speech, and Language Processing
[Institute of Electrical and Electronics Engineers]
Date: 2024-01-01
Volume/Pages: 32: 753-764
Identifier
DOI: 10.1109/taslp.2023.3284511
Abstract
In recent years, Audio Visual Scene-Aware Dialog (AVSD) has been an active research task in the multimodal dialogue community and has also been a core part of the Dialog System Technology Challenge (DSTC). This task is an extension of conventional visual question answering, where video-relevant answers must be generated taking into account multimodal contextual information from previous dialogue rounds. Despite recent advances in the AVSD task, there are still two major challenges in developing such a system: how to model the multimodal contextual information of multiple rounds of dialogue and how to integrate audio-visual information into the generation of textual responses. To tackle these two challenges, in this paper we propose a novel model, named DialogMCF, which constructs a multimodal context flow model to generate responses that are relevant to video scenes. The proposed context flow modeling can track the dynamics of topic information across multiple rounds of dialogue history. To achieve an effective fusion of multimodal information, we propose an audio-visual memory network with cross-modality aligned features to model long multimodal dialogue context, and thus enhance the flow modeling. Furthermore, we attempt to improve the performance of the proposed DialogMCF model with manual descriptions and explore the incorporation of temporal reasoning information. Extensive experiments on the DSTC AVSD datasets show that, compared to a range of baseline methods, the proposed method yields state-of-the-art dialogue generation performance on most metrics when integrating the video descriptions.
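The abstract's two core ideas — attending from the current question over an audio-visual memory, and carrying a context "flow" vector across dialogue rounds — can be illustrated with a minimal sketch. This is not the authors' DialogMCF implementation; the attention form, the memory slots, and the recurrent update rule below are all simplified assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(query, memory):
    """Attend from a text query over audio-visual memory slots
    and return a fused context vector (scaled dot-product attention)."""
    scores = query @ memory.T / np.sqrt(query.shape[-1])  # (1, n_slots)
    weights = softmax(scores, axis=-1)
    return weights @ memory                               # (1, d)

rng = np.random.default_rng(0)
d = 8
question = rng.standard_normal((1, d))    # current-turn text feature (assumed)
av_memory = rng.standard_normal((5, d))   # 5 audio-visual memory slots (assumed)
context = np.zeros((1, d))                # running dialogue-context "flow"

# Track topic dynamics across 3 dialogue rounds: each round fuses
# audio-visual memory with the question plus the carried-over context.
for turn in range(3):
    fused = cross_modal_attention(question + context, av_memory)
    context = 0.5 * context + 0.5 * fused  # toy recurrent context update

print(context.shape)
```

The exponential-moving-average update is a stand-in for whatever learned recurrence the paper uses; the point is only that each round's fused audio-visual evidence feeds the next round's query, which is the "context flow" intuition.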