Computer science
Automatic summarization
Artificial intelligence
Speech recognition
Transcription (linguistics)
Information retrieval
Natural language processing
Multimedia
Linguistics
Philosophy
Authors
Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, Chengqing Zong
Identifiers
DOI: 10.1109/TKDE.2018.2848260
Abstract
Automatic text summarization is a fundamental natural language processing (NLP) application that aims to condense a source text into a shorter version. The rapid increase in multimedia data transmission over the Internet necessitates multi-modal summarization (MMS) from asynchronous collections of text, image, audio, and video. In this work, we propose an extractive MMS method that unites the techniques of NLP, speech processing, and computer vision to explore the rich information contained in multi-modal data and to improve the quality of multimedia news summarization. The key idea is to bridge the semantic gaps between multi-modal content. Audio and visual content are the main modalities in video. For audio information, we design an approach to selectively use its transcription and to infer the salience of the transcription from the audio signals. For visual information, we learn the joint representations of text and images using a neural network. Then, we capture the coverage of the generated summary for important visual information through text-image matching or multi-modal topic modeling. Finally, all the multi-modal aspects are considered to generate a textual summary by maximizing the salience, non-redundancy, readability, and coverage through the budgeted optimization of submodular functions. We further introduce a publicly available MMS corpus in English and Chinese. The experimental results obtained on our dataset demonstrate that our methods based on the image-matching and image-topic frameworks outperform other competitive baseline methods.
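The text-image matching step described in the abstract can be illustrated with a minimal sketch: sentences and images are projected into a joint embedding space, and each sentence is scored by its best-matching image. In the paper these embeddings are learned by a neural network; here random vectors, the dimension `d`, and the counts of sentences and images are all illustrative assumptions.

```python
import numpy as np

def cosine_matrix(A, B):
    """Pairwise cosine similarity between rows of A and rows of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

# Hypothetical joint embeddings: in the paper these come from a trained
# text-image alignment network; random vectors stand in for them here.
rng = np.random.default_rng(0)
d = 128
sent_emb = rng.normal(size=(40, d))   # 40 candidate sentences
img_emb = rng.normal(size=(8, d))     # 8 key frames / news images

sim = cosine_matrix(sent_emb, img_emb)
# Visual salience of a sentence: similarity to its best-matching image,
# i.e. how well it covers some important piece of visual content.
visual_salience = sim.max(axis=1)
```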
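The final selection step, budgeted optimization of submodular functions, is commonly implemented with a cost-scaled greedy algorithm in the style of Lin and Bilmes (2010), which carries a constant-factor approximation guarantee for monotone submodular objectives. The sketch below is a generic instance under assumed inputs, not the paper's exact objective: `sim` stands in for a sentence-similarity matrix, `salience` for a fused textual/audio/visual score, and `costs` for sentence lengths in words.

```python
import numpy as np

def budgeted_greedy(costs, budget, objective, r=1.0):
    """Cost-scaled greedy for budgeted submodular maximization.
    `objective` maps a list of selected sentence indices to a score;
    `costs[i]` is the word count of sentence i."""
    selected, remaining, spent = [], set(range(len(costs))), 0.0
    while remaining:
        base = objective(selected)
        best, best_ratio = None, -np.inf
        for i in remaining:
            if spent + costs[i] > budget:
                continue
            # Marginal gain per (scaled) unit of budget.
            ratio = (objective(selected + [i]) - base) / (costs[i] ** r)
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:          # nothing else fits within the budget
            break
        selected.append(best)
        spent += costs[best]
        remaining.remove(best)
    return selected

def make_objective(sim, salience, alpha=0.25, lam=4.0):
    """Monotone submodular score: saturating coverage of all sentences
    (the cap yields diminishing returns, which penalizes redundancy)
    plus a weighted salience reward for the selected set."""
    total = sim.sum(axis=1)
    def f(S):
        if not S:
            return 0.0
        cover = np.minimum(sim[:, S].sum(axis=1), alpha * total).sum()
        return cover + lam * salience[S].sum()
    return f

# Toy inputs standing in for real features.
rng = np.random.default_rng(0)
n = 40
emb = rng.normal(size=(n, 64))
emb /= np.linalg.norm(emb, axis=1, keepdims=True)
sim = np.maximum(emb @ emb.T, 0.0)          # sentence-sentence similarity
salience = rng.random(n)                    # fused multi-modal salience
costs = rng.integers(5, 30, size=n).astype(float)

summary = budgeted_greedy(costs, budget=100.0,
                          objective=make_objective(sim, salience))
print("selected sentences:", summary, "cost:", costs[summary].sum())
```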