Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video
Haoran Li (1,2); Junnan Zhu (1,2); Cong Ma (1,2); Jiajun Zhang (1,2); Chengqing Zong (1,2,3)
Source Publication: IEEE Transactions on Knowledge and Data Engineering
ISSN: 1041-4347
Date Issued: 2018
Volume: 1, Issue: 1, Pages: 1
Subtype: Full paper
Abstract

Automatic text summarization is a fundamental natural language processing (NLP) application that aims to condense a source text into a shorter version. The rapid increase in multimedia data transmitted over the Internet necessitates multi-modal summarization (MMS) from asynchronous collections of text, image, audio and video. In this work, we propose an extractive MMS method that unites techniques from NLP, speech processing and computer vision to exploit the rich information contained in multi-modal data and to improve the quality of multimedia news summarization. The key idea is to bridge the semantic gaps between multi-modal content. Audio and visual information are the two main modalities in video. For the audio, we design an approach that selectively uses its transcription and infers the salience of the transcription from the audio signals. For the visual content, we learn joint representations of text and images using a neural network. We then capture the coverage of important visual information in the generated summary through text-image matching or multi-modal topic modeling. Finally, all the multi-modal aspects are considered to generate a textual summary that maximizes salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce a publicly available MMS corpus in English and Chinese. The experimental results obtained on our dataset demonstrate that our method outperforms other competitive baseline methods.
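
For readers interested in the final selection step, below is a minimal illustrative sketch of budgeted greedy maximization of a monotone submodular objective, in the spirit of the cost-scaled greedy rule commonly used for submodular summarization (Lin and Bilmes). The objective f, the per-sentence costs, and the budget here are hypothetical placeholders, not the paper's actual scoring functions, which combine salience, non-redundancy, readability and visual coverage.

    def budgeted_greedy(items, costs, budget, f, r=1.0):
        # Greedily maximize a monotone submodular f(S) subject to
        # sum(costs[i] for i in S) <= budget, picking at each step the
        # item with the largest cost-scaled marginal gain
        # (f(S + i) - f(S)) / costs[i] ** r.
        selected, spent = [], 0.0
        remaining = set(range(len(items)))
        while remaining:
            best, best_gain = None, float("-inf")
            for i in remaining:
                if spent + costs[i] > budget:
                    continue  # this sentence no longer fits the budget
                gain = (f(selected + [i]) - f(selected)) / (costs[i] ** r)
                if gain > best_gain:
                    best, best_gain = i, gain
            if best is None:
                break  # no affordable sentence remains
            selected.append(best)
            spent += costs[best]
            remaining.remove(best)
        return selected

    # Toy usage: f rewards covering distinct words, a crude stand-in for
    # the salience/coverage/non-redundancy objective described above.
    sents = ["rain floods city", "city floods after rain", "sports results"]
    costs = [len(s.split()) for s in sents]
    f = lambda S: len({w for i in S for w in sents[i].split()})
    print(budgeted_greedy(sents, costs, budget=6, f=f))  # -> [0, 2]

The greedy rule enjoys a constant-factor approximation guarantee for monotone submodular objectives under a budget constraint, which is why this family of methods is attractive for extractive summarization.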

Keywords: Multimedia Summarization; Multi-modal; Cross-modal; Natural Language Processing; Computer Vision
Indexed By: SCI
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/23106
Collection: National Laboratory of Pattern Recognition, Natural Language Processing
Corresponding Author: Jiajun Zhang
Affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Haoran Li, Junnan Zhu, Cong Ma, et al. Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 1(1): 1.
APA: Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, & Chengqing Zong. (2018). Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video. IEEE Transactions on Knowledge and Data Engineering, 1(1), 1.
MLA: Haoran Li, et al. "Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video." IEEE Transactions on Knowledge and Data Engineering 1.1 (2018): 1.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.