Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video
Haoran Li 1,2; Junnan Zhu 1,2; Cong Ma 1,2; Jiajun Zhang 1,2; Chengqing Zong 1,2,3
Journal: IEEE Transactions on Knowledge and Data Engineering
ISSN: 1041-4347
Year: 2018
Volume: 1, Issue: 1, Pages: 1
Article type: Full paper
Abstract

Automatic text summarization is a fundamental natural language processing (NLP) application that aims to condense a source text into a shorter version. The rapid increase in multimedia data transmission over the Internet necessitates multi-modal summarization (MMS) from asynchronous collections of text, image, audio and video. In this work, we propose an extractive MMS method that unites the techniques of NLP, speech processing and computer vision to explore the rich information contained in multi-modal data and to improve the quality of multimedia news summarization. The key idea is to bridge the semantic gaps between multi-modal content. Audio and visual signals are the two main modalities in video. For audio information, we design an approach to selectively use its transcription and to infer the salience of the transcription from audio signals. For visual information, we learn joint representations of text and images using a neural network. Then, we capture the coverage of the generated summary over important visual information through text-image matching or multi-modal topic modeling. Finally, all the multi-modal aspects are considered to generate a textual summary that maximizes salience, non-redundancy, readability and coverage through the budgeted optimization of submodular functions. We further introduce a publicly available MMS corpus in English and Chinese. The experimental results obtained on our dataset demonstrate that our method outperforms other competitive baseline methods.
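The budgeted submodular optimization mentioned above is commonly solved with a cost-sensitive greedy algorithm that repeatedly selects the sentence with the best marginal-gain-to-cost ratio. The sketch below illustrates that selection scheme only; the word-coverage objective, the character-length cost, and the sample sentences are illustrative assumptions, not the paper's actual objective, which combines salience, non-redundancy, readability and multi-modal coverage terms.

```python
# Illustrative sketch of budgeted greedy submodular summarization.
# NOT the paper's implementation: the objective here is simple distinct-word
# coverage, and sentence cost is character length (both are assumptions).

def coverage(selected, sentences):
    """Submodular objective: number of distinct words covered so far."""
    covered = set()
    for i in selected:
        covered |= set(sentences[i].split())
    return len(covered)

def greedy_summary(sentences, budget, cost=len, r=1.0):
    """Cost-sensitive greedy: repeatedly pick the sentence with the best
    marginal-gain-to-cost ratio that still fits within the budget."""
    selected, spent = [], 0
    remaining = set(range(len(sentences)))
    while remaining:
        base = coverage(selected, sentences)
        best, best_ratio = None, 0.0
        for i in remaining:
            c = cost(sentences[i])
            if spent + c > budget:
                continue  # sentence would exceed the length budget
            gain = coverage(selected + [i], sentences) - base
            ratio = gain / (c ** r)  # r scales the cost penalty
            if ratio > best_ratio:
                best, best_ratio = i, ratio
        if best is None:
            break  # no affordable sentence adds any coverage
        selected.append(best)
        spent += cost(sentences[best])
        remaining.remove(best)
    return [sentences[i] for i in selected]

docs = [
    "the summit addressed climate change policy",
    "delegates discussed climate change at the summit",
    "a new policy on emissions was announced",
]
print(greedy_summary(docs, budget=90))
```

Because the coverage objective is submodular (adding a sentence to a larger summary never yields a larger word gain than adding it to a smaller one), this greedy scheme carries a constant-factor approximation guarantee for the budgeted problem.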

Keywords: Multimedia Summarization; Multi-modal; Cross-modal; Natural Language Processing; Computer Vision
Indexed by: SCI
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/23106
Collection: National Laboratory of Pattern Recognition, Natural Language Processing
Corresponding author: Jiajun Zhang
Affiliations:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. CAS Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences
Recommended citation:
GB/T 7714: Haoran Li, Junnan Zhu, Cong Ma, et al. Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video[J]. IEEE Transactions on Knowledge and Data Engineering, 2018, 1(1): 1.
APA: Haoran Li, Junnan Zhu, Cong Ma, Jiajun Zhang, & Chengqing Zong. (2018). Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video. IEEE Transactions on Knowledge and Data Engineering, 1(1), 1.
MLA: Haoran Li, et al. "Read, Watch, Listen and Summarize: Multi-modal Summarization for Asynchronous Text, Image, Audio and Video." IEEE Transactions on Knowledge and Data Engineering 1.1 (2018): 1.
Files in this item:
No files associated with this item.