Learning explicit video attributes from mid-level representation for video captioning
Nian, Fudong1; Li, Teng1,2; Wang, Yan1; Wu, Xinyu3; Ni, Bingbing4; Xu, Changsheng2
Journal: COMPUTER VISION AND IMAGE UNDERSTANDING
Date: 2017-10-01
Volume: 163, Pages: 126-138
Article Type: Article
Abstract: Recent works on video captioning mainly learn a direct mapping from low-level visual features to language descriptions, without explicitly representing high-level semantic video concepts (e.g., the objects and actions in the annotated sentences). To bridge this semantic gap, this paper proposes a novel video attribute representation learning algorithm for video concept understanding and uses the learned explicit video attribute representation to improve video captioning performance. First, inspired by the success of the spectrogram in audio processing, a novel mid-level video representation named the "video response map" (VRM) is proposed, which represents a frame sequence as a single image. Video attribute representation learning can thus be cast as a well-studied multi-label image classification problem. Then, in the caption prediction step, the learned video attribute features are integrated with single-frame features to improve the previous sequence-to-sequence language generation model by adjusting the input units of the LSTM (Long Short-Term Memory). The proposed video captioning framework can both handle variable-length frame inputs and exploit high-level semantic video attribute features. Experimental results on video captioning tasks show that the proposed method, using only RGB frames as input without extra video or text training data, achieves performance competitive with state-of-the-art methods. Furthermore, extensive experimental evaluations on the UCF-101 action classification benchmark demonstrate the representation capability of the proposed VRM. (C) 2017 Elsevier Inc. All rights reserved.
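The abstract describes two concrete steps: casting video attribute learning as multi-label classification over the VRM image, and feeding the resulting attribute vector into the LSTM input alongside per-frame features. The following minimal PyTorch sketch illustrates that fusion; it is not the authors' implementation, and every module name and dimension (frame_dim, num_attrs, etc.) is a hypothetical placeholder.

```python
import torch
import torch.nn as nn

class AttributeCaptioner(nn.Module):
    """Illustrative sketch (not the paper's released code): a multi-label
    attribute head over a pooled VRM feature, whose output is concatenated
    with word embeddings and per-frame CNN features at the LSTM input,
    in the spirit of the abstract's "adjusted LSTM input units"."""

    def __init__(self, frame_dim=2048, num_attrs=300,
                 vocab_size=10000, embed_dim=512, hidden_dim=512):
        super().__init__()
        # Hypothetical attribute head; in the paper the VRM is an image,
        # so a CNN backbone would precede this linear layer.
        self.attr_head = nn.Linear(frame_dim, num_attrs)
        self.word_embed = nn.Embedding(vocab_size, embed_dim)
        # LSTM input = word embedding + frame feature + attribute feature.
        self.lstm = nn.LSTM(embed_dim + frame_dim + num_attrs,
                            hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, vrm_feat, frame_feats, captions):
        # vrm_feat:    (B, frame_dim)     pooled feature of the VRM image
        # frame_feats: (B, T, frame_dim)  per-frame CNN features
        # captions:    (B, T)             token ids (teacher forcing)
        attr_probs = torch.sigmoid(self.attr_head(vrm_feat))  # multi-label scores
        B, T, _ = frame_feats.shape
        attrs = attr_probs.unsqueeze(1).expand(B, T, -1)      # repeat per time step
        words = self.word_embed(captions)                     # (B, T, embed_dim)
        lstm_in = torch.cat([words, frame_feats, attrs], dim=-1)
        h, _ = self.lstm(lstm_in)
        return self.out(h)                                    # (B, T, vocab_size)
```

Under this reading, the attribute head would be trained with a binary cross-entropy loss over attribute labels mined from the annotated sentences, while the captioning branch uses the usual per-token cross-entropy; the paper itself should be consulted for the actual architecture and training details.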
Keywords: Mid-level Video Representation ; Video Attributes Learning ; Video Caption ; Sequence-to-sequence Learning
WOS Headings: Science & Technology ; Technology
DOI: 10.1016/j.cviu.2017.06.012
Indexed By: SCI
Language: English
Funding Organization: National Natural Science Foundation (NSF) of China (61572029) ; China Postdoctoral Science Foundation (156613 ; 2016T90148)
WOS Research Area: Computer Science ; Engineering
WOS Subject: Computer Science, Artificial Intelligence ; Engineering, Electrical & Electronic
WOS ID: WOS:000418726800011
Citation Statistics
Times Cited (WOS): 21
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/20758
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Multimedia Computing
Affiliations:
1. Anhui Univ, Minist Educ, Key Lab Intelligent Comp & Signal Proc, Hefei, Anhui, Peoples R China
2. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
3. Chinese Acad Sci, Shenzhen Inst Adv Technol, Shenzhen, Peoples R China
4. Shanghai Jiao Tong Univ, Shanghai, Peoples R China
Recommended Citation:
GB/T 7714:
Nian, Fudong, Li, Teng, Wang, Yan, et al. Learning explicit video attributes from mid-level representation for video captioning[J]. COMPUTER VISION AND IMAGE UNDERSTANDING, 2017, 163: 126-138.
APA: Nian, Fudong, Li, Teng, Wang, Yan, Wu, Xinyu, Ni, Bingbing, & Xu, Changsheng. (2017). Learning explicit video attributes from mid-level representation for video captioning. COMPUTER VISION AND IMAGE UNDERSTANDING, 163, 126-138.
MLA: Nian, Fudong, et al. "Learning explicit video attributes from mid-level representation for video captioning". COMPUTER VISION AND IMAGE UNDERSTANDING 163 (2017): 126-138.