Cross-Modal Retrieval via Deep and Bidirectional Representation Learning
He, Yonghao1; Xiang, Shiming1; Kang, Cuicui2; Wang, Jian1; Pan, Chunhong1
Journal: IEEE TRANSACTIONS ON MULTIMEDIA
Publication Date: 2016-07-01
Volume: 18, Issue: 7, Pages: 1363-1377
Article Type: Article
Abstract: Cross-modal retrieval emphasizes understanding inter-modality semantic correlations, which is typically achieved by designing a similarity function; a central requirement of such a function is that the cross-modal similarity be computable. In this paper, a deep and bidirectional representation learning model is proposed to address image-text cross-modal retrieval. Owing to the solid progress of deep learning in computer vision and natural language processing, deep neural networks can reliably extract semantic representations from raw image and text data. Accordingly, the proposed model adopts two convolution-based networks to learn representations for images and texts. Through these networks, images and texts are mapped into a common space, in which cross-modal similarity is measured by cosine distance. A bidirectional network architecture is then designed to capture the defining property of cross-modal retrieval: bidirectional search. This architecture is characterized by simultaneously involving matched and unmatched image-text pairs during training, and a learning framework with a maximum-likelihood criterion is developed on top of it. The network parameters are optimized via backpropagation and stochastic gradient descent. Extensive experiments are conducted on three publicly released datasets: IAPR TC-12, Flickr30k, and Flickr8k. The overall results show that the proposed architecture is effective and that the learned representations carry good semantics, achieving superior cross-modal retrieval performance.
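The abstract's core mechanism can be illustrated with a minimal sketch: embeddings from the two networks land in a common space, cosine similarity scores all image-text pairs in a batch, and a maximum-likelihood objective is formed over both search directions, with same-index rows acting as matched pairs and all other rows as unmatched pairs. This is an illustrative reconstruction under those assumptions, not the authors' code; the function name `bidirectional_nll` and the toy embeddings are hypothetical.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Row-normalize so that cosine similarity reduces to a dot product."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def bidirectional_nll(img_emb, txt_emb):
    """Negative log-likelihood averaged over both search directions.

    Rows of img_emb and txt_emb with the same index are treated as matched
    pairs; every other row in the batch serves as an unmatched pair,
    mirroring the joint use of matched/unmatched pairs during training.
    """
    sim = l2_normalize(img_emb) @ l2_normalize(txt_emb).T  # (N, N) cosine sims
    # image -> text search: log-softmax over texts (rows)
    i2t = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # text -> image search: log-softmax over images (columns)
    t2i = sim - np.log(np.exp(sim).sum(axis=0, keepdims=True))
    idx = np.arange(sim.shape[0])
    return -0.5 * (i2t[idx, idx].mean() + t2i[idx, idx].mean())

rng = np.random.default_rng(0)
img = rng.normal(size=(4, 16))                       # stand-in image embeddings
txt_aligned = img + 0.01 * rng.normal(size=(4, 16))  # near-matched text embeddings
txt_random = rng.normal(size=(4, 16))                # unrelated text embeddings
loss_match = bidirectional_nll(img, txt_aligned)
loss_mismatch = bidirectional_nll(img, txt_random)
```

Well-aligned embeddings concentrate probability mass on the matched pair in both directions, so `loss_match` comes out lower than `loss_mismatch`; in the paper the embeddings are produced by the two convolution-based networks rather than drawn at random.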
Keywords: Bidirectional Modeling; Convolutional Neural Network; Cross-modal Retrieval; Representation Learning; Word Embedding
WOS Headings: Science & Technology; Technology
DOI: 10.1109/TMM.2016.2558463
WOS Keywords: MODELS
Indexed By: SCI
Language: English
Funding: National Basic Research Program of China (2012CB316304); Strategic Priority Research Program of the CAS (XDB02060009); National Natural Science Foundation of China (61272331, 91338202); Beijing Natural Science Foundation (4162064)
WOS Research Areas: Computer Science; Telecommunications
WOS Categories: Computer Science, Information Systems; Computer Science, Software Engineering; Telecommunications
WOS ID: WOS:000379752600012
Citation Statistics: Cited 91 times [WOS]
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/11656
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems, Advanced Spatio-temporal Data Analysis and Learning
Corresponding Author: Xiang, Shiming
Affiliations: 1. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
2. Chinese Acad Sci, Inst Informat Engn, Beijing 100093, Peoples R China
First Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation:
GB/T 7714: He, Yonghao, Xiang, Shiming, Kang, Cuicui, et al. Cross-Modal Retrieval via Deep and Bidirectional Representation Learning[J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2016, 18(7): 1363-1377.
APA: He, Yonghao, Xiang, Shiming, Kang, Cuicui, Wang, Jian, & Pan, Chunhong. (2016). Cross-Modal Retrieval via Deep and Bidirectional Representation Learning. IEEE TRANSACTIONS ON MULTIMEDIA, 18(7), 1363-1377.
MLA: He, Yonghao, et al. "Cross-Modal Retrieval via Deep and Bidirectional Representation Learning". IEEE TRANSACTIONS ON MULTIMEDIA 18.7 (2016): 1363-1377.
Files in This Item:
TMM2558463_final.pdf (11388 KB) | Journal article | Author's accepted manuscript | Open Access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.