CASIA OpenIR > Center for Research on Intelligent Perception and Computing
Cross-modal subspace learning for fine-grained sketch-based image retrieval
Xu, Peng1; Yin, Qiyue2; Huang, Yongye1; Song, Yi-Zhe3; Ma, Zhanyu1; Wang, Liang2; Xiang, Tao3; Kleijn, W. Bastiaan4; Guo, Jun1
2018-02-22
Journal: NEUROCOMPUTING
Volume: 278, Pages: 75-86
Article Type: Article
Abstract: Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketch and photo. Compared with the pixel-perfect depictions of photos, sketches are highly abstract, iconic renderings of the real world. Matching sketch and photo directly using low-level visual cues is therefore insufficient, since a common low-level subspace that traverses the two modalities semantically is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through a thorough examination of the experimental results, we demonstrate that subspace learning can effectively model the sketch-photo domain gap. In addition, we draw a few key insights to drive future research. (C) 2017 Elsevier B.V. All rights reserved.
Keywords: Cross-modal Subspace Learning ; Sketch-based Image Retrieval ; Fine-grained
WOS Headings: Science & Technology ; Technology
DOI: 10.1016/j.neucom.2017.05.099
WOS Keywords: PARTIAL-LEAST-SQUARES ; FACE RECOGNITION ; TAGS
Indexed By: SCI
Language: English
Funding: National Natural Science Foundation of China (NSFC) (61773071) ; Beijing Natural Science Foundation (BNSF) grant (4162044) ; Beijing Nova Program Grant (Z171100001117049) ; Open Project Program of National Laboratory of Pattern Recognition grant (201600018) ; Chinese 111 program of Advanced Intelligence and Network Service Grant (B08004) ; BUPT-SICE Excellent Graduate Student Innovation Foundation
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS Record ID: WOS:000423965000009
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/21943
Collection: Center for Research on Intelligent Perception and Computing
Affiliations:
1.Beijing Univ Posts & Telecommun, Pattern Recognit & Intelligent Syst Lab, Beijing, Peoples R China
2.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
3.Queen Mary Univ London, Sch Elect Engn & Comp Sci, SketchX Lab, London, England
4.Victoria Univ Wellington, Commun & Signal Proc Grp, Wellington, New Zealand
Recommended Citation:
GB/T 7714
Xu, Peng, Yin, Qiyue, Huang, Yongye, et al. Cross-modal subspace learning for fine-grained sketch-based image retrieval[J]. NEUROCOMPUTING, 2018, 278: 75-86.
APA: Xu, Peng, Yin, Qiyue, Huang, Yongye, Song, Yi-Zhe, Ma, Zhanyu, ... & Guo, Jun. (2018). Cross-modal subspace learning for fine-grained sketch-based image retrieval. NEUROCOMPUTING, 278, 75-86.
MLA: Xu, Peng, et al. "Cross-modal subspace learning for fine-grained sketch-based image retrieval". NEUROCOMPUTING 278 (2018): 75-86.
Files in This Item:
There are no files associated with this item.
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.