CASIA OpenIR > Center for Research on Intelligent Perception and Computing
Cross-modal subspace learning for fine-grained sketch-based image retrieval
Xu, Peng1; Yin, Qiyue2; Huang, Yongye1; Song, Yi-Zhe3; Ma, Zhanyu1; Wang, Liang2; Xiang, Tao3; Kleijn, W. Bastiaan4; Guo, Jun1
Source Publication: NEUROCOMPUTING
Abstract: Sketch-based image retrieval (SBIR) is challenging due to the inherent domain gap between sketch and photo. Compared with pixel-perfect depictions in photos, sketches are iconic, highly abstract renderings of the real world. Matching sketches and photos directly using low-level visual cues is therefore insufficient, since a common low-level subspace that traverses the two modalities semantically is non-trivial to establish. Most existing SBIR studies do not directly tackle this cross-modal problem. This naturally motivates us to explore the effectiveness of cross-modal retrieval methods in SBIR, which have been applied successfully to image-text matching. In this paper, we introduce and compare a series of state-of-the-art cross-modal subspace learning methods and benchmark them on two recently released fine-grained SBIR datasets. Through a thorough examination of the experimental results, we demonstrate that subspace learning can effectively model the sketch-photo domain gap. In addition, we draw a few key insights to drive future research. (C) 2017 Elsevier B.V. All rights reserved.
Keywords: Cross-modal Subspace Learning; Sketch-based Image Retrieval; Fine-grained
WOS Headings: Science & Technology; Technology
Indexed By: SCI
Funding Organization: National Natural Science Foundation of China (NSFC) (61773071); Beijing Natural Science Foundation (BNSF) grant (4162044); Beijing Nova Program Grant (Z171100001117049); Open Project Program of National Laboratory of Pattern Recognition grant (201600018); Chinese 111 program of Advanced Intelligence and Network Service Grant (B08004); BUPT-SICE Excellent Graduate Student Innovation Foundation
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000423965000009
Citation Statistics
Cited Times (WOS): 11
Document Type: Journal Article
Affiliations:
1. Beijing Univ Posts & Telecommun, Pattern Recognit & Intelligent Syst Lab, Beijing, Peoples R China
2.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
3.Queen Mary Univ London, Sch Elect Engn & Comp Sci, SketchX Lab, London, England
4.Victoria Univ Wellington, Commun & Signal Proc Grp, Wellington, New Zealand
Recommended Citation
GB/T 7714
Xu, Peng, Yin, Qiyue, Huang, Yongye, et al. Cross-modal subspace learning for fine-grained sketch-based image retrieval[J]. NEUROCOMPUTING, 2018, 278: 75-86.
APA Xu, Peng, Yin, Qiyue, Huang, Yongye, Song, Yi-Zhe, Ma, Zhanyu, ... & Guo, Jun. (2018). Cross-modal subspace learning for fine-grained sketch-based image retrieval. Neurocomputing, 278, 75-86.
MLA Xu, Peng, et al. "Cross-modal subspace learning for fine-grained sketch-based image retrieval." NEUROCOMPUTING 278 (2018): 75-86.
Files in This Item:
There are no files associated with this item.

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.