SubMIL: Discriminative subspaces for multi-instance learning
Yuan, Jiazheng1,2; Huang, Xiankai3; Liu, Hongzhe1; Li, Bing4; Xiong, Weihua4
Source Publication: NEUROCOMPUTING
Date: 2016-01-15
Volume: 173, Pages: 1768-1774
Subtype: Article
Abstract: As an important learning scheme for Multi-Instance Learning (MIL), Instance Prototype (IP) selection-based MIL algorithms transform bags into a new instance feature space and achieve impressive classification performance. However, the number of IPs in existing algorithms grows linearly with the size of the training data, so the performance and efficiency of these algorithms are easily degraded by high dimensionality and noise on large-scale training data. This paper proposes a discriminative subspace-based instance prototype selection method that reduces the computational complexity on large-scale training data. The proposed algorithm first introduces the low-rank matrix recovery technique to find two discriminative, clean subspaces with less noise, and then presents an l2,1-norm-based self-expressive sparse coding model to select the most representative instances in each subspace. Experimental results on several data sets show that our algorithm achieves superior and stable performance with a lower-dimensional representation than other IP selection strategies. (C) 2015 Elsevier B.V. All rights reserved.
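The abstract names two computational ingredients: a low-rank matrix recovery step that cleans each class's instance matrix, and an l2,1-norm self-expressive sparse coding model whose strongest rows mark instance prototypes. The NumPy sketch below illustrates both under common formulations (Robust PCA solved by a basic ADMM, and the self-expressive model min_Z 0.5*||X - XZ||_F^2 + lam*||Z||_{2,1} solved by proximal gradient); the function names, parameters, and solver choices are illustrative assumptions, not the authors' exact method.

    import numpy as np

    def rpca(X, lam=None, mu=None, n_iter=100):
        # Hedged sketch: Robust PCA, X ~ L (low-rank) + S (sparse noise),
        # via a basic ADMM; one plausible reading of the paper's low-rank
        # recovery step, not its exact solver.
        m, n = X.shape
        lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
        mu = mu if mu is not None else X.size / (4.0 * np.abs(X).sum())
        S = np.zeros_like(X)
        Y = np.zeros_like(X)
        for _ in range(n_iter):
            # singular value thresholding gives the low-rank part L
            U, s, Vt = np.linalg.svd(X - S + Y / mu, full_matrices=False)
            L = (U * np.maximum(s - 1.0 / mu, 0.0)) @ Vt
            # elementwise soft-thresholding gives the sparse part S
            T = X - L + Y / mu
            S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)
            Y += mu * (X - L - S)  # dual ascent on the constraint X = L + S
        return L, S

    def select_prototypes(X, lam=0.1, n_iter=200):
        # Hedged sketch of l2,1-norm self-expressive selection:
        # min_Z 0.5*||X - X Z||_F^2 + lam*||Z||_{2,1}, where X is d x n
        # (columns are instances); rows of Z with large l2 norm mark
        # representative instances. Solved by proximal gradient descent.
        n = X.shape[1]
        Z = np.zeros((n, n))
        G = X.T @ X                        # Gram matrix, n x n
        step = 1.0 / np.linalg.norm(G, 2)  # 1 / Lipschitz constant of the gradient
        for _ in range(n_iter):
            W = Z - step * (G @ Z - G)     # gradient step on the smooth term
            # row-wise group soft-thresholding: prox of step*lam*||.||_{2,1}
            norms = np.maximum(np.linalg.norm(W, axis=1, keepdims=True), 1e-12)
            Z = W * np.maximum(0.0, 1.0 - step * lam / norms)
        return np.argsort(-np.linalg.norm(Z, axis=1))  # ranked by representativeness

Under this reading, one would run rpca on the positive-class and negative-class instance matrices to obtain two cleaner subspaces, call select_prototypes on each low-rank part, keep the top-ranked columns as IPs, and embed every bag by its similarities to those IPs before training a standard classifier.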
Keywords: Multi-instance learning; Low rank; Subspace
WOS Headings: Science & Technology; Technology
DOI: 10.1016/j.neucom.2015.08.089
WOS Keywords: ALGORITHM; SELECTION
Indexed By: SCI
Language: English
Funding Organization: National Natural Science Foundation of China (61271369, 61372148, 61370038); Beijing Natural Science Foundation (4152016, 4152018); National Key Technology R&D Program (2014BAK08B02, 2015BAH55F03); Funding Project for Academic Human Resources Development in Beijing Union University (BPHR2014A04, BPHR2014E02); Project of Construction of Innovative Teams and Teacher Career Development for Universities and Colleges under Beijing Municipality (CITTCD 20130513, IDHT 20140508)
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000366879800129
Citation statistics
Cited Times (WOS): 1
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/10645
Collection: National Laboratory of Pattern Recognition - Pattern Analysis and Learning
Affiliation:
1. Beijing Key Lab Informat Serv Engn, Beijing 100101, Peoples R China
2. Beijing Union Univ, Comp Technol Inst, Beijing 100101, Peoples R China
3. Beijing Union Univ, Tourism Inst, Beijing 100101, Peoples R China
4. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit NLPR, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714: Yuan, Jiazheng, Huang, Xiankai, Liu, Hongzhe, et al. SubMIL: Discriminative subspaces for multi-instance learning[J]. NEUROCOMPUTING, 2016, 173: 1768-1774.
APA: Yuan, Jiazheng, Huang, Xiankai, Liu, Hongzhe, Li, Bing, & Xiong, Weihua. (2016). SubMIL: Discriminative subspaces for multi-instance learning. NEUROCOMPUTING, 173, 1768-1774.
MLA: Yuan, Jiazheng, et al. "SubMIL: Discriminative subspaces for multi-instance learning". NEUROCOMPUTING 173 (2016): 1768-1774.
Files in This Item:
File Name/Size: Discriminative subsp (548KB); DocType: Journal article; Version: Author's accepted manuscript; Access: Open access; License: CC BY-NC-SA
Related Services
Google Scholar
Similar articles in Google Scholar
[Yuan, Jiazheng]'s Articles
[Huang, Xiankai]'s Articles
[Liu, Hongzhe]'s Articles
Baidu academic
Similar articles in Baidu academic
[Yuan, Jiazheng]'s Articles
[Huang, Xiankai]'s Articles
[Liu, Hongzhe]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Yuan, Jiazheng]'s Articles
[Huang, Xiankai]'s Articles
[Liu, Hongzhe]'s Articles
File name: Discriminative subspaces for multi-instance learning.pdf
Format: Adobe PDF
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.