Towards prior gap and representation gap for long-tailed recognition
Zhang, Ming-Liang1,2; Zhang, Xu-Yao1,2; Wang, Chuang1,2; Liu, Cheng-Lin1,2
Source Publication: PATTERN RECOGNITION
ISSN: 0031-3203
Year: 2023
Volume: 133  Pages: 12
Corresponding Author: Zhang, Ming-Liang (zhangmingliang2018@ia.ac.cn)
Abstract: Most deep learning models are elaborately designed for balanced datasets, and thus they inevitably suffer performance degradation in practical long-tailed recognition tasks, especially for the minority classes. There are two crucial issues in learning from imbalanced datasets: a skewed decision boundary and an unrepresentative feature space. In this work, we establish a theoretical framework to analyze the sources of these two issues from a Bayesian perspective, and find that they are closely related to the prior gap and the representation gap, respectively. Under this framework, we show that existing long-tailed recognition methods manage to remove either the prior gap or the representation gap. Different from these methods, we propose to simultaneously remove the two gaps to achieve more accurate long-tailed recognition. Specifically, we propose a prior calibration strategy to remove the prior gap and introduce three strategies (representative feature extraction, optimization strategy adjustment, and effective sample modeling) to mitigate the representation gap. Extensive experiments on five benchmark datasets validate the superiority of our method against state-of-the-art competitors. © 2022 Elsevier Ltd. All rights reserved.
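To make the prior-gap idea concrete, the following is a minimal PyTorch sketch that assumes the prior calibration behaves like a standard logit adjustment with the empirical class prior; the function name prior_calibrated_loss, the tau scaling factor, and the exact form are illustrative assumptions, not the formulation from the paper.

import torch
import torch.nn.functional as F

def prior_calibrated_loss(logits, targets, class_counts, tau=1.0):
    """Cross-entropy on prior-adjusted logits (illustrative sketch only).

    Adding tau * log p(y) to the logits during training makes the
    imbalanced label prior explicit, so the learned scores no longer
    absorb it and the decision boundary is not skewed toward head classes.
    """
    prior = class_counts.float() / class_counts.sum()        # empirical p(y)
    adjusted = logits + tau * torch.log(prior + 1e-12)       # offset by the log prior
    return F.cross_entropy(adjusted, targets)

# Hypothetical usage with 10 classes and a long-tailed count vector:
# counts = torch.tensor([5000, 2000, 900, 400, 180, 80, 40, 20, 10, 5])
# loss = prior_calibrated_loss(model(x), y, counts)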
Keywords: Long-tailed learning; Prior gap; Representation gap; Image recognition
DOI: 10.1016/j.patcog.2022.109012
WOS Keyword: NEURAL-NETWORK CLASSIFICATION
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program [2018AAA0100400]; National Natural Science Foundation of China (NSFC) [U20A20223]; National Natural Science Foundation of China (NSFC) [62076236]; National Natural Science Foundation of China (NSFC) [61721004]
Funding Organization: National Key Research and Development Program; National Natural Science Foundation of China (NSFC)
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:000863094500012
Publisher: ELSEVIER SCI LTD
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/50309
Collection: National Laboratory of Pattern Recognition, Pattern Analysis and Learning
Affiliation: 1. Chinese Acad Sci, Natl Lab Pattern Recognit (NLPR), Inst Automat, Beijing 100190, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714: Zhang, Ming-Liang, Zhang, Xu-Yao, Wang, Chuang, et al. Towards prior gap and representation gap for long-tailed recognition[J]. PATTERN RECOGNITION, 2023, 133: 12.
APA: Zhang, Ming-Liang, Zhang, Xu-Yao, Wang, Chuang, & Liu, Cheng-Lin. (2023). Towards prior gap and representation gap for long-tailed recognition. PATTERN RECOGNITION, 133, 12.
MLA: Zhang, Ming-Liang, et al. "Towards prior gap and representation gap for long-tailed recognition". PATTERN RECOGNITION 133 (2023): 12.
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.