Prior-knowledge and attention based meta-learning for few-shot learning
Authors: Qin, Yunxiao [1]; Zhang, Weiguo [1]; Zhao, Chenxu [2]; Wang, Zezheng [3]; Zhu, Xiangyu [4]; Shi, Jingping [1]; Qi, Guojun [5]; Lei, Zhen [4,6]
Source Publication: KNOWLEDGE-BASED SYSTEMS
ISSN: 0950-7051
Date Issued: 2021-02-15
Volume: 213
Pages: 12
Corresponding Author: Qin, Yunxiao (qyxqyx@mail.nwpu.edu.cn)
Abstract: Recently, meta-learning has been shown to be a promising way to solve few-shot learning. In this paper, inspired by the human cognition process, which utilizes both prior-knowledge and visual attention when learning new knowledge, we present a novel meta-learning paradigm that capitalizes on three developments to introduce an attention mechanism and prior-knowledge into meta-learning. In our approach, the prior-knowledge helps the meta-learner express the input data in a high-level representation space, and the attention mechanism enables the meta-learner to focus on key features in that space. Compared with existing meta-learning approaches, which pay little attention to prior-knowledge and visual attention, our approach alleviates the meta-learner's few-shot cognition burden. Furthermore, we identify a Task-Over-Fitting (TOF) problem: the meta-learner generalizes poorly across K-shot learning tasks with different values of K. To model the TOF problem, we propose a novel Cross-Entropy across Tasks (CET) metric. Extensive experiments demonstrate that our techniques bring the meta-learner to state-of-the-art performance on several few-shot learning benchmarks while also substantially alleviating the TOF problem.
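To make the abstract's two ingredients concrete, the following is a minimal, hypothetical PyTorch sketch of the general idea: a frozen pretrained encoder stands in for the prior-knowledge that lifts inputs into a high-level representation space, and a small attention module re-weights those features before a classifier is adapted on a few-shot support set. All module names (PriorKnowledgeMetaLearner, AttentionOverFeatures, adapt_on_support), shapes, and the inner-loop recipe are assumptions for illustration only, not the paper's actual implementation.

# Hypothetical sketch of the idea described in the abstract: a frozen,
# pretrained encoder supplies "prior-knowledge" (a high-level representation),
# and a lightweight attention module re-weights those features before a
# task-specific classifier is adapted on the few-shot support set.
# Module names, shapes, and the adaptation recipe are assumptions,
# not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionOverFeatures(nn.Module):
    """Soft channel attention over a fixed feature vector (assumed design)."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, dim)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Weights in (0, 1) highlight the feature channels the few-shot
        # learner should focus on; the representation itself is unchanged.
        return z * torch.sigmoid(self.score(z))

class PriorKnowledgeMetaLearner(nn.Module):
    def __init__(self, encoder: nn.Module, feat_dim: int, n_way: int):
        super().__init__()
        self.encoder = encoder  # prior-knowledge: frozen pretrained network
        for p in self.encoder.parameters():
            p.requires_grad_(False)
        self.attend = AttentionOverFeatures(feat_dim)
        self.classifier = nn.Linear(feat_dim, n_way)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # the representation comes from the prior
            z = self.encoder(x)
        return self.classifier(self.attend(z))

def adapt_on_support(model, support_x, support_y, steps=5, lr=1e-2):
    """One hypothetical inner loop: only attention + classifier are adapted."""
    opt = torch.optim.SGD(
        list(model.attend.parameters()) + list(model.classifier.parameters()),
        lr=lr,
    )
    for _ in range(steps):
        loss = F.cross_entropy(model(support_x), support_y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

if __name__ == "__main__":
    # Toy 5-way 1-shot episode with a stand-in MLP encoder.
    torch.manual_seed(0)
    encoder = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32))
    model = PriorKnowledgeMetaLearner(encoder, feat_dim=32, n_way=5)
    support_x, support_y = torch.randn(5, 64), torch.arange(5)
    adapt_on_support(model, support_x, support_y)
    query_x = torch.randn(10, 64)
    print(model(query_x).argmax(dim=1))

In this sketch only the attention and classifier parameters are updated during adaptation, which mirrors the abstract's claim that prior-knowledge reduces what the meta-learner must learn from only a few shots.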
Keywords: Meta-learning; Few-shot learning; Prior-knowledge; Representation; Attention mechanism
DOI: 10.1016/j.knosys.2020.106609
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China [2020YFC2003901]; National Natural Science Foundation of China [61573286, 61876178, 61976229]
Funding Organization: National Key Research and Development Program of China; National Natural Science Foundation of China
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000614644100011
Publisher: ELSEVIER
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/43351
Collection: National Laboratory of Pattern Recognition, Biometrics and Security Technology
Affiliations:
1. Northwestern Polytech Univ, Xian 710129, Peoples R China
2. MiningLamp Technol, Beijing 100094, Peoples R China
3. Beijing Kwai Technol, Beijing 102600, Peoples R China
4. Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit (NLPR), Beijing 100000, Peoples R China
5. Huawei Cloud, Seattle, WA 90876 USA
6. Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
Recommended Citation:
GB/T 7714: Qin, Yunxiao, Zhang, Weiguo, Zhao, Chenxu, et al. Prior-knowledge and attention based meta-learning for few-shot learning[J]. KNOWLEDGE-BASED SYSTEMS, 2021, 213: 12.
APA: Qin, Yunxiao, Zhang, Weiguo, Zhao, Chenxu, Wang, Zezheng, Zhu, Xiangyu, ... & Lei, Zhen. (2021). Prior-knowledge and attention based meta-learning for few-shot learning. KNOWLEDGE-BASED SYSTEMS, 213, 12.
MLA: Qin, Yunxiao, et al. "Prior-knowledge and attention based meta-learning for few-shot learning". KNOWLEDGE-BASED SYSTEMS 213 (2021): 12.
Files in This Item: There are no files associated with this item.
Related Services
Similar articles in Google Scholar
Similar articles in Baidu academic
Similar articles in Bing Scholar
[Qin, Yunxiao]'s Articles
[Zhang, Weiguo]'s Articles
[Zhao, Chenxu]'s Articles