Towards open-set text recognition via label-to-prototype learning
Liu, Chang1; Yang, Chun1; Qin, Hai-Bo1; Zhu, Xiaobin1; Liu, Cheng-Lin2; Yin, Xu-Cheng1
Source Publication: PATTERN RECOGNITION
ISSN: 0031-3203
Publication Date: 2023-02-01
Volume: 134    Pages: 13
Corresponding Author: Yang, Chun (chunyang@ustb.edu.cn); Yin, Xu-Cheng (xuchengyin@ustb.edu.cn)
Abstract: Scene text recognition is a popular research topic that is also extensively utilized in industry. Although many methods have achieved satisfactory performance on close-set text recognition challenges, these methods lose feasibility in open-set scenarios, where collecting data or retraining models for novel characters could incur a high cost. For example, annotating samples for foreign languages can be expensive, whereas retraining the model each time a "novel" character is discovered in historical documents costs both time and resources. In this paper, we introduce and formulate a new open-set text recognition task which demands the capability to spot and recognize novel characters without retraining. A label-to-prototype learning framework is also proposed as a baseline for the new task. Specifically, the framework introduces a generalizable label-to-prototype mapping function to build prototypes (class centers) for both seen and unseen classes. An open-set predictor is then utilized to recognize or reject samples according to the prototypes. The implementation of rejection capability over out-of-set characters allows automatic spotting of unknown characters in the incoming data stream. Extensive experiments show that our method achieves promising performance on a variety of zero-shot, close-set, and open-set text recognition datasets. (c) 2022 Elsevier Ltd. All rights reserved.
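The abstract describes a two-stage open-set pipeline: a label-to-prototype mapping builds class prototypes (class centers) for both seen and unseen characters, and an open-set predictor matches each sample to the nearest prototype or rejects it as unknown. The following is a minimal, self-contained sketch of that decision rule; the dimensions, the random linear stand-in for the learned mapping, the rejection threshold, and all names are illustrative assumptions, not the authors' implementation.

# Minimal sketch of prototype-based open-set prediction, following the
# abstract's description. All shapes, the toy mapping, and the threshold
# are assumed for illustration -- not the paper's actual model.
import numpy as np

rng = np.random.default_rng(0)

D_LABEL, D_FEAT = 32, 64      # label-embedding / visual-feature sizes (assumed)
REJECT_THRESHOLD = 0.5        # similarity below this => reject as unknown (assumed)

# Stand-in for the learned label-to-prototype mapping: a fixed random linear
# map here; in the paper this is a trained, generalizable network.
W = rng.normal(size=(D_LABEL, D_FEAT))

def label_to_prototype(label_emb):
    """Map a label-side embedding (e.g., glyph features) to a unit-norm prototype."""
    p = label_emb @ W
    return p / np.linalg.norm(p)

def predict_open_set(feature, prototypes, classes):
    """Nearest-prototype prediction with rejection of out-of-set samples."""
    f = feature / np.linalg.norm(feature)
    sims = prototypes @ f                      # cosine similarity to each prototype
    best = int(np.argmax(sims))
    if sims[best] < REJECT_THRESHOLD:
        return "<unknown>", float(sims[best])  # spot a novel character
    return classes[best], float(sims[best])

# Prototypes can be built for classes never seen in training, without retraining:
classes = ["a", "b", "NOVEL_GLYPH"]
label_embs = rng.normal(size=(len(classes), D_LABEL))
prototypes = np.stack([label_to_prototype(e) for e in label_embs])

feature = rng.normal(size=D_FEAT)              # a visual feature from the recognizer
print(predict_open_set(feature, prototypes, classes))

Because prototypes are computed from label-side information alone, adding a new class only requires computing one more prototype rather than retraining, which is the property the abstract highlights.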
Keywords: Open-set recognition; Scene text recognition; Low-shot recognition
DOI: 10.1016/j.patcog.2022.109109
WOS Keywords: NETWORK; CLASSIFICATION
Indexed By: SCI
Language: English
Funding Project: National Key Research and Development Program of China; National Science Fund for Distinguished Young Scholars; National Natural Science Foundation of China; [2020AAA09701]; [62125601]; [62006018]; [62076024]
Funding Organization: National Key Research and Development Program of China; National Science Fund for Distinguished Young Scholars; National Natural Science Foundation of China
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:000880031000003
Publisher: ELSEVIER SCI LTD
Citation Statistics
Cited Times (WOS): 1
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/50700
Collection: National Laboratory of Pattern Recognition, Pattern Analysis and Learning
Affiliation1.Univ Sci & Technol Beijing, Sch Comp & Commun Engn, Dept Comp Sci & Technol, Beijing 100083, Peoples R China
2.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714: Liu, Chang, Yang, Chun, Qin, Hai-Bo, et al. Towards open-set text recognition via label-to-prototype learning[J]. PATTERN RECOGNITION, 2023, 134: 13.
APA: Liu, Chang, Yang, Chun, Qin, Hai-Bo, Zhu, Xiaobin, Liu, Cheng-Lin, & Yin, Xu-Cheng. (2023). Towards open-set text recognition via label-to-prototype learning. PATTERN RECOGNITION, 134, 13.
MLA: Liu, Chang, et al. "Towards open-set text recognition via label-to-prototype learning". PATTERN RECOGNITION 134 (2023): 13.
Files in This Item:
There are no files associated with this item.