Institutional Repository of Chinese Academy of Sciences, Institute of Automation, National Laboratory of Pattern Recognition, Beijing 100190, People's Republic of China
Incremental Adaptive Learning Vector Quantization for Character Recognition with Continuous Style Adaptation
Shen, Yuan-Yuan; Liu, Cheng-Lin
Source Publication | COGNITIVE COMPUTATION
2018-04-01
Volume | 10 | Issue | 2 | Pages | 334-346
Subtype | Article |
Abstract | Incremental learning enables continuous model adaptation based on a constantly arriving data stream. It is closely related to the human cognitive system, which learns to predict objects in a changing world. Incremental learning for character recognition is a typical scenario in which characters appear sequentially and the font/writing style changes irregularly. In this paper, we investigate how to classify characters incrementally (i.e., input patterns arrive one at a time). A reasonable assumption is that adjacent characters from the same font or the same writer share the same style over a short period, while style variation occurs across characters printed in different fonts or written by different persons over a long period. The challenging issue is how to exploit local style consistency while also adapting incrementally to continuous style variation. For this purpose, we propose a continuous incremental adaptive learning vector quantization (CIALVQ) method, which incrementally learns a self-adaptive style transfer matrix for mapping input patterns from the style-conscious space onto a style-free space. After style transformation, the problem is cast into a common character recognition task and an incremental learning vector quantization (ILVQ) classifier is used. In this framework, we consider two learning modes: supervised incremental learning and active incremental learning. In the latter mode, class labels are requested only for samples that receive low confidence from the classifier. We evaluated the classification performance of CIALVQ in two scenarios, interleaved test-then-train and style-specific classification, on NIST hand-printed data sets. The results show that exploiting local style consistency improves accuracy in both test scenarios and under both supervised and active incremental learning modes.
Keyword | Continuous Incremental Adaptive Learning Vector Quantization ; Style Transfer Mapping ; Local Style Consistency ; Active Learning
WOS Headings | Science & Technology ; Technology ; Life Sciences & Biomedicine |
DOI | 10.1007/s12559-017-9491-3 |
WOS Keyword | ONLINE ; PERCEPTRON ; ALGORITHM |
Indexed By | SCI |
Language | English
Funding Organization | Strategic Priority Research Program of the CAS Grant (XDB02060009) ; National Natural Science Foundation of China (NSFC) (61411136002)
WOS Research Area | Computer Science ; Neurosciences & Neurology |
WOS Subject | Computer Science, Artificial Intelligence ; Neurosciences |
WOS ID | WOS:000430190600013 |
Document Type | Journal Article
Identifier | http://ir.ia.ac.cn/handle/173211/22004 |
Collection | National Laboratory of Pattern Recognition_Pattern Analysis and Learning
Affiliation | Chinese Acad Sci, Univ Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences
Recommended Citation GB/T 7714 | Shen, Yuan-Yuan, Liu, Cheng-Lin. Incremental Adaptive Learning Vector Quantization for Character Recognition with Continuous Style Adaptation[J]. COGNITIVE COMPUTATION, 2018, 10(2): 334-346.
APA | Shen, Yuan-Yuan, & Liu, Cheng-Lin. (2018). Incremental Adaptive Learning Vector Quantization for Character Recognition with Continuous Style Adaptation. COGNITIVE COMPUTATION, 10(2), 334-346.
MLA | Shen, Yuan-Yuan, et al. "Incremental Adaptive Learning Vector Quantization for Character Recognition with Continuous Style Adaptation." COGNITIVE COMPUTATION 10.2 (2018): 334-346.
Files in This Item:
File Name/Size | DocType | Version | Access | License
Incremental Adaptive (1575KB) | Journal Article | Author's Accepted Manuscript | Open Access | CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.