CASIA OpenIR
LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation
Authors: Xu, Ting-Bing (1,2); Yang, Peipei (1); Zhang, Xu-Yao (1,2); Liu, Cheng-Lin (1,2,3)
Source Publication: PATTERN RECOGNITION
ISSN: 0031-3203
Publication Date: 2019-04-01
Volume: 88, Pages: 272-284
Corresponding Author: Liu, Cheng-Lin (liucl@nlpr.ia.ac.cn)
Abstract: In recent years, deep neural networks have achieved remarkable successes in many pattern recognition tasks. However, their high computational cost and large memory overhead hinder deployment on resource-limited devices. To address this problem, many deep network acceleration and compression methods have been proposed. One group of methods adopts decomposition and pruning techniques to accelerate and compress a pre-trained model. Another group designs a single compact unit and stacks it to build new networks. These methods either require complicated training processes or lack generality and extensibility. In this paper, we propose a general framework of architecture distillation, namely LightweightNet, to accelerate and compress convolutional neural networks. Rather than compressing a pre-trained model, we directly construct the lightweight network based on a baseline network architecture. LightweightNet, designed based on a comprehensive analysis of the network architecture, consists of network parameter compression, network structure acceleration, and non-tensor layer improvement. Specifically, we propose the strategy of low-dimensional features for fully-connected layers to achieve substantial memory savings, and design multiple efficient compact blocks that, guided by an accuracy-sensitive distillation rule, distill the convolutional layers of the baseline network for notable time savings. The framework effectively reduces both the computational cost and the model size by more than 4x with negligible accuracy loss. Benchmarks on the MNIST, CIFAR-10, ImageNet and HCCR (handwritten Chinese character recognition) datasets demonstrate the advantages of the proposed framework in terms of speed, performance, storage and training process. On HCCR, our method even outperforms traditional classifiers based on handcrafted features in terms of speed and storage while maintaining state-of-the-art recognition performance. (C) 2018 Elsevier Ltd. All rights reserved.
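As a reading aid, the sketch below illustrates the two core ideas named in the abstract: a compact convolutional block standing in for a heavier baseline convolution, and a low-dimensional fully-connected head that removes most FC parameters. It is a minimal PyTorch sketch under assumed layer sizes; the class names, the separable-convolution block structure, and the 128-d bottleneck are illustrative assumptions, not the paper's exact block designs or distillation rule.

import torch
import torch.nn as nn

class CompactBlock(nn.Module):
    # Illustrative compact block: a 3x3 convolution approximated by a 1x1
    # channel reduction followed by a spatially separable 3x1/1x3 pair.
    # A generic low-rank substitute for a heavy convolutional layer; not
    # claimed to be the block design used in the LightweightNet paper.
    def __init__(self, in_ch, out_ch, mid_ch):
        super().__init__()
        self.reduce = nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False)
        self.conv_v = nn.Conv2d(mid_ch, mid_ch, kernel_size=(3, 1),
                                padding=(1, 0), bias=False)
        self.conv_h = nn.Conv2d(mid_ch, out_ch, kernel_size=(1, 3),
                                padding=(0, 1), bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.conv_h(self.conv_v(self.reduce(x)))))

class LowDimHead(nn.Module):
    # Illustrative low-dimensional fully-connected head: projecting the
    # flattened feature to a small dimension before the classifier costs
    # feat_dim*low_dim + low_dim*num_classes weights instead of
    # feat_dim*num_classes, which dominates memory when both feat_dim and
    # num_classes are large (e.g. the 3755-class HCCR task).
    def __init__(self, feat_dim, low_dim, num_classes):
        super().__init__()
        self.proj = nn.Linear(feat_dim, low_dim)
        self.cls = nn.Linear(low_dim, num_classes)

    def forward(self, x):
        return self.cls(torch.relu(self.proj(x)))

For example, mapping a 4096-d feature directly to 3755 classes costs about 15.4M FC weights, while a 128-d bottleneck (4096*128 + 128*3755) costs about 1.0M, consistent with the abstract's claim of substantial memory saving from low-dimensional fully-connected features.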
Keywords: Deep network acceleration and compression; Architecture distillation; Lightweight network
DOI: 10.1016/j.patcog.2018.10.029
WOS Keywords: FEATURE-EXTRACTION; CHARACTER; RECOGNITION; NORMALIZATION; ONLINE
Indexed By: SCI
Language: English
Funding Project: National Natural Science Foundation of China (NSFC) [61721004]; National Natural Science Foundation of China (NSFC) [61633021]
Funding Organization: National Natural Science Foundation of China (NSFC)
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Artificial Intelligence; Engineering, Electrical & Electronic
WOS ID: WOS:000457666900021
Publisher: ELSEVIER SCI LTD
Citation Statistics
Cited Times (WOS): 2
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/25266
Collection: Institute of Automation, Chinese Academy of Sciences
Affiliations:
1. Chinese Acad Sci, Inst Automat, NLPR, Beijing 100190, Peoples R China
2. UCAS, Sch Artificial Intelligence, Beijing 100190, Peoples R China
3. CAS Ctr Excellence Brain Sci & Intelligence Techn, Beijing 100190, Peoples R China
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation:
GB/T 7714: Xu, Ting-Bing, Yang, Peipei, Zhang, Xu-Yao, et al. LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation[J]. PATTERN RECOGNITION, 2019, 88: 272-284.
APA: Xu, Ting-Bing, Yang, Peipei, Zhang, Xu-Yao, & Liu, Cheng-Lin. (2019). LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation. PATTERN RECOGNITION, 88, 272-284.
MLA: Xu, Ting-Bing, et al. "LightweightNet: Toward fast and lightweight convolutional neural networks via architecture distillation". PATTERN RECOGNITION 88 (2019): 272-284.
Files in This Item:
There are no files associated with this item.
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.