Pruning-aware Sparse Regularization for Network Pruning
Nan-Fei Jiang (1,2)
Source Publication: Machine Intelligence Research
ISSN: 2731-538X
Year: 2023
Volume: 20, Issue: 1, Pages: 109-120
Abstract: Structural neural network pruning aims to remove redundant channels in deep convolutional neural networks (CNNs) by pruning the filters of less importance to the final output accuracy. To reduce the performance degradation after pruning, many methods use a loss with sparse regularization to produce structured sparsity. In this paper, we analyze these sparsity-training-based methods and find that regularizing the unpruned channels is unnecessary; moreover, it restricts the network's capacity, which leads to under-fitting. To solve this problem, we propose a novel pruning method, named MaskSparsity, with pruning-aware sparse regularization. MaskSparsity imposes fine-grained sparse regularization on the specific filters selected by a pruning mask, rather than on all the filters of the model. Before the fine-grained sparse regularization of MaskSparsity, the pruning mask can be obtained by many methods, such as running global sparse regularization. MaskSparsity achieves a 63.03% reduction in floating-point operations (FLOPs) on ResNet-110 by removing 60.34% of the parameters, with no top-1 accuracy loss on CIFAR-10. On ILSVRC-2012, MaskSparsity reduces FLOPs on ResNet-50 by more than 51.07%, with a loss of only 0.76% in top-1 accuracy. The code of this paper is released at https://github.com/CASIA-IVA-Lab/MaskSparsity. We have also integrated the code into a self-developed PyTorch pruning toolkit, named EasyPruner, at https://gitee.com/casia_iva_engineer/easypruner.
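The core idea described in the abstract lends itself to a short sketch: apply a sparse penalty only to the channels a pruning mask marks for removal, leaving the surviving channels unregularized. Below is a minimal, hypothetical PyTorch illustration of this pruning-aware regularization on batch-normalization scale factors. The function name masked_sparsity_loss, the prune_masks dictionary, and the reg_strength value are illustrative assumptions, not the paper's actual API; see the released code at the GitHub link above for the authors' implementation.

```python
import torch
import torch.nn as nn

def masked_sparsity_loss(model, prune_masks, reg_strength=1e-4):
    """L1 penalty on BN scale factors, applied only to channels marked
    for pruning (a hypothetical sketch, not the paper's exact API)."""
    reg = 0.0
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d) and name in prune_masks:
            mask = prune_masks[name]  # bool tensor of shape [C]; True = prune
            # Only the to-be-pruned channels are pushed toward zero;
            # the unpruned channels keep their full capacity.
            reg = reg + module.weight[mask].abs().sum()
    return reg_strength * reg

# Toy usage: one conv+BN block, with channels 0 and 2 selected for pruning.
model = nn.Sequential(nn.Conv2d(3, 4, 3, padding=1), nn.BatchNorm2d(4))
prune_masks = {"1": torch.tensor([True, False, True, False])}

criterion = nn.CrossEntropyLoss()
images = torch.randn(2, 3, 8, 8)
labels = torch.randint(0, 4, (2,))
logits = model(images).mean(dim=(2, 3))  # stand-in classification head
loss = criterion(logits, labels) + masked_sparsity_loss(model, prune_masks)
loss.backward()
```

The design choice this illustrates is the paper's central claim: the sparsity term touches only the masked channels, so the regularization cannot restrict the capacity of the channels that will survive pruning.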
Keywords: Deep learning, convolutional neural network (CNN), model compression and acceleration, network pruning, regularization
DOI: 10.1007/s11633-022-1353-0
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/50903
Collection: Academic Journals_Machine Intelligence Research
Affiliation:
1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
2. Institute of Automation, Chinese Academy of Sciences, Beijing 100049, China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation:
GB/T 7714: Nan-Fei Jiang. Pruning-aware Sparse Regularization for Network Pruning[J]. Machine Intelligence Research, 2023, 20(1): 109-120.
APA: Nan-Fei Jiang. (2023). Pruning-aware Sparse Regularization for Network Pruning. Machine Intelligence Research, 20(1), 109-120.
MLA: Nan-Fei Jiang. "Pruning-aware Sparse Regularization for Network Pruning". Machine Intelligence Research 20.1 (2023): 109-120.
Files in This Item:
File Name/Size: MIR-2022-03-097.pdf (1665 KB)
DocType: Journal article
Version: Published version
Access: Open access
License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.