Knowledge Commons of Institute of Automation, CAS
Title | Pruning-aware Sparse Regularization for Network Pruning |
Author | Nan-Fei Jiang1,2 |
Source Publication | Machine Intelligence Research
ISSN | 2731-538X |
Publication Year | 2023 |
Volume | 20 |
Issue | 1 |
Pages | 109-120 |
Abstract | Structural neural network pruning aims to remove redundant channels in deep convolutional neural networks (CNNs) by pruning the filters of less importance to the final output accuracy. To reduce the degradation of performance after pruning, many methods utilize a loss with sparse regularization to produce structured sparsity. In this paper, we analyze these sparsity-training-based methods and find that the regularization of unpruned channels is unnecessary. Moreover, it restricts the network's capacity, which leads to under-fitting. To solve this problem, we propose a novel pruning method, named MaskSparsity, with pruning-aware sparse regularization. MaskSparsity imposes fine-grained sparse regularization on the specific filters selected by a pruning mask, rather than on all the filters of the model. Before the fine-grained sparse regularization of MaskSparsity, we can use many methods to obtain the pruning mask, such as running global sparse regularization. MaskSparsity achieves a 63.03% floating-point operations (FLOPs) reduction on ResNet-110 by removing 60.34% of the parameters, with no top-1 accuracy loss on CIFAR-10. On ILSVRC-2012, MaskSparsity reduces more than 51.07% of FLOPs on ResNet-50, with only a 0.76% loss in top-1 accuracy. The code of this paper is released at https://github.com/CASIA-IVA-Lab/MaskSparsity. We have also integrated the code into a self-developed PyTorch pruning toolkit, named EasyPruner, at https://gitee.com/casia_iva_engineer/easypruner. |
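The abstract's core idea can be illustrated with a minimal sketch: apply an L1 sparsity penalty only to the channel scaling factors selected by a pruning mask, leaving unpruned channels unregularized. This is a hypothetical, dependency-free illustration of the general idea, not the paper's actual implementation (which is available at the GitHub link above); the function names and the choice of per-channel scales (e.g. batch-norm gammas) are assumptions.

```python
# Hypothetical sketch of pruning-aware (masked) sparse regularization.
# Only channels flagged by the pruning mask receive the L1 penalty,
# so the capacity of the channels to be kept is not restricted.

def masked_l1_penalty(scales, prune_mask, strength=1e-4):
    """L1 penalty over per-channel scaling factors (e.g. BN gammas),
    applied only to channels the mask marks for pruning."""
    return strength * sum(abs(s) for s, keep_out in zip(scales, prune_mask) if keep_out)

def masked_l1_subgrad(scales, prune_mask, strength=1e-4):
    """Subgradient of the masked penalty: sign(scale) on masked
    channels, zero elsewhere (unpruned channels are untouched)."""
    sign = lambda x: (x > 0) - (x < 0)
    return [strength * sign(s) if keep_out else 0.0
            for s, keep_out in zip(scales, prune_mask)]
```

In training, this penalty (and its subgradient) would be added to the task loss so that only the masked channels are driven toward zero before they are physically removed.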
Keyword | Deep learning, convolutional neural network (CNN), model compression and acceleration, network pruning, regularization |
DOI | 10.1007/s11633-022-1353-0 |
Document Type | Journal article |
Identifier | http://ir.ia.ac.cn/handle/173211/50903 |
Collection | Academic Journals: Machine Intelligence Research |
Affiliation | 1. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China; 2. Institute of Automation, Chinese Academy of Sciences, Beijing 100049, China |
First Author Affiliation | Institute of Automation, Chinese Academy of Sciences |
Recommended Citation GB/T 7714 | Nan-Fei Jiang. Pruning-aware Sparse Regularization for Network Pruning[J]. Machine Intelligence Research, 2023, 20(1): 109-120. |
APA | Nan-Fei Jiang. (2023). Pruning-aware Sparse Regularization for Network Pruning. Machine Intelligence Research, 20(1), 109-120. |
MLA | Nan-Fei Jiang. "Pruning-aware Sparse Regularization for Network Pruning". Machine Intelligence Research 20.1 (2023): 109-120. |
Files in This Item:
File Name/Size | DocType | Version | Access | License |
MIR-2022-03-097.pdf (1665 KB) | Journal article | Published version | Open access | CC BY-NC-SA |
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.