Pruning-aware Sparse Regularization for Network Pruning
Jiang NF(江南飞); Zhao X(赵旭); Zhao CY(赵朝阳); An YQ(安永琪); Tang M(唐明); Wang JQ(王金桥)
Journal: Machine Intelligence Research
Publication date: 2023-02
Volume: 20, Pages: 109–120
Abstract

Structured neural network pruning aims to remove redundant channels in deep convolutional neural networks (CNNs) by pruning the filters of least importance to the final output accuracy. To reduce the performance degradation after pruning, many methods utilize a loss with sparse regularization to produce structured sparsity. In this paper, we analyze these sparsity-training-based methods and find that the regularization of unpruned channels is unnecessary; moreover, it restricts the network's capacity, which leads to under-fitting. To solve this problem, we propose a novel pruning method, named MaskSparsity, with pruning-aware sparse regularization. MaskSparsity imposes the fine-grained sparse regularization on the specific filters selected by a pruning mask, rather than on all the filters of the model. Before the fine-grained sparse regularization of MaskSparsity, many methods can be used to obtain the pruning mask, such as running the global sparse regularization. MaskSparsity achieves a 63.03% reduction in floating-point operations (FLOPs) on ResNet-110 by removing 60.34% of the parameters, with no top-1 accuracy loss on CIFAR-10. On ILSVRC-2012, MaskSparsity reduces more than 51.07% of the FLOPs on ResNet-50, with only a 0.76% loss in top-1 accuracy. The code of this paper is released at https://github.com/CASIA-IVA-Lab/MaskSparsity. We have also integrated the code into a self-developed PyTorch pruning toolkit, named EasyPruner, at https://gitee.com/casia_iva_engineer/easypruner.
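The core idea described above, penalizing only the channels already selected for pruning rather than every filter, can be illustrated with a short PyTorch-style sketch. This is not the authors' released implementation (see the MaskSparsity and EasyPruner repositories linked in the abstract); the choice of penalizing batch-norm scaling factors, the mask format, the function name, and the sparsity_lambda coefficient are illustrative assumptions.

```python
import torch
import torch.nn as nn

def masked_sparse_regularization(model, prune_masks, sparsity_lambda=1e-4):
    """Illustrative sketch of pruning-aware sparse regularization.

    Instead of penalizing the scaling factors of all channels (global
    sparsity training), an L1 penalty is applied only to the channels that
    the pruning mask marks for removal, leaving the kept channels
    unregularized.

    prune_masks: dict mapping a BatchNorm2d module name to a boolean tensor
    that is True where the channel is selected for pruning (an assumed
    format for this sketch).
    """
    penalty = 0.0
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d) and name in prune_masks:
            mask = prune_masks[name].to(module.weight.device)
            # L1 penalty only on the scaling factors of the to-be-pruned channels.
            penalty = penalty + module.weight[mask].abs().sum()
    return sparsity_lambda * penalty

# Usage inside a training step: task loss plus the masked sparsity term.
# loss = criterion(model(images), labels) + masked_sparse_regularization(model, prune_masks)
# loss.backward()
```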

Indexed by: SCIE
Sub-direction classification (seven major research directions): Computational Intelligence
State Key Laboratory planned research direction: Explainable Artificial Intelligence
Associated dataset requiring deposit:
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/51512
Collection: Zidong Taichu Large Model Research Center_Image and Video Analysis
Zidong Taichu Large Model Research Center
Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
Recommended citation:
GB/T 7714
Jiang NF, Zhao X, Zhao CY, et al. Pruning-aware Sparse Regularization for Network Pruning[J]. Machine Intelligence Research, 2023, 20: 109–120.
APA: Jiang NF, Zhao X, Zhao CY, An YQ, Tang M, & Wang JQ. (2023). Pruning-aware Sparse Regularization for Network Pruning. Machine Intelligence Research, 20, 109–120.
MLA: Jiang NF, et al. "Pruning-aware Sparse Regularization for Network Pruning". Machine Intelligence Research 20 (2023): 109–120.
Files in this item:
File name/size: s11633-022-1353-0.pdf (1180KB)
Document type: Journal article
Version: Author's accepted manuscript
Access: Open access
License: CC BY-NC-SA