DPFPS: Dynamic and Progressive Filter Pruning for Compressing Convolutional Neural Networks from Scratch
Ruan, Xiaofeng1,2; Liu, Yufan1,2; Li, Bing1,4; Yuan, Chunfeng1; Hu, Weiming1,2,3
2021-05-18
Conference Name: The Thirty-Fifth AAAI Conference on Artificial Intelligence
Conference Date: 2021.2.2-2021.2.9
Conference Place: Virtual conference
Abstract

Filter pruning is a commonly used method for compressing Convolutional Neural Networks (ConvNets) due to its hardware friendliness and flexibility. However, existing methods mostly require a cumbersome multi-stage procedure, which introduces many extra hyper-parameters and training epochs, because sparsity and pruning stages alone cannot achieve satisfactory performance. Besides, many works do not consider how the pruning ratio should differ across layers. To overcome these limitations, we propose a novel dynamic and progressive filter pruning (DPFPS) scheme that directly learns a structured sparse network from scratch. In particular, DPFPS imposes a new structured sparsity-inducing regularization specifically upon the expected pruning parameters in a dynamic sparsity manner. The dynamic sparsity scheme determines the sparsity allocation ratios of different layers, and a Taylor-series-based channel sensitivity criterion is presented to identify the expected pruning parameters. Moreover, we increase the structured sparsity-inducing penalty progressively, which helps the model become sparse gradually instead of forcing it to be sparse from the beginning. Our method solves the pruning-ratio-based optimization problem with an iterative soft-thresholding algorithm (ISTA) under dynamic sparsity. At the end of training, we only need to remove the redundant parameters, with no further stages such as fine-tuning. Extensive experimental results show that the proposed method is competitive with 11 state-of-the-art methods on both small-scale and large-scale datasets (i.e., CIFAR and ImageNet). Specifically, on ImageNet we prune 44.97% of the FLOPs of ResNet-101 while even increasing Top-5 accuracy by 0.12%. Our pruned models and code are released at https://github.com/taoxvzi/DPFPS.
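To make the pipeline in the abstract concrete, the following minimal PyTorch sketch (written from the abstract's description, not from the released code at the GitHub link above) illustrates the three central ingredients on a toy ConvNet: a first-order Taylor sensitivity score per channel, an ISTA-style soft-thresholding step applied only to the channels expected to be pruned, and a sparsity penalty that grows progressively over training. The fixed per-layer target_ratio, the choice of BatchNorm scaling factors as the pruning parameters, and the linear penalty schedule are illustrative assumptions; in DPFPS itself the layer-wise ratios are allocated dynamically.

# Minimal sketch of the training loop described in the abstract.
# Assumptions (not taken from the paper's code): sparsity is induced on
# BatchNorm scaling factors (gamma), the per-layer budget `target_ratio`
# is fixed, and the penalty ramps up linearly over the first half of training.
import torch
import torch.nn as nn

model = nn.Sequential(                       # toy ConvNet stand-in
    nn.Conv2d(3, 16, 3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
criterion = nn.CrossEntropyLoss()
total_epochs, lr, max_penalty, target_ratio = 10, 0.1, 1e-4, 0.3

for epoch in range(total_epochs):
    # Progressive penalty: sparsify gradually instead of from the first epoch.
    penalty = max_penalty * min(1.0, epoch / (0.5 * total_epochs))
    for _ in range(5):                       # dummy batches for illustration
        x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
        optimizer.zero_grad()
        criterion(model(x), y).backward()
        optimizer.step()
        for bn in (m for m in model.modules() if isinstance(m, nn.BatchNorm2d)):
            # First-order Taylor sensitivity per channel: |gamma * dL/dgamma|.
            imp = (bn.weight * bn.weight.grad).abs().detach()
            k = max(int(target_ratio * imp.numel()), 1)
            mask = imp <= imp.kthvalue(k).values   # channels expected to be pruned
            with torch.no_grad():
                g = bn.weight
                # ISTA proximal step: soft-threshold only the selected gammas.
                g[mask] = torch.sign(g[mask]) * (g[mask].abs() - lr * penalty).clamp(min=0.0)

# After training, channels whose gamma has been driven to zero can be removed
# outright, matching the abstract's claim that no fine-tuning stage is needed.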

Indexed By: EI
Language: English
Sub-direction Classification: Machine Learning
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/44803
Collection: National Laboratory of Pattern Recognition - Video Content Security
Corresponding Author: Li, Bing
Affiliation:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. CAS Center for Excellence in Brain Science and Intelligence Technology
4. PeopleAI Inc.
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714):
Ruan Xiaofeng, Liu Yufan, Li Bing, et al. DPFPS: Dynamic and Progressive Filter Pruning for Compressing Convolutional Neural Networks from Scratch[C], 2021.
Files in This Item:
File Name: DPFPS_Dynamic and Progressive Filter Pruning for Compressing Convolutional Neural Networks from Scratch.pdf (652 KB)
Format: Adobe PDF
DocType: Conference Paper
Access: Open Access (CC BY-NC-SA)

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.