Knowledge Commons of Institute of Automation, CAS
Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression
Yufan Liu 1,2; Jiajiong Cao 5; Bing Li 1,4; Weiming Hu 1,2,3; Stephen Maybank 6
Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI)
Publication Date | 2022-06
Volume | 45
Issue | 3
Pages | 3378-3395
Abstract | Deep learning usually achieves excellent performance at the expense of heavy computation. Recently, model compression has become a popular way of reducing this computation. Compression can be achieved using knowledge distillation or filter pruning. Knowledge distillation improves the accuracy of a lightweight network, while filter pruning removes redundant architecture from a cumbersome network. They are two different ways of achieving model compression, but few methods consider both of them simultaneously. In this paper, we revisit model compression and define two attributes of a model: distillability and sparsability, which reflect how much useful knowledge can be distilled and how high a pruning ratio can be achieved, respectively. Guided by our observations and considering both accuracy and model size, a dynamic distillability-and-sparsability learning framework (DDSL) is introduced for model compression. DDSL consists of a teacher, a student and a dean. Knowledge is distilled from the teacher to guide the student. The dean controls the training process by dynamically adjusting the distillation supervision and the sparsity supervision in a meta-learning framework. An alternating direction method of multipliers (ADMM)-based knowledge distillation-with-pruning (KDP) joint optimization algorithm is proposed to train the model. Extensive experimental results show that DDSL outperforms 24 state-of-the-art methods, including both knowledge distillation and filter pruning methods.
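The abstract combines two compression ingredients: a distillation loss that transfers knowledge from teacher to student, and filter pruning that removes low-importance weights. The sketch below illustrates the two building blocks in their common generic forms (a temperature-softened KL distillation loss and a magnitude-based pruning mask); it is not the paper's DDSL/KDP algorithm, and the function names are illustrative only.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about relative class similarities.
    scaled = [z / temperature for z in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(teacher_logits, student_logits, temperature=4.0):
    # Generic distillation loss: KL(teacher || student) on
    # temperature-softened outputs, scaled by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return (temperature ** 2) * kl

def prune_mask(weights, keep_ratio=0.5):
    # Magnitude-based pruning: keep the largest-|w| fraction of weights
    # and zero out the rest (a simple proxy for filter importance).
    k = max(1, int(len(weights) * keep_ratio))
    threshold = sorted((abs(w) for w in weights), reverse=True)[k - 1]
    return [1 if abs(w) >= threshold else 0 for w in weights]
```

When teacher and student logits agree, `kd_loss` is zero; it grows as the student's softened distribution diverges from the teacher's. The paper's contribution is to weight such supervision signals dynamically (via the "dean") rather than with fixed coefficients.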
Seven Major Directions (Sub-direction) | Image and Video Processing and Analysis
State Key Laboratory Planned Direction | Intelligent Computing and Learning
Associated Dataset Requiring Deposit | No
Document Type | Journal Article
Identifier | http://ir.ia.ac.cn/handle/173211/51487
Collection | State Key Laboratory of Multimodal Artificial Intelligence Systems_Video Content Security
Affiliations | 1. Institute of Automation, Chinese Academy of Sciences 2. School of Artificial Intelligence, University of Chinese Academy of Sciences 3. CAS Center for Excellence in Brain Science and Intelligence Technology 4. People AI, Inc. 5. Ant Group 6. Department of Computer Science and Information Systems, Birkbeck College, University of London
Recommended Citation (GB/T 7714) | Yufan Liu, Jiajiong Cao, Bing Li, et al. Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 2022, 45(3): 3378-3395.
APA | Yufan Liu, Jiajiong Cao, Bing Li, Weiming Hu, & Stephen Maybank. (2022). Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI), 45(3), 3378-3395.
MLA | Yufan Liu, et al. "Learning to Explore Distillability and Sparsability: A Joint Framework for Model Compression". IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) 45.3 (2022): 3378-3395.
Files in This Item:
File Name/Size | Document Type | Version | Access | License
TPAMI2022_DDSL_LIU.p (3314KB) | Journal Article | Author's Accepted Manuscript | Open Access | CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.