Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks
Cheng, Jian1,2,3; Wu, Jiaxiang1,2,4; Leng, Cong1,2; Wang, Yuhang1,2,5; Hu, Qinghao1,2
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
ISSN: 2162-237X
Publication Date: 2018-10-01
Volume: 29; Issue: 10; Pages: 4730-4743
Article Type: Article
Abstract: We are witnessing an explosive development and widespread application of deep neural networks (DNNs) in various fields. However, DNN models, especially convolutional neural networks (CNNs), usually involve massive parameters and are computationally expensive, making them extremely dependent on high-performance hardware. This prohibits their further extension, e.g., to applications on mobile devices. In this paper, we present the quantized CNN, a unified approach to accelerate and compress convolutional networks. Guided by minimizing the approximation error of each layer's response, both fully connected and convolutional layers are carefully quantized. Inference can be carried out efficiently on the quantized network, with much lower memory and storage consumption. Quantitative evaluation on two publicly available benchmarks demonstrates the promising performance of our approach: with comparable classification accuracy, it achieves 4-6x acceleration and 15-20x compression. With our method, accurate image classification can even be carried out directly on mobile devices within one second.
Keywords: Acceleration and Compression; Convolutional Neural Network (CNN); Mobile Devices; Product Quantization
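The abstract describes the core technique only at a high level: fully connected and convolutional layers are quantized with product quantization so that most of the inference arithmetic collapses into codebook lookups. The sketch below illustrates that idea for a single fully connected layer; it is not the authors' implementation. It minimizes weight reconstruction error rather than the layer-response error the paper optimizes, the subspace count and codebook size are arbitrary illustrative values, and the helper names product_quantize and quantized_forward are invented for this sketch.

# Minimal product-quantization sketch for a fully connected layer (illustrative only,
# not the paper's algorithm: weight reconstruction error is minimized here).
import numpy as np
from sklearn.cluster import KMeans

def product_quantize(W, num_subspaces=4, num_codewords=16, seed=0):
    """Split the input dimension of W (in_dim x out_dim) into subspaces,
    learn one k-means codebook per subspace, and encode each output
    column by its nearest codeword in every subspace."""
    in_dim, out_dim = W.shape
    assert in_dim % num_subspaces == 0, "input dim must divide evenly"
    sub_dim = in_dim // num_subspaces
    codebooks = np.empty((num_subspaces, num_codewords, sub_dim))
    codes = np.empty((num_subspaces, out_dim), dtype=np.int32)
    for m in range(num_subspaces):
        # Sub-matrix holding the m-th slice of every output column.
        sub = W[m * sub_dim:(m + 1) * sub_dim, :].T           # (out_dim, sub_dim)
        km = KMeans(n_clusters=num_codewords, n_init=4, random_state=seed).fit(sub)
        codebooks[m] = km.cluster_centers_
        codes[m] = km.labels_
    return codebooks, codes

def quantized_forward(x, codebooks, codes):
    """Approximate y = x @ W: precompute the inner products between each
    input sub-vector and all codewords, then assemble the outputs by
    table lookup instead of full dot products."""
    num_subspaces, num_codewords, sub_dim = codebooks.shape
    y = np.zeros(codes.shape[1])
    for m in range(num_subspaces):
        x_sub = x[m * sub_dim:(m + 1) * sub_dim]
        table = codebooks[m] @ x_sub                          # (num_codewords,)
        y += table[codes[m]]                                  # one lookup per output unit
    return y

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((64, 128))
    x = rng.standard_normal(64)
    codebooks, codes = product_quantize(W)
    print("exact :", (x @ W)[:4])
    print("approx:", quantized_forward(x, codebooks, codes)[:4])

The table-lookup step in quantized_forward is where the savings come from: inner products with the codewords are computed once per subspace and reused for every output unit, and only the small codebooks plus integer codes need to be stored, which is consistent with the acceleration-and-compression behaviour the abstract summarizes.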
WOS Headings: Science & Technology; Technology
DOI: 10.1109/TNNLS.2017.2774288
WOS Keywords: LEARNING BINARY-CODES; ITERATIVE QUANTIZATION; PROCRUSTEAN APPROACH; RECOGNITION; FPGAS
Indexed By: SCI
Language: English
Funding: National Natural Science Foundation of China (61332016); Scientific Research Key Program of Beijing Municipal Commission of Education (KZ201610005012); Fund of Hubei Key Laboratory of Transportation Internet of Things; Fund of Jiangsu Key Laboratory of Big Data Analysis Technology
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Accession Number: WOS:000445351300015
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Citation Statistics: Times Cited (WOS): 68
Document Type: Journal Article
Item Identifier: http://ir.ia.ac.cn/handle/173211/27921
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems, Efficient Intelligent Computing and Learning
Corresponding Author: Cheng, Jian
Author Affiliations:
1. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2. University of Chinese Academy of Sciences, Beijing 100190, China
3. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing 100190, China
4. Machine Learning Group, Tencent AI Lab, Shenzhen 518000, China
5. UISEE Technology (Beijing) Ltd., Beijing 102402, China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation:
GB/T 7714
Cheng, Jian, Wu, Jiaxiang, Leng, Cong, et al. Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2018, 29(10): 4730-4743.
APA Cheng, Jian, Wu, Jiaxiang, Leng, Cong, Wang, Yuhang, & Hu, Qinghao. (2018). Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 29(10), 4730-4743.
MLA Cheng, Jian, et al. "Quantized CNN: A Unified Approach to Accelerate and Compress Convolutional Networks". IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 29.10 (2018): 4730-4743.
Files in This Item:
There are no files associated with this item.