CASIA OpenIR > National Laboratory of Pattern Recognition > Image and Video Analysis
Two-Step Quantization for Low-bit Neural Networks
Wang, Peisong1,2; Hu, Qinghao1,2; Zhang, Yifan1,2; Zhang, Chunjie1,2; Liu, Yang4; Cheng, Jian1,2,3
Conference Name: IEEE Conference on Computer Vision and Pattern Recognition
Conference Date: 2018.06.18-2018.06.22
Conference Place: Salt Lake City

Every bit matters in the hardware design of quantized neural networks. However, extremely low-bit representations usually cause a large accuracy drop. Thus, how to train extremely-low-bit neural networks with high accuracy is of central importance. Most existing network quantization approaches learn transformations (low-bit weights) and encodings (low-bit activations) simultaneously. This tight coupling makes the optimization problem difficult and prevents the network from learning optimal representations. In this paper, we propose a simple yet effective Two-Step Quantization (TSQ) framework that decomposes the network quantization problem into two steps: code learning, and transformation function learning based on the learned codes. For the first step, we propose a sparse quantization method for code learning. The second step can be formulated as a non-linear least-squares regression problem with low-bit constraints, which can be solved efficiently in an iterative manner. Extensive experiments on the CIFAR-10 and ILSVRC-12 datasets demonstrate that the proposed TSQ is effective and outperforms the state of the art by a large margin. In particular, for 2-bit activation and ternary weight quantization of AlexNet, the accuracy of TSQ drops by only about 0.5 points compared with the full-precision counterpart, outperforming the current state of the art by more than 5 points.
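The two-step decomposition described in the abstract can be sketched in code. The NumPy sketch below is illustrative only, not the authors' implementation: the quantile-based sparsity threshold, the uniform activation quantizer, and the alternating scale/code updates in step 2 are simplifying assumptions standing in for the paper's sparse quantization and its non-linear least-squares solver (which keeps the activation quantizer inside the loss).

```python
import numpy as np

def sparse_quantize_activations(a, sparsity=0.5, bits=2):
    """Step 1 (sketch): sparse code learning for activations.
    The smallest activations are forced to zero; the rest are
    uniformly quantized to 2**bits - 1 positive levels. The
    threshold rule and uniform quantizer are illustrative
    stand-ins for the paper's sparse quantization method."""
    a = np.maximum(a, 0.0)               # assume post-ReLU activations
    thresh = np.quantile(a, sparsity)    # zero out the smallest fraction
    levels = 2 ** bits - 1
    scale = a.max() / levels if a.max() > 0 else 1.0
    q = np.round(a / scale)              # uniform levels 0 .. levels
    q[a < thresh] = 0.0                  # enforce sparsity
    return q * scale

def ternary_weights_lsq(w, X, y, iters=20):
    """Step 2 (sketch): fit alpha * b, codes b in {-1, 0, +1},
    so that X @ (alpha * b) approximates target responses y.
    Alternates a closed-form least-squares update of the scalar
    alpha with a thresholding update of the codes; the paper
    instead solves a non-linear regression with low-bit
    constraints, also in an iterative manner."""
    alpha = np.abs(w).mean()
    b = np.sign(w) * (np.abs(w) > 0.5 * alpha)
    for _ in range(iters):
        z = X @ b
        denom = (z * z).sum()
        if denom > 0:                    # alpha = <y, Xb> / ||Xb||^2
            alpha = (y * z).sum() / denom
        # project w onto alpha * {-1, 0, +1} (heuristic code update)
        b = np.clip(np.round(w / alpha), -1, 1)
    return alpha, b
```

In the actual TSQ framework this fitting is done per layer, using the learned low-bit activation codes as inputs and the full-precision network's responses as regression targets.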

Keywords: Convolutional Neural Networks; Network Quantization; Ternary Quantization
Indexed By: EI
Document Type: Conference Paper
Affiliation:
1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. Center for Excellence in Brain Science and Intelligence Technology
4. Alibaba Group
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Wang, Peisong, Hu, Qinghao, Zhang, Yifan, et al. Two-Step Quantization for Low-bit Neural Networks[C], 2018.
Files in This Item:
TSQ.pdf (105 KB), Conference Paper, Open Access, License: CC BY-NC-SA
Related Services
Google Scholar
Similar articles in Google Scholar
[Wang, Peisong]'s Articles
[Hu, Qinghao]'s Articles
[Zhang, Yifan]'s Articles
Baidu academic
Similar articles in Baidu academic
[Wang, Peisong]'s Articles
[Hu, Qinghao]'s Articles
[Zhang, Yifan]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Wang, Peisong]'s Articles
[Hu, Qinghao]'s Articles
[Zhang, Yifan]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.