Two-Step Quantization for Low-bit Neural Networks
Wang, Peisong [1,2]; Hu, Qinghao [1,2]; Zhang, Yifan [1,2]; Zhang, Chunjie [1,2]; Liu, Yang [4]; Cheng, Jian [1,2,3]
2018-06
Conference: IEEE Conference on Computer Vision and Pattern Recognition
Conference dates: 2018.06.18-2018.06.22
Conference location: Salt Lake City
Abstract

Every bit matters in the hardware design of quantized neural networks. However, extremely-low-bit representations usually cause a large accuracy drop, so training extremely-low-bit neural networks with high accuracy is of central importance. Most existing network quantization approaches learn transformations (low-bit weights) and encodings (low-bit activations) simultaneously. This tight coupling makes the optimization problem difficult and prevents the network from learning optimal representations. In this paper, we propose a simple yet effective Two-Step Quantization (TSQ) framework that decomposes the network quantization problem into two steps: code learning, and transformation function learning based on the learned codes. For the first step, we propose a sparse quantization method for code learning. The second step can be formulated as a non-linear least-squares regression problem with low-bit constraints, which can be solved efficiently in an iterative manner. Extensive experiments on the CIFAR-10 and ILSVRC-12 datasets demonstrate that the proposed TSQ is effective and outperforms the state-of-the-art by a large margin. In particular, for 2-bit activation and ternary weight quantization of AlexNet, the accuracy of our TSQ drops only about 0.5 points compared with the full-precision counterpart, outperforming the current state-of-the-art by more than 5 points.
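The two steps described in the abstract can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the fixed sparsity ratio, the uniform activation quantizer, and the simple alternating solver for ternary weights are all simplifying assumptions standing in for the paper's learned quantizers and constrained regression solver.

```python
import numpy as np

def sparse_quantize_activations(a, bits=2, sparsity=0.5):
    """Step 1 (sketch): sparse code learning.
    Zero out the smallest activations, then uniformly quantize the
    survivors to 2**bits - 1 positive levels. (The paper derives the
    quantizer from the activation distribution; the fixed sparsity
    ratio here is an assumption for illustration.)"""
    a = np.maximum(a, 0.0)                     # post-ReLU activations
    thresh = np.quantile(a, sparsity)          # sparsify small values
    mask = a > thresh
    levels = 2 ** bits - 1
    step = a[mask].max() / levels if mask.any() else 1.0
    q = np.zeros_like(a)
    q[mask] = np.round(a[mask] / step) * step  # uniform low-bit code
    return q

def ternarize_weights(w, n_iter=20):
    """Step 2 (sketch): fit alpha * t with t in {-1, 0, +1} to the
    full-precision weights w by alternating least squares, a much
    simplified stand-in for the paper's iterative solver for the
    low-bit-constrained regression problem."""
    alpha = np.abs(w).mean()
    for _ in range(n_iter):
        # Fix alpha: per element, t = sign(w) minimizes the squared
        # error exactly when |w| > alpha / 2, else t = 0.
        t = np.where(np.abs(w) > alpha / 2, np.sign(w), 0.0)
        nz = t != 0
        if not nz.any():
            break
        # Fix t: the optimal scale has a closed form (least squares).
        alpha = (w[nz] * t[nz]).sum() / (t[nz] ** 2).sum()
    return alpha, t
```

In the actual framework the second step regresses quantized activations onto the next layer's pre-activations rather than onto the weights themselves, but the alternating fix-codes/fix-scale structure is the same.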


Keywords: Convolutional Neural Networks; Network Quantization; Ternary Quantization
Indexed by: EI
Document type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/20898
Collection: National Laboratory of Pattern Recognition - Image and Video Analysis
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. Center for Excellence in Brain Science and Intelligence Technology
4. Alibaba Group
Recommended citation (GB/T 7714):
Wang, Peisong, Hu, Qinghao, Zhang, Yifan, et al. Two-Step Quantization for Low-bit Neural Networks[C], 2018.
Files in this item: TSQ.pdf (105KB), conference paper, open access, CC BY-NC-SA
 

Unless otherwise specified, all content in this system is protected by copyright, with all rights reserved.