Unsupervised Network Quantization via Fixed-Point Factorization
Wang, Peisong (1,2); He, Xiangyu (1,2); Chen, Qiang (1,2); Cheng, Anda (1,2); Liu, Qingshan (3); Cheng, Jian (1,2)
Journal: IEEE Transactions on Neural Networks and Learning Systems
Year: 2020
Issue: 1, Pages: 1
Abstract

The deep neural network (DNN) has achieved remarkable performance in a wide range of applications at the cost of huge memory and computational complexity. Fixed-point network quantization has emerged as a popular acceleration and compression method but still suffers from severe performance degradation when extremely low-bit quantization is used. Moreover, current fixed-point quantization methods rely heavily on supervised retraining with large amounts of labeled training data, while labeled data are hard to obtain in real-world applications. In this article, we propose an efficient framework, namely, the fixed-point factorized network (FFN), to turn all weights into ternary values, i.e., {-1, 0, 1}. We highlight that the proposed FFN framework can achieve negligible degradation even without any supervised retraining on labeled data. Note that the activations can be easily quantized into an 8-bit format; thus, the resulting networks involve only low-bit fixed-point additions, which are significantly more efficient than 32-bit floating-point multiply-accumulate operations (MACs). Extensive experiments on large-scale ImageNet classification and object detection on MS COCO show that the proposed FFN can achieve more than 20x compression and remove most of the multiply operations with comparable accuracy. Code is available on GitHub at https://github.com/wps712/FFN.
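The idea behind the fixed-point factorization can be illustrated with a greedy semidiscrete-style decomposition W ≈ X diag(d) Yᵀ, where X and Y are ternary ({-1, 0, 1}) and d holds nonnegative scales. The sketch below is a simplified illustration under assumptions of my own (the threshold-based `ternarize` heuristic, the alternating update loop, and the rank-one peeling order are not taken from the paper):

```python
import numpy as np

def ternarize(z):
    # Heuristic ternarization (an assumption, not the paper's rule):
    # keep the sign of entries whose magnitude exceeds 0.7 * mean(|z|),
    # zero out the rest.
    t = np.sign(z)
    t[np.abs(z) < 0.7 * np.abs(z).mean()] = 0.0
    return t

def sdd_approx(W, k, n_iter=10):
    """Greedy rank-one, semidiscrete-style decomposition:
    W ~= X @ diag(d) @ Y.T with ternary X, Y and nonnegative scales d."""
    m, n = W.shape
    R = W.astype(float).copy()                     # residual
    X, Y, d = np.zeros((m, k)), np.zeros((n, k)), np.zeros(k)
    for j in range(k):
        # seed y from the residual row with the largest L1 norm
        y = ternarize(R[np.argmax(np.abs(R).sum(axis=1))])
        for _ in range(n_iter):                    # alternating ternary updates
            x = ternarize(R @ y)
            if not x.any():
                break
            y = ternarize(R.T @ x)
            if not y.any():
                break
        nx, ny = (x != 0).sum(), (y != 0).sum()
        if nx == 0 or ny == 0:
            break
        # least-squares scale for the rank-one term d * x @ y.T
        d[j] = max(float(x @ R @ y) / (nx * ny), 0.0)
        X[:, j], Y[:, j] = x, y
        R -= d[j] * np.outer(x, y)                 # peel off this term
    return X, d, Y

rng = np.random.default_rng(0)
W = rng.standard_normal((32, 32))
X, d, Y = sdd_approx(W, k=16)
rel_err = np.linalg.norm(W - X @ np.diag(d) @ Y.T) / np.linalg.norm(W)
```

At inference time, multiplying by a ternary factor needs only additions and subtractions of the (quantized) activations, which is what removes the floating-point multiplies mentioned in the abstract.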

Keywords: Acceleration, compression, deep neural networks (DNNs), fixed-point quantization, unsupervised quantization
Indexed in: SCI
Subject classification: AI Chips and Intelligent Computing
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/40616
Collections: Laboratory of Cognition and Decision Intelligence for Complex Systems, Efficient Intelligent Computing and Learning; Zidong Taichu Large Model Research Center, Image and Video Analysis
Corresponding author: Liu, Qingshan
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
3. Nanjing University of Information Science and Technology
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714: Wang, Peisong, He, Xiangyu, Chen, Qiang, et al. Unsupervised Network Quantization via Fixed-Point Factorization[J]. IEEE Transactions on Neural Networks and Learning Systems, 2020(1): 1.
APA: Wang, Peisong, He, Xiangyu, Chen, Qiang, Cheng, Anda, Liu, Qingshan, & Cheng, Jian. (2020). Unsupervised Network Quantization via Fixed-Point Factorization. IEEE Transactions on Neural Networks and Learning Systems, (1), 1.
MLA: Wang, Peisong, et al. "Unsupervised Network Quantization via Fixed-Point Factorization." IEEE Transactions on Neural Networks and Learning Systems 1 (2020): 1.
Files in this item:
TNNLS-FFN-formal.pdf (1998 KB), journal article, author's accepted manuscript, open access, licensed under CC BY-NC-SA