Hardware acceleration of CNN with one-hot quantization of weights and activations
Li, Gang (1,2); Wang, Peisong (1,2); Liu, Zejian (1,2); Leng, Cong (1,2); Cheng, Jian (1,2)
2020
Conference Name: Design, Automation & Test in Europe Conference & Exhibition (DATE)
Conference Date: 2020-3
Conference Location: Grenoble, France
Abstract

In this paper, we propose a novel one-hot representation for weights and activations in CNN models and demonstrate its benefits for hardware accelerator design. Specifically, rather than merely reducing the bitwidth, we quantize both weights and activations into n-bit integers that contain only one non-zero bit per value. In this way, the massive multiply-accumulate operations (MACs) become additions of powers of two, which can be calculated efficiently with histogram-based computations. Experiments on the ImageNet classification task show that our proposed One-Hot Networks (OHN) achieve accuracy comparable to conventional fixed-point networks. As case studies, we evaluate the efficacy of the one-hot data representation on two state-of-the-art CNN accelerators implemented on FPGA. Our preliminary results show that resource savings of 50% and 68.5% can be achieved on DaDianNao and Laconic, respectively. In addition, the one-hot-optimized Laconic achieves a further average speedup of 4.94× on AlexNet.

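The central computational idea can be illustrated with a short sketch. The Python snippet below is not the authors' implementation; the quantizer, exponent range, and log-domain rounding rule are illustrative assumptions. It quantizes each value to a signed power of two (one non-zero bit), so every product w_i * a_i becomes ±2^(e_w + e_a), and the accumulation reduces to counting exponent sums in a histogram followed by one shift-and-add per bucket.

```python
import numpy as np

def one_hot_quantize(x, n_bits=4):
    """Quantize each value to the nearest signed power of two (one non-zero bit).
    Illustrative sketch only: the exponent range and log-domain rounding are
    assumptions, not the paper's exact quantizer."""
    sign = np.sign(x)
    mag = np.maximum(np.abs(x), 1e-12)                 # avoid log2(0)
    lo, hi = -(2 ** (n_bits - 1)), 2 ** (n_bits - 1) - 1
    exp = np.clip(np.round(np.log2(mag)), lo, hi)      # n-bit exponent code
    return sign.astype(int), exp.astype(int)

def histogram_dot(w, a, n_bits=4):
    """Dot product of one-hot-quantized vectors: each product is +/-2^(ew+ea),
    so we accumulate signed counts per exponent sum (a histogram) and finish
    with one shift-and-add per histogram bucket."""
    sw, ew = one_hot_quantize(w, n_bits)
    sa, ea = one_hot_quantize(a, n_bits)
    hist = {}
    for e, s in zip(ew + ea, sw * sa):                 # product exponents and signs
        hist[e] = hist.get(e, 0) + s
    return sum(count * 2.0 ** e for e, count in hist.items())

rng = np.random.default_rng(0)
w, a = rng.standard_normal(128), rng.random(128)
print(histogram_dot(w, a))   # histogram-based shift-add result
print(np.dot(w, a))          # full-precision reference for rough comparison
```

In hardware terms, the same reduction lets an accelerator replace multipliers with counters indexed by exponent sum plus a final shift-and-add stage, which is the source of the resource savings reported in the abstract.
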
Indexed by: EI
Sub-direction Classification: Image and Video Processing and Analysis
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/40617
Collections: Zidong Taichu Large Model Research Center - Image and Video Analysis
Laboratory of Cognition and Decision for Complex Systems - Efficient Intelligent Computing and Learning
Corresponding Author: Cheng, Jian
Author Affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2. University of Chinese Academy of Sciences
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Li, Gang, Wang, Peisong, Liu, Zejian, et al. Hardware acceleration of CNN with one-hot quantization of weights and activations[C], 2020.
Files in This Item:
No files associated with this item.