Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA
Li, Gang (1,2); Liu, Zejian (1,3); Li, Fanrong (1,3); Cheng, Jian (1,4,5)
Published in: IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS
ISSN: 0278-0070
Publication Date: 2022-05-01
Volume: 41, Issue: 5, Pages: 1436-1447
Abstract

Deep convolutional neural networks have achieved remarkable progress in recent years. However, the large volume of intermediate results generated during inference poses a significant challenge to the accelerator design for resource-constrained field-programmable gate array (FPGA). Due to the limited on-chip storage, partial results of intermediate layers are frequently transferred back and forth between on-chip memory and off-chip DRAM, leading to a nonnegligible increase in latency and energy consumption. In this article, we propose block convolution, a hardware-friendly, simple, yet efficient convolution operation that can completely avoid the off-chip transfer of intermediate feature maps at runtime. The fundamental idea of block convolution is to eliminate the dependency of feature map tiles in the spatial dimension when spatial tiling is used, which is realized by splitting a feature map into independent blocks so that convolution can be performed separately on individual blocks. We conduct extensive experiments to demonstrate the efficacy of the proposed block convolution on both the algorithm side and the hardware side. Specifically, we evaluate block convolution on: 1) VGG-16, ResNet-18, ResNet-50, and MobileNet-V1 for the ImageNet classification task; 2) SSD and FPN for the COCO object detection task; and 3) VDSR for the Set5 single-image superresolution task. Experimental results demonstrate that comparable or higher accuracy can be achieved with block convolution. We also showcase two CNN accelerators via algorithm/hardware co-design based on block convolution on memory-limited FPGAs, and evaluation shows that both accelerators substantially outperform the baseline without off-chip transfer of intermediate feature maps.
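The core idea described above — splitting a feature map into spatially independent blocks, zero-padding each block on its own, and convolving the blocks separately so that no cross-block data (and hence no off-chip transfer of intermediate tiles) is needed — can be illustrated with a minimal single-channel NumPy sketch. This is an illustrative reconstruction under stated assumptions, not the authors' implementation; the function names (`conv2d`, `block_conv2d`) and the block size are hypothetical, and the boundary behavior intentionally differs from standard "same" padding, which is why the paper evaluates the accuracy impact.

```python
import numpy as np

def conv2d(x, k):
    # Naive "valid" 2-D convolution (cross-correlation) on one channel.
    kh, kw = k.shape
    H, W = x.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def block_conv2d(x, k, block=4):
    # Block convolution sketch: split the map into block-by-block tiles,
    # zero-pad each tile INDEPENDENTLY (instead of padding the whole map),
    # convolve each tile, and stitch the outputs back together.
    # No tile ever reads pixels from a neighboring tile.
    pad = k.shape[0] // 2
    H, W = x.shape
    rows = []
    for bi in range(0, H, block):
        cols = []
        for bj in range(0, W, block):
            tile = x[bi:bi + block, bj:bj + block]
            tile = np.pad(tile, pad)       # per-tile zero padding
            cols.append(conv2d(tile, k))   # no cross-tile dependency
        rows.append(np.concatenate(cols, axis=1))
    return np.concatenate(rows, axis=0)
```

With a 3x3 kernel and 4x4 blocks, the output has the same spatial size as a standard same-padded convolution; pixels in the interior of a block match the standard result exactly, while pixels adjacent to a block boundary see zeros instead of their true neighbors — the approximation whose accuracy the paper studies.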

Keywords: Convolution; Field programmable gate arrays; System-on-chip; Task analysis; Random access memory; Tensors; Memory management; Block convolution; convolutional neural network (CNN) accelerator; field-programmable gate array (FPGA); memory efficient; off-chip transfer
DOI: 10.1109/TCAD.2021.3082868
Indexed by: SCI
Language: English
Funding Project: National Natural Science Foundation of China [61972396]; National Key Research and Development Program of China [2020AAA0103402]; Strategic Priority Research Program of Chinese Academy of Sciences [XDA27040300]; Strategic Priority Research Program of Chinese Academy of Sciences [XDB32050200]
Funder: National Natural Science Foundation of China; National Key Research and Development Program of China; Strategic Priority Research Program of Chinese Academy of Sciences
WOS Research Area: Computer Science; Engineering
WOS Subject: Computer Science, Hardware & Architecture; Computer Science, Interdisciplinary Applications; Engineering, Electrical & Electronic
WOS ID: WOS:000784196800022
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Directions): AI Chips and Intelligent Computing
State Key Laboratory Planning Direction: Other
Associated Dataset Requiring Deposit: (not specified)
Citation Statistics
Times Cited (WOS): 20
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/48339
Collection: Laboratory of Cognition and Decision Intelligence for Complex Systems / Efficient Intelligent Computing and Learning
Corresponding Author: Cheng, Jian
作者单位1.Chinese Acad Sci, Inst Automat, Beijing 100190, Peoples R China
2.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
3.Univ Chinese Acad Sci, Sch Future Technol, Beijing 100049, Peoples R China
4.Chinese Acad Sci, Ctr Excellence Brain Sci & Intelligence Technol, Beijing 100190, Peoples R China
5.Univ Chinese Acad Sci, Beijing 100049, Peoples R China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Li, Gang, Liu, Zejian, Li, Fanrong, et al. Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA[J]. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 2022, 41(5): 1436-1447.
APA: Li, Gang, Liu, Zejian, Li, Fanrong, & Cheng, Jian. (2022). Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA. IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS, 41(5), 1436-1447.
MLA: Li, Gang, et al. "Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA". IEEE TRANSACTIONS ON COMPUTER-AIDED DESIGN OF INTEGRATED CIRCUITS AND SYSTEMS 41.5 (2022): 1436-1447.
Files in This Item:
File: Block_Convolution_To (4046 KB) | Document Type: Journal article | Version: Author accepted manuscript | Access: Open access | License: CC BY-NC-SA
File Name: Block_Convolution_Toward_Memory-Efficient_Inference_of_Large-Scale_CNNs_on_FPGA.pdf
Format: Adobe PDF