CASIA OpenIR
(These results are based on the user's claimed works)

Browse/Search results: 53 items in total, showing items 1-10

Multi-Granularity Pruning for Model Acceleration on Mobile Devices (Conference paper)
Online, 2022-07
Authors: Zhao TL(赵天理); Zhang X(张希); Zhu WT(朱文涛); Wang JX(王家兴); Yang S(杨森); Liu J(刘季); Cheng J(程健)
Adobe PDF (1919 KB) | Views/Downloads: 154/61 | Submitted: 2023/06/21
Keywords: Deep Neural Networks; Network Pruning; Structured Pruning; Non-structured Pruning; Single Instruction Multiple Data
Towards Automatic Model Compression via A Unified Two-Stage Framework (Journal article)
Pattern Recognition (PR), 2023, Volume: 140, Pages: 109527
Authors: Weihan Chen; Peisong Wang; Jian Cheng
Adobe PDF (765 KB) | Views/Downloads: 137/44 | Submitted: 2023/06/20
Keywords: Deep Neural Networks; Model Compression; Quantization; Pruning
Towards Fully Sparse Training: Information Restoration with Spatial Similarity (Conference paper)
Vancouver, British Columbia, Canada, 2022-04
Authors: Xu WX(许伟翔); Wang PS(王培松); Cheng J(程健)
Adobe PDF (556 KB) | Views/Downloads: 89/30 | Submitted: 2023/06/20
Improving Extreme Low-bit Quantization with Soft Threshold (Journal article)
IEEE Transactions on Circuits and Systems for Video Technology, 2022, Pages: 1549-1563
Authors: Xu WX(许伟翔); Wang PS(王培松); Cheng J(程健)
Adobe PDF (2414 KB) | Views/Downloads: 85/32 | Submitted: 2023/06/20
Towards Mixed-Precision Quantization of Neural Networks via Constrained Optimization (Conference paper)
Held online, 2021-10-11
Authors: Weihan Chen; Peisong Wang; Jian Cheng
Adobe PDF (696 KB) | Views/Downloads: 151/45 | Submitted: 2023/06/20
DPNAS: Neural Architecture Search for Deep Learning with Differential Privacy (Conference paper)
Online, 2022-02
Authors: Cheng AD(程安达); Wang JX(王家兴); Zhang X(张希); Chen Q(谌强); Wang PS(王培松); Cheng J(程健)
Adobe PDF (1135 KB) | Views/Downloads: 111/30 | Submitted: 2023/06/05
APRIL: Finding the Achilles' Heel on Privacy for Vision Transformers (Conference paper)
New Orleans, Louisiana, USA, 2022-06
Authors: Jiahao Lu; Xi Sheryl Zhang; Tianli Zhao; Xiangyu He; Jian Cheng
Adobe PDF (2770 KB) | Views/Downloads: 247/66 | Submitted: 2022/06/23
Keywords: Trustworthy AI; Privacy-preserving machine learning
Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA (Conference paper)
Dresden, Germany, 2018
Authors: Li, Gang; Li, Fanrong; Zhao, Tianli; Cheng, Jian
Adobe PDF (244 KB) | Views/Downloads: 151/55 | Submitted: 2022/06/14
Learning Compression from Limited Unlabeled Data (Conference paper)
Munich, Germany, September 8-14, 2018
Authors: He, Xiangyu; Cheng, Jian
Adobe PDF (504 KB) | Views/Downloads: 91/32 | Submitted: 2022/06/14
Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA (Journal article)
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022, Volume: 41, Issue: 5, Pages: 1436-1447
Authors: Li, Gang; Liu, Zejian; Li, Fanrong; Cheng, Jian
Adobe PDF (4046 KB) | Views/Downloads: 314/36 | Submitted: 2022/06/10
Keywords: Convolution; Field programmable gate arrays; System-on-chip; Task analysis; Random access memory; Tensors; Memory management; Block convolution; Convolutional neural network (CNN) accelerator; Field-programmable gate array (FPGA); Memory efficient; Off-chip transfer