CASIA OpenIR

Browse/Search Results: 17 items in total, showing items 1-10

Towards Automatic Model Compression via A Unified Two-Stage Framework (Journal Article)
Pattern Recognition (PR), 2023, Volume: 140, Pages: 109527
Authors: Weihan Chen; Peisong Wang; Jian Cheng
Adobe PDF (765 KB) | Views/Downloads: 107/33 | Submitted: 2023/06/20
Keywords: Deep Neural Networks; Model Compression; Quantization; Pruning
Improving Extreme Low-bit Quantization with Soft Threshold (Journal Article)
IEEE Transactions on Circuits and Systems for Video Technology, 2022, Pages: 1549-1563
Authors: Xu WX (许伟翔); Wang PS (王培松); Cheng J (程健)
Adobe PDF (2414 KB) | Views/Downloads: 75/26 | Submitted: 2023/06/20
BSTG-Trans: A Bayesian Spatial-Temporal Graph Transformer for Long-term Pose Forecasting (Journal Article)
IEEE Transactions on Multimedia, 2023, Volume/Issue/Pages: Early Access
Authors: Shentong Mo; Xin M (辛淼)
Adobe PDF (2209 KB) | Views/Downloads: 94/16 | Submitted: 2023/04/25
Keywords: long-term forecasting; spatial-temporal graph transformer; Bayesian transformer; uncertainty estimation
Optimization-Based Post-Training Quantization With Bit-Split and Stitching (Journal Article)
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023, Volume: 45, Issue: 2, Pages: 2119-2135
Authors: Wang, Peisong; Chen, Weihan; He, Xiangyu; Chen, Qiang; Liu, Qingshan; Cheng, Jian
Adobe PDF (921 KB) | Views/Downloads: 174/50 | Submitted: 2023/03/20
Keywords: Deep neural networks; compression; quantization; post-training quantization
Toward Accurate Binarized Neural Networks With Sparsity for Mobile Application (Journal Article)
IEEE Transactions on Neural Networks and Learning Systems, 2022, Pages: 13
Authors: Wang, Peisong; He, Xiangyu; Cheng, Jian
Views/Downloads: 231/0 | Submitted: 2022/07/25
Keywords: Quantization (signal); Deep learning; Convolution; Training; Biological neural networks; Optimization; Neurons; Acceleration; binarized neural networks (BNNs); compression; fixed-point quantization
Block Convolution: Toward Memory-Efficient Inference of Large-Scale CNNs on FPGA (Journal Article)
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2022, Volume: 41, Issue: 5, Pages: 1436-1447
Authors: Li, Gang; Liu, Zejian; Li, Fanrong; Cheng, Jian
Adobe PDF (4046 KB) | Views/Downloads: 264/26 | Submitted: 2022/06/10
Keywords: Convolution; Field programmable gate arrays; System-on-chip; Task analysis; Random access memory; Tensors; Memory management; Block convolution; convolutional neural network (CNN) accelerator; field-programmable gate array (FPGA); memory efficient; off-chip transfer
Block Convolution: Towards Memory-Efficient Inference of Large-Scale CNNs on FPGA (Journal Article, early-access version of the above)
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2021, Issue: 2021.5, Pages: 1-1
Authors: Li, Gang; Liu, Zejian; Li, Fanrong; Cheng, Jian
Adobe PDF (6174 KB) | Views/Downloads: 185/38 | Submitted: 2022/02/15
Keywords: block convolution; memory-efficient; off-chip transfer; fpga; cnn accelerator
Extremely Sparse Networks via Binary Augmented Pruning for Fast Image Classification (Journal Article)
IEEE Transactions on Neural Networks and Learning Systems, 2021, Pages: 14
Authors: Wang, Peisong; Li, Fanrong; Li, Gang; Cheng, Jian
Views/Downloads: 190/0 | Submitted: 2022/01/27
Keywords: Hardware acceleration; image classification; neural networks; pruning; software-hardware codesign
ECBC: Efficient Convolution via Blocked Columnizing (Journal Article)
IEEE Transactions on Neural Networks and Learning Systems, 2021, Pages: 13
Authors: Zhao, Tianli; Hu, Qinghao; He, Xiangyu; Xu, Weixiang; Wang, Jiaxing; Leng, Cong; Cheng, Jian
Adobe PDF (3003 KB) | Views/Downloads: 292/32 | Submitted: 2022/01/27
Keywords: Convolution; Tensors; Layout; Memory management; Indexes; Transforms; Performance evaluation; Convolutional neural networks (CNNs); direct convolution; high performance computing for mobile devices; im2col convolution; memory-efficient convolution (MEC)
FSA: A Fine-Grained Systolic Accelerator for Sparse CNNs (Journal Article)
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2020, Volume: 39, Issue: 11, Pages: 3589-3600
Authors: Li, Fanrong; Li, Gang; Mo, Zitao; He, Xiangyu; Cheng, Jian
Adobe PDF (1906 KB) | Views/Downloads: 329/58 | Submitted: 2021/01/06
Keywords: Accelerator; architecture; convolutional neural networks (CNNs); sparsity