CASIA OpenIR

Browse/Search results: 20 items total, showing 1-10

Effective Model Compression via Stage-wise Pruning  Journal article
Machine Intelligence Research, 2023, Volume: 20, Issue: 6, Pages: 937-951
Authors:  Ming-Yang Zhang;  Xin-Yi Yu;  Lin-Lin Ou
Adobe PDF (2394 KB)  |  Submitted: 2024/04/23
Automated machine learning (AutoML), channel pruning, model compression, distillation, convolutional neural networks (CNN)  
Pruning-aware Sparse Regularization for Network Pruning  Journal article
Machine Intelligence Research, 2023, Volume: 20, Issue: 1, Pages: 109-120
Author:  Nan-Fei Jiang
Adobe PDF (1665 KB)  |  Submitted: 2024/04/23
Deep learning  convolutional neural network (CNN)  model compression and acceleration  network pruning  regularization  
ABCP: Automatic Blockwise and Channelwise Network Pruning via Joint Search  Journal article
IEEE TRANSACTIONS ON COGNITIVE AND DEVELOPMENTAL SYSTEMS, 2023, Volume: 15, Issue: 3, Pages: 1560-1573
Authors:  Li, Jiaqi;  Li, Haoran;  Chen, Yaran;  Ding, Zixiang;  Li, Nannan;  Ma, Mingjun;  Duan, Zicheng;  Zhao, Dongbin
Submitted: 2023/12/21
Joint search  model compression  pruning  reinforcement learning  
PSAQ-ViT V2: Toward Accurate and General Data-Free Quantization for Vision Transformers  Journal article
IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, Pages: 12
Authors:  Li, Zhikai;  Chen, Mengjuan;  Xiao, Junrui;  Gu, Qingyi
Submitted: 2023/11/17
Data-free quantization  model compression  patch similarity  quantized vision transformers (ViTs)  
PKD: General Distillation Framework for Object Detectors via Pearson Correlation Coefficient  Conference paper
, New Orleans, USA, November 28 - December 9
Authors:  Weihan, Cao;  Yifan, Zhang;  Jianfei, Gao;  Anda, Cheng;  Ke, Cheng;  Jian, Cheng
Adobe PDF (2614 KB)  |  Submitted: 2023/06/21
Knowledge Distillation  Model Compression  Object Detection  
Towards Automatic Model Compression via A Unified Two-Stage Framework  Journal article
Pattern Recognition (PR), 2023, Volume: 140, Pages: 109527
Authors:  Weihan Chen;  Peisong Wang;  Jian Cheng
Adobe PDF (765 KB)  |  Submitted: 2023/06/20
Deep Neural Networks  Model Compression  Quantization  Pruning  
Cross-Architecture Knowledge Distillation  Conference paper
INTERNATIONAL JOURNAL OF COMPUTER VISION, Macau SAR, China, December 4-8, 2022
Authors:  Yufan Liu;  Jiajiong Cao;  Bing Li;  Weiming Hu;  Jingting Ding;  Liang Li
Adobe PDF (1020 KB)  |  Submitted: 2023/04/23
Knowledge distillation  Cross architecture  Model compression  Deep learning  
QSFM: Model Pruning Based on Quantified Similarity Between Feature Maps for AI on Edge  Journal article
IEEE INTERNET OF THINGS JOURNAL, 2022, Volume: 9, Issue: 23, Pages: 24506-24515
Authors:  Wang, Zidu;  Liu, Xuexin;  Huang, Long;  Chen, Yunqing;  Zhang, Yufei;  Lin, Zhikang;  Wang, Rui
Submitted: 2023/02/22
Tensors  Internet of Things  Convolution  Three-dimensional displays  Quantization (signal)  Hardware  Training  Edge computing  filter pruning  Internet of Things (IoT)  model compression  neural networks  
Dual-discriminator adversarial framework for data-free quantization  Journal article
NEUROCOMPUTING, 2022, Volume: 511, Pages: 67-77
Authors:  Li, Zhikai;  Ma, Liping;  Long, Xianlei;  Xiao, Junrui;  Gu, Qingyi
Adobe PDF (1512 KB)  |  Submitted: 2022/11/21
Model compression  Quantized neural networks  Data-free quantization  
Efficient convolutional networks learning through irregular convolutional kernels  Journal article
NEUROCOMPUTING, 2022, Volume: 489, Pages: 167-178
Authors:  Guo, Weiyu;  Ma, Jiabin;  Ouyang, Yidong;  Wang, Liang;  Huang, Yongzhen
Submitted: 2022/06/10
Model compression  Interpolation  Irregular convolutional kernels