Dual-discriminator adversarial framework for data-free quantization
Li, Zhikai1,2; Ma, Liping1; Long, Xianlei1,2; Xiao, Junrui1,2; Gu, Qingyi1
Source Publication: NEUROCOMPUTING
ISSN: 0925-2312
Publication Date: 2022-10-28
Volume: 511, Pages: 67-77
Abstract

Thanks to its potential to address privacy and security issues, data-free quantization, which generates samples from the prior information embedded in the model, has recently been widely investigated. However, existing methods fail to adequately utilize this prior information and thus cannot fully restore real-data characteristics or provide effective supervision to the quantized model, resulting in poor performance. In this paper, we propose Dual-Discriminator Adversarial Quantization (DDAQ), a novel data-free quantization framework with an adversarial learning style that enables effective sample generation and effective learning of the quantized model. Specifically, we employ a generator to produce meaningful and diverse samples directed by two discriminators, which aim to match the batch normalization (BN) distribution and to maximize the discrepancy between the full-precision model and the quantized model, respectively. Moreover, inspired by mixed-precision quantization, where the importance of each layer differs, we introduce a layer-importance prior into both discriminators, allowing us to make better use of the information in the model. The quantized model is then trained on the generated samples under the supervision of the full-precision model. We evaluate DDAQ on various network structures for different vision tasks, including image classification and object detection, and the experimental results show that DDAQ outperforms all baseline methods with good generality. (C) 2022 Elsevier B.V. All rights reserved.
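The adversarial objective described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the L2 BN-statistics distance, the L1 discrepancy measure, and the per-layer weighting scheme are all assumptions standing in for the two discriminators and the layer-importance prior.

```python
import numpy as np

def bn_matching_loss(feats, bn_means, bn_vars, layer_weights):
    """Distance between generated-batch statistics and the BN statistics
    stored in the full-precision model, weighted per layer to mimic a
    layer-importance prior (weighting scheme is illustrative)."""
    loss = 0.0
    for f, mu, var, w in zip(feats, bn_means, bn_vars, layer_weights):
        loss += w * (np.sum((f.mean(axis=0) - mu) ** 2)
                     + np.sum((f.var(axis=0) - var) ** 2))
    return loss

def discrepancy(p_fp, p_q):
    """Mean L1 gap between full-precision and quantized model outputs
    (a stand-in for whatever divergence the second discriminator uses)."""
    return float(np.mean(np.abs(p_fp - p_q)))

def generator_objective(feats, bn_means, bn_vars, layer_weights,
                        p_fp, p_q, lam=1.0):
    """The generator minimizes the BN mismatch while *maximizing* the
    FP/quantized discrepancy, hence the minus sign (adversarial step)."""
    return (bn_matching_loss(feats, bn_means, bn_vars, layer_weights)
            - lam * discrepancy(p_fp, p_q))

def quantized_model_objective(p_fp, p_q):
    """The quantized model is trained to *minimize* the same discrepancy,
    i.e. to mimic the full-precision model on the generated samples."""
    return discrepancy(p_fp, p_q)
```

Training would alternate the two objectives: update the generator against `generator_objective`, then update the quantized model against `quantized_model_objective` on freshly generated samples.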

Keywords: Model compression; Quantized neural networks; Data-free quantization
DOI: 10.1016/j.neucom.2022.09.076
Indexed By: SCI
Language: English
Funding Project: Scientific Instrument Developing Project of the Chinese Academy of Sciences [YJKYYQ20200045]
Funding Organization: Scientific Instrument Developing Project of the Chinese Academy of Sciences
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:000871948700006
Publisher: ELSEVIER
Sub-direction Classification: Machine Learning
Planning Direction of the National Key Laboratory: Multi-dimensional Environment Perception
Document Type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/50524
Collection: Research Center for Precision Sensing and Control / Precision Sensing and Control
Corresponding Author: Gu, Qingyi
Affiliation: 1. Chinese Acad Sci, Inst Automat, East Zhongguancun Rd, Beijing, Peoples R China
2. Univ Chinese Acad Sci, Sch Artificial Intelligence, Jingjia Rd, Beijing, Peoples R China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Li, Zhikai, Ma, Liping, Long, Xianlei, et al. Dual-discriminator adversarial framework for data-free quantization[J]. NEUROCOMPUTING, 2022, 511: 67-77.
APA: Li, Zhikai, Ma, Liping, Long, Xianlei, Xiao, Junrui, & Gu, Qingyi. (2022). Dual-discriminator adversarial framework for data-free quantization. NEUROCOMPUTING, 511, 67-77.
MLA: Li, Zhikai, et al. "Dual-discriminator adversarial framework for data-free quantization". NEUROCOMPUTING 511 (2022): 67-77.
Files in This Item:
File Name/Size: 1-s2.0-S092523122201 (1512KB) | DocType: Journal article | Version: Author accepted manuscript | Access: Open Access | License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[Li, Zhikai]'s Articles
[Ma, Liping]'s Articles
[Long, Xianlei]'s Articles
Baidu academic
Similar articles in Baidu academic
[Li, Zhikai]'s Articles
[Ma, Liping]'s Articles
[Long, Xianlei]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[Li, Zhikai]'s Articles
[Ma, Liping]'s Articles
[Long, Xianlei]'s Articles
File name: 1-s2.0-S0925231222011420-main.pdf
Format: Adobe PDF
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.