Data-Distortion Guided Self-Distillation for Deep Neural Networks
Xu, Ting-Bing1,2; Liu, Cheng-Lin1,2,3
2019
Conference Name: The Thirty-Third AAAI Conference on Artificial Intelligence
Conference Date: 2019-01-27
Conference Place: Hawaii, USA
Abstract

Knowledge distillation is an effective technique that has been widely used for transferring knowledge from one network to another. Despite its effective improvement of network performance, the dependence on accompanying assistive models complicates the training of a single network, requiring large memory and time costs. In this paper, we design a more elegant self-distillation mechanism that transfers knowledge between different distorted versions of the same training data, without relying on accompanying models. Specifically, the potential capacity of a single network is exploited by learning consistent global feature distributions and posterior distributions (class probabilities) across these distorted versions of the data. Extensive experiments on multiple datasets (i.e., CIFAR-10/100 and ImageNet) demonstrate that the proposed method effectively improves the generalization performance of various network architectures (such as AlexNet, ResNet, Wide ResNet, and DenseNet), outperforming existing distillation methods with little extra training effort.
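The posterior-consistency idea in the abstract can be illustrated with a short sketch: each training image is distorted twice, both views pass through the same network, and a symmetric KL term pulls the two predicted class distributions together on top of the usual cross-entropy on each view. This is a minimal numpy illustration, not the authors' implementation; the function names and the weighting factor `lam` are assumptions, and the paper's additional feature-distribution matching term is omitted for brevity.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    """KL(p || q) per sample, averaged over the batch."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return np.mean(np.sum(p * np.log(p / q), axis=-1))

def self_distillation_loss(logits_a, logits_b, labels, lam=1.0):
    """Cross-entropy on both distorted views of the same batch, plus a
    symmetric KL term enforcing consistent class probabilities across views."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    n = labels.shape[0]
    idx = np.arange(n)
    ce = (-np.mean(np.log(np.clip(pa[idx, labels], 1e-12, 1.0)))
          - np.mean(np.log(np.clip(pb[idx, labels], 1e-12, 1.0))))
    consistency = kl_div(pa, pb) + kl_div(pb, pa)
    return ce + lam * consistency
```

When the two views produce identical logits, the consistency term vanishes and the loss reduces to the two cross-entropy terms; disagreeing views are penalized, which is what drives the distortion-invariant training.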

Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/26227
Collection: National Laboratory of Pattern Recognition _ Pattern Analysis and Learning
Corresponding Author: Xu, Ting-Bing
Affiliation: 1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3. CAS Center for Excellence in Brain Science and Intelligence Technology, Beijing, China
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
Xu, Ting-Bing,Liu, Cheng-Lin. Data-Distortion Guided Self-Distillation for Deep Neural Networks[C],2019.
Files in This Item:
File Name/Size: 4498-Article Text-75 (1299KB) | DocType: Conference Paper | Access: Open Access | License: CC BY-NC-SA | View Application Full Text
File name: 4498-Article Text-7537-1-10-20190706.pdf
Format: Adobe PDF

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.