Deep Neural Network Self-Distillation Exploiting Data Representation Invariance
Authors: Xu, Ting-Bing (1,2,3); Liu, Cheng-Lin (3,4,5)
Journal: IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS
ISSN: 2162-237X
Year: 2022
Volume: 33, Issue: 1, Pages: 257-269
Corresponding Author: Liu, Cheng-Lin (liucl@nlpr.ia.ac.cn)
Abstract: To harvest small networks with high accuracy, most existing methods mainly use compression techniques, such as low-rank decomposition and pruning, to compress a trained large model into a small network, or transfer knowledge from a powerful large model (teacher) to a small network (student). Despite their success in generating small models of high performance, the dependence on accompanying assistive models complicates the training process and increases memory and time costs. In this article, we propose an elegant self-distillation (SD) mechanism that obtains high-accuracy models directly, without going through an assistive model. Inspired by invariant recognition in the human visual system, we posit that different distorted instances of the same input should possess similar high-level representations, so that the network can learn representation invariance across distorted versions of the same sample. Specifically, in our SD-based learning algorithm, a single network uses the maximum mean discrepancy (MMD) metric to learn global feature consistency and the Kullback-Leibler (KL) divergence to constrain the consistency of posterior class probabilities across the different distorted branches. Extensive experiments on the MNIST, CIFAR-10/100, and ImageNet data sets demonstrate that the proposed method effectively reduces the generalization error for various network architectures, such as AlexNet, VGGNet, ResNet, Wide ResNet, and DenseNet, and outperforms existing model distillation methods with little extra training effort.
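Below is a minimal sketch of the SD training loss as described in the abstract, assuming a PyTorch model whose forward pass yields both penultimate features and class logits for two distorted views of the same batch. The function names (mmd_loss, sd_loss), the Gaussian kernel, and the bandwidth, alpha, beta, and temperature defaults are illustrative assumptions, not the authors' exact formulation.

import torch
import torch.nn.functional as F

def mmd_loss(f1, f2, bandwidth=1.0):
    # Squared maximum mean discrepancy between two feature batches,
    # estimated with a Gaussian kernel (bandwidth is an assumed default).
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
    return k(f1, f1).mean() + k(f2, f2).mean() - 2 * k(f1, f2).mean()

def sd_loss(logits1, logits2, feats1, feats2, targets,
            temperature=3.0, alpha=1.0, beta=1.0):
    # Supervised cross-entropy on both distorted branches.
    ce = F.cross_entropy(logits1, targets) + F.cross_entropy(logits2, targets)
    # KL divergence between softened class posteriors of the two branches;
    # the temperature**2 factor keeps gradient magnitudes comparable.
    kl = F.kl_div(F.log_softmax(logits1 / temperature, dim=1),
                  F.softmax(logits2 / temperature, dim=1),
                  reduction="batchmean") * temperature ** 2
    # Global feature consistency (MMD) + posterior consistency (KL).
    return ce + alpha * mmd_loss(feats1, feats2) + beta * kl

In training, the features and logits of the two branches would come from one shared network applied to two random distortions (e.g., crops or flips) of the same images; no teacher model is involved, which is the point of self-distillation.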
Keywords: Training; Nonlinear distortion; Data models; Neural networks; Knowledge engineering; Network architecture; Generalization error; network compression; representation invariance; self-distillation (SD)
DOI: 10.1109/TNNLS.2020.3027634
WOS Keywords: CONVOLUTIONAL NETWORKS
Indexed By: SCI
Language: English
Funding Project: Major Project for New Generation of Artificial Intelligence (AI) [2018AAA0100400]; National Natural Science Foundation of China (NSFC) [61836014]; National Natural Science Foundation of China (NSFC) [61721004]; Ministry of Science and Technology of China
Funding Organization: Major Project for New Generation of Artificial Intelligence (AI); National Natural Science Foundation of China (NSFC); Ministry of Science and Technology of China
WOS Research Areas: Computer Science; Engineering
WOS Categories: Computer Science, Artificial Intelligence; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods; Engineering, Electrical & Electronic
WOS Accession Number: WOS:000739635300025
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Sub-direction Classification (Seven Major Directions): Pattern Recognition Foundations
Citation Statistics
Cited Times (WOS): 13
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/47158
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Pattern Analysis and Learning
作者单位1.Beihang Univ, Sch Instrumentat Sci & Optoelect Engn, Beijing 100191, Peoples R China
2.Chinese Acad Sci CASIA, Natl Lab Pattern Recognit NLPR, Inst Automat, Beijing 100190, Peoples R China
3.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China
4.Chinese Acad Sci, Natl Lab Pattern Recognit, Inst Automat, Beijing 100190, Peoples R China
5.CAS Ctr Excellence Brain Sci & Intelligence Techn, Shanghai 200031, Peoples R China
First Author Affiliation: National Laboratory of Pattern Recognition
Corresponding Author Affiliation: National Laboratory of Pattern Recognition
Recommended Citation:
GB/T 7714: Xu, Ting-Bing, Liu, Cheng-Lin. Deep Neural Network Self-Distillation Exploiting Data Representation Invariance[J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2022, 33(1): 257-269.
APA: Xu, Ting-Bing, & Liu, Cheng-Lin. (2022). Deep Neural Network Self-Distillation Exploiting Data Representation Invariance. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 33(1), 257-269.
MLA: Xu, Ting-Bing, et al. "Deep Neural Network Self-Distillation Exploiting Data Representation Invariance." IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 33.1 (2022): 257-269.
Files in This Item:
There are no files associated with this item.