Cross-Architecture Knowledge Distillation
Yufan Liu [1,2]; Jiajiong Cao [5]; Bing Li [1,4]; Weiming Hu [1,2,3]; Jingting Ding [5]; Liang Li [5]
2022-12
Conference Name: Proceedings of the Asian Conference on Computer Vision (ACCV)
Source Publication: International Journal of Computer Vision
Pages: 27
Corresponding Author: Li, Bing (bli@nlpr.ia.ac.cn)
Conference Date: 2022.12.4-2022.12.8
Conference Place: Macau SAR, China
Funding Organization: National Natural Science Foundation of China; National Key Research and Development Program of China; Project of Beijing Science and Technology Committee; Beijing Natural Science Foundation; Major Projects of Guangdong Education Department for Foundation Research and Applied Research; Guangdong Provincial University Innovation Team Project; Youth Innovation Promotion Association, CAS
Publisher: SPRINGER
Abstract

The Transformer has attracted much attention for its ability to learn global relations and its superior performance. To achieve higher performance, it is natural to distill complementary knowledge from a Transformer to a convolutional neural network (CNN). However, most existing knowledge distillation methods consider only homologous-architecture distillation, such as distilling knowledge from CNN to CNN, and may not be suitable for cross-architecture scenarios, such as from Transformer to CNN. To address this problem, a novel cross-architecture knowledge distillation method is proposed. Specifically, instead of directly mimicking the teacher's output or intermediate features, a partially cross attention projector and a group-wise linear projector are introduced to align the student's features with the teacher's in two projected feature spaces. A multi-view robust training scheme is further presented to improve the robustness and stability of the framework. Extensive experiments show that the proposed method outperforms 14 state-of-the-art methods on both small-scale and large-scale datasets.
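The abstract describes aligning student and teacher features in two projected spaces rather than matching them directly. A minimal NumPy sketch of that general idea follows; all shapes, module names, and the MSE alignment losses are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention_align(student_q, teacher_kv, d):
    # Student tokens query teacher tokens (roughly in the spirit of a
    # cross attention projector): (N_s, d) attends over (N_t, d).
    scores = student_q @ teacher_kv.T / np.sqrt(d)
    attn = softmax(scores, axis=-1)
    return attn @ teacher_kv  # student features mapped into teacher space

def group_wise_linear(feat, weights):
    # Split the channel dimension into groups; each group gets its own
    # linear map (a group-wise linear projection).
    groups = np.split(feat, len(weights), axis=-1)
    return np.concatenate([g @ w for g, w in zip(groups, weights)], axis=-1)

# Toy shapes (assumed): 16 student tokens, 16 teacher tokens, 64 channels.
d = 64
student = rng.standard_normal((16, d))
teacher = rng.standard_normal((16, d))

projected = cross_attention_align(student, teacher, d)
weights = [rng.standard_normal((d // 4, d // 4)) for _ in range(4)]
grouped = group_wise_linear(student, weights)

# Alignment losses in the two projected spaces (MSE is an assumption here).
loss_attn = np.mean((projected - teacher) ** 2)
loss_group = np.mean((grouped - teacher) ** 2)
print(projected.shape, grouped.shape)
```

The point of the sketch is that the student is never forced to copy teacher features verbatim; both projectors first map features into a shared space where the architectures are comparable.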

Keywords: Knowledge distillation; Cross architecture; Model compression; Deep learning
DOI: 10.1007/s11263-024-02002-0
Indexed By: SCI
Funding Project: National Natural Science Foundation of China; National Key Research and Development Program of China [2020AAA0105802]; National Key Research and Development Program of China [2020AAA0105800]; Project of Beijing Science and Technology Committee [Z231100005923046]; Beijing Natural Science Foundation [L223003]; Major Projects of Guangdong Education Department for Foundation Research and Applied Research [2017KZDXM081]; Major Projects of Guangdong Education Department for Foundation Research and Applied Research [2018KZDXM066]; Guangdong Provincial University Innovation Team Project [2020KCXTD045]; Youth Innovation Promotion Association, CAS; [62192785]; [62372451]; [62372082]; [U1936204]; [62272125]; [62306312]; [62036011]; [62192782]; [61721004]; [U2033210]
Language: English
WOS Research Area: Computer Science
WOS Subject: Computer Science, Artificial Intelligence
WOS ID: WOS:001164298900001
Sub-direction Classification: Image and Video Processing and Analysis
Planning Direction of the State Key Laboratory: Intelligent Computing and Learning
Document Type: Conference Paper
Identifier: http://ir.ia.ac.cn/handle/173211/51486
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems — Video Content Security
Affiliations:
1. Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. CAS Center for Excellence in Brain Science and Intelligence Technology
4. PeopleAI, Inc.
5. Ant Financial Service Group
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation (GB/T 7714):
Yufan Liu, Jiajiong Cao, Bing Li, et al. Cross-Architecture Knowledge Distillation[C]. SPRINGER, 2022: 27.
Files in This Item:
ACCV2022_CrossArchKD.pdf (1020 KB) | Conference Paper | Open Access | License: CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.