A Closer Look at Self-Supervised Lightweight Vision Transformers
Wang, Shaoru1,2; Gao, Jin1,2; Li, Zeming3; Zhang, Xiaoqin4; Hu, Weiming1,2,5
2023-07
Conference: International Conference on Machine Learning
Conference date: 2023-07
Conference location: Honolulu, Hawaii, USA
Abstract

Self-supervised learning on large-scale Vision Transformers (ViTs) as a pre-training method has achieved promising downstream performance. Yet, how much these pre-training paradigms promote the performance of lightweight ViTs is considerably less studied. In this work, we develop and benchmark several self-supervised pre-training methods on image classification tasks and some downstream dense prediction tasks. We surprisingly find that, with proper pre-training, even vanilla lightweight ViTs achieve performance comparable to previous SOTA networks with delicate architecture designs. This challenges the recently popular notion that vanilla ViTs are unsuitable for vision tasks in the lightweight regime. We also point out some defects of such pre-training, e.g., it fails to benefit from large-scale pre-training data and shows inferior performance on data-insufficient downstream tasks. Furthermore, we clearly show the effect of such pre-training by analyzing the properties of the layer representations and attention maps of the related models. Finally, based on these analyses, we develop a distillation strategy applied during pre-training, which further improves downstream performance for MAE-based pre-training. Code is available at https://github.com/wangsr126/mae-lite.
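The abstract describes two technical ingredients: MAE-style masked-image-modeling pre-training applied to a lightweight ViT, and a distillation term added during that pre-training. Below is a minimal sketch of what one such training step could look like; it is not the authors' released implementation (see the linked repository for that). The module names (student_encoder, decoder, teacher_encoder, proj_head), the decoder signature, and the pooled-feature distillation target are illustrative assumptions, and the reconstruction loss is simplified relative to standard MAE, which normalizes targets and computes the loss only on masked patches.

# Hypothetical sketch of MAE-style pre-training with a distillation term;
# module names and signatures are assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def random_masking(x, mask_ratio=0.75):
    """Randomly drop a fraction of patch tokens (MAE-style).
    x: (B, N, D) patch embeddings. Returns visible tokens and kept indices."""
    B, N, D = x.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N, device=x.device)           # uniform noise per token
    ids_shuffle = noise.argsort(dim=1)                   # random permutation
    ids_keep = ids_shuffle[:, :n_keep]                    # indices of visible tokens
    x_visible = torch.gather(x, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return x_visible, ids_keep

def pretrain_step(student_encoder, decoder, teacher_encoder, proj_head,
                  patches, optimizer, mask_ratio=0.75, distill_weight=1.0):
    """One pre-training step: masked reconstruction + feature distillation.
    `patches`: (B, N, D) patchified and embedded images (embedding assumed done)."""
    x_visible, ids_keep = random_masking(patches, mask_ratio)

    # Student encodes only visible tokens; decoder reconstructs all patches.
    latent = student_encoder(x_visible)                  # (B, n_keep, D)
    pred = decoder(latent, ids_keep, patches.shape[1])   # (B, N, D) reconstruction
    loss_rec = F.mse_loss(pred, patches)                 # simplified MAE loss

    # Distillation: align pooled student features with a frozen teacher.
    with torch.no_grad():
        target_feat = teacher_encoder(patches).mean(dim=1)   # teacher sees all tokens
    student_feat = proj_head(latent.mean(dim=1))              # project to teacher dim
    loss_distill = F.smooth_l1_loss(student_feat, target_feat)

    loss = loss_rec + distill_weight * loss_distill
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The sketch only conveys the shape of the objective: a reconstruction loss on masked patches plus a weighted feature-alignment loss against a frozen, stronger teacher encoder applied during pre-training.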

Keywords: Vision Transformer; Self-supervised Learning; Lightweight Networks; Knowledge Distillation
Indexed by: EI
Language: English
Sub-direction classification (seven major directions): Object Detection, Tracking and Recognition
State Key Laboratory planned direction: Visual Information Processing
Associated dataset requiring deposit:
Document type: Conference paper
Item identifier: http://ir.ia.ac.cn/handle/173211/52415
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Video Content Security
Corresponding author: Gao, Jin
Author affiliations: 1. Institute of Automation, Chinese Academy of Sciences
2.School of Artificial Intelligence, University of Chinese Academy of Sciences
3.Megvii Research
4.Wenzhou University
5.ShanghaiTech University
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Corresponding author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714
Wang, Shaoru, Gao, Jin, Li, Zeming, et al. A Closer Look at Self-Supervised Lightweight Vision Transformers[C], 2023.
Files in this item:
File name / size: ICML2023-MAE_Lite-camera_ready.pdf (3478 KB) · Document type: Conference paper · Open access · License: CC BY-NC-SA
File name: ICML2023-MAE_Lite-camera_ready.pdf
Format: Adobe PDF