Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer
Xu, Yifan1,4; Zhang, Zhijie2; Zhang, Mengdan3; Sheng, Kekai3; Li, Ke3; Dong, Weiming1,4; Zhang, Liqing2; Xu, Changsheng1,4; Sun, Xing3
2022-03
Conference: The AAAI Conference on Artificial Intelligence
Conference date: 2022-3-1
Conference location: Online
Abstract

Vision transformers (ViTs) have attracted considerable research attention recently, but their huge computational cost remains a severe issue. A mainstream paradigm for computation reduction aims to reduce the number of tokens, given that the computational complexity of ViT is quadratic with respect to the input sequence length. Existing designs include structured spatial compression, which uses a progressive shrinking pyramid to reduce the computations of large feature maps, and unstructured token pruning, which dynamically drops redundant tokens. However, existing token pruning has the following limitations: 1) the incomplete spatial structure caused by pruning is incompatible with the structured spatial compression commonly used in modern deep-narrow transformers; 2) it usually requires a time-consuming pretraining procedure. To address these limitations and expand the applicable scenarios of token pruning, we present Evo-ViT, a self-motivated slow-fast token evolution approach for vision transformers. Specifically, we conduct unstructured, instance-wise token selection by taking advantage of the simple and effective global class attention that is native to vision transformers. Then, we propose to update the selected informative tokens and the uninformative tokens along different computation paths, namely, slow-fast updating. Since the slow-fast updating mechanism maintains the spatial structure and information flow, Evo-ViT can accelerate vanilla transformers of both flat and deep-narrow structures from the very beginning of the training process. Experimental results demonstrate that our method significantly reduces the computational cost of vision transformers while maintaining comparable performance on image classification. For example, our method increases the throughput of DeiT-S by over 60% while sacrificing only 0.4% top-1 accuracy on ImageNet-1K, outperforming current token pruning methods in both accuracy and efficiency.
Code is available at https://github.com/YifanXu74/Evo-ViT.
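The slow-fast updating described in the abstract can be illustrated with a minimal NumPy sketch: patch tokens ranked by the class token's attention are split into an informative set that passes through the full transformer block (slow path) and an uninformative set that is updated only coarsely via a single representative token (fast path), so the token grid stays complete. The `transformer_block` below is a hypothetical placeholder standing in for MSA + FFN, and `keep_ratio` is an assumed hyperparameter name; see the official repository for the actual implementation.

```python
import numpy as np

def transformer_block(x):
    """Placeholder for a real transformer block (MSA + FFN)."""
    return x + 0.1 * np.tanh(x)

def slow_fast_token_update(tokens, cls_attn, keep_ratio=0.5):
    """Sketch of slow-fast token updating.

    tokens:   (N, D) patch tokens (class token excluded)
    cls_attn: (N,) class-token attention scores over the patch tokens
    """
    n_keep = max(1, int(len(tokens) * keep_ratio))
    order = np.argsort(cls_attn)[::-1]           # most informative first
    slow_idx, fast_idx = order[:n_keep], order[n_keep:]

    updated = tokens.copy()
    # Slow path: informative tokens go through the full block.
    updated[slow_idx] = transformer_block(tokens[slow_idx])

    if len(fast_idx) > 0:
        # Fast path: summarize uninformative tokens into one
        # representative token, update it once, and propagate the
        # change back, preserving the spatial structure.
        rep = tokens[fast_idx].mean(axis=0)
        rep_updated = transformer_block(rep[None, :])[0]
        updated[fast_idx] = tokens[fast_idx] + (rep_updated - rep)

    return updated
```

Because every token position receives an update (cheap or full), the output keeps the complete spatial layout that structured spatial compression expects, unlike pruning, which discards positions outright.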

Indexed by: EI
Language: English
Document type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/48596
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Multimedia Computing
Corresponding author: Dong, Weiming
Affiliations:
1. NLPR, Institute of Automation, Chinese Academy of Sciences
2. Shanghai Jiao Tong University
3. Tencent Youtu Lab
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
First author's affiliation: National Laboratory of Pattern Recognition
Corresponding author's affiliation: National Laboratory of Pattern Recognition
Recommended citation (GB/T 7714):
Xu, Yifan, Zhang, Zhijie, Zhang, Mengdan, et al. Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer[C], 2022.
Files in this item:
Evo_ViT_AAAI22.pdf (636 KB) · Conference paper · Open access · CC BY-NC-SA
Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.