Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer
Xu, Yifan1,4; Zhang, Zhijie2; Zhang, Mengdan3; Sheng, Kekai3; Li, Ke3; Dong, Weiming1,4; Zhang, Liqing2; Xu, Changsheng1,4; Sun, Xing3
Conference Name: The AAAI Conference on Artificial Intelligence
Conference Date: 2022-3-1
Conference Place: Online

Vision transformers (ViTs) have attracted considerable research attention recently, but their huge computational cost remains a severe issue. A mainstream paradigm for computation reduction aims to reduce the number of tokens, given that the computational complexity of ViT is quadratic with respect to the input sequence length. Existing designs include structured spatial compression, which uses a progressive shrinking pyramid to reduce the computations of large feature maps, and unstructured token pruning, which dynamically drops redundant tokens. However, existing token pruning has the following limitations: 1) the incomplete spatial structure caused by pruning is incompatible with the structured spatial compression commonly used in modern deep-narrow transformers; 2) it usually requires a time-consuming pretraining procedure. To address these limitations and expand the applicable scenarios of token pruning, we present Evo-ViT, a self-motivated slow-fast token evolution approach for vision transformers. Specifically, we conduct unstructured, instance-wise token selection by taking advantage of the simple and effective global class attention that is native to vision transformers. Then, we propose to update the selected informative tokens and the uninformative tokens with different computation paths, namely, slow-fast updating. Since the slow-fast updating mechanism maintains the spatial structure and information flow, Evo-ViT can accelerate vanilla transformers of both flat and deep-narrow structures from the very beginning of the training process. Experimental results demonstrate that our method significantly reduces the computational cost of vision transformers while maintaining comparable performance on image classification. For example, our method accelerates DeiT-S by over 60% in throughput while sacrificing only 0.4% top-1 accuracy on ImageNet-1K, outperforming current token pruning methods in both accuracy and efficiency. Code is available at
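The abstract's slow-fast mechanism can be sketched in a few lines: rank patch tokens by the class token's attention to them, route the top-ranked (informative) tokens through the full update, and give the remaining tokens only a cheap update via a single representative token, so no token is ever dropped. The following is a minimal illustrative sketch, not the paper's implementation; the toy residual "slow" update, the mean-pooled representative token, and the `0.1` mixing weight are all assumptions for illustration.

```python
import numpy as np

def evo_vit_layer_sketch(tokens, cls_attn, keep_ratio=0.5):
    """One Evo-ViT-style layer step (illustrative sketch only).

    tokens:   (N, D) patch-token embeddings (class token excluded).
    cls_attn: (N,)   attention weights from the class token to each patch.
    """
    n, d = tokens.shape
    k = max(1, int(n * keep_ratio))
    order = np.argsort(cls_attn)[::-1]          # rank tokens by class attention
    slow_idx, fast_idx = order[:k], order[k:]

    out = tokens.copy()
    # Slow path: informative tokens get the full update
    # (a toy residual linear layer stands in for the transformer block).
    W = np.eye(d)                               # placeholder for learned weights
    out[slow_idx] = tokens[slow_idx] @ W + tokens[slow_idx]

    # Fast path: uninformative tokens are summarized by one representative
    # token whose cheap update is broadcast back, so every spatial position
    # keeps a token and the spatial structure stays intact.
    if len(fast_idx) > 0:
        rep = tokens[fast_idx].mean(axis=0)
        out[fast_idx] = tokens[fast_idx] + 0.1 * rep
    return out
```

Because the output has the same shape and token ordering as the input, the layer stays compatible with the structured spatial compression (progressive shrinking pyramids) that unstructured pruning breaks.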

Indexed By: EI
Document Type: Conference Paper
Corresponding Author: Dong, Weiming
Affiliation:
1. NLPR, Institute of Automation, Chinese Academy of Sciences
2. Shanghai Jiao Tong University
3. Tencent Youtu Lab
4. School of Artificial Intelligence, University of Chinese Academy of Sciences
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714):
Xu, Yifan, Zhang, Zhijie, Zhang, Mengdan, et al. Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer[C], 2022.
Files in This Item:
Evo_ViT_AAAI22.pdf (636KB) | DocType: Conference Paper | Access: Open Access | License: CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.