Adaptive Search for Broad Attention based Vision Transformers
Nannan Li 1,2; Yaran Chen 1,2; Dongbin Zhao 1,2
Published in: IEEE Transactions on Evolutionary Computation
Year: 2023
Pages: 0-0
Abstract

In recent years, the Vision Transformer (ViT) has prevailed in computer vision tasks owing to its powerful capability for image representation. Frustratingly, manually designing efficient architectures for ViTs can be time-consuming, often requiring repetitive trial and error. Moreover, existing lightweight ViTs have not been thoroughly explored, leading to weaker performance compared with convolutional neural networks. To address these challenges, we propose Adaptive Search for Broad attention based Vision Transformers, called ASB, which incorporates broad attention and adaptive neural architecture evolution to strengthen lightweight ViTs. Including broad attention in the search space allows us to explore novel architectures that significantly enhance the performance of lightweight ViTs by providing more comprehensive attention information. We also design an efficient adaptive evolutionary algorithm that explores effective architectures by dynamically adjusting the probability distribution of candidate mutation operators. Our experimental results show that the adaptive evolution in ASB efficiently learns excellent lightweight models, achieving a 55% improvement in convergence speed over traditional evolutionary algorithms. Moreover, the effectiveness of ASB is demonstrated on several visual tasks, including image classification, mobile COCO panoptic segmentation, and mobile ADE20K semantic segmentation. For instance, on ImageNet the searched model achieves 77.8% accuracy with 6.5M parameters, a 0.7% improvement over the state-of-the-art EfficientNet-B0. On mobile COCO panoptic segmentation, our approach outperforms the prevalent MobileNetV2 by 7.4% PQ. On mobile ADE20K semantic segmentation, our method attains 40.9% mIoU, exceeding MobileNetV2 by 6.9% mIoU.
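The abstract's description of dynamically adjusting the probability distribution over candidate mutation operators can be illustrated with a minimal sketch. This is not the authors' implementation: the operator names, the AdaptiveMutationSelector class, and the success-based credit rule are all assumptions made for illustration only.

```python
import random

# Hypothetical mutation operators for a ViT search space; names are illustrative only.
MUTATIONS = ["change_depth", "change_embed_dim", "change_num_heads", "toggle_broad_attention"]


class AdaptiveMutationSelector:
    """Sketch of adaptive operator selection: operators that recently produced
    fitter offspring are sampled with higher probability in later generations."""

    def __init__(self, operators, smoothing=1.0):
        self.operators = list(operators)
        self.smoothing = smoothing                      # keeps every operator selectable
        self.success = {op: 0.0 for op in operators}    # reward accumulated per operator

    def probabilities(self):
        weights = [self.success[op] + self.smoothing for op in self.operators]
        total = sum(weights)
        return [w / total for w in weights]

    def sample(self):
        # Draw one mutation operator according to the current adaptive distribution.
        return random.choices(self.operators, weights=self.probabilities(), k=1)[0]

    def update(self, operator, fitness_gain):
        # Credit an operator only when the mutated child improved on its parent.
        self.success[operator] += max(0.0, fitness_gain)


if __name__ == "__main__":
    selector = AdaptiveMutationSelector(MUTATIONS)
    for step in range(100):
        op = selector.sample()
        # Placeholder for: mutate an architecture with `op`, evaluate it,
        # and measure its fitness gain over the parent architecture.
        simulated_gain = random.gauss(0.1 if op == "toggle_broad_attention" else 0.0, 0.05)
        selector.update(op, simulated_gain)
    print(dict(zip(selector.operators, selector.probabilities())))
```

Under these assumptions, the distribution drifts toward operators whose mutations have recently paid off, which is one common way to realize the adaptive behavior the abstract describes.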

Language: English
Research sub-direction (of seven major directions): Reinforcement and Evolutionary Learning
State Key Laboratory planning direction: Intelligent Computing and Learning
Associated dataset to be deposited:
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/52214
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems_Deep Reinforcement Learning
Author affiliations:
1. The State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
First author's affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation
GB/T 7714: Nannan Li, Yaran Chen, Dongbin Zhao. Adaptive Search for Broad Attention based Vision Transformers[J]. IEEE Transactions on Evolutionary Computation, 2023: 0-0.
APA: Nannan Li, Yaran Chen, & Dongbin Zhao. (2023). Adaptive Search for Broad Attention based Vision Transformers. IEEE Transactions on Evolutionary Computation, 0-0.
MLA: Nannan Li, et al. "Adaptive Search for Broad Attention based Vision Transformers". IEEE Transactions on Evolutionary Computation (2023): 0-0.
Files in this item
ENAS-AS-ViTv10.pdf (824 KB), journal article, author's accepted manuscript, open access, CC BY-NC-SA