BViT: Broad Attention-Based Vision Transformer
Nannan Li (1,2); Yaran Chen (1,2); Weifan Li (1,2); Zixiang Ding (1,2); Dongbin Zhao (1,2); Shuai Nie (3)
Journal: IEEE Transactions on Neural Networks and Learning Systems
ISSN: 2162-237X
Date: 2023-05
Pages: 1 - 12
Abstract

Recent works have demonstrated that transformers can achieve promising performance in computer vision by exploiting the relationships among image patches with self-attention. However, they only consider the attention in a single feature layer and ignore the complementarity of attention in different layers. In this article, we propose broad attention, which improves performance by incorporating the attention relationships of different layers of the vision transformer (ViT); the resulting model is called BViT. Broad attention is implemented by broad connection and parameter-free attention. Broad connection of each transformer layer promotes the transmission and integration of information in BViT. Without introducing additional trainable parameters, parameter-free attention jointly focuses on the attention information already available in different layers, extracting useful information and building the relationships among layers. Experiments on image classification tasks demonstrate that BViT delivers a superior top-1 accuracy of 75.0%/81.6% on ImageNet with 5M/22M parameters. Moreover, we transfer BViT to downstream object recognition benchmarks and achieve 98.9% and 89.9% accuracy on CIFAR10 and CIFAR100, respectively, exceeding ViT with fewer parameters. In a generalization test, adding broad attention to Swin Transformer, T2T-ViT, and LVT also brings an improvement of more than 1%. In summary, broad attention is promising for improving the performance of attention-based models. Code and pretrained models are available at https://github.com/DRL/BViT.
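The abstract describes broad attention as the combination of a broad connection across transformer layers and a parameter-free fusion of their attention maps. The sketch below is a minimal, hypothetical PyTorch illustration of that idea: each block's already-computed attention map is collected, the maps are averaged without any new trainable parameters, and the fused map is reapplied to the deepest features. The class names, the averaging rule, and the final residual combination are assumptions made for illustration; the authors' actual implementation in the linked repository may differ.

```python
# Hypothetical sketch of the broad-attention idea from the abstract,
# NOT the authors' implementation.
import torch
import torch.nn as nn

class ToyBlock(nn.Module):
    """A simplified transformer block that also returns its attention map."""
    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        h = self.norm(x)
        # need_weights/average_attn_weights expose the (B, N, N) attention map
        y, attn_map = self.attn(h, h, h, need_weights=True,
                                average_attn_weights=True)
        return x + y, attn_map  # residual output and its attention map

class BroadAttentionSketch(nn.Module):
    """Collects per-layer attention maps and fuses them parameter-free
    (here: a simple average), then feeds the fused attention back into
    the deepest features via a broad connection."""
    def __init__(self, dim, depth):
        super().__init__()
        self.blocks = nn.ModuleList(ToyBlock(dim) for _ in range(depth))

    def forward(self, x):
        maps = []
        for blk in self.blocks:
            x, attn_map = blk(x)
            maps.append(attn_map)
        # Parameter-free fusion: average the already-available attention
        # maps across layers (one plausible choice among several).
        broad = torch.stack(maps, dim=0).mean(dim=0)  # (B, N, N)
        return x + broad @ x  # reapply fused attention to the features

# Usage: a 4-layer sketch on a batch of 16 tokens with dimension 64.
x = torch.randn(2, 16, 64)
out = BroadAttentionSketch(dim=64, depth=4)(x)
print(out.shape)  # torch.Size([2, 16, 64])
```

The key property the sketch preserves is that the fusion step adds no trainable parameters: it only reuses attention maps the layers have already computed.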

Keywords: broad attention, broad connection, image classification, parameter-free attention, vision transformer
DOI: 10.1109/TNNLS.2023.3264730
URL: view original
Indexed by: SCI
Language: English
WOS ID: WOS:000982500800001
Sub-direction classification (of the seven major directions): Machine Learning
State Key Laboratory planning direction: Intelligent Computing and Learning
Associated dataset requiring deposit:
Citation statistics
Times cited (WOS): 6
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/52190
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems / Deep Reinforcement Learning
Author affiliations:
1. State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
3. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
First author affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended citation:
GB/T 7714: Nannan Li, Yaran Chen, Weifan Li, et al. BViT: Broad Attention-Based Vision Transformer[J]. IEEE Transactions on Neural Networks and Learning Systems, 2023: 1-12.
APA: Nannan Li, Yaran Chen, Weifan Li, Zixiang Ding, Dongbin Zhao, & Shuai Nie. (2023). BViT: Broad Attention-Based Vision Transformer. IEEE Transactions on Neural Networks and Learning Systems, 1-12.
MLA: Nannan Li, et al. "BViT: Broad Attention-Based Vision Transformer". IEEE Transactions on Neural Networks and Learning Systems (2023): 1-12.
Files in this item:
File name / size: BViT_Broad_Attention (2171 KB)
Document type: Journal article
Version: Author accepted manuscript
Access: Open access
License: CC BY-NC-SA
File name: BViT_Broad_Attention-Based_Vision_Transformer.pdf
Format: Adobe PDF