Institutional Repository of the Institute of Automation, Chinese Academy of Sciences
An Efficient Sampling-Based Attention Network for Semantic Segmentation
He, Xingjian1,2; Liu, Jing; Wang, Weining; Lu, Hanqing
Source Publication | IEEE TRANSACTIONS ON IMAGE PROCESSING
ISSN | 1057-7149 |
Issue Date | 2022
Volume | 31
Pages | 2850-2863
Corresponding Author | Liu, Jing(jliu@nlpr.ia.ac.cn) |
Abstract | Self-attention is widely explored to model long-range dependencies in semantic segmentation. However, this operation computes pair-wise relationships between the query point and all other points, leading to prohibitive complexity. In this paper, we propose an efficient Sampling-based Attention Network which combines a novel sampling method with an attention mechanism for semantic segmentation. Specifically, we design a Stochastic Sampling-based Attention Module (SSAM) to capture the relationships between the query point and a stochastically sampled representative subset from a global perspective, where the sampled subset is selected by a Stochastic Sampling Module. Compared to self-attention, our SSAM achieves comparable segmentation performance while significantly reducing computational redundancy. In addition, observing that not all pixels benefit from global contextual information, we design a Deterministic Sampling-based Attention Module (DSAM) that samples features from a local region to obtain detailed information. Extensive experiments demonstrate that our proposed method competes with or performs favorably against state-of-the-art methods on the Cityscapes, ADE20K, COCO Stuff, and PASCAL Context datasets. |
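The core idea in the abstract — letting each query attend to a sampled subset of positions rather than all N positions, reducing the O(N²) affinity to O(N·S) — can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the function name and the uniform random subset are assumptions, whereas the paper's Stochastic Sampling Module learns how the subset is selected.

```python
import numpy as np

def sampled_attention(q, k, v, num_samples, seed=None):
    """Attention in which queries attend to a random subset of key positions.

    q, k, v: arrays of shape (N, d). Instead of the full (N, N) affinity
    matrix, a shared subset of `num_samples` positions is drawn, so the
    cost is O(N * S) with S = num_samples.
    """
    rng = np.random.default_rng(seed)
    n, d = k.shape
    # Draw a representative subset of positions without replacement.
    idx = rng.choice(n, size=min(num_samples, n), replace=False)
    k_s, v_s = k[idx], v[idx]                    # (S, d)
    scores = q @ k_s.T / np.sqrt(d)              # (N, S) instead of (N, N)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=1, keepdims=True)            # softmax over the subset
    return w @ v_s                               # (N, d) aggregated values
```

When `num_samples` equals N the subset is a permutation of all positions and the result coincides with full softmax attention; for S < N each query aggregates only the sampled context, which is where the complexity saving comes from.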
Keyword | Stochastic processes Sampling methods Semantics Image segmentation Computational complexity Pattern recognition Convolution Semantic segmentation stochastic sampling-based attention deterministic sampling-based attention |
DOI | 10.1109/TIP.2022.3162101 |
Indexed By | SCI |
Language | English
Funding Project | National Key Research and Development Program of China[2020AAA0106400] ; National Natural Science Foundation of China[61922086] ; National Natural Science Foundation of China[61872366] ; National Natural Science Foundation of China[U21B2043] |
Funding Organization | National Key Research and Development Program of China ; National Natural Science Foundation of China |
WOS Research Area | Computer Science ; Engineering |
WOS Subject | Computer Science, Artificial Intelligence ; Engineering, Electrical & Electronic |
WOS ID | WOS:000778905000007 |
Publisher | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Sub-direction Classification | Image and Video Processing and Analysis
Document Type | Journal article
Identifier | http://ir.ia.ac.cn/handle/173211/48290 |
Collection | National Laboratory of Pattern Recognition - Image and Video Analysis
Affiliation | 1.Univ Chinese Acad Sci, Sch Artificial Intelligence, Beijing 100049, Peoples R China 2.Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China |
First Author Affiliation | Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation | Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation GB/T 7714 | He, Xingjian, Liu, Jing, Wang, Weining, et al. An Efficient Sampling-Based Attention Network for Semantic Segmentation[J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2022, 31: 2850-2863.
APA | He, Xingjian, Liu, Jing, Wang, Weining, & Lu, Hanqing. (2022). An Efficient Sampling-Based Attention Network for Semantic Segmentation. IEEE TRANSACTIONS ON IMAGE PROCESSING, 31, 2850-2863.
MLA | He, Xingjian, et al. "An Efficient Sampling-Based Attention Network for Semantic Segmentation". IEEE TRANSACTIONS ON IMAGE PROCESSING 31 (2022): 2850-2863.
Files in This Item:
File Name/Size | DocType | Version | Access | License
An_Efficient_Samplin(3252KB) | Journal article | Author's accepted manuscript | Open access | CC BY-NC-SA
Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.