TR-MISR: Multiimage super-resolution based on feature fusion with transformers
An T(安泰)1,2; Zhang X(张鑫)1,2; Huo CL(霍春雷)1,2; Xue B(薛斌)1,2; Wang LF(汪凌峰)1,2; Pan CH(潘春洪)1,2
Source Publication: IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Date Issued: 2022-01
Volume: 15, Pages: 1373-1388
Contribution Rank: 1
Abstract

Multiimage super-resolution (MISR), as one of the most promising directions in remote sensing, has become a much-needed technique in the satellite market. A sequence of images collected by a satellite often contains many views acquired over a long time span, so integrating multiple low-resolution views into a single high-resolution image with fine details is a challenging problem. However, most deep-learning-based MISR methods cannot make full use of multiple images, and their fusion modules adapt poorly to image sequences with weak temporal correlations. To cope with these problems, we propose a novel end-to-end framework called TR-MISR. It consists of three parts: an encoder based on residual blocks, a transformer-based fusion module, and a decoder based on subpixel convolution. Specifically, by rearranging multiple feature maps into vectors, the fusion module can assign dynamic attention to the same area of different satellite images simultaneously. In addition, TR-MISR adopts an additional learnable embedding vector that fuses these vectors to restore details to the greatest extent. TR-MISR is the first model to successfully apply the transformer to MISR tasks, notably reducing the difficulty of training the transformer by ignoring the spatial relations of image patches. Extensive experiments on the PROBA-V Kelvin dataset demonstrate the superiority of the proposed model, which also provides an effective approach for applying transformers to other low-level vision tasks.
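The architecture summarized above lends itself to a compact sketch. The following is a minimal, illustrative PyTorch sketch of the fusion idea, not the authors' implementation: all module names, dimensions, and hyperparameters (dim, heads, depth, and the plain convolutional encoder standing in for the paper's residual blocks) are assumptions. It treats the per-view feature vectors at each spatial position as a short token sequence, prepends a learnable fusion token whose output is kept, and upsamples the fused map with subpixel convolution (PixelShuffle), assuming single-channel PROBA-V inputs at scale factor 3.

import torch
import torch.nn as nn

class FusionTransformer(nn.Module):
    """Attend over K per-view feature vectors at each pixel (sketch)."""
    def __init__(self, dim=64, heads=4, depth=2):  # illustrative sizes
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Learnable embedding that aggregates the K views (assumed init).
        self.fusion_token = nn.Parameter(torch.zeros(1, 1, dim))

    def forward(self, feats):                     # feats: (B, K, C, H, W)
        b, k, c, h, w = feats.shape
        # One token sequence per pixel: spatial relations are ignored,
        # so each sequence is only K + 1 tokens long.
        tokens = feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, k, c)
        fusion = self.fusion_token.expand(tokens.size(0), -1, -1)
        out = self.encoder(torch.cat([fusion, tokens], dim=1))
        fused = out[:, 0]                         # keep only the fusion token
        return fused.reshape(b, h, w, c).permute(0, 3, 1, 2)  # (B, C, H, W)

class TRMISRSketch(nn.Module):
    """Encoder -> transformer fusion -> subpixel decoder (sketch)."""
    def __init__(self, dim=64, scale=3):          # PROBA-V uses a 3x factor
        super().__init__()
        # Plain conv stack standing in for the paper's residual-block encoder.
        self.encode = nn.Sequential(
            nn.Conv2d(1, dim, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim, 3, padding=1))
        self.fuse = FusionTransformer(dim)
        self.decode = nn.Sequential(
            nn.Conv2d(dim, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale))               # -> (B, 1, H*scale, W*scale)

    def forward(self, lrs):                       # lrs: (B, K, 1, H, W)
        b, k, _, h, w = lrs.shape
        f = self.encode(lrs.reshape(b * k, 1, h, w)).reshape(b, k, -1, h, w)
        return self.decode(self.fuse(f))

hr = TRMISRSketch()(torch.randn(2, 9, 1, 32, 32))  # -> (2, 1, 96, 96)

Because attention here runs over views rather than spatial patches, the sequence length stays at K + 1 regardless of image size, which illustrates the abstract's point that ignoring the spatial relations of patches reduces the difficulty of training the transformer.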

Keywords: Deep learning; end-to-end networks; feature extraction and fusion; multiimage super-resolution (MISR); remote sensing; transformers
Indexed By: SCI
Language: English
Is Representative Paper:
Sub-direction Classification: Image and Video Processing and Analysis
Planning Direction of the State Key Laboratory: Multi-scale Information Processing
Paper Associated Data:
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/54532
Collection: State Key Laboratory of Multimodal Artificial Intelligence Systems, Advanced Spatiotemporal Data Analysis and Learning
Corresponding Author: Huo CL(霍春雷)
Affiliation:
1. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences
2. School of Artificial Intelligence, University of Chinese Academy of Sciences
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
An T, Zhang X, Huo CL, et al. TR-MISR: Multiimage super-resolution based on feature fusion with transformers[J]. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2022, 15: 1373-1388.
APA An T, Zhang X, Huo CL, Xue B, Wang LF, & Pan CH. (2022). TR-MISR: Multiimage super-resolution based on feature fusion with transformers. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 15, 1373-1388.
MLA An T, et al. "TR-MISR: Multiimage super-resolution based on feature fusion with transformers". IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing 15 (2022): 1373-1388.
Files in This Item:
File Name/Size: TR-MISR.pdf (6058 KB)
DocType: Journal Article
Version: Author's Accepted Manuscript
Access: Open Access
License: CC BY-NC-SA
Google Scholar
Similar articles in Google Scholar
[An T(安泰)]'s Articles
[Zhang X(张鑫)]'s Articles
[Huo CL(霍春雷)]'s Articles
Baidu academic
Similar articles in Baidu academic
[An T(安泰)]'s Articles
[Zhang X(张鑫)]'s Articles
[Huo CL(霍春雷)]'s Articles
Bing Scholar
Similar articles in Bing Scholar
[An T(安泰)]'s Articles
[Zhang X(张鑫)]'s Articles
[Huo CL(霍春雷)]'s Articles

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.