Self-Supervised Feature Augmentation for Large Image Object Detection
Pan, Xingjia1; Tang, Fan3; Dong, Weiming1; Gu, Yang6; Song, Zhichao5; Meng, Yiping5; Xu, Pengfei5; Deussen, Oliver4; Xu, Changsheng1,2
Source Publication: IEEE Transactions on Image Processing
2020-05-14
Volume: 29  Issue: 0  Pages: 6745-6758
Abstract

Input scale plays an important role in modern detection frameworks, and an empirically optimal training scale exists. However, this optimal scale often cannot be reached for extremely large images under memory constraints. In this study, we explore the scale effect inside the object detection pipeline and find that feature upsampling, combined with the introduction of high-resolution information, benefits detection. Compared with directly upscaling the input, feature upsampling trades a small performance loss for a large amount of memory savings. Based on these observations, we propose a self-supervised feature augmentation network, which takes downsampled images as inputs and aims to generate features comparable to those obtained by feeding upscaled images to the network. We present a guided feature upsampling module, which takes downsampled images as inputs and learns upscaled feature representations under the supervision of real large features acquired from upscaled images. In this self-supervised learning manner, we introduce detailed image information into the network. For efficient feature upsampling, we design a residualized sub-pixel convolution block based on a sub-pixel convolution layer, which incorporates substantial information into the upsampling process. Experiments on the Mapillary Vistas Dataset (MVD), Cityscapes, and COCO demonstrate the effectiveness of our method. On the MVD and Cityscapes detection benchmarks, where the images are extremely large, our method surpasses current approaches. On COCO, the proposed method obtains results comparable to existing methods but with higher efficiency.
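The abstract names two components without implementation details: a residualized sub-pixel convolution block for feature upsampling, and self-supervised supervision of the upsampled features by "real large" features computed from the upscaled image. A minimal PyTorch sketch of both ideas is given below; the class and function names, the 3x3 kernel, the bilinear skip path, and the MSE matching loss are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSubPixelBlock(nn.Module):
    """Upsample a feature map by `scale` with a sub-pixel convolution
    (PixelShuffle) path plus a bilinear skip path (a hypothetical
    'residualized' variant sketched from the abstract)."""
    def __init__(self, channels, scale=2):
        super().__init__()
        # Expand channels so PixelShuffle can rearrange them into space.
        self.conv = nn.Conv2d(channels, channels * scale * scale,
                              kernel_size=3, padding=1)
        self.shuffle = nn.PixelShuffle(scale)
        self.scale = scale

    def forward(self, x):
        up = self.shuffle(self.conv(x))          # learned sub-pixel upsampling
        skip = F.interpolate(x, scale_factor=self.scale,
                             mode='bilinear', align_corners=False)
        return up + skip                         # residual combination

def feature_matching_loss(generated_feat, large_feat):
    """Self-supervised target: features generated from the downsampled
    input should match features computed from the upscaled input
    (detached so no gradient flows into the supervising branch)."""
    return F.mse_loss(generated_feat, large_feat.detach())

# Example usage (shapes assumed): upsample a backbone feature map from the
# downsampled image and supervise it with the upscaled-image feature map.
# block = ResidualSubPixelBlock(channels=256, scale=2)
# loss = feature_matching_loss(block(small_feat), large_feat)
```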

Keywords: object detection; large image; self-supervised; feature augmentation
MOST Discipline Catalogue: Engineering::Computer Science and Technology (degrees conferrable in Engineering and Science)
DOI: 10.1109/TIP.2020.2993403
Indexed By: SCI
Language: English
Document Type: Journal Article
Identifier: http://ir.ia.ac.cn/handle/173211/41615
Collection: National Laboratory of Pattern Recognition_Multimedia Computing
Corresponding Author: Dong, Weiming
Affiliation1.NLPR, Institute of Automation, Chinese Academy of Sciences
2.University of Chinese Academy of Sciences
3.Jilin University
4.University of Konstanz
5.Didi Chuxing
6.Moment.ai
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Corresponding Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation
GB/T 7714
Pan, Xingjia,Tang, Fan,Dong, Weiming,et al. Self-Supervised Feature Augmentation for Large Image Object Detection[J]. IEEE Transactions on Image Processing,2020,29(0):6745-6758.
APA Pan, Xingjia.,Tang, Fan.,Dong, Weiming.,Gu, Yang.,Song, Zhichao.,...&Xu, Changsheng.(2020).Self-Supervised Feature Augmentation for Large Image Object Detection.IEEE Transactions on Image Processing,29(0),6745-6758.
MLA Pan, Xingjia,et al."Self-Supervised Feature Augmentation for Large Image Object Detection".IEEE Transactions on Image Processing 29.0(2020):6745-6758.
Files in This Item:
File Name/Size: 09094016.pdf (5411 KB)
DocType: Journal Article
Version: Author Accepted Manuscript
Access: Open Access
License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.