UniVIP: A Unified Framework for Self-Supervised Visual Pre-training
Li, Zhaowen1,2; Zhu, Yousong1; Yang, Fan3; Li, Wei3; Zhao, Chaoyang1,4; Chen, Yingying1; Chen, Zhiyang1,2; Xie, Jiahao5; Wu, Liwei3; Zhao, Rui3,7; Tang, Ming1; Wang, Jinqiao1,2,6
Conference Name: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference Date: 2022-06-19
Conference Place: New Orleans, Louisiana & Online

Self-supervised learning (SSL) holds promise in leveraging large amounts of unlabeled data. However, the success of popular SSL methods has been limited to single-centric-object images such as those in ImageNet, and these methods ignore both the correlation between the scene and its instances and the semantic differences among instances within the scene. To address these problems, we propose Unified Self-supervised Visual Pre-training (UniVIP), a novel self-supervised framework for learning versatile visual representations on either single-centric-object or non-iconic datasets. The framework accounts for representation learning at three levels: 1) the similarity of scene-scene pairs, 2) the correlation of scene-instance pairs, and 3) the discrimination of instance-instance pairs. During learning, we adopt the optimal transport algorithm to automatically measure the discrimination of instances. Extensive experiments show that UniVIP pre-trained on the non-iconic COCO dataset achieves state-of-the-art transfer performance on a variety of downstream tasks, such as image classification, semi-supervised learning, object detection, and segmentation. Furthermore, our method can also exploit single-centric-object datasets such as ImageNet: it outperforms BYOL by 2.5% in linear probing with the same number of pre-training epochs, and surpasses current self-supervised object detection methods on the COCO dataset, demonstrating its universality and potential.
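The abstract states that instance discrimination is measured with the optimal transport algorithm. As a minimal illustrative sketch (not the authors' code), entropy-regularized optimal transport between two sets of instance embeddings can be computed with Sinkhorn iterations; all names, shapes, and hyperparameters below are assumptions for illustration only.

```python
# Hypothetical sketch: Sinkhorn iteration for entropy-regularized
# optimal transport between two sets of instance embeddings.
import numpy as np

def sinkhorn(cost, eps=0.05, n_iters=100):
    """Approximate the optimal transport plan for a cost matrix
    with uniform marginals, via Sinkhorn matrix scaling."""
    K = np.exp(-cost / eps)                 # Gibbs kernel
    n, m = cost.shape
    a, b = np.ones(n) / n, np.ones(m) / m   # uniform marginal weights
    u = np.ones(n)
    for _ in range(n_iters):
        v = b / (K.T @ u)                   # scale to match column marginals
        u = a / (K @ v)                     # scale to match row marginals
    return np.diag(u) @ K @ np.diag(v)      # transport plan P

# Toy usage: cost = 1 - cosine similarity between two (hypothetical)
# sets of L2-normalized instance embeddings.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
y = rng.normal(size=(5, 8))
x /= np.linalg.norm(x, axis=1, keepdims=True)
y /= np.linalg.norm(y, axis=1, keepdims=True)
P = sinkhorn(1.0 - x @ y.T)
print(P.shape)  # (4, 5); row sums match the uniform marginal 1/4
```

Low transport cost under the resulting plan indicates similar instance sets; a high cost indicates discriminable ones.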

Indexed By: EI
Sub-direction Classification: Multimodal Intelligence
Document Type: Conference paper
Affiliation:
1. National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
2. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3. SenseTime Research
4. Development Research Institute of Guangzhou Smart City, Guangzhou, China
5. S-Lab, Nanyang Technological University
6. Peng Cheng Laboratory, Shenzhen, China
7. Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai, China
First Author Affiliation: Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing 100190, Peoples R China
Recommended Citation (GB/T 7714):
Li, Zhaowen, Zhu, Yousong, Yang, Fan, et al. UniVIP: A Unified Framework for Self-Supervised Visual Pre-training[C], 2022.
Files in This Item:
File Name/Size: UniVIP A Unified Framework for Self-Supervised Visual Pre-training.pdf (1929 KB)
DocType: Conference paper
Access: Open Access
License: CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.