UniVIP: A Unified Framework for Self-Supervised Visual Pre-training
Li, Zhaowen1,2; Zhu, Yousong1; Yang, Fan3; Li, Wei3; Zhao, Chaoyang1,4; Chen, Yingying1; Chen, Zhiyang1,2; Xie, Jiahao5; Wu, Liwei3; Zhao, Rui3,7; Tang, Ming1; Wang, Jinqiao1,2,6
2022-06
Conference: IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Conference date: 2022-06-19
Venue: New Orleans, Louisiana & Online
Abstract

Self-supervised learning (SSL) holds promise in leveraging large amounts of unlabeled data. However, the success of popular SSL methods has been limited to single-centric-object images such as those in ImageNet, and these methods ignore the correlation between the scene and its instances, as well as the semantic differences among instances in the scene. To address the above problems, we propose Unified Self-supervised Visual Pre-training (UniVIP), a novel self-supervised framework for learning versatile visual representations on either single-centric-object or non-iconic datasets. The framework takes representation learning into account at three levels: 1) the similarity of scene-scene, 2) the correlation of scene-instance, and 3) the discrimination of instance-instance. During learning, we adopt the optimal transport algorithm to automatically measure the discrimination of instances. Extensive experiments show that UniVIP pre-trained on non-iconic COCO achieves state-of-the-art transfer performance on a variety of downstream tasks, such as image classification, semi-supervised learning, object detection, and segmentation. Furthermore, our method can also exploit single-centric-object datasets such as ImageNet: it outperforms BYOL by 2.5% in linear probing with the same number of pre-training epochs, and surpasses current self-supervised object detection methods on the COCO dataset, demonstrating its universality and potential.
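The abstract mentions using optimal transport to measure instance discrimination. The paper's exact formulation is not reproduced here; as a rough illustration of the underlying tool, the following is a minimal sketch of entropy-regularized optimal transport solved with Sinkhorn iterations, where the cost matrix, uniform marginals, and hyperparameters are illustrative assumptions rather than the authors' settings.

```python
import numpy as np

def sinkhorn(cost, epsilon=0.05, n_iters=100):
    """Entropy-regularized optimal transport via Sinkhorn iterations.

    cost: (n, m) pairwise cost matrix, e.g. one minus the cosine
    similarity between instance embeddings from two augmented views
    (an assumed setup, not necessarily the paper's).
    Returns a transport plan T with uniform row/column marginals.
    """
    n, m = cost.shape
    K = np.exp(-cost / epsilon)        # Gibbs kernel
    r = np.ones(n) / n                 # uniform row marginals
    c = np.ones(m) / m                 # uniform column marginals
    u = np.ones(n) / n                 # row scaling vector
    for _ in range(n_iters):
        v = c / (K.T @ u)              # match column marginals
        u = r / (K @ v)                # match row marginals
    return u[:, None] * K * v[None, :]

# Toy example: 3 instances in one view matched against 4 in the other.
rng = np.random.default_rng(0)
cost = rng.random((3, 4))
T = sinkhorn(cost)
print(T.sum())  # the plan is a valid joint distribution, summing to ~1
```

Low-cost (high-similarity) pairs receive more transport mass, so the plan can be read as a soft matching between instance sets; how UniVIP turns this into a training signal is detailed in the paper itself.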

Indexed by: EI
Research direction (sub-classification): Multimodal Intelligence
Document type: Conference paper
Identifier: http://ir.ia.ac.cn/handle/173211/47419
Collection: Zidong Taichu Large Model Research Center — Image and Video Analysis
Author affiliations:
1.National Laboratory of Pattern Recognition, Institute of Automation, CAS, Beijing, China
2.School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3.SenseTime Research
4.Development Research Institute of Guangzhou Smart City, Guangzhou, China
5.S-Lab, Nanyang Technological University
6.Peng Cheng Laboratory, Shenzhen, China
7.Qing Yuan Research Institute, Shanghai Jiao Tong University, Shanghai, China
First author affiliation: National Laboratory of Pattern Recognition
Recommended citation (GB/T 7714):
Li, Zhaowen, Zhu, Yousong, Yang, Fan, et al. UniVIP: A Unified Framework for Self-Supervised Visual Pre-training[C], 2022.
Files in this item:
File name/Size | Document type | Version | Access | License
UniVIP A Unified Framework for Self-Supervised Visual Pre-training.pdf (1929 KB) | Conference paper | Open access | CC BY-NC-SA
 

Unless otherwise stated, all content in this system is protected by copyright, with all rights reserved.