VLP: A Survey on Vision-language Pre-training
Fei-Long Chen 1,2
Source Publication: Machine Intelligence Research
Abstract: In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) to a new era. Substantial work has shown that these models benefit downstream uni-modal tasks and avoid training a new model from scratch. So can such pre-trained models be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), including image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances in five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. Then, we summarize the specific VLP models in detail. Finally, we discuss the new frontiers in VLP. To the best of our knowledge, this is the first survey focused on VLP. We hope that this survey can shed light on future research in the VLP field.
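Among the pre-training objectives the survey reviews, a widely used one in image-text pre-training is a symmetric contrastive (InfoNCE) loss, popularized by CLIP: matched image-text pairs are pulled together while mismatched pairs in the batch are pushed apart. Below is a minimal NumPy sketch for illustration only; the function name, temperature value, and toy batch are assumptions, not from the paper.

```python
import numpy as np

def contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings.

    image_emb, text_emb: (batch, dim) arrays; row i of each is a matched pair.
    (Illustrative sketch of a CLIP-style objective, not code from the survey.)
    """
    # L2-normalize rows so dot products are cosine similarities.
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = image_emb @ text_emb.T / temperature  # (batch, batch)

    def cross_entropy(lg):
        # Targets are the diagonal: pair i should score highest in row i.
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_probs))

    # Average the image->text and text->image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy batch: texts nearly aligned with their images, so the loss is small.
rng = np.random.default_rng(0)
img = rng.normal(size=(4, 8))
txt = img + 0.01 * rng.normal(size=(4, 8))
print(contrastive_loss(img, txt))
```

In practice the embeddings come from the vision and language encoders discussed in the survey's model-architecture section, and the loss is minimized with gradient descent rather than evaluated on random arrays.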
Keywords: Vision and language pre-training, transformers, multimodal learning, representation learning
Document Type: Journal article
Collection: Academic Journals_Machine Intelligence Research
Affiliation: 1. Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2. School of Future Technology, University of Chinese Academy of Sciences, Beijing 100049, China
3. School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 100049, China
First Author Affiliation: Institute of Automation, Chinese Academy of Sciences
Recommended Citation
GB/T 7714: Fei-Long Chen. VLP: A Survey on Vision-language Pre-training[J]. Machine Intelligence Research, 2023, 20(1): 38-56.
APA: Fei-Long Chen. (2023). VLP: A Survey on Vision-language Pre-training. Machine Intelligence Research, 20(1), 38-56.
MLA: Fei-Long Chen. "VLP: A Survey on Vision-language Pre-training". Machine Intelligence Research 20.1 (2023): 38-56.
Files in This Item:
File Name/Size: MIR-2022-06-193.pdf (1427 KB)
DocType: Journal article
Version: Published version
Access: Open access
License: CC BY-NC-SA

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.