Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey
Xiao Wang 1,2; Guangyao Chen 1,3; Guangwu Qian 1; Pengcheng Gao 1; Xiao-Yong Wei 1,4; Yaowei Wang 1; Yonghong Tian 1,3; Wen Gao 1,3
Source Publication: Machine Intelligence Research
ISSN: 2731-538X
Year: 2023
Volume: 20    Issue: 4    Pages: 447-482
Abstract

With the urgent demand for generalized deep models, many pre-trained big models have been proposed, such as bidirectional encoder representations (BERT), vision transformer (ViT), and generative pre-trained transformers (GPT). Inspired by the success of these models in single domains (such as computer vision and natural language processing), multi-modal pre-trained big models have drawn increasing attention in recent years. In this work, we give a comprehensive survey of these models and hope this paper provides new insights and helps new researchers track the most cutting-edge works. Specifically, we first introduce the background of multi-modal pre-training by reviewing conventional deep learning and pre-training works in natural language processing, computer vision, and speech. Then, we introduce the task definition, key challenges, and advantages of multi-modal pre-trained models (MM-PTMs), and discuss MM-PTMs with a focus on data, objectives, network architectures, and knowledge-enhanced pre-training. After that, we introduce the downstream tasks used for the validation of large-scale MM-PTMs, including generative, classification, and regression tasks. We also give visualization and analysis of the model parameters and results on representative downstream tasks. Finally, we point out possible research directions for this topic that may benefit future works. In addition, we maintain a continuously updated paper list for large-scale pre-trained multi-modal big models: https://github.com/wangxiao5791509/MultiModal_BigModels_Survey.

Keywords: Multi-modal (MM), pre-trained model (PTM), information fusion, representation learning, deep learning
DOI: 10.1007/s11633-022-1410-8
Sub-direction classification: Other
Planning direction of the national key laboratory: Other
Paper-associated data
Chinese guide: https://mp.weixin.qq.com/s/yX1DdDCA-nMluzOB6Qz3sw
Video explanation: https://www.bilibili.com/video/BV1AC4y127eY/
Citation statistics
Cited times (WOS): 15
Document type: Journal article
Identifier: http://ir.ia.ac.cn/handle/173211/55990
Collection: Academic Journals_Machine Intelligence Research
Affiliations:
1. Peng Cheng Laboratory, Shenzhen 518055, China
2. School of Computer Science and Technology, Anhui University, Hefei 230601, China
3. School of Computer Science, Peking University, Beijing 100871, China
4. College of Computer Science, Sichuan University, Chengdu 610065, China
Recommended Citation
GB/T 7714
Xiao Wang, Guangyao Chen, Guangwu Qian, et al. Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey[J]. Machine Intelligence Research, 2023, 20(4): 447-482.
APA: Xiao Wang, Guangyao Chen, Guangwu Qian, Pengcheng Gao, Xiao-Yong Wei, ... & Wen Gao. (2023). Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey. Machine Intelligence Research, 20(4), 447-482.
MLA: Xiao Wang, et al. "Large-scale Multi-modal Pre-trained Models: A Comprehensive Survey." Machine Intelligence Research 20.4 (2023): 447-482.
Files in This Item:
File Name/Size: MIR-2022-07-224.pdf (3540 KB)    DocType: Journal article    Version: Published version    Access: Open access    License: CC BY-NC-SA
 

Items in the repository are protected by copyright, with all rights reserved, unless otherwise indicated.