Large-scale Multi-Modal Pre-trained Models: A Comprehensive Survey

02/20/2023
by   Xiao Wang, et al.

Driven by the urgent demand for generalized deep models, many large pre-trained models have been proposed, such as BERT, ViT, and GPT. Inspired by the success of these models in single domains (such as computer vision and natural language processing), multi-modal pre-trained big models have also drawn increasing attention in recent years. In this work, we give a comprehensive survey of these models and hope this paper provides new insights and helps new researchers track the most cutting-edge works. Specifically, we first introduce the background of multi-modal pre-training by reviewing conventional deep learning and pre-training works in natural language processing, computer vision, and speech. We then introduce the task definition, key challenges, and advantages of multi-modal pre-trained models (MM-PTMs), and discuss MM-PTMs with a focus on data, pre-training objectives, network architectures, and knowledge-enhanced pre-training. After that, we introduce the downstream tasks used to validate large-scale MM-PTMs, including generative, classification, and regression tasks. We also provide visualizations and analysis of model parameters and results on representative downstream tasks. Finally, we point out possible research directions for this topic that may benefit future work. In addition, we maintain a continuously updated paper list for large-scale pre-trained multi-modal big models: https://github.com/wangxiao5791509/MultiModal_BigModels_Survey

