VLP: A Survey on Vision-Language Pre-training

02/18/2022
by   Feilong Chen, et al.

In the past few years, the emergence of pre-training models has brought uni-modal fields such as computer vision (CV) and natural language processing (NLP) into a new era. Substantial work has shown that pre-trained models benefit downstream uni-modal tasks and avoid training a new model from scratch. Can such pre-trained models also be applied to multi-modal tasks? Researchers have explored this problem and made significant progress. This paper surveys recent advances and new frontiers in vision-language pre-training (VLP), covering both image-text and video-text pre-training. To give readers a better overall grasp of VLP, we first review its recent advances from five aspects: feature extraction, model architecture, pre-training objectives, pre-training datasets, and downstream tasks. We then summarize specific VLP models in detail. Finally, we discuss new frontiers in VLP. To the best of our knowledge, this is the first survey focused on VLP. We hope this survey can shed light on future research in the VLP field.

