Vision-and-Language Pretrained Models: A Survey

04/15/2022
by   Siqu Long, et al.

Pretrained models have achieved great success in both Computer Vision (CV) and Natural Language Processing (NLP). This progress has motivated learning joint representations of vision and language by feeding visual and linguistic content into a multi-layer transformer, yielding Vision-Language Pretrained Models (VLPMs). In this paper, we present an overview of the major advances achieved in VLPMs for producing joint representations of vision and language. As preliminaries, we briefly describe the general task definition and the generic architecture of VLPMs. We first discuss the language and vision data encoding methods and then present the mainstream VLPM structure as the core content. We further summarise several essential pretraining and fine-tuning strategies. Finally, we highlight three future directions to provide insightful guidance for both CV and NLP researchers.
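To make the encoding step concrete, here is a minimal sketch of how a single-stream VLPM typically builds its input sequence: text token embeddings and visual region features (e.g. from an object detector) are concatenated into one sequence, with segment ids marking the modality of each position. All names, dimensions, and the random stand-in for a learned embedding table are illustrative assumptions, not the API of any specific VLPM.

```python
import random

def embed_tokens(tokens, dim, seed=0):
    # Hypothetical lookup: random vectors stand in for a learned embedding table.
    rng = random.Random(seed)
    return [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in tokens]

def build_single_stream_input(text_tokens, region_feats, dim=8):
    """Single-stream VLPM input: [CLS] + text + [SEP] + visual regions.

    Each position carries a segment id (0 = language, 1 = vision) so the
    transformer can distinguish modalities; in a real model the region
    features would be projected into the same embedding space.
    """
    seq = embed_tokens(["[CLS]"] + text_tokens + ["[SEP]"], dim)
    seq += region_feats  # visual region features, already of size `dim`
    segments = [0] * (len(text_tokens) + 2) + [1] * len(region_feats)
    return seq, segments

# Usage: a 4-token caption plus 2 detected regions gives 8 input positions.
text = ["a", "dog", "on", "grass"]
regions = [[0.1] * 8, [0.2] * 8]  # e.g. features from a region detector
seq, seg = build_single_stream_input(text, regions)
print(len(seq), seg)  # 8 positions; segment ids mark each modality
```

The joint sequence would then be fed through the multi-layer transformer, letting self-attention mix information across the two modalities; two-stream designs instead encode each modality separately and fuse them with cross-attention.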

