Generative Negative Text Replay for Continual Vision-Language Pretraining

10/31/2022
by Shipeng Yan, et al.

Vision-language pre-training (VLP) has attracted increasing attention recently. Trained on large amounts of image-text pairs with a contrastive loss, VLP models have achieved impressive performance on various tasks, especially zero-shot generalization to downstream datasets. In practical applications, however, massive data are usually collected in a streaming fashion, requiring VLP models to continuously integrate novel knowledge from incoming data while retaining learned knowledge. In this work, we focus on learning a VLP model from sequential chunks of image-text pair data. To tackle the catastrophic forgetting issue in this multi-modal continual learning setting, we first introduce pseudo text replay, which generates hard negative texts conditioned on the training images in memory; this not only better preserves learned knowledge but also improves the diversity of negative samples in the contrastive loss. Moreover, we propose multi-modal knowledge distillation between images and texts to align the instance-wise predictions of the old and new models. We incrementally pre-train our model on both the instance- and class-incremental splits of the Conceptual Captions dataset, and evaluate it on zero-shot image classification and image-text retrieval tasks. Our method consistently outperforms existing baselines by a large margin, demonstrating its superiority. Notably, we achieve an average performance boost of 4.60% on downstream image classification datasets for the class-incremental split.
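The abstract describes two training objectives: a contrastive loss whose negative set is enlarged with generated hard negative texts, and a bidirectional knowledge distillation loss that aligns the new model's image-text similarity distributions with the frozen old model's. The paper's code is not reproduced here; the PyTorch fragment below is a minimal sketch of what those two losses could look like, assuming a CLIP-style dual encoder with L2-normalized embeddings. All function names, shapes, and hyperparameters (e.g. the temperature tau and the KD weight) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def contrastive_loss_with_negative_replay(img_emb, txt_emb, neg_txt_emb, tau=0.07):
    """InfoNCE-style loss where generated hard negative texts (e.g. pseudo
    captions produced from memory images) enlarge the text-side negative set.

    img_emb:     (B, D) L2-normalized image embeddings
    txt_emb:     (B, D) L2-normalized paired text embeddings
    neg_txt_emb: (M, D) L2-normalized generated negative text embeddings
    """
    # Image-to-text logits: positives on the diagonal, generated hard
    # negatives appended as extra candidate texts.
    all_txt = torch.cat([txt_emb, neg_txt_emb], dim=0)   # (B + M, D)
    logits_i2t = img_emb @ all_txt.t() / tau             # (B, B + M)
    logits_t2i = txt_emb @ img_emb.t() / tau             # (B, B)
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return 0.5 * (F.cross_entropy(logits_i2t, targets)
                  + F.cross_entropy(logits_t2i, targets))


def multimodal_kd_loss(img_new, txt_new, img_old, txt_old, tau=0.07):
    """Align the new model's instance-wise image-text similarity
    distributions with the frozen old model's, in both directions."""
    p_old_i2t = F.softmax(img_old @ txt_old.t() / tau, dim=-1)
    log_p_new_i2t = F.log_softmax(img_new @ txt_new.t() / tau, dim=-1)
    p_old_t2i = F.softmax(txt_old @ img_old.t() / tau, dim=-1)
    log_p_new_t2i = F.log_softmax(txt_new @ img_new.t() / tau, dim=-1)
    kd_i2t = F.kl_div(log_p_new_i2t, p_old_i2t, reduction="batchmean")
    kd_t2i = F.kl_div(log_p_new_t2i, p_old_t2i, reduction="batchmean")
    return 0.5 * (kd_i2t + kd_t2i)


if __name__ == "__main__":
    # Toy usage with random embeddings; in a continual setting, img_old/txt_old
    # would come from the frozen previous-chunk model and neg_txt_emb from a
    # caption generator run on the images stored in memory.
    B, M, D = 8, 16, 512
    norm = lambda x: F.normalize(x, dim=-1)
    img, txt, neg = norm(torch.randn(B, D)), norm(torch.randn(B, D)), norm(torch.randn(M, D))
    img_old, txt_old = norm(torch.randn(B, D)), norm(torch.randn(B, D))
    loss = contrastive_loss_with_negative_replay(img, txt, neg) \
        + 1.0 * multimodal_kd_loss(img, txt, img_old, txt_old)
    print(loss.item())
```

The total objective would combine the two terms with some trade-off weight; 1.0 above is a placeholder, and the paper's actual weighting may differ.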


