Self-Training Vision Language BERTs with a Unified Conditional Model

01/06/2022
by   Xiaofeng Yang, et al.

Natural language BERTs are trained on text corpora in a self-supervised manner. Vision-language BERTs, in contrast, require paired image-text data for training, which restricts the scale of VL-BERT pretraining. We propose a self-training approach that allows training VL-BERTs from unlabeled image data. The proposed method starts with our unified conditional model, a vision-language BERT that can perform zero-shot conditional generation. Given different conditions, the unified conditional model can generate captions, dense captions, and even questions. We use the labeled image data to train a teacher model and use the trained model to generate pseudo captions on unlabeled image data. We then combine the labeled data and the pseudo-labeled data to train a student model. The process is iterated by using the student model as the new teacher. With the proposed self-training approach and only 300k additional unlabeled images, we obtain performance competitive with, or better than, models of similar size trained with 3 million extra images.
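The abstract describes an iterated teacher-student loop: train a teacher on labeled pairs, pseudo-label unlabeled images, train a student on the union, and repeat with the student as the new teacher. The sketch below illustrates that loop in Python; the helpers (train_vl_bert, generate_caption) are hypothetical stubs standing in for the paper's unified conditional model, not the authors' actual code or API.

```python
# Hypothetical sketch of the iterated teacher-student self-training loop.
# The two helpers are illustrative placeholders: in the paper, training
# would fine-tune the unified conditional VL-BERT, and caption generation
# would run its zero-shot conditional decoder.

def train_vl_bert(pairs):
    """Placeholder: train a VL-BERT on (image, caption) pairs."""
    return {"trained_on": len(pairs)}  # stand-in for a real model

def generate_caption(model, image):
    """Placeholder: conditional caption generation by the teacher."""
    return f"pseudo caption for {image}"

def self_train(labeled_pairs, unlabeled_images, rounds=3):
    # Round 0: the initial teacher sees only real labeled pairs.
    teacher = train_vl_bert(labeled_pairs)
    for _ in range(rounds):
        # Teacher labels the unlabeled images with pseudo captions.
        pseudo_pairs = [(img, generate_caption(teacher, img))
                        for img in unlabeled_images]
        # Student trains on labeled + pseudo-labeled data combined,
        # then becomes the teacher for the next round.
        teacher = train_vl_bert(labeled_pairs + pseudo_pairs)
    return teacher
```

Under the same conditioning mechanism, generate_caption could be swapped for dense-caption or question generation, which is what makes the unified conditional model suitable as a pseudo-label source.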


