RECLIP: Resource-efficient CLIP by Training with Small Images

04/12/2023
by Runze Li, et al.

We present RECLIP (Resource-efficient CLIP), a simple method that minimizes the computational resource footprint of CLIP (Contrastive Language-Image Pretraining). Inspired by the notion of coarse-to-fine processing in computer vision, we leverage small images to learn from large-scale language supervision efficiently, and finetune the model with high-resolution data at the end of training. Since the complexity of the vision transformer depends heavily on the input image size, our approach significantly reduces training resource requirements both in theory and in practice. With the same batch size and number of training epochs, RECLIP achieves highly competitive zero-shot classification and image-text retrieval accuracy with 6 to 8× fewer computational resources and 7 to 9× fewer FLOPs than the baseline. Compared to state-of-the-art contrastive learning methods, RECLIP demonstrates 5 to 59× training resource savings while maintaining highly competitive zero-shot classification and retrieval performance. We hope this work will pave the way for the broader research community to explore language-supervised pretraining in more resource-friendly settings.
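
To make the coarse-to-fine schedule concrete, below is a minimal PyTorch sketch of the idea described in the abstract, not the authors' released code: a CLIP-style symmetric contrastive loss trained mostly on downsampled images, followed by a short high-resolution finetune. The model and data loader objects, the 64 px and 224 px resolutions, and the epoch split are illustrative assumptions.

import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Symmetric InfoNCE loss over a batch of matched image-text pairs,
    # as in standard CLIP-style pretraining.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def train_step(model, images, texts, optimizer, image_size):
    # Downsampling shrinks the ViT token sequence quadratically, which is
    # where the resource savings come from: e.g. with 16 px patches,
    # 64 px inputs give 4x4 = 16 tokens vs. 224 px giving 14x14 = 196.
    images = F.interpolate(images, size=image_size, mode='bilinear',
                           align_corners=False)
    image_emb, text_emb = model(images, texts)  # hypothetical two-tower model
    loss = clip_contrastive_loss(image_emb, text_emb)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss

# Illustrative schedule: most epochs at low resolution, then a brief
# high-resolution finetune at the end (epoch counts are assumptions).
# for epoch in range(num_epochs):
#     size = 64 if epoch < num_epochs - finetune_epochs else 224
#     for images, texts in loader:
#         train_step(model, images, texts, optimizer, image_size=size)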


