SuS-X: Training-Free Name-Only Transfer of Vision-Language Models

11/28/2022
by Vishaal Udandarao, et al.

Contrastive Language-Image Pre-training (CLIP) has emerged as a simple yet effective way to train large-scale vision-language models. CLIP demonstrates impressive zero-shot classification and retrieval performance on diverse downstream tasks. However, to leverage its full potential, fine-tuning still appears to be necessary. Fine-tuning the entire CLIP model can be resource-intensive and unstable. Moreover, recent methods that aim to circumvent this need for fine-tuning still require access to images from the target distribution. In this paper, we pursue a different approach and explore the regime of training-free "name-only transfer", in which the only knowledge we possess about the downstream task is the names of the target categories. We propose a novel method, SuS-X, consisting of two key building blocks, SuS and TIP-X, which together require neither intensive fine-tuning nor costly labelled data. SuS-X achieves state-of-the-art zero-shot classification results on 19 benchmark datasets. We further show the utility of TIP-X in the training-free few-shot setting, where we again achieve state-of-the-art results over strong training-free baselines. Code is available at https://github.com/vishaal27/SuS-X.
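To make the "name-only" setting concrete, the sketch below shows zero-shot CLIP classification built purely from category names, followed by a Tip-Adapter-style training-free refinement against a support set. This is a minimal illustration under stated assumptions, not the authors' implementation: the class list, the file "query.jpg", the random placeholder support features, and the alpha/beta values are all hypothetical, and TIP-X itself computes support-set affinities differently (see the paper).

```python
# Minimal sketch of training-free, name-only classification with CLIP.
# Assumes the openai/CLIP package: pip install git+https://github.com/openai/CLIP
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# The only task knowledge available: the names of the target categories (illustrative list).
class_names = ["golden retriever", "tabby cat", "airliner"]
text_tokens = clip.tokenize([f"a photo of a {c}" for c in class_names]).to(device)

# Hypothetical query image.
image = preprocess(Image.open("query.jpg")).unsqueeze(0).to(device)

with torch.no_grad():
    img_feat = model.encode_image(image).float()
    txt_feat = model.encode_text(text_tokens).float()
img_feat /= img_feat.norm(dim=-1, keepdim=True)
txt_feat /= txt_feat.norm(dim=-1, keepdim=True)

# Plain zero-shot logits: cosine similarity between the image and class-name embeddings.
zero_shot_logits = 100.0 * img_feat @ txt_feat.T

# Tip-Adapter-style training-free refinement with a support set.
# SuS-X curates such a set from the names alone; here sus_feats/sus_labels are
# random placeholders standing in for normalised CLIP features and one-hot labels.
N, C, D = 16, len(class_names), img_feat.shape[-1]
sus_feats = torch.nn.functional.normalize(torch.randn(N, D), dim=-1)   # placeholder
sus_labels = torch.nn.functional.one_hot(torch.randint(0, C, (N,)), C).float()

alpha, beta = 1.0, 5.5                                  # illustrative hyper-parameters
affinity = img_feat.cpu() @ sus_feats.T                 # image-to-support similarity
cache_logits = torch.exp(-beta * (1.0 - affinity)) @ sus_labels
logits = zero_shot_logits.cpu() + alpha * cache_logits

print("predicted:", class_names[logits.argmax().item()])
```

In the paper, the support set (SuS) is curated from the category names themselves, via text-to-image generation or large-scale retrieval, so no images from the target distribution are required at any point.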

Related research

- Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification (07/19/2022)
- How Does Fine-Tuning Impact Out-of-Distribution Detection for Vision-Language Models? (06/09/2023)
- FactPEGASUS: Factuality-Aware Pre-training and Fine-tuning for Abstractive Summarization (05/16/2022)
- Leveraging Vision-Language Foundation Models for Fine-Grained Downstream Tasks (07/13/2023)
- Beyond Generic: Enhancing Image Captioning with Real-World Knowledge using Vision-Language Pre-Training Model (08/02/2023)
- Tip-Adapter: Training-free CLIP-Adapter for Better Vision-Language Modeling (11/06/2021)
- Efficient Few-Shot Learning Without Prompts (09/22/2022)
