SgVA-CLIP: Semantic-guided Visual Adapting of Vision-Language Models for Few-shot Image Classification

11/28/2022
by   Fang Peng, et al.

Although significant progress has been made in few-shot learning, most existing few-shot image classification methods require supervised pre-training on a large number of samples from base classes, which limits their generalization ability in real-world applications. Recently, large-scale Vision-Language Pre-trained models (VLPs) have been gaining increasing attention in few-shot learning because they provide a new paradigm for transferable visual representation learning using text that is easily available on the Web. However, VLPs may neglect detailed visual information that is difficult to describe in language yet important for learning an effective classifier to distinguish different images. To address this problem, we propose a new framework, named Semantic-guided Visual Adapting (SgVA), which effectively extends vision-language pre-trained models to produce discriminative adapted visual features by jointly using implicit knowledge distillation, a vision-specific contrastive loss, and a cross-modal contrastive loss. The implicit knowledge distillation is designed to transfer fine-grained cross-modal knowledge to guide the updating of the vision adapter. State-of-the-art results on 13 datasets demonstrate that the adapted visual features can well complement the cross-modal features to improve few-shot image classification.
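The abstract names three training signals but does not spell out their form. Below is a minimal PyTorch sketch of how a semantic-guided vision adapter with these three terms could be wired together; the residual-MLP adapter shape, the InfoNCE form of the vision-specific loss, the KL-based distillation term, and all names (`VisionAdapter`, `info_nce`, `sgva_losses`) and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisionAdapter(nn.Module):
    """Lightweight residual adapter on top of frozen CLIP image features.
    Layer sizes and the residual ratio alpha are hypothetical."""
    def __init__(self, dim=512, hidden=256, alpha=0.5):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )
        self.alpha = alpha  # mixing ratio between adapted and raw features

    def forward(self, v):
        return self.alpha * self.mlp(v) + (1 - self.alpha) * v


def info_nce(anchor, positives, temperature=0.07):
    """Generic InfoNCE over L2-normalized features; positives[i] is the
    positive for anchor[i], all other rows act as negatives."""
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positives, dim=-1)
    logits = a @ p.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


def sgva_losses(adapted_visual, raw_visual, text_feats, labels, temperature=0.07):
    """One plausible reading of the three terms in the abstract:
    - vision-specific contrastive loss: keep each adapted feature close to
      its own frozen visual feature, away from other images' features;
    - cross-modal contrastive loss: classify adapted features against the
      class text embeddings (text_feats has shape [num_classes, dim]);
    - implicit distillation: align the adapted-feature class posterior with
      the frozen cross-modal posterior, so fine-grained cross-modal
      knowledge guides the adapter update."""
    l_vis = info_nce(adapted_visual, raw_visual, temperature)

    a = F.normalize(adapted_visual, dim=-1)
    t = F.normalize(text_feats, dim=-1)
    logits_adapted = a @ t.t() / temperature
    l_xmod = F.cross_entropy(logits_adapted, labels)

    with torch.no_grad():  # teacher distribution from the frozen backbone
        teacher = (F.normalize(raw_visual, dim=-1) @ t.t() / temperature).softmax(-1)
    l_kd = F.kl_div(logits_adapted.log_softmax(-1), teacher, reduction="batchmean")

    return l_vis + l_xmod + l_kd
```

In a few-shot episode one would extract frozen CLIP image features for the support set, pass them through `VisionAdapter`, and optimize only the adapter parameters with `sgva_losses`; at inference, the adapted visual features complement the cross-modal (image-text) similarities for classification. The equal weighting of the three terms here is a simplification.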

Related research

01/26/2023 · Improving Cross-modal Alignment for Text-Guided Image Inpainting
Text-guided image inpainting (TGII) aims to restore missing regions base...

05/19/2023 · Few-Shot Learning with Visual Distribution Calibration and Cross-Modal Distribution Alignment
Pre-trained vision-language models have inspired much research on few-sh...

11/22/2022 · On the Transferability of Visual Features in Generalized Zero-Shot Learning
Generalized Zero-Shot Learning (GZSL) aims to train a classifier that ca...

02/23/2023 · Learning Visual Representations via Language-Guided Sampling
Although an object may appear in numerous contexts, we often describe it...

12/04/2021 · VT-CLIP: Enhancing Vision-Language Models with Visual-guided Texts
Contrastive Vision-Language Pre-training (CLIP) has drawn increasing att...

09/16/2023 · Delving into Multimodal Prompting for Fine-grained Visual Classification
Fine-grained visual classification (FGVC) involves categorizing fine sub...

02/19/2019 · Adaptive Cross-Modal Few-Shot Learning
Metric-based meta-learning techniques have successfully been applied to ...
