Learning Domain Invariant Prompt for Vision-Language Models

12/08/2022
by Cairong Zhao, et al.

Prompt learning is one of the most effective and popular ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance on in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains, but neglect the ability of learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm, called MetaPrompt, that directly generates a domain-invariant prompt generalizable to unseen domains. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for inputs from both the image and text modalities. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains a prompt tuned on one domain or class to also achieve good performance on another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and four datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods.
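To make the meta-learning constraint concrete, below is a minimal, self-contained PyTorch sketch of an episodic prompt tuning loop: a support batch from one domain drives an inner gradient step on the prompt vectors, and the adapted prompts are then required to classify a query batch from a different domain. The toy frozen encoders, dimensions, the sample_domain loader, and the learning rates are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-ins for frozen CLIP encoders (assumptions, not the real model).
DIM, N_CLS, N_CTX = 64, 5, 4
text_token_embed = torch.randn(N_CLS, DIM)        # frozen class-name embeddings
image_proj = torch.randn(DIM, DIM) / DIM ** 0.5   # frozen "visual encoder"

# Learnable prompts for both modalities (the dual-modality part).
text_prompt = torch.zeros(N_CTX, DIM, requires_grad=True)
visual_prompt = torch.zeros(DIM, requires_grad=True)
params = [text_prompt, visual_prompt]
meta_opt = torch.optim.SGD(params, lr=0.1)
inner_lr = 0.05  # hypothetical inner-loop step size

def episode_loss(images, labels, t_prompt, v_prompt):
    """Cosine-similarity classification loss with prompted text/image features."""
    txt = F.normalize(text_token_embed + t_prompt.mean(0), dim=-1)  # prompted text features
    img = F.normalize(images @ image_proj + v_prompt, dim=-1)       # prompted image features
    logits = 100.0 * img @ txt.t()
    return F.cross_entropy(logits, labels)

def sample_domain():
    """Hypothetical loader: one batch of features and labels from a single domain."""
    return torch.randn(16, DIM), torch.randint(0, N_CLS, (16,))

for step in range(100):
    (xs, ys), (xq, yq) = sample_domain(), sample_domain()  # support / query domains

    # Inner loop: adapt the prompts on the support domain.
    support_loss = episode_loss(xs, ys, text_prompt, visual_prompt)
    g_t, g_v = torch.autograd.grad(support_loss, params, create_graph=True)
    t_adapted = text_prompt - inner_lr * g_t
    v_adapted = visual_prompt - inner_lr * g_v

    # Outer loop: the adapted prompts must also work on a different domain.
    query_loss = episode_loss(xq, yq, t_adapted, v_adapted)

    meta_opt.zero_grad()
    query_loss.backward()  # gradients flow back through the inner update
    meta_opt.step()
```

The query loss is backpropagated through the inner update, so the base prompts are pushed toward a point that still performs well after being tuned on any single domain, which is the intuition behind the domain-invariant prompt described above.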

Related research:

Conditional Prompt Learning for Vision-Language Models (03/10/2022): With the rise of powerful pre-trained vision-language models like CLIP, ...

Unified Vision and Language Prompt Learning (10/13/2022): Prompt tuning, a parameter- and data-efficient transfer learning paradig...

Language-Aware Soft Prompting for Vision Language Foundation Models (10/03/2022): This paper is on soft prompt learning for Vision & Language (V&L) mode...

Semantic Supervision: Enabling Generalization over Output Spaces (02/26/2022): In this paper, we propose Semantic Supervision (SemSup) - a unified para...

Single Domain Dynamic Generalization for Iris Presentation Attack Detection (05/22/2023): Iris presentation attack detection (PAD) has achieved great success unde...

INDIGO: Intrinsic Multimodality for Domain Generalization (06/13/2022): For models to generalize under unseen domains (a.k.a. domain generalizati...

Knowledge-Aware Prompt Tuning for Generalizable Vision-Language Models (08/22/2023): Pre-trained vision-language models, e.g., CLIP, working with manually de...
