Prompt Tuning with Soft Context Sharing for Vision-Language Models

08/29/2022
by Kun Ding et al.

Vision-language models have recently shown great potential on many computer vision tasks. Meanwhile, prior work demonstrates that prompt tuning designed for vision-language models can achieve superior performance on few-shot image recognition compared to linear probing, a strong baseline. In real-world applications, many few-shot tasks are correlated, particularly within a specialized domain. However, such information has been ignored by previous work. Inspired by the fact that modeling task relationships via multi-task learning usually boosts performance, we propose SoftCPT (Soft Context Sharing for Prompt Tuning), a novel method that fine-tunes pre-trained vision-language models on multiple target few-shot tasks simultaneously. Specifically, we design a task-shared meta network that generates a prompt vector for each task, taking a pre-defined task name together with a learnable meta prompt as input. In this way, the prompt vectors of all tasks are shared in a soft manner. The parameters of the shared meta network, as well as the meta prompt vector, are tuned on the joint training set of all target tasks. Extensive experiments on three multi-task few-shot datasets show that SoftCPT outperforms the representative single-task prompt tuning method CoOp [78] by a large margin, demonstrating the effectiveness of multi-task learning for vision-language prompt tuning. The source code and data will be made publicly available.
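
To make the soft-sharing mechanism concrete, below is a minimal PyTorch sketch of the idea described in the abstract: a task-shared meta network maps a task-name embedding (together with a learnable meta prompt) to per-task context vectors. This is not the authors' implementation; the class name MetaPromptNet, the MLP architecture, and all dimensions are illustrative assumptions. In practice, the task feature would come from encoding the pre-defined task name with a frozen text encoder such as CLIP's, and the generated context vectors would be prepended to class-name token embeddings before text encoding.

```python
# Minimal sketch of soft context sharing for prompt tuning (assumed design,
# not the official SoftCPT code). A single meta network, shared by all tasks,
# turns a task-name embedding plus a learnable meta prompt into that task's
# context (prompt) vectors, so all tasks share the prompt generator softly.
import torch
import torch.nn as nn

class MetaPromptNet(nn.Module):  # hypothetical name
    def __init__(self, task_feat_dim=512, meta_prompt_len=4,
                 ctx_len=16, ctx_dim=512, hidden_dim=512):
        super().__init__()
        # Learnable meta prompt, shared across all tasks.
        self.meta_prompt = nn.Parameter(
            torch.randn(meta_prompt_len, task_feat_dim) * 0.02)
        # Task-shared meta network: [task feature; meta prompt] -> context vectors.
        in_dim = task_feat_dim * (meta_prompt_len + 1)
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, ctx_len * ctx_dim),
        )
        self.ctx_len, self.ctx_dim = ctx_len, ctx_dim

    def forward(self, task_feat):
        # task_feat: (num_tasks, task_feat_dim), e.g. frozen-CLIP encodings of
        # task names such as "flower species recognition".
        b = task_feat.size(0)
        meta = self.meta_prompt.flatten().expand(b, -1)
        x = torch.cat([task_feat, meta], dim=-1)
        # Per-task context vectors, to be prepended to class-name embeddings.
        return self.net(x).view(b, self.ctx_len, self.ctx_dim)

# Usage: generate soft-shared context vectors for two tasks at once.
task_feats = torch.randn(2, 512)        # stand-in for encoded task names
prompts = MetaPromptNet()(task_feats)   # shape (2, 16, 512)
```

Because every task's prompt is produced by the same network and meta prompt, only the meta network and meta prompt are tuned on the joint training set, which is what couples the tasks.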


Related research

11/21/2022 · Multitask Vision-Language Prompt Tuning
Prompt Tuning, conditioning on task-specific learned prompt vectors, has...

03/29/2020 · Meta Fine-Tuning Neural Language Models for Multi-Domain Text Mining
Pre-trained neural language models bring significant improvement for var...

08/22/2023 · Unsupervised Prototype Adapter for Vision-Language Models
Recently, large-scale pre-trained vision-language models (e.g. CLIP and ...

05/24/2022 · Attentional Mixtures of Soft Prompt Tuning for Parameter-efficient Multi-task Knowledge Sharing
This work introduces ATTEMPT (Attentional Mixture of Prompt Tuning), a n...

10/19/2022 · CPL: Counterfactual Prompt Learning for Vision and Language Models
Prompt tuning is a new few-shot transfer learning technique that only tu...

07/17/2018 · A Modulation Module for Multi-task Learning with Applications in Image Retrieval
Multi-task learning has been widely adopted in many computer vision task...

08/16/2020 · Bowtie Networks: Generative Modeling for Joint Few-Shot Recognition and Novel-View Synthesis
Generative modeling has recently shown great promise in computer vision,...
