Approximated Prompt Tuning for Vision-Language Pre-trained Models

06/27/2023
by Qiong Wu, et al.

Prompt tuning is a parameter-efficient way to adapt large-scale pre-trained models to downstream tasks by adding task-specific tokens. For vision-language pre-trained (VLP) models, prompt tuning often requires a large number of learnable tokens to bridge the gap between pre-training and downstream tasks, which greatly exacerbates the already high computational overhead. In this paper, we revisit the principle of prompt tuning for Transformer-based VLP models and reveal that the impact of soft prompt tokens can actually be approximated via independent information diffusion steps, thereby avoiding expensive global attention modeling and greatly reducing computational complexity. Based on this finding, we propose a novel Approximated Prompt Tuning (APT) approach for efficient VL transfer learning. To validate APT, we apply it to two representative VLP models, namely ViLT and METER, and conduct extensive experiments on a set of downstream tasks. The generalization of APT is also validated on CLIP for image classification. The experimental results not only show the superior performance gains and computational efficiency of APT over conventional prompt tuning methods, e.g., +6.6 on METER, but also confirm its merits over other parameter-efficient transfer learning approaches.
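To make the general idea concrete, the sketch below illustrates one simplified reading of the abstract: conventional prompt tuning concatenates p learnable tokens into the attention sequence and pays for an (n + p) x (n + p) attention map, whereas here the prompt contribution is computed as a query-independent aggregation of the prompt tokens and simply added to the frozen self-attention output. This is only a minimal illustration under simplifying assumptions (single-head attention, an illustrative class name ApproxPromptAttention, a learnable score vector prompt_score, and a mixing weight alpha), not the exact APT formulation from the paper.

```python
# Minimal sketch of approximating the prompt tokens' effect outside the
# global attention map. Assumptions: single-head attention; the aggregation
# scheme (prompt_score, alpha) is illustrative, not APT's published method.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ApproxPromptAttention(nn.Module):
    def __init__(self, dim: int, num_prompts: int = 10):
        super().__init__()
        # Frozen pre-trained projections (single head for brevity).
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        for proj in (self.q, self.k, self.v):
            proj.requires_grad_(False)
        # Learnable soft prompts plus learnable aggregation/mixing weights.
        self.prompts = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        self.prompt_score = nn.Parameter(torch.zeros(num_prompts))
        self.alpha = nn.Parameter(torch.zeros(1))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)

        # Standard frozen self-attention over the input tokens only: O(n^2).
        attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = attn @ v

        # Approximated prompt contribution: a query-independent, softmax-weighted
        # aggregation of the prompt values, instead of inserting the prompts
        # into an (n + p) x (n + p) attention map.
        prompt_v = self.v(self.prompts)                      # (p, dim)
        weights = F.softmax(self.prompt_score, dim=0)        # (p,)
        prompt_out = weights @ prompt_v                      # (dim,)

        return out + self.alpha * prompt_out


# Usage: only the prompts and the small aggregation parameters receive
# gradients; the backbone projections stay frozen.
layer = ApproxPromptAttention(dim=768, num_prompts=10)
x = torch.randn(2, 196, 768)
print(layer(x).shape)  # torch.Size([2, 196, 768])
```

In this toy version the extra cost of the prompts is linear in the number of prompt tokens rather than quadratic in the extended sequence length, which is the kind of saving the abstract attributes to replacing global attention over prompts with independent diffusion steps.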

