Do We Really Need a Large Number of Visual Prompts?

05/26/2023
by Youngeun Kim, et al.

Due to increasing interest in adapting models on resource-constrained edge devices, parameter-efficient transfer learning has been widely explored. Among various methods, Visual Prompt Tuning (VPT), which prepends learnable prompts to the input space, achieves fine-tuning performance competitive with training the full set of network parameters. However, VPT increases the number of input tokens, resulting in additional computational overhead. In this paper, we analyze the impact of the number of prompts on fine-tuning performance and on the self-attention operation in a vision transformer architecture. Through theoretical and empirical analysis, we show that adding more prompts does not lead to a linear performance improvement. Further, we propose a Prompt Condensation (PC) technique that aims to prevent the performance degradation caused by using a small number of prompts. We validate our methods on the FGVC and VTAB-1k tasks and show that our approach reduces the number of prompts by ~70% while maintaining accuracy.
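The abstract describes the prompt-prepending mechanism only at a high level. Below is a minimal PyTorch sketch of the idea behind VPT, not the paper's implementation: names such as `VisualPromptTuning`, `num_prompts`, and `encoder` are illustrative assumptions, and the encoder is assumed to be a frozen ViT backbone that consumes a (batch, tokens, dim) sequence with the CLS token first.

```python
import torch
import torch.nn as nn

class VisualPromptTuning(nn.Module):
    """Minimal sketch (not the paper's code): prepend learnable prompt
    tokens to a frozen ViT's input sequence and train only the prompts
    and the classification head."""

    def __init__(self, encoder, embed_dim=768, num_prompts=50, num_classes=100):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():
            p.requires_grad = False  # backbone stays frozen
        # One learnable row per prompt token.
        self.prompts = nn.Parameter(torch.zeros(1, num_prompts, embed_dim))
        nn.init.uniform_(self.prompts, -0.1, 0.1)
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, tokens):
        # tokens: (batch, 1 + n_patches, embed_dim), CLS token first.
        b = tokens.shape[0]
        prompts = self.prompts.expand(b, -1, -1)
        # Insert prompts between the CLS token and the patch tokens.
        # Self-attention now runs over 1 + num_prompts + n_patches tokens,
        # which is the source of the extra computational overhead.
        x = torch.cat([tokens[:, :1], prompts, tokens[:, 1:]], dim=1)
        x = self.encoder(x)
        return self.head(x[:, 0])  # classify from the CLS token
```

Since self-attention cost grows quadratically with sequence length, every added prompt token increases compute at every layer, which is why reducing the prompt count (as Prompt Condensation aims to do) directly cuts overhead.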


Related research

- E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning (07/25/2023)
- Fine-tuning Image Transformers using Learnable Memory (03/29/2022)
- DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning (09/11/2023)
- Towards a Unified View on Visual Parameter-Efficient Transfer Learning (10/03/2022)
- Prompt-based Tuning of Transformer Models for Multi-Center Medical Image Segmentation (05/30/2023)
- A Closer Look at Parameter-Efficient Tuning in Diffusion Models (03/31/2023)
- Experts Weights Averaging: A New General Training Scheme for Vision Transformers (08/11/2023)
