Learning Expressive Prompting With Residuals for Vision Transformers

03/27/2023
by   Rajshekhar Das, et al.

Prompt learning is an efficient approach to adapting transformers by inserting a learnable set of parameters into the input and intermediate representations of a pre-trained model. In this work, we present Expressive Prompts with Residuals (EXPRES), which modifies the prompt-learning paradigm specifically for effective adaptation of vision transformers (ViT). Our method constructs downstream representations via learnable “output” tokens that are akin to the learned class tokens of the ViT. Further, to better steer the downstream representation processed by the frozen transformer, we introduce learnable residual tokens that are added to the output of various computations. We apply EXPRES to image classification, few-shot learning, and semantic segmentation, and show that our method achieves state-of-the-art prompt tuning on 3/3 categories of the VTAB benchmark. In addition to strong performance, we observe that our approach is an order of magnitude more prompt-efficient than existing visual prompting baselines. We analytically show the computational benefits of our approach over weight-space adaptation techniques such as finetuning. Lastly, we systematically corroborate the architectural design of our method via a series of ablation experiments.
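The abstract's two ingredients, learnable "output" tokens prepended to the patch sequence and learnable residual tokens added to the output of frozen computations, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the frozen ViT block is stood in for by a single frozen linear map, and the token counts and dimensions are arbitrary choices made here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16        # embedding dimension (illustrative)
n_patch = 4   # number of patch tokens (illustrative)
n_out = 2     # number of learnable "output" tokens (hypothetical choice)

# Stand-in for a frozen transformer block: weights are never updated.
W_frozen = rng.standard_normal((d, d)) / np.sqrt(d)

def frozen_block(tokens):
    """Frozen computation; only the prompt parameters below would be trained."""
    return tokens @ W_frozen

# Learnable prompt parameters (the only ones an optimizer would update):
output_tokens = rng.standard_normal((n_out, d)) * 0.02    # akin to extra class tokens
residual_tokens = np.zeros((n_out + n_patch, d))          # added to the block's output

patches = rng.standard_normal((n_patch, d))               # dummy patch embeddings

# 1) Prepend the learnable output tokens to the patch sequence.
x = np.concatenate([output_tokens, patches], axis=0)

# 2) Pass through the frozen block, then add the learnable residual tokens.
h = frozen_block(x) + residual_tokens

# 3) A downstream head would read only the output-token positions.
downstream_repr = h[:n_out]
print(downstream_repr.shape)  # (2, 16)
```

Initializing the residual tokens to zero means the adapted model starts out identical to the frozen one, so training only has to learn a correction; this mirrors common residual-adapter practice, though the paper's exact initialization is not stated in the abstract.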


