Exploring Efficient Few-shot Adaptation for Vision Transformers

01/06/2023
by Chengming Xu, et al.

Few-shot Learning (FSL) aims to perform inference on novel categories containing only a few labeled examples, with the help of knowledge learned from base categories containing abundant labeled training samples. While there are numerous works on the FSL task, Vision Transformers (ViTs) have rarely been taken as the backbone for FSL, with the few existing attempts restricted to naive finetuning of either the whole backbone or the classification layer. Although ViTs have been shown to achieve comparable or even better performance on other vision tasks, it remains nontrivial to efficiently finetune them in real-world FSL scenarios. To this end, we propose a novel efficient Transformer Tuning (eTT) method that facilitates finetuning ViTs for FSL tasks. The key novelties are the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA), used for task tuning and backbone tuning, respectively. Specifically, in APT the prefix is projected to new key and value pairs that are attached to each self-attention layer, providing the model with task-specific information. Moreover, we design the DRA as learnable offset vectors that handle potential domain gaps between base and novel data. To ensure that the APT does not deviate far from the initial task-specific information, we further propose a novel prototypical regularization, which maximizes the similarity between the projected distribution of the prefix and the initial prototypes, thereby regularizing the update procedure. Our method achieves outstanding performance on the challenging Meta-Dataset, and we conduct extensive experiments to demonstrate its efficacy.
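To make the two mechanisms concrete, the following is a minimal NumPy sketch of a single self-attention head augmented in the spirit of APT and DRA: learnable prefix key/value pairs are prepended before attention, and a learnable offset vector is added to the output. All names, shapes, and the single-head simplification are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_with_apt_dra(x, w_q, w_k, w_v, prefix_k, prefix_v, dra_offset):
    """Single-head self-attention with APT-style key/value prefixes and a
    DRA-style residual offset (illustrative sketch, shapes assumed).

    x: (n, d) patch tokens; prefix_k, prefix_v: (p, d); dra_offset: (d,).
    """
    d = x.shape[1]
    q = x @ w_q                                      # queries come from tokens only
    k = np.concatenate([prefix_k, x @ w_k], axis=0)  # APT: prepend prefix keys
    v = np.concatenate([prefix_v, x @ w_v], axis=0)  # APT: prepend prefix values
    attn = softmax(q @ k.T / np.sqrt(d))             # (n, p + n) attention weights
    return attn @ v + dra_offset                     # DRA: additive learnable offset
```

During few-shot adaptation, only `prefix_k`, `prefix_v`, and `dra_offset` would be updated while the frozen backbone weights (`w_q`, `w_k`, `w_v`) stay fixed, which is what makes the tuning parameter-efficient.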


Related research

- 11/30/2021: AdaViT: Adaptive Vision Transformers for Efficient Image Recognition
  Built on top of self-attention mechanisms, vision transformers have demo...
- 08/23/2023: Vision Transformer Adapters for Generalizable Multitask Learning
  We introduce the first multitasking vision transformer adapters that lea...
- 12/27/2021: Few-Shot Classification in Unseen Domains by Episodic Meta-Learning Across Visual Domains
  Few-shot classification aims to carry out classification given only few ...
- 10/20/2021: Contextual Gradient Scaling for Few-Shot Learning
  Model-agnostic meta-learning (MAML) is a well-known optimization-based m...
- 10/05/2019: Transductive Episodic-Wise Adaptive Metric for Few-Shot Learning
  Few-shot learning, which aims at extracting new concepts rapidly from ex...
- 04/25/2023: Hint-Aug: Drawing Hints from Foundation Vision Transformers Towards Boosted Few-Shot Parameter-Efficient Tuning
  Despite the growing demand for tuning foundation vision transformers (FV...
- 09/21/2021: Multi-Domain Few-Shot Learning and Dataset for Agricultural Applications
  Automatic classification of pests and plants (both healthy and diseased)...
