SCT: A Simple Baseline for Parameter-Efficient Fine-Tuning via Salient Channels

09/15/2023
by Henry Hengyuan Zhao, et al.

Pre-trained vision transformers provide strong representations that benefit various downstream tasks. Recently, many parameter-efficient fine-tuning (PEFT) methods have been proposed, and their experiments demonstrate that tuning only 1% of extra parameters can surpass full fine-tuning in low-data resource scenarios. However, these methods overlook task-specific information when fine-tuning diverse downstream tasks. In this paper, we propose a simple yet effective method called "Salient Channel Tuning" (SCT) that leverages task-specific information: we forward task images through the model to select the salient channels of a feature map, which enables us to tune only 1/8 of the channels, leading to significantly lower parameter costs. SCT outperforms full fine-tuning on 18 out of 19 tasks in the VTAB-1K benchmark while adding only 0.11M parameters to ViT-B, 780× fewer than its full fine-tuning counterpart. Furthermore, in domain generalization and few-shot learning, SCT surpasses other PEFT methods with lower parameter costs, demonstrating the strong capability and effectiveness of the proposed tuning technique in the low-data regime.
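The abstract only sketches the mechanism, so the following is a minimal PyTorch sketch of the general idea, not the paper's actual SCT module: it assumes channel saliency is scored by mean absolute activation over a batch of task images, and that "tuning" a channel means learning an additive offset on it while the backbone stays frozen. The names channel_saliency and SalientChannelOffset, and the scoring rule itself, are illustrative assumptions.

```python
# Illustrative sketch of salient-channel selection and tuning
# (assumed details, NOT the paper's exact SCT parameterization).
import torch
import torch.nn as nn

@torch.no_grad()
def channel_saliency(model: nn.Module, layer: nn.Module,
                     task_images: torch.Tensor) -> torch.Tensor:
    """Score each output channel of `layer` by mean absolute activation."""
    feats = []
    hook = layer.register_forward_hook(
        lambda mod, inp, out: feats.append(out.detach()))
    model(task_images)                  # one forward pass with task data
    hook.remove()
    acts = feats[0]                     # ViT features: (batch, tokens, channels)
    return acts.abs().mean(dim=(0, 1))  # one saliency score per channel

class SalientChannelOffset(nn.Module):
    """Freezes a layer and trains an offset only on its salient channels."""
    def __init__(self, layer: nn.Module, salient_idx: torch.Tensor):
        super().__init__()
        self.layer = layer
        for p in self.layer.parameters():  # keep the backbone layer frozen
            p.requires_grad_(False)
        self.register_buffer("idx", salient_idx)
        self.offset = nn.Parameter(torch.zeros(salient_idx.numel()))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.layer(x)
        # Scatter the k trainable values into a full-width channel vector,
        # then add it, so gradients reach only the selected channels.
        full = torch.zeros(out.shape[-1], device=out.device, dtype=out.dtype)
        full = full.scatter(0, self.idx, self.offset.to(out.dtype))
        return out + full

# Hypothetical usage: tune the top 1/8 channels of one ViT MLP layer.
# scores = channel_saliency(vit, vit.blocks[0].mlp.fc1, task_batch)
# idx = scores.topk(scores.numel() // 8).indices
# vit.blocks[0].mlp.fc1 = SalientChannelOffset(vit.blocks[0].mlp.fc1, idx)
```

Training only k offsets per wrapped layer is what keeps the added parameter count small; the actual saliency criterion and per-channel update rule used by SCT may differ from this sketch.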


Related research

Pro-tuning: Unified Prompt Tuning for Vision Tasks (07/28/2022)
In computer vision, fine-tuning is the de-facto approach to leverage pre...

Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning (05/11/2022)
Few-shot in-context learning (ICL) enables pre-trained language models t...

Parameter-Efficient Fine-Tuning without Introducing New Latency (05/26/2023)
Parameter-efficient fine-tuning (PEFT) of pre-trained language models ha...

FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer (12/06/2022)
Recent work has explored the potential to adapt a pre-trained vision tra...

MerA: Merging Pretrained Adapters For Few-Shot Learning (08/30/2023)
Adapter tuning, which updates only a few parameters, has become a mainst...

NASOA: Towards Faster Task-oriented Online Fine-tuning with a Zoo of Models (08/07/2021)
Fine-tuning from pre-trained ImageNet models has been a simple, effectiv...

Parameter-Efficient Prompt Tuning Makes Generalized and Calibrated Neural Text Retrievers (07/14/2022)
Prompt tuning attempts to update few task-specific parameters in pre-tra...
