Regularized Mask Tuning: Uncovering Hidden Knowledge in Pre-trained Vision-Language Models

07/27/2023
by Kecheng Zheng, et al.

Prompt tuning and adapter tuning have shown great potential in transferring pre-trained vision-language models (VLMs) to various downstream tasks. In this work, we design a new type of tuning method, termed regularized mask tuning, which masks the network parameters through a learnable selection. Inspired by neural pathways, we argue that the knowledge required by a downstream task already exists in the pre-trained weights but is concealed during upstream pre-training. To bring this useful knowledge back to light, we first identify a set of parameters that are important to a given downstream task, then attach a binary mask to each such parameter, and finally optimize these masks on the downstream data with the parameters frozen. When updating the masks, we introduce a novel gradient dropout strategy to regularize the parameter selection, preventing the model from forgetting old knowledge and from overfitting the downstream data. Experimental results on 11 datasets demonstrate the consistent superiority of our method over previous alternatives. Notably, we deliver an 18.73% performance improvement over zero-shot CLIP while masking an average of only 2.56% of the parameters. Furthermore, our method is synergistic with most existing parameter-efficient tuning methods and can boost performance on top of them. The project page can be found at https://wuw2019.github.io/RMT/.
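To make the mechanism concrete, below is a minimal PyTorch sketch of the two ingredients the abstract describes: a frozen layer whose pre-trained weights are gated by a learnable binary mask, and a gradient dropout step applied to the mask gradients before each optimizer update. The names (MaskedLinear, gradient_dropout_), the straight-through parameterization of the binary mask, and the uniform-random dropout rule are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of mask tuning with gradient dropout (PyTorch).
# The mask parameterization and dropout rule below are assumptions
# made for illustration, not the paper's verified recipe.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedLinear(nn.Module):
    """A frozen linear layer whose weights are gated by a learnable binary mask."""

    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.linear = linear
        for p in self.linear.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen
        # Real-valued scores behind the binary mask; initialized positive
        # so the initial hard mask is all ones (original weights intact).
        self.scores = nn.Parameter(torch.full_like(linear.weight, 0.01))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Straight-through estimator: hard 0/1 mask in the forward pass,
        # identity gradient to the scores in the backward pass.
        hard = (self.scores > 0).float()
        mask = hard + self.scores - self.scores.detach()
        return F.linear(x, self.linear.weight * mask, self.linear.bias)


def gradient_dropout_(mask_params, drop_rate: float = 0.5) -> None:
    """Zero a random fraction of each mask gradient in place, so only a
    subset of mask entries is updated per step (assumed dropout rule)."""
    for p in mask_params:
        if p.grad is not None:
            keep = (torch.rand_like(p.grad) >= drop_rate).float()
            p.grad.mul_(keep)


# Usage: optimize only the mask scores on downstream data.
layer = MaskedLinear(nn.Linear(512, 512))
optimizer = torch.optim.AdamW([layer.scores], lr=1e-3)

features = torch.randn(8, 512)          # stand-in for downstream features
loss = layer(features).pow(2).mean()    # stand-in for the downstream loss
loss.backward()
gradient_dropout_([layer.scores], drop_rate=0.5)
optimizer.step()
optimizer.zero_grad()
```

In a full setup, masks would be attached only to the parameters identified as important for the given downstream task, and the dropout rate would be tuned alongside the learning rate.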

Related research

04/24/2022
Learning to Win Lottery Tickets in BERT Transfer via Task-agnostic Mask Training
Recent studies on the lottery ticket hypothesis (LTH) show that pre-trai...

03/24/2023
Prompt Tuning based Adapter for Vision-Language Model Adaption
Large pre-trained vision-language (VL) models have shown significant pro...

11/06/2022
Robust Lottery Tickets for Pre-trained Language Models
Recent works on Lottery Ticket Hypothesis have shown that pre-trained la...

05/28/2023
Stochastic Bridges as Effective Regularizers for Parameter-Efficient Tuning
Parameter-efficient tuning methods (PETs) have achieved promising result...

06/15/2023
LOVM: Language-Only Vision Model Selection
Pre-trained multi-modal vision-language models (VLMs) are becoming incre...

05/28/2023
One Network, Many Masks: Towards More Parameter-Efficient Transfer Learning
Fine-tuning pre-trained language models for multiple tasks tends to be e...

05/20/2022
Mask-guided Vision Transformer (MG-ViT) for Few-Shot Learning
Learning with little data is challenging but often inevitable in various...
