Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models

09/04/2023
by Qiong Wu, et al.

With ever-increasing parameters and computation, vision-language pre-trained (VLP) models exhibit prohibitive expenditure in downstream task adaptation. Recent endeavors mainly focus on parameter efficient transfer learning (PETL) for VLP models by only updating a small number of parameters. However, excessive computational overhead still plagues the application of VLP models. In this paper, we aim at parameter and computation efficient transfer learning (PCETL) for VLP models. In particular, PCETL not only needs to limit the number of trainable parameters in VLP models, but also to reduce the computational redundancy during inference, thus enabling a more efficient transfer. To approach this target, we propose a novel dynamic architecture skipping (DAS) approach towards effective PCETL. Instead of directly optimizing the intrinsic architectures of VLP models, DAS first observes the significance of their modules to downstream tasks via a reinforcement learning (RL) based process, and then skips the redundant ones with lightweight networks, i.e., adapters, according to the obtained rewards. In this case, the VLP model can well maintain the scale of trainable parameters while speeding up its inference on downstream tasks. To validate DAS, we apply it to two representative VLP models, namely ViLT and METER, and conduct extensive experiments on a set of VL tasks. The experimental results not only show the great advantages of DAS in reducing computational complexity, e.g., -11.97%, but also confirm its competitiveness against existing PETL methods in terms of parameter scale and performance. Our source code is given in our appendix.
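
To make the skip-and-adapt idea concrete, below is a minimal PyTorch sketch of replacing redundant Transformer layers with lightweight adapters. The Adapter module, the hidden size, and the skip_set of layer indices are illustrative assumptions for this sketch, not the authors' released code; in DAS, the layers to skip are chosen from the rewards of the RL-based search rather than fixed by hand.

```python
# Minimal sketch of skipping redundant Transformer layers with adapters.
# Assumptions: a standard PyTorch encoder stack, hidden size 768, and a
# hand-picked skip_set standing in for the RL-derived redundancy decisions.
import torch
import torch.nn as nn


class Adapter(nn.Module):
    """Lightweight bottleneck that stands in for a skipped Transformer layer."""

    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the skipped layer's input flowing through.
        return x + self.up(self.act(self.down(x)))


class SkippedEncoder(nn.Module):
    """Frozen layer stack in which redundant layers are replaced by adapters."""

    def __init__(self, layers: nn.ModuleList, skip_set: set, dim: int = 768):
        super().__init__()
        self.blocks = nn.ModuleList()
        for i, layer in enumerate(layers):
            if i in skip_set:
                # Redundant layer (low reward): replace it with a small
                # trainable adapter to cut FLOPs at inference time.
                self.blocks.append(Adapter(dim))
            else:
                # Keep the pre-trained layer, but freeze its parameters.
                for p in layer.parameters():
                    p.requires_grad = False
                self.blocks.append(layer)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for block in self.blocks:
            x = block(x)
        return x


if __name__ == "__main__":
    # Toy 12-layer encoder; layers 3, 7 and 11 are assumed redundant here.
    layers = nn.ModuleList(
        nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)
        for _ in range(12)
    )
    model = SkippedEncoder(layers, skip_set={3, 7, 11})
    out = model(torch.randn(2, 16, 768))
    print(out.shape)  # torch.Size([2, 16, 768])
```

Because only the adapters are trainable and the skipped layers are removed from the forward pass entirely, this construction keeps the trainable parameter count small while also reducing inference-time computation, which is the distinction PCETL draws over parameter-only PETL.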

Related research

Approximated Prompt Tuning for Vision-Language Pre-trained Models (06/27/2023)
Prompt tuning is a parameter-efficient way to deploy large-scale pre-tra...

Towards Efficient Visual Adaption via Structural Re-parameterization (02/16/2023)
Parameter-efficient transfer learning (PETL) is an emerging research spo...

Conditional Adapters: Parameter-efficient Transfer Learning with Fast Inference (04/11/2023)
We propose Conditional Adapter (CoDA), a parameter-efficient transfer le...

Modular Deep Learning (02/22/2023)
Transfer learning has recently become the dominant paradigm of machine l...

Scalable Weight Reparametrization for Efficient Transfer Learning (02/26/2023)
This paper proposes a novel, efficient transfer learning method, called ...

AdapterBias: Parameter-efficient Token-dependent Representation Shift for Adapters in NLP Tasks (04/30/2022)
Transformer-based pre-trained models with millions of parameters require...

MGit: A Model Versioning and Management System (07/14/2023)
Models derived from other models are extremely common in machine learnin...