Know Where You're Going: Meta-Learning for Parameter-Efficient Fine-tuning

05/25/2022
by   Mozhdeh Gheini, et al.

A recent family of techniques, dubbed lightweight fine-tuning methods, facilitates parameter-efficient transfer learning by updating only a small set of additional parameters while keeping the parameters of the pretrained language model frozen. While these techniques have proven effective, no existing studies examine whether and how knowledge of the downstream fine-tuning approach should affect the pretraining stage. In this work, we show that taking the ultimate choice of fine-tuning method into consideration boosts the performance of parameter-efficient fine-tuning. Relying on optimization-based meta-learning using MAML, with certain modifications for our distinct purpose, we prime the pretrained model specifically for parameter-efficient fine-tuning, resulting in gains of up to 1.7 points on cross-lingual NER fine-tuning. Our ablation settings and analyses further reveal that the tweaks we introduce to MAML are crucial for the attained gains.
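The core idea can be illustrated with a toy sketch. The following is not the paper's implementation: it is a minimal first-order MAML loop in NumPy, where a small bias vector `b` stands in for the lightweight fine-tuning parameters and a matrix `W` stands in for the frozen pretrained backbone. The inner loop adapts only `b` (mimicking parameter-efficient fine-tuning), while the outer loop updates `W` on the post-adaptation loss, "priming" the backbone for bias-only tuning. All names, sizes, and learning rates here are illustrative assumptions.

```python
import numpy as np

# Toy illustration (not the paper's actual method): first-order MAML that
# primes a "backbone" W so that tuning only a small bias vector b --
# standing in for parameter-efficient fine-tuning -- works well.
rng = np.random.default_rng(0)
D_IN, D_OUT = 4, 2
W_true = rng.normal(size=(D_OUT, D_IN))     # structure shared across tasks
W = rng.normal(size=(D_OUT, D_IN))          # backbone, meta-learned

def make_task(n=8):
    """Sample a toy task: same W_true, but a task-specific output offset."""
    X = rng.normal(size=(n, D_IN))
    Y = X @ W_true.T + rng.normal(size=D_OUT)
    return X, Y

def loss_and_grads(W, b, X, Y):
    """Mean-squared error of y = W x + b, with gradients w.r.t. W and b."""
    err = X @ W.T + b - Y
    loss = (err ** 2).mean()
    gW = 2.0 * err.T @ X / err.size
    gb = 2.0 * err.sum(axis=0) / err.size
    return loss, gW, gb

INNER_LR, OUTER_LR, INNER_STEPS = 0.1, 0.01, 3
for _ in range(300):
    X, Y = make_task()
    # Inner loop: adapt only b (the lightweight parameters); W stays
    # frozen, mirroring parameter-efficient fine-tuning.
    b = np.zeros(D_OUT)
    for _ in range(INNER_STEPS):
        _, _, gb = loss_and_grads(W, b, X, Y)
        b -= INNER_LR * gb
    # Outer loop (first-order approximation): update the backbone on the
    # post-adaptation loss, priming it for bias-only fine-tuning.
    _, gW, _ = loss_and_grads(W, b, X, Y)
    W -= OUTER_LR * gW
```

After meta-training, adapting only `b` on a fresh task from the primed `W` should reduce the loss quickly; the paper's modifications to MAML, and its use of a real pretrained language model with prefix-style parameters, go well beyond this linear toy.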

