General Framework for Self-Supervised Model Priming for Parameter-Efficient Fine-tuning

12/02/2022
by Shih-Cheng Huang, et al.

Parameter-efficient methods (such as prompt tuning or adapters) for adapting pre-trained language models (PLMs) to downstream tasks have become popular recently. However, these methods still fall short of their full potential: two significant challenges are few-shot adaptation and cross-task generalization. To tackle these issues, we propose a general framework that enhances the few-shot adaptation and cross-domain generalization ability of parameter-efficient methods. In our framework, we prime the self-supervised model so that parameter-efficient methods can rapidly adapt to various downstream few-shot tasks. To evaluate the authentic generalization ability of these parameter-efficient methods, we conduct experiments on a few-shot cross-domain benchmark containing 160 diverse NLP tasks. The results reveal that priming by tuning only the PLM with extra training tasks leads to the best performance. We also perform a comprehensive analysis of various parameter-efficient methods under few-shot cross-domain scenarios.
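To make the recipe above concrete, here is a minimal sketch (not the authors' released code) of the two stages: the whole PLM is first primed on extra training tasks, then the PLM is frozen and only a small parameter-efficient module (a soft prompt in this sketch) is tuned on a downstream few-shot task. The tiny stand-in encoder, the `run_epoch` helper, and the placeholder datasets `priming_tasks` and `few_shot_task` are illustrative assumptions, not from the paper.

```python
# Hypothetical sketch of the two-stage recipe described above:
#   (1) prime the full PLM on a set of extra training tasks,
#   (2) freeze the PLM and fit only a small parameter-efficient module
#       (here: a soft prompt) on a downstream few-shot task.
# The PLM is stood in for by a tiny encoder; all names are placeholders.
import torch
import torch.nn as nn

class TinyPLM(nn.Module):
    """Stand-in for a pre-trained encoder (e.g. a Transformer)."""
    def __init__(self, vocab=1000, dim=64, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, n_labels)

    def forward(self, ids, soft_prompt=None):
        x = self.embed(ids)                          # (B, T, D)
        if soft_prompt is not None:                  # prepend trainable prompt vectors
            x = torch.cat([soft_prompt.expand(x.size(0), -1, -1), x], dim=1)
        _, h = self.encoder(x)                       # h: (1, B, D)
        return self.head(h.squeeze(0))               # (B, n_labels)

def run_epoch(model, batches, params, soft_prompt=None, lr=1e-3):
    """One pass over a task's batches, updating only `params`."""
    opt = torch.optim.AdamW(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for ids, labels in batches:
        loss = loss_fn(model(ids, soft_prompt), labels)
        opt.zero_grad(); loss.backward(); opt.step()

# ---- Stage 1: prime the whole PLM on extra training tasks (toy data) ----
plm = TinyPLM()
priming_tasks = [[(torch.randint(0, 1000, (8, 16)), torch.randint(0, 2, (8,)))
                  for _ in range(4)] for _ in range(3)]
for task in priming_tasks:
    run_epoch(plm, task, plm.parameters())

# ---- Stage 2: freeze the PLM, tune only a soft prompt on a few-shot task ----
for p in plm.parameters():
    p.requires_grad_(False)
soft_prompt = nn.Parameter(torch.randn(1, 4, 64) * 0.02)   # 4 prompt vectors
few_shot_task = [(torch.randint(0, 1000, (4, 16)), torch.randint(0, 2, (4,)))]
run_epoch(plm, few_shot_task, [soft_prompt], soft_prompt=soft_prompt)
```

Only the soft prompt receives gradient updates in stage 2, which mirrors the few-shot adaptation setting the abstract describes; any other parameter-efficient module (adapters, LoRA, etc.) could be swapped in at the same point.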

Related research

04/19/2023
AdapterGNN: Efficient Delta Tuning Improves Generalization Ability in Graph Neural Networks
Fine-tuning pre-trained models has recently yielded remarkable performan...

10/25/2022
Evaluating Parameter Efficient Learning for Generation
Parameter efficient learning methods (PERMs) have recently gained signif...

07/15/2023
SINC: Self-Supervised In-Context Learning for Vision-Language Tasks
Large Pre-trained Transformers exhibit an intriguing capacity for in-con...

03/22/2023
Meta-augmented Prompt Tuning for Better Few-shot Learning
Prompt tuning is a parameter-efficient method, which freezes all PLM par...

09/17/2023
MVP: Meta Visual Prompt Tuning for Few-Shot Remote Sensing Image Scene Classification
Vision Transformer (ViT) models have recently emerged as powerful and ve...

05/14/2023
Learning to Generalize for Cross-domain QA
There have been growing concerns regarding the out-of-domain generalizat...

03/12/2023
Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models
Prompt tuning, a recently emerging paradigm, enables the powerful vision...
