Meta-augmented Prompt Tuning for Better Few-shot Learning

03/22/2023
by Kaihang Pan, et al.

Prompt tuning is a parameter-efficient method that freezes all PLM parameters and only prepends additional tunable tokens, called soft prompts, to the input text. However, soft prompts rely heavily on a good initialization and can easily overfit under few-shot settings, causing prompt tuning to perform much worse than fine-tuning. To address these issues, this paper proposes SUMMER, a novel Self-sUpervised Meta-prompt learning framework with MEta-gradient Regularization for few-shot generalization. We leverage self-supervised meta-learning to better initialize soft prompts, and further propose curriculum-based task augmentation to enrich the meta-task distribution. In addition, a novel meta-gradient regularization method is integrated into the meta-prompt learning framework: it meta-learns to transform the raw gradient during few-shot learning into a domain-generalizable direction, thus alleviating overfitting. Extensive experiments show that SUMMER achieves better performance on a range of few-shot downstream tasks and also exhibits stronger domain generalization ability.
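The core mechanic the abstract describes is soft-prompt tuning: the pretrained model stays frozen, and only a small matrix of prompt embeddings prepended to the input receives gradient updates. A minimal NumPy sketch of that idea follows; the frozen "PLM" here is a hypothetical toy stand-in (a fixed embedding table `vocab`, mean pooling, and a fixed linear head `W_out`), not the paper's actual architecture or its meta-learning procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- frozen "PLM" pieces (toy stand-ins; all weights stay fixed) ---
d = 8                               # embedding dimension
vocab = rng.normal(size=(100, d))   # frozen token embedding table
W_out = rng.normal(size=(d, 1))     # frozen linear readout head

def model(prompt, token_ids):
    """Prepend soft-prompt vectors to the frozen token embeddings,
    mean-pool the sequence, and apply the frozen linear head."""
    x = np.concatenate([prompt, vocab[token_ids]], axis=0)  # (p_len + T, d)
    h = x.mean(axis=0)                                      # mean pooling
    return h @ W_out                                        # scalar logit

# --- prompt tuning: only the soft prompt is trainable ---
p_len = 4
prompt = rng.normal(scale=0.1, size=(p_len, d))  # the only tunable params

token_ids = np.array([3, 17, 42])
target = 1.0
lr = 0.1
N = p_len + len(token_ids)

losses = []
for _ in range(100):
    err = model(prompt, token_ids)[0] - target
    # analytic gradient of the squared error w.r.t. each prompt row:
    # d(loss)/d(prompt_i) = 2 * err * W_out.T / N   (from the mean pooling)
    grad = 2 * err * np.tile(W_out.T, (p_len, 1)) / N
    prompt -= lr * grad                  # update soft prompt only
    losses.append(float((model(prompt, token_ids)[0] - target) ** 2))
```

With the frozen weights untouched, the loss drops purely through the learned prompt rows; this also makes concrete why a poor prompt initialization matters so much, which is the gap the paper's self-supervised meta-initialization targets.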


Related Research

03/12/2023 · Gradient-Regulated Meta-Prompt Learning for Generalizable Vision-Language Models
Prompt tuning, a recently emerging paradigm, enables the powerful vision...

11/02/2021 · Diverse Distributions of Self-Supervised Tasks for Meta-Learning in NLP
Meta-learning considers the problem of learning an efficient learning pr...

09/23/2022 · MetaPrompting: Learning to Learn Better Prompts
Prompting method is regarded as one of the crucial progress for few-shot...

04/13/2023 · Out-of-distribution Few-shot Learning For Edge Devices without Model Fine-tuning
Few-shot learning (FSL) via customization of a deep learning network wit...

12/02/2022 · General Framework for Self-Supervised Model Priming for Parameter-Efficient Fine-tuning
Parameter-efficient methods (like Prompt or Adapters) for adapting pre-t...

03/28/2022 · A Framework of Meta Functional Learning for Regularising Knowledge Transfer
Machine learning classifiers' capability is largely dependent on the sca...

06/14/2023 · Improving Generalization in Meta-Learning via Meta-Gradient Augmentation
Meta-learning methods typically follow a two-loop framework, where each ...
