Reducing Retraining by Recycling Parameter-Efficient Prompts

08/10/2022
by Brian Lester, et al.

Parameter-efficient methods can use a single frozen pre-trained large language model (LLM) to perform many tasks by learning task-specific soft prompts that modulate model behavior when concatenated to the input text. However, these learned prompts are tightly coupled to a given frozen model: if the model is updated, corresponding new prompts must be obtained. In this work, we propose and investigate several approaches to "Prompt Recycling," where a prompt trained on a source model is transformed to work with the new target model. Our methods do not rely on supervised pairs of prompts, task-specific data, or training updates with the target model, any of which would be as costly as re-tuning prompts on the target model from scratch. We show that recycling between models is possible (our best settings successfully recycle 88.9% of prompts, producing a prompt that outperforms baselines), but significant performance headroom remains, requiring improved recycling techniques.
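One natural family of recyclers fits a mapping between the two models' token-embedding spaces and applies it to the soft prompt. The sketch below illustrates that idea with a linear map fit by least squares over the shared vocabulary; it is a minimal illustration of the concept, not the authors' code, and all names, shapes, and the choice of least squares are assumptions for demonstration.

```python
import numpy as np

def fit_linear_recycler(src_emb, tgt_emb):
    # Fit W minimizing ||src_emb @ W - tgt_emb||^2 over tokens shared
    # by the source and target vocabularies (assumed aligned row-wise).
    W, *_ = np.linalg.lstsq(src_emb, tgt_emb, rcond=None)
    return W

def recycle_prompt(src_prompt, W):
    # Map each soft-prompt vector from the source embedding space
    # into the target embedding space.
    return src_prompt @ W

# Toy usage with random stand-ins for the real embedding tables
# (hypothetical sizes; a real run would load them from the models).
rng = np.random.default_rng(0)
vocab, d_src, d_tgt, prompt_len = 32_000, 768, 1024, 100
src_emb = rng.normal(size=(vocab, d_src))
tgt_emb = rng.normal(size=(vocab, d_tgt))
src_prompt = rng.normal(size=(prompt_len, d_src))

W = fit_linear_recycler(src_emb, tgt_emb)
tgt_prompt = recycle_prompt(src_prompt, W)
print(tgt_prompt.shape)  # (100, 1024)
```

Note that this matches the constraints stated in the abstract: the map is fit only from the models' embedding tables, so it needs no supervised prompt pairs, no task-specific data, and no gradient updates through the target model.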
