Attentional Mixtures of Soft Prompt Tuning for Parameter-efficient Multi-task Knowledge Sharing

05/24/2022
by Akari Asai, et al.

This work introduces ATTEMPT (Attentional Mixtures of Prompt Tuning), a new modular, multi-task, and parameter-efficient language model (LM) tuning approach that combines knowledge transferred across different tasks via a mixture of soft prompts while keeping the original LM unchanged. ATTEMPT interpolates a set of prompts trained on large-scale source tasks with a newly initialized target-task prompt, using instance-wise attention computed by a lightweight sub-network trained on multiple target tasks. ATTEMPT is parameter-efficient (e.g., it updates 1,600 times fewer parameters than full fine-tuning) and enables multi-task learning and flexible extensions; importantly, it is also more interpretable because it shows which source tasks affect the final model decision on target tasks. Experimental results across 17 diverse datasets show that ATTEMPT improves prompt tuning by up to a 22% relative improvement and outperforms or matches other parameter-efficient tuning approaches that use over ten times more parameters.
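
To make the mechanism described above concrete, the sketch below mixes frozen source-task prompts with a trainable target prompt using instance-wise attention produced by a small sub-network. This is a minimal illustration, not the authors' implementation: the module name, tensor shapes, mean-pooled instance summary, and down/up-projection details are all assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionalPromptMixture(nn.Module):
    """Hypothetical sketch of an attentional mixture of soft prompts.

    Frozen source-task prompts and a newly initialized target prompt are
    interpolated per input instance; the backbone LM itself stays frozen.
    """

    def __init__(self, source_prompts, prompt_len=100, d_model=768, d_proj=64):
        super().__init__()
        # Pre-trained source prompts, kept frozen: (num_sources, prompt_len, d_model)
        self.register_buffer("source_prompts", source_prompts)
        # Trainable target-task prompt, newly initialized
        self.target_prompt = nn.Parameter(torch.randn(prompt_len, d_model) * 0.02)
        # Lightweight sub-network that produces an attention query per instance
        self.down = nn.Linear(d_model, d_proj)
        self.up = nn.Linear(d_proj, d_model)

    def forward(self, input_embeds):
        # input_embeds: (batch, seq_len, d_model) from the frozen LM's embedding layer
        x = input_embeds.mean(dim=1)                       # (batch, d_model) instance summary
        q = self.up(torch.relu(self.down(x)))              # (batch, d_model) attention query
        # Candidate prompts: all source prompts plus the target prompt
        prompts = torch.cat(
            [self.source_prompts, self.target_prompt.unsqueeze(0)], dim=0
        )                                                  # (K+1, prompt_len, d_model)
        keys = prompts.mean(dim=1)                         # (K+1, d_model) one key per prompt
        attn = F.softmax(q @ keys.T / keys.shape[-1] ** 0.5, dim=-1)  # (batch, K+1)
        # Instance-wise interpolation of the candidate prompts
        mixed = torch.einsum("bk,kld->bld", attn, prompts)  # (batch, prompt_len, d_model)
        # Prepend the mixed prompt to the input embeddings
        return torch.cat([mixed, input_embeds], dim=1), attn
```

For example, with three source prompts stacked into a tensor of shape (3, 100, 768), calling the module on embedded inputs returns the prompt-prepended embeddings together with per-instance attention weights over the four candidate prompts, which is what makes the source-task contributions inspectable.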
