π-Tuning: Transferring Multimodal Foundation Models with Optimal Multi-task Interpolation

04/27/2023
by Chengyue Wu, et al.

Foundation models have achieved great advances in multi-task learning with a unified interface for unimodal and multimodal tasks. However, the potential of such multi-task learners has not been exploited during transfer learning. In this work, we present a universal parameter-efficient transfer learning method, termed Predict-Interpolate Tuning (π-Tuning), for vision, language, and vision-language tasks. It aggregates the parameters of lightweight task-specific experts learned from similar tasks to aid the target downstream task. The task similarities are predicted in a unified modality-independent space, yielding a scalable graph that describes task relationships. π-Tuning has several appealing benefits. First, it flexibly explores both intra- and inter-modal transferability between similar tasks to improve the accuracy and robustness of transfer learning, especially in data-scarce scenarios. Second, it offers a systematic solution for transfer learning via multi-task prediction-and-then-interpolation, compatible with diverse types of parameter-efficient experts, such as prompts and adapters. Third, an extensive study of task-level mutual benefits on 14 unimodal and 6 multimodal datasets shows that π-Tuning surpasses fine-tuning and other parameter-efficient transfer learning methods in both full-shot and low-shot regimes. The task graph also enables an in-depth, interpretable analysis of task transferability across modalities.
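To make the predict-and-then-interpolate idea concrete, the sketch below (not from the paper) ranks source tasks by similarity to the target task and merges the parameters of the top-k lightweight experts. The function names (`predict_top_k_tasks`, `interpolate_experts`), the use of cosine similarity over toy task embeddings, and the softmax interpolation weights are illustrative assumptions rather than the paper's exact procedure.

```python
# A minimal sketch of the predict-then-interpolate idea behind pi-Tuning.
# Assumptions (not from the abstract): task similarity is scored via cosine
# similarity of hypothetical task embeddings, each expert is a plain
# parameter dictionary (e.g., adapter or prompt weights), and interpolation
# weights are a softmax over the top-k similarity scores.

import torch
import torch.nn.functional as F


def predict_top_k_tasks(target_emb, source_embs, k=3):
    """Rank source tasks by cosine similarity to the target task embedding."""
    sims = F.cosine_similarity(target_emb.unsqueeze(0), source_embs, dim=-1)
    topk = torch.topk(sims, k=min(k, source_embs.size(0)))
    return topk.indices, topk.values


def interpolate_experts(expert_params, scores):
    """Weighted average of expert parameter dicts (same keys/shapes assumed)."""
    weights = F.softmax(scores, dim=0)  # normalize interpolation weights
    merged = {}
    for name in expert_params[0]:
        stacked = torch.stack([e[name] for e in expert_params], dim=0)
        # Broadcast the per-expert weight over all parameter dimensions.
        w = weights.view(-1, *([1] * (stacked.dim() - 1)))
        merged[name] = (w * stacked).sum(dim=0)
    return merged


if __name__ == "__main__":
    torch.manual_seed(0)
    # Toy setup: 5 source tasks and 1 target task in a shared embedding space.
    source_embs = torch.randn(5, 16)
    target_emb = torch.randn(16)

    # Toy lightweight experts: one small adapter-like weight per source task.
    experts = [{"adapter.weight": torch.randn(8, 8)} for _ in range(5)]

    idx, sims = predict_top_k_tasks(target_emb, source_embs, k=3)
    merged = interpolate_experts([experts[i] for i in idx], sims)
    print("selected source tasks:", idx.tolist())
    print("merged adapter shape:", merged["adapter.weight"].shape)
```

In practice the interpolation weights and the merged expert would typically be refined further on the target task's data; this toy example stops at the merge step.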

Related research

K For The Price Of 1: Parameter Efficient Multi-task And Transfer Learning (10/25/2018)
We introduce a novel method that enables parameter-efficient transfer an...

Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5 (10/31/2022)
We compare sequential fine-tuning with a model for multi-task learning i...

Attentional Mixtures of Soft Prompt Tuning for Parameter-efficient Multi-task Knowledge Sharing (05/24/2022)
This work introduces ATTEMPT (Attentional Mixture of Prompt Tuning), a n...

Rethinking Efficient Tuning Methods from a Unified Perspective (03/01/2023)
Parameter-efficient transfer learning (PETL) based on large-scale pre-tr...

Dropping Networks for Transfer Learning (04/23/2018)
In natural language understanding, many challenges require learning rela...

Unified Vision and Language Prompt Learning (10/13/2022)
Prompt tuning, a parameter- and data-efficient transfer learning paradig...

Hyperparameter Transfer Learning with Adaptive Complexity (02/25/2021)
Bayesian optimization (BO) is a sample efficient approach to automatical...