Prompt Tuning for Generative Multimodal Pretrained Models

08/04/2022
by Hao Yang, et al.

Prompt tuning has become a new paradigm for model tuning, and it has demonstrated success in natural language pretraining and even vision pretraining. In this work, we explore the transfer of prompt tuning to multimodal pretraining, with a focus on generative multimodal pretrained models rather than contrastive ones. Specifically, we implement prompt tuning on a unified sequence-to-sequence pretrained model that is adaptive to both understanding and generation tasks. Experimental results demonstrate that lightweight prompt tuning can achieve performance comparable to finetuning and surpass other lightweight tuning methods. Moreover, in comparison with finetuned models, the prompt-tuned models demonstrate improved robustness against adversarial attacks. We further find that experimental factors, including the prompt length, prompt depth, and reparameterization, have significant impacts on model performance, and we therefore empirically provide recommendations for the setup of prompt tuning. Despite the observed advantages, we still identify some limitations of prompt tuning, and we correspondingly point out directions for future studies. Code is available at <https://github.com/OFA-Sys/OFA>
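To make the setup concrete, below is a minimal PyTorch sketch of how deep prompt tuning can be wired up: trainable prompt embeddings of a chosen prompt length are prepended at each transformer layer (the prompt depth), optionally reparameterized through a small MLP, while the pretrained backbone stays frozen. The class and parameter names (PromptEncoder, prompt_length, prompt_depth, reparameterize) are illustrative assumptions, not the released OFA implementation.

```python
# Hypothetical sketch of deep prompt tuning on a frozen backbone; names are
# illustrative assumptions and do not mirror the OFA-Sys/OFA codebase.
import torch
import torch.nn as nn


class PromptEncoder(nn.Module):
    """Trainable prompts, optionally reparameterized through a small MLP."""

    def __init__(self, prompt_length=64, hidden_dim=768,
                 prompt_depth=12, reparameterize=True):
        super().__init__()
        self.prompt_depth = prompt_depth
        # One set of prompt embeddings per transformer layer ("prompt depth").
        self.embeddings = nn.Parameter(
            torch.randn(prompt_depth, prompt_length, hidden_dim) * 0.02)
        # Reparameterization: optimize the prompts through an MLP instead of
        # updating the raw embeddings directly.
        self.mlp = (nn.Sequential(nn.Linear(hidden_dim, hidden_dim),
                                  nn.Tanh(),
                                  nn.Linear(hidden_dim, hidden_dim))
                    if reparameterize else nn.Identity())

    def forward(self, layer_idx, batch_size):
        prompts = self.mlp(self.embeddings[layer_idx])          # (L, D)
        return prompts.unsqueeze(0).expand(batch_size, -1, -1)  # (B, L, D)


def freeze_backbone(model):
    """Freeze all pretrained weights; only the prompt parameters are trained."""
    for p in model.parameters():
        p.requires_grad = False


# Usage: prepend a layer's prompts to its hidden states before self-attention.
prompt_encoder = PromptEncoder(prompt_length=64, hidden_dim=768, prompt_depth=12)
hidden_states = torch.randn(2, 20, 768)              # (batch, seq_len, dim)
prompts = prompt_encoder(layer_idx=0, batch_size=2)
hidden_states = torch.cat([prompts, hidden_states], dim=1)
```

In this kind of setup, only the PromptEncoder parameters receive gradients, which is what makes the method lightweight relative to full finetuning; the prompt length and depth above are the experimental factors the abstract reports as having a significant impact on performance.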
