A Closer Look at Parameter-Efficient Tuning in Diffusion Models

03/31/2023
by Chendong Xiang, et al.

Large-scale diffusion models like Stable Diffusion are powerful and find various real-world applications, but customizing such models by full fine-tuning is both memory- and time-inefficient. Motivated by recent progress in natural language processing, we investigate parameter-efficient tuning in large diffusion models by inserting small learnable modules (termed adapters). In particular, we decompose the design space of adapters into orthogonal factors, namely the input position, the output position, and the function form, and perform Analysis of Variance (ANOVA), a classical statistical approach for analyzing the correlation between discrete variables (design options) and continuous variables (evaluation metrics). Our analysis suggests that the input position of adapters is the critical factor influencing downstream-task performance. We then carefully study the choice of input position and find that placing it after the cross-attention block leads to the best performance, a finding validated by additional visualization analyses. Finally, we provide a recipe for parameter-efficient tuning in diffusion models that is comparable, if not superior, to the fully fine-tuned baseline (e.g., DreamBooth) with only 0.75% extra parameters, across various customized tasks.
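As a rough illustration of the recipe the abstract describes, the sketch below shows a minimal PyTorch-style bottleneck adapter that reads the hidden states right after a cross-attention block and adds its output back as a residual, with the backbone frozen. This is a hedged sketch rather than the authors' released implementation: the class names, the bottleneck width, and the sub-module names (`self_attn`, `cross_attn`, `ff`, `norm1`..`norm3`) are hypothetical stand-ins for the corresponding parts of a Stable Diffusion transformer block.

```python
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    """Small bottleneck adapter: down-project, nonlinearity, up-project.

    The up-projection is zero-initialized so that, before tuning, the
    adapter leaves the frozen backbone's features unchanged.
    """

    def __init__(self, dim: int, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.act(self.down(x)))


class AdaptedTransformerBlock(nn.Module):
    """Wraps a frozen transformer block of the diffusion U-Net.

    The adapter takes its input from the hidden states right after the
    cross-attention sub-block (the input position the paper identifies as
    critical) and its output is added back as a residual. Only the adapter
    parameters are trainable.
    """

    def __init__(self, block: nn.Module, dim: int, bottleneck: int = 32):
        super().__init__()
        self.block = block  # frozen pretrained block
        for p in self.block.parameters():
            p.requires_grad_(False)
        self.adapter = LowRankAdapter(dim, bottleneck)  # only trainable part

    def forward(self, hidden: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        # Hypothetical sub-module names; real Stable Diffusion blocks differ.
        h = hidden + self.block.self_attn(self.block.norm1(hidden))
        h = h + self.block.cross_attn(self.block.norm2(h), context)
        h = h + self.adapter(h)  # adapter input: after the cross-attention block
        h = h + self.block.ff(self.block.norm3(h))
        return h
```

In such a setup, keeping the bottleneck width small is what keeps the trainable-parameter budget in the sub-percent range reported above; the exact width needed to reach 0.75% depends on the backbone's hidden dimensions.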


