FastComposer: Tuning-Free Multi-Subject Image Generation with Localized Attention

05/17/2023
by Guangxuan Xiao, et al.

Diffusion models excel at text-to-image generation, especially subject-driven generation for personalized images. However, existing methods are inefficient because they rely on subject-specific fine-tuning, which is computationally intensive and hampers efficient deployment. Moreover, existing methods struggle with multi-subject generation, as they often blend features among subjects. We present FastComposer, which enables efficient, personalized, multi-subject text-to-image generation without fine-tuning. FastComposer uses subject embeddings extracted by an image encoder to augment the generic text conditioning in diffusion models, enabling personalized image generation based on subject images and textual instructions with only forward passes. To address the identity-blending problem in multi-subject generation, FastComposer proposes cross-attention localization supervision during training, enforcing the attention of reference subjects to be localized to the correct regions in the target images. Naively conditioning on subject embeddings results in subject overfitting; FastComposer therefore proposes delayed subject conditioning in the denoising step to maintain both identity and editability in subject-driven image generation. FastComposer generates images of multiple unseen individuals with different styles, actions, and contexts. It achieves a 300×-2500× speedup over fine-tuning-based methods and requires zero extra storage for new subjects. FastComposer paves the way for efficient, personalized, high-quality multi-subject image creation. Code, model, and dataset are available at https://github.com/mit-han-lab/fastcomposer.
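The delayed subject conditioning described above can be sketched as a simple schedule over denoising steps. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the string placeholders standing in for embedding tensors, and the `alpha` switch-over ratio are all assumptions for exposition.

```python
def delayed_subject_conditioning(step, total_steps, text_cond, subject_cond, alpha=0.3):
    """Hypothetical helper sketching delayed subject conditioning.

    For the first `alpha` fraction of denoising steps, condition only on the
    plain text prompt so the overall layout and scene remain editable; after
    that, switch to the subject-augmented conditioning so the generated
    subject matches the reference identity.
    """
    if step < alpha * total_steps:
        return text_cond
    return subject_cond

# Toy usage: 50 denoising steps, with strings standing in for embeddings.
schedule = [
    delayed_subject_conditioning(t, 50, "text", "text+subject")
    for t in range(50)
]
```

With `alpha=0.3` and 50 steps, the first 15 steps use only the text conditioning and the remaining 35 use the subject-augmented conditioning; tuning `alpha` trades editability (larger) against identity fidelity (smaller).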

Related research

- BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing (05/24/2023): Subject-driven text-to-image generation models create novel renditions o...
- SVDiff: Compact Parameter Space for Diffusion Fine-Tuning (03/20/2023): Diffusion models have achieved remarkable success in text-to-image gener...
- Identity Encoder for Personalized Diffusion (04/14/2023): Many applications can benefit from personalized image generation models,...
- DisenBooth: Disentangled Parameter-Efficient Tuning for Subject-Driven Text-to-Image Generation (05/05/2023): Given a small set of images of a specific subject, subject-driven text-t...
- Subject-driven Text-to-Image Generation via Apprenticeship Learning (04/01/2023): Recent text-to-image generation models like DreamBooth have made remarka...
- PhotoVerse: Tuning-Free Image Customization with Text-to-Image Diffusion Models (09/11/2023): Personalized text-to-image generation has emerged as a powerful and soug...
- HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models (07/13/2023): Personalization has emerged as a prominent aspect within the field of ge...
