Break-A-Scene: Extracting Multiple Concepts from a Single Image

05/25/2023
by Omri Avrahami, et al.

Text-to-image model personalization aims to introduce a user-provided concept to the model, allowing its synthesis in diverse contexts. However, current methods primarily focus on the case of learning a single concept from multiple images with variations in backgrounds and poses, and struggle when adapted to a different scenario. In this work, we introduce the task of textual scene decomposition: given a single image of a scene that may contain several concepts, we aim to extract a distinct text token for each concept, enabling fine-grained control over the generated scenes. To this end, we propose augmenting the input image with masks that indicate the presence of target concepts. These masks can be provided by the user or generated automatically by a pre-trained segmentation model. We then present a novel two-phase customization process that optimizes a set of dedicated textual embeddings (handles) as well as the model weights, striking a delicate balance between accurately capturing the concepts and avoiding overfitting. We employ a masked diffusion loss to enable the handles to generate their assigned concepts, complemented by a novel loss on cross-attention maps to prevent entanglement. We also introduce union-sampling, a training strategy aimed at improving the ability to combine multiple concepts in generated images. We quantitatively compare our method against several baselines using automatic metrics, and further affirm the results with a user study. Finally, we showcase several applications of our method. The project page is available at: https://omriavrahami.com/break-a-scene/
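As a rough illustration of the three training ingredients named in the abstract (masked diffusion loss, cross-attention loss, union-sampling), here is a minimal PyTorch sketch. The function names, tensor shapes, and the loss weight `lam` are assumptions chosen for illustration, not the paper's actual implementation; in the real method the attention maps would come from the denoiser's cross-attention layers at the handle tokens.

```python
import random
import torch
import torch.nn.functional as F

def union_sample(concepts):
    """Union-sampling (sketch): draw a random subset of the scene's
    concepts for this training step, so the model also sees prompts
    that combine several handles at once."""
    k = random.randint(1, len(concepts))
    return random.sample(concepts, k)

def masked_diffusion_loss(noise_pred, noise, masks):
    """Masked diffusion loss (sketch): penalize the denoising error
    only inside the union of the selected concepts' masks, so each
    handle is supervised by its own region of the scene image."""
    union = torch.clamp(torch.stack(masks).sum(dim=0), max=1.0)  # (H, W)
    return F.mse_loss(noise_pred * union, noise * union)

def cross_attention_loss(attn_maps, masks):
    """Cross-attention loss (sketch): push each handle token's
    attention map toward its segmentation mask, discouraging
    entanglement between concepts."""
    loss = 0.0
    for attn, mask in zip(attn_maps, masks):
        # Downsample the full-resolution mask to the attention resolution.
        target = F.interpolate(mask[None, None], size=attn.shape[-2:])[0, 0]
        loss = loss + F.mse_loss(attn, target)
    return loss / len(attn_maps)

if __name__ == "__main__":
    # Toy tensors standing in for real model outputs: two concepts,
    # 64x64 latents, 16x16 cross-attention maps.
    masks = [torch.zeros(64, 64), torch.zeros(64, 64)]
    masks[0][:32, :] = 1.0   # concept 1 occupies the top half
    masks[1][32:, :] = 1.0   # concept 2 occupies the bottom half
    noise = torch.randn(1, 4, 64, 64)
    noise_pred = torch.randn(1, 4, 64, 64)
    attn_maps = [torch.rand(16, 16), torch.rand(16, 16)]

    picked = union_sample([0, 1])  # handle indices included in this step's prompt
    lam = 0.01                     # assumed weighting, not taken from the paper
    loss = masked_diffusion_loss(noise_pred, noise, [masks[i] for i in picked]) \
         + lam * cross_attention_loss([attn_maps[i] for i in picked],
                                      [masks[i] for i in picked])
    print(float(loss))
```

Per the abstract's two-phase process, a loss of this form would first be minimized over the handle embeddings alone, and then over the model weights as well, to balance concept fidelity against overfitting.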


