Compositional Visual Generation with Composable Diffusion Models

06/03/2022
by Nan Liu, et al.

Large text-guided diffusion models, such as DALLE-2, can generate stunning photorealistic images from natural language descriptions. While such models are highly flexible, they struggle to understand the composition of certain concepts, often confusing the attributes of different objects or the relations between them. In this paper, we propose an alternative structured approach to compositional generation with diffusion models. An image is generated by composing a set of diffusion models, each modeling a certain component of the image. To do this, we interpret diffusion models as energy-based models whose data distributions, defined by the energy functions, may be explicitly combined. The proposed method can generate scenes at test time that are substantially more complex than those seen during training, composing sentence descriptions, object relations, and human facial attributes, and even generalizing to new combinations rarely seen in the real world. We further illustrate how our approach may be used to compose pre-trained text-guided diffusion models, generating photorealistic images that contain all the details described in the input, including bindings of object attributes that have proven difficult for DALLE-2. These results point to the effectiveness of the proposed method in promoting structured generalization for visual generation. Project page: https://energy-based-model.github.io/Compositional-Visual-Generation-with-Composable-Diffusion-Models/
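For intuition, below is a minimal sketch of the conjunction ("AND") composition the abstract alludes to, under the energy-based interpretation: the noise predictions of several concept-conditioned diffusion models are combined into a single prediction at every denoising step. The interface here (`model(x, t, cond=...)` returning predicted noise, the function and argument names) is an illustrative assumption, not the authors' released code:

```python
import torch

@torch.no_grad()
def composed_noise_prediction(model, x, t, concepts, weights):
    """Compose concept-conditioned noise predictions as a conjunction (AND).

    Sketch of the composition rule
        eps_comb = eps(x, t) + sum_i w_i * (eps(x, t | c_i) - eps(x, t))
    assuming `model(x, t, cond=c)` returns eps(x, t | c) and
    `model(x, t, cond=None)` returns the unconditional eps(x, t).
    """
    eps_uncond = model(x, t, cond=None)  # unconditional prediction
    eps = eps_uncond.clone()
    for c, w in zip(concepts, weights):
        # Each concept contributes a weighted conditional-minus-unconditional term.
        eps = eps + w * (model(x, t, cond=c) - eps_uncond)
    return eps
```

In use, `eps` would simply replace the single-model noise prediction inside a standard DDPM/DDIM sampling step; note that with a single concept and a weight greater than 1, this reduces to ordinary classifier-free guidance.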


