eDiff-I: Text-to-Image Diffusion Models with an Ensemble of Expert Denoisers

11/02/2022
by Yogesh Balaji, et al.

Large-scale diffusion-based generative models have led to breakthroughs in text-conditioned, high-resolution image synthesis. Starting from random noise, such text-to-image diffusion models gradually synthesize images in an iterative fashion while conditioning on text prompts. We find that their synthesis behavior qualitatively changes throughout this process: early in sampling, generation relies strongly on the text prompt to produce text-aligned content, while later the text conditioning is almost entirely ignored. This suggests that sharing model parameters throughout the entire generation process may not be ideal. Therefore, in contrast to existing works, we propose to train an ensemble of text-to-image diffusion models specialized for different synthesis stages. To maintain training efficiency, we initially train a single model, which is then split into specialized models that are trained for specific stages of the iterative generation process. Our ensemble of diffusion models, called eDiff-I, achieves improved text alignment while maintaining the same inference cost and preserving high visual quality, outperforming previous large-scale text-to-image diffusion models on the standard benchmark. In addition, we train our model to exploit a variety of embeddings for conditioning, including the T5 text, CLIP text, and CLIP image embeddings. We show that these different embeddings lead to different behaviors. Notably, the CLIP image embedding provides an intuitive way to transfer the style of a reference image to the target text-to-image output. Lastly, we show a technique that enables eDiff-I's "paint-with-words" capability: a user can select words in the input text and paint them on a canvas to control the output, which is very handy for crafting the image they have in mind. The project page is available at https://deepimagination.cc/eDiff-I/
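The key mechanism described in the abstract is routing each denoising step to a stage-specific expert model rather than using one shared denoiser for the whole trajectory. The following is a minimal, hypothetical sketch of that idea in PyTorch: the expert boundaries, the toy denoiser, and the Euler-style update are all illustrative assumptions and are not the paper's actual architecture, noise schedule, or sampler.

```python
# Hypothetical sketch of ensemble-of-expert-denoisers sampling (not eDiff-I's code).
# Early (high-noise) steps use one expert, late (low-noise) steps another.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a text-conditioned denoising network (one expert)."""
    def __init__(self, dim=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim + 1, 128), nn.SiLU(), nn.Linear(128, dim))

    def forward(self, x, t):
        # Concatenate the normalized timestep so the expert sees the noise level.
        t_feat = t.expand(x.shape[0], 1)
        return self.net(torch.cat([x, t_feat], dim=-1))

def pick_expert(experts, t, boundaries=(0.7, 0.3)):
    """Route by sampling progress: high t (very noisy, text-driven stage) goes to
    the 'early' expert, low t (detail refinement) to the 'late' expert.
    The boundary values here are arbitrary placeholders."""
    if t >= boundaries[0]:
        return experts["early"]
    if t >= boundaries[1]:
        return experts["middle"]
    return experts["late"]

@torch.no_grad()
def sample(experts, dim=64, steps=50):
    x = torch.randn(1, dim)                      # start from pure noise
    for i in reversed(range(1, steps + 1)):
        t = torch.tensor([[i / steps]])          # normalized timestep in (0, 1]
        expert = pick_expert(experts, t.item())  # choose the stage-specific denoiser
        eps = expert(x, t)                       # predicted noise for this stage
        x = x - eps / steps                      # toy Euler-style update, not DDPM
    return x

experts = {name: TinyDenoiser() for name in ("early", "middle", "late")}
print(sample(experts).shape)  # torch.Size([1, 64])
```

In this sketch all experts share the same interface, which mirrors the training recipe in the abstract: train one model first, then clone and fine-tune copies for their assigned noise ranges, so inference cost per step stays the same as with a single denoiser.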


Related research

- Mixture of Diffusers for scene composition and high resolution image generation (02/05/2023): Diffusion methods have been proven to be very effective to generate imag...
- Breathing New Life into 3D Assets with Generative Repainting (09/15/2023): Diffusion-based text-to-image models ignited immense attention from the ...
- Scalable Adaptive Computation for Iterative Generation (12/22/2022): We present the Recurrent Interface Network (RIN), a neural net architect...
- Text-to-Image Diffusion Models can be Easily Backdoored through Multimodal Data Poisoning (05/07/2023): With the help of conditioning mechanisms, the state-of-the-art diffusion...
- Prompt-Free Diffusion: Taking "Text" out of Text-to-Image Diffusion Models (05/25/2023): Text-to-image (T2I) research has grown explosively in the past year, owi...
- MTTN: Multi-Pair Text to Text Narratives for Prompt Generation (01/21/2023): The increased interest in diffusion models has opened up opportunities f...
- Diffusion in Diffusion: Cyclic One-Way Diffusion for Text-Vision-Conditioned Generation (06/14/2023): Text-to-Image (T2I) generation with diffusion models allows users to con...
