Late-Constraint Diffusion Guidance for Controllable Image Synthesis

05/19/2023
by   Chang Liu, et al.

Diffusion models, with or without text conditioning, have demonstrated an impressive capability to synthesize photorealistic images given a few words or even none at all. These models may not fully satisfy user needs, however, as ordinary users and artists want to control the synthesized images with specific guidance: overall layout, color, structure, object shape, and so on. To adapt diffusion models for controllable image synthesis, several methods have been proposed that incorporate the required conditions as regularization upon the intermediate features of the diffusion denoising network. These methods, which we term early-constraint in this paper, have difficulty handling multiple conditions within a single solution. They tend to train a separate model for each specific condition, which incurs substantial training cost and yields non-generalizable solutions. To address these difficulties, we propose a new approach, namely late-constraint: we leave the diffusion network unchanged, but constrain its output to align with the required conditions. Specifically, we train a lightweight condition adapter to establish the correlation between external conditions and the internal representations of diffusion models. During the iterative denoising process, the conditional guidance is sent to the corresponding condition adapter, which manipulates the sampling process using the established correlation. We further equip the introduced late-constraint strategy with a timestep resampling method and an early stopping technique, which boost the quality of the synthesized images while complying with the guidance. Our method outperforms existing early-constraint methods and generalizes better to unseen conditions.
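The core idea of late-constraint guidance can be sketched in a few lines: keep the denoising network frozen, and at each sampling step nudge the intermediate sample along the gradient of an alignment loss between a condition adapter's output and the target condition. The sketch below is a toy illustration only, not the paper's implementation: `denoise_step` stands in for one step of a frozen diffusion sampler, `adapter` is a hypothetical one-line condition adapter (it reads out the mean intensity), the gradient is computed analytically for that toy adapter, and the timestep-resampling and early-stopping components described in the abstract are omitted.

```python
import numpy as np

np.random.seed(0)

def denoise_step(x, t):
    # Stand-in for one step of the frozen diffusion denoising network.
    return x * 0.95

def adapter(x):
    # Hypothetical lightweight condition adapter: maps the sample's
    # internal representation into condition space (here, mean intensity).
    return x.mean()

def late_constraint_sample(x, condition, steps=50, scale=8.0):
    """Late-constraint sampling sketch: the diffusion network is left
    unchanged; each intermediate sample is pushed down the gradient of
    an alignment loss  L = 0.5 * (adapter(x) - condition)^2."""
    for t in reversed(range(steps)):
        x = denoise_step(x, t)
        # Analytic gradient of L w.r.t. x for the toy mean-adapter:
        # dL/dx_i = (adapter(x) - condition) / x.size
        grad = (adapter(x) - condition) * np.ones_like(x) / x.size
        x = x - scale * grad
    return x

x0 = late_constraint_sample(np.random.randn(16), condition=1.0)
```

After sampling, `adapter(x0)` sits close to the requested condition value, even though the denoising step itself knows nothing about the condition; in the paper's setting the same role is played by a trained adapter over real diffusion features rather than this one-line stand-in.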


