High-Fidelity Guided Image Synthesis with Latent Diffusion Models

11/30/2022
by Jaskirat Singh, et al.

Controllable image synthesis with user scribbles has gained huge public interest with the recent advent of text-conditioned latent diffusion models. The user scribbles control the color composition, while the text prompt provides control over the overall image semantics. However, we note that prior works in this direction suffer from an intrinsic domain-shift problem: the generated outputs often lack detail and resemble simplistic representations of the target domain. In this paper, we propose a novel guided image synthesis framework that addresses this problem by modeling the output image as the solution of a constrained optimization problem. We show that, while computing an exact solution to this optimization is infeasible, a good approximation can be obtained with just a single pass of the reverse diffusion process. Additionally, we show that by simply defining a cross-attention-based correspondence between the input text tokens and the user stroke painting, the user can also control the semantics of different painted regions without requiring any conditional training or finetuning. Human user study results show that the proposed approach outperforms the previous state of the art by over 85.32%. The project page is available at https://1jsingh.github.io/gradop.
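For context, the sketch below illustrates the kind of stroke-guided pipeline the paper improves on: an SDEdit-style image-to-image pass through a latent diffusion model, where a coarse user stroke painting fixes the color composition and a text prompt fixes the semantics. This is a minimal illustration using the Hugging Face diffusers img2img pipeline, not the paper's GradOP optimization; the checkpoint name, file paths, prompt, strength, and guidance values are illustrative assumptions.

```python
# Hypothetical sketch: SDEdit-style stroke-guided synthesis with a latent
# diffusion model via the diffusers img2img pipeline. This is NOT the
# paper's GradOP method; it only shows the baseline setting the abstract
# refers to, where user strokes control color composition and the text
# prompt controls overall image semantics.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed checkpoint
    torch_dtype=torch.float16 if device == "cuda" else torch.float32,
).to(device)

# Coarse user stroke painting (colored scribbles on a canvas); path is an assumption.
stroke_painting = Image.open("user_strokes.png").convert("RGB").resize((512, 512))

# `strength` trades faithfulness to the strokes against realism of the output.
result = pipe(
    prompt="a photorealistic landscape with a lake and mountains at sunset",
    image=stroke_painting,
    strength=0.8,
    guidance_scale=7.5,
).images[0]
result.save("guided_output.png")
```

In this baseline setup, a high `strength` discards the stroke layout while a low `strength` preserves the painting's simplistic, cartoon-like look; that trade-off is the domain-shift problem the paper's constrained-optimization formulation is designed to avoid.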

Related research:

TF-ICON: Diffusion-Based Training-Free Cross-Domain Image Composition (07/24/2023)
The Stable Artist: Steering Semantics in Diffusion Latent Space (12/12/2022)
Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion (09/27/2022)
DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models (08/03/2023)
BoxDiff: Text-to-Image Synthesis with Training-Free Box-Constrained Diffusion (07/20/2023)
MAGIC: Mask-Guided Image Synthesis by Inverting a Quasi-Robust Classifier (09/23/2022)
SEGA: Instructing Diffusion using Semantic Dimensions (01/28/2023)