Universal Guidance for Diffusion Models

02/14/2023
by Arpit Bansal, et al.

Typical diffusion models are trained to accept a particular form of conditioning, most commonly text, and cannot be conditioned on other modalities without retraining. In this work, we propose a universal guidance algorithm that enables diffusion models to be controlled by arbitrary guidance modalities without the need to retrain any use-specific components. We show that our algorithm successfully generates quality images with guidance functions including segmentation, face recognition, object detection, and classifier signals. Code is available at https://github.com/arpitbansal297/Universal-Guided-Diffusion.
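The core idea the abstract describes — steering a frozen diffusion model with an arbitrary differentiable guidance function — can be illustrated with a toy sketch. The sketch below is a simplified, hypothetical rendering (not the authors' implementation): it estimates the clean sample from the current noisy one, evaluates the gradient of a generic guidance loss on that estimate, and shifts the noise prediction accordingly within a deterministic DDIM-style update. All names (`universal_guidance_step`, `eps_model`, `guidance_loss_grad`, the scale `s`) are assumptions for illustration.

```python
import numpy as np

def universal_guidance_step(x_t, t, eps_model, guidance_loss_grad, alpha_bar, s=1.0):
    """One guided denoising step (toy 1-D sketch, DDIM-style).

    Simplified "forward" guidance: predict the clean sample x0_hat from the
    frozen model's noise estimate, take the gradient of an arbitrary guidance
    loss at x0_hat, and fold it into the noise prediction. No retraining of
    the diffusion model is involved; only the guidance function changes.
    """
    a_t = alpha_bar[t]
    eps = eps_model(x_t, t)
    # Estimate of the clean sample implied by x_t and the noise prediction
    x0_hat = (x_t - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)
    # Shift the noise prediction along the guidance-loss gradient
    eps_guided = eps + s * np.sqrt(1.0 - a_t) * guidance_loss_grad(x0_hat)
    # Deterministic DDIM update to the previous timestep
    a_prev = alpha_bar[t - 1] if t > 0 else 1.0
    x0_guided = (x_t - np.sqrt(1.0 - a_t) * eps_guided) / np.sqrt(a_t)
    return np.sqrt(a_prev) * x0_guided + np.sqrt(1.0 - a_prev) * eps_guided

# Toy demonstration: a "model" whose prior is centred at 0, guided toward 2.0
alpha_bar = [0.99, 0.891, 0.792, 0.694, 0.595, 0.496, 0.397, 0.298, 0.199, 0.1]
target = 2.0
eps_model = lambda x, t: x / np.sqrt(1.0 - alpha_bar[t])   # posterior noise for a point prior at 0
guidance_grad = lambda x0: 2.0 * (x0 - target)             # gradient of ||x0 - target||^2

def run(s):
    x = 5.0
    for t in reversed(range(len(alpha_bar))):
        x = universal_guidance_step(x, t, eps_model, guidance_grad, alpha_bar, s=s)
    return x

guided, unguided = run(s=2.0), run(s=0.0)
```

With `s=0` the sampler collapses to the model's prior; with `s>0` the same frozen model is pulled toward the guidance target, which is the property the abstract claims for segmentation, detection, and recognition losses alike.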


Related Research

09/27/2022 · Draw Your Art Dream: Diverse Digital Art Synthesis with Multimodal Guided Diffusion
Digital art synthesis is receiving increasing attention in the multimedi...

11/21/2022 · Investigating Prompt Engineering in Diffusion Models
With the spread of the use of Text2Img diffusion models such as DALL-E 2...

10/12/2022 · Self-Guided Diffusion Models
Diffusion models have demonstrated remarkable progress in image generati...

05/11/2023 · Null-text Guidance in Diffusion Models is Secretly a Cartoon-style Creator
Classifier-free guidance is an effective sampling technique in diffusion...

06/29/2023 · Filtered-Guided Diffusion: Fast Filter Guidance for Black-Box Diffusion Models
Recent advances in diffusion-based generative models have shown incredib...

03/23/2023 · MagicFusion: Boosting Text-to-Image Generation Performance by Fusing Diffusion Models
The advent of open-source AI communities has produced a cornucopia of po...

07/25/2023 · Not with my name! Inferring artists' names of input strings employed by Diffusion Models
Diffusion Models (DM) are highly effective at generating realistic, high...
