Versatile Diffusion: Text, Images and Variations All in One Diffusion Model

11/15/2022
by Xingqian Xu, et al.

Recent advances in diffusion models have set an impressive milestone in many generation tasks. Trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest in academia and industry. Despite the rapidly changing landscape, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-flow network, dubbed Versatile Diffusion (VD), that handles text-to-image, image-to-text, image-variation, and text-variation in one unified model. Moreover, we generalize VD to a unified multi-flow multimodal diffusion framework with grouped layers, swappable streams, and other propositions that can process modalities beyond images and text. Through our experiments, we demonstrate that VD and its underlying framework have the following merits: a) VD handles all subtasks with competitive quality; b) VD enables novel extensions and applications such as disentanglement of style and semantics, image-text dual-guided generation, etc.; c) through these experiments and applications, VD provides more semantic insight into the generated outputs. Our code and models are open-sourced at https://github.com/SHI-Labs/Versatile-Diffusion.
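The multi-flow idea in the abstract can be pictured as a routing scheme: shared grouped layers are reused by every task, while per-modality context and data streams are swapped in depending on which (condition, output) pair the model is running. The sketch below is a minimal illustration of that routing only; all class and stream names are hypothetical placeholders, not the authors' actual architecture or API, and the "layers" are stand-in functions rather than real networks.

```python
# Illustrative sketch of a multi-flow pipeline with grouped (shared) layers
# and swappable per-modality streams. Names are hypothetical; real streams
# would be neural modules (e.g. a UNet, a text decoder, CLIP encoders).

class MultiFlowSketch:
    def __init__(self):
        # Grouped layers: a shared backbone reused by every flow.
        self.shared = lambda h: f"shared({h})"
        # Swappable data streams, one per output modality.
        self.data_streams = {
            "image": lambda h: f"image_stream({h})",
            "text": lambda h: f"text_stream({h})",
        }
        # Swappable context streams, one per conditioning modality.
        self.context_streams = {
            "image": lambda c: f"image_ctx({c})",
            "text": lambda c: f"text_ctx({c})",
        }

    def denoise_step(self, latent, condition, task):
        # task = (context_modality, output_modality), e.g.
        # ("text", "image") for text-to-image,
        # ("image", "image") for image-variation.
        ctx_mod, out_mod = task
        ctx = self.context_streams[ctx_mod](condition)
        h = self.shared(f"{latent}|{ctx}")
        return self.data_streams[out_mod](h)


vd = MultiFlowSketch()
# Text-to-image and image-variation reuse the same shared layers;
# only the context/data streams differ.
print(vd.denoise_step("z", "a photo of a cat", ("text", "image")))
print(vd.denoise_step("z", "ref_image", ("image", "image")))
```

The point of the sketch is that adding a new task (say, image-to-text) means registering streams rather than training a separate model, which is the capacity argument the abstract makes against single-task pipelines.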


