DreamArtist: Towards Controllable One-Shot Text-to-Image Generation via Contrastive Prompt-Tuning

11/21/2022
by Ziyi Dong, et al.

Large-scale text-to-image generation models have achieved remarkable progress in synthesizing high-quality, feature-rich, high-resolution images guided by text. However, these models often struggle with novel concepts, e.g., new styles or object entities. Although recent attempts have employed fine-tuning or prompt-tuning strategies to teach a pre-trained diffusion model novel concepts from a reference image set, they suffer from overfitting to the given reference images, particularly in one-shot applications, which harms the generation of diverse, high-quality images and undermines generation controllability. To tackle this challenge, we present a simple yet effective method called DreamArtist, which employs a positive-negative prompt-tuning learning strategy. Specifically, DreamArtist incorporates both positive and negative embeddings and jointly trains them. The positive embedding aggressively captures the salient characteristics of the reference image to drive diversified generation, while the negative embedding rectifies the inadequacies of the positive embedding. The model thus learns not only what is correct, but also what should be avoided or improved. We have conducted extensive experiments evaluating the proposed method on image similarity and diversity, generation controllability, and style cloning, and DreamArtist achieves superior generation performance over existing methods. Moreover, additional evaluations on extended tasks, including concept composition and prompt-guided image editing, demonstrate its effectiveness in broader applications.
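The training loop the abstract describes is simple enough to sketch. Below is a minimal, hypothetical PyTorch rendering of one positive-negative prompt-tuning step against a frozen latent diffusion U-Net. The helper names (`add_noise`, `cpt_step`), the `unet(z_t, t, cond)` call signature, and the guidance scale value are illustrative assumptions rather than the authors' released code; the classifier-free-guidance-style composition of the two noise predictions follows the method as described in the abstract.

```python
import torch
import torch.nn.functional as F

def add_noise(z0, noise, t, alphas_cumprod):
    # Standard DDPM forward process:
    # z_t = sqrt(abar_t) * z0 + sqrt(1 - abar_t) * eps
    abar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return abar.sqrt() * z0 + (1.0 - abar).sqrt() * noise

def cpt_step(unet, z0, pos_embed, neg_embed, optimizer,
             alphas_cumprod, guidance_scale=5.0):
    # One positive-negative prompt-tuning step (sketch). The U-Net is
    # frozen; only pos_embed and neg_embed, learnable pseudo-token
    # embeddings of shape [num_tokens, embed_dim], receive gradients.
    b = z0.shape[0]
    t = torch.randint(0, alphas_cumprod.shape[0], (b,), device=z0.device)
    noise = torch.randn_like(z0)
    z_t = add_noise(z0, noise, t, alphas_cumprod)

    # Noise predictions conditioned on each learned embedding.
    # The unet(z_t, t, cond) signature is an assumed wrapper interface.
    eps_pos = unet(z_t, t, pos_embed.unsqueeze(0).expand(b, -1, -1))
    eps_neg = unet(z_t, t, neg_embed.unsqueeze(0).expand(b, -1, -1))

    # Compose the two predictions in classifier-free-guidance style:
    # the negative branch rectifies what the positive embedding
    # over-captures from the single reference image.
    eps = eps_neg + guidance_scale * (eps_pos - eps_neg)

    loss = F.mse_loss(eps, noise)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, `pos_embed` and `neg_embed` would be created as `torch.nn.Parameter` tensors and handed to an optimizer such as AdamW, while all diffusion-model weights stay frozen; at sampling time the same guided composition is reused, with the learned negative embedding taking the place of the usual unconditional branch.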


Related Research

- DomainStudio: Fine-Tuning Diffusion Models for Domain-Driven Image Generation using Limited Data (06/25/2023)
- Arbitrary Style Guidance for Enhanced Diffusion-Based Text-to-Image Generation (11/14/2022)
- DiffColor: Toward High Fidelity Text-Guided Image Colorization with Diffusion Models (08/03/2023)
- MagiCapture: High-Resolution Multi-Concept Portrait Customization (09/13/2023)
- The CLIP Model is Secretly an Image-to-Prompt Converter (05/22/2023)
- Taming Encoder for Zero Fine-tuning Image Customization with Text-to-Image Diffusion Models (04/05/2023)
- Energy-Based Cross Attention for Bayesian Context Update in Text-to-Image Diffusion Models (06/16/2023)
