StyleDrop: Text-to-Image Generation in Any Style

06/01/2023
by Kihyuk Sohn, et al.

Pre-trained large text-to-image models synthesize impressive images with appropriate use of text prompts. However, ambiguities inherent in natural language and out-of-distribution effects make it hard to synthesize image styles that leverage a specific design pattern, texture, or material. In this paper, we introduce StyleDrop, a method that enables the synthesis of images that faithfully follow a specific style using a text-to-image model. The proposed method is extremely versatile and captures nuances and details of a user-provided style, such as color schemes, shading, design patterns, and local and global effects. It efficiently learns a new style by fine-tuning very few trainable parameters (less than 1% of total model parameters) and improving the quality via iterative training with either human or automated feedback. Better yet, StyleDrop is able to deliver impressive results even when the user supplies only a single image that specifies the desired style. An extensive study shows that, for the task of style tuning text-to-image models, StyleDrop implemented on Muse convincingly outperforms other methods, including DreamBooth and textual inversion on Imagen or Stable Diffusion. More results are available at our project website: https://styledrop.github.io
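The abstract does not give implementation details, but the parameter-efficient fine-tuning it describes (updating well under 1% of model weights) can be illustrated with a small, hypothetical PyTorch sketch: a pretrained backbone is frozen and only lightweight adapter layers are trained on the user-provided style image(s). The Adapter and AdaptedLayer modules below are illustrative stand-ins, not the authors' Muse-based implementation.

```python
# Hypothetical sketch of parameter-efficient style tuning (not the authors' code):
# freeze a pretrained backbone and train only small adapter layers, so that
# far less than 1% of the total parameters are updated.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)  # zero init so the adapter starts as the identity
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))

class AdaptedLayer(nn.Module):
    """A frozen pretrained layer followed by a trainable adapter."""
    def __init__(self, layer: nn.Module, dim: int):
        super().__init__()
        self.layer = layer
        self.adapter = Adapter(dim)

    def forward(self, x):
        return self.adapter(self.layer(x))

# Toy stand-in for a pretrained text-to-image backbone (e.g. a transformer stack).
dim = 512
backbone = nn.Sequential(*[
    nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True) for _ in range(12)
])

# Freeze every pretrained weight, then wrap each layer with a fresh, trainable adapter.
for p in backbone.parameters():
    p.requires_grad = False
model = nn.Sequential(*[AdaptedLayer(layer, dim) for layer in backbone])

total = sum(p.numel() for p in model.parameters())
tuned = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable fraction: {tuned / total:.3%}")  # well under 1% for a small bottleneck

# Only adapter parameters are optimized; the style image(s) and a style-descriptive
# text prompt would supply the fine-tuning signal.
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```

In the method described by the abstract, such adapter tuning would then be combined with iterative training on synthesized images selected via human or automated feedback to further improve style fidelity.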

research · 11/23/2022 · Inversion-Based Style Transfer with Diffusion Models
The artistic style within a painting is the means of expression, which i...

research · 02/16/2023 · MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
Recent advances in text-to-image generation with diffusion models presen...

research · 02/08/2023 · GLAZE: Protecting Artists from Style Mimicry by Text-to-Image Models
Recent text-to-image diffusion models such as MidJourney and Stable Diff...

research · 04/19/2019 · Fashion++: Minimal Edits for Outfit Improvement
Given an outfit, what small changes would most improve its fashionabilit...

research · 06/27/2023 · Free-style and Fast 3D Portrait Synthesis
Efficiently generating a free-style 3D portrait with high quality and co...

research · 06/14/2023 · GBSD: Generative Bokeh with Stage Diffusion
The bokeh effect is an artistic technique that blurs out-of-focus areas ...

research · 07/13/2023 · HyperDreamBooth: HyperNetworks for Fast Personalization of Text-to-Image Models
Personalization has emerged as a prominent aspect within the field of ge...
