SINE: SINgle Image Editing with Text-to-Image Diffusion Models

12/08/2022
by   Zhixing Zhang, et al.

Recent work on diffusion models has demonstrated a strong capability for conditional image generation, e.g., text-guided image synthesis. This success has inspired many efforts to use large-scale pre-trained diffusion models to tackle a challenging problem: real image editing. Prior work in this area learns a unique textual token corresponding to several images containing the same object. However, in many circumstances only one image is available, such as the painting Girl with a Pearl Earring. Fine-tuning a pre-trained diffusion model on a single image with existing methods causes severe overfitting, and information leakage from the pre-trained model prevents the edited result from preserving the content of the given image while creating the new features described by the language guidance. This work addresses the problem of single-image editing. We propose a novel model-based guidance built upon classifier-free guidance, so that knowledge from a model trained on a single image can be distilled into the pre-trained diffusion model, enabling content creation even with only one given image. Additionally, we propose a patch-based fine-tuning scheme that effectively helps the model generate images of arbitrary resolution. We provide extensive experiments to validate the design choices of our approach and show promising editing capabilities, including style change, content addition, and object manipulation. The code is available for research purposes at https://github.com/zhang-zx/SINE.git .
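The model-based guidance described above can be illustrated with a small sketch. Assuming access to noise predictions from both the single-image fine-tuned model and the pre-trained model, each prediction is first guided in the standard classifier-free way, and the two guided predictions are then interpolated. The function names and the interpolation weight `v` below are illustrative assumptions, not the paper's exact API:

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, scale):
    """Standard classifier-free guidance: push the unconditional
    noise prediction toward the conditional one."""
    return eps_uncond + scale * (eps_cond - eps_uncond)

def model_based_guidance(eps_pre_uncond, eps_pre_cond,
                         eps_ft_uncond, eps_ft_cond,
                         scale, v):
    """Illustrative sketch of model-based guidance: guide each model
    separately with classifier-free guidance, then blend with weight v
    between the single-image fine-tuned model (content fidelity) and
    the pre-trained model (editability under the language prompt)."""
    eps_pre = classifier_free_guidance(eps_pre_uncond, eps_pre_cond, scale)
    eps_ft = classifier_free_guidance(eps_ft_uncond, eps_ft_cond, scale)
    return v * eps_ft + (1.0 - v) * eps_pre
```

With `v = 1.0` the sampler follows only the fine-tuned model, and with `v = 0.0` only the pre-trained model; intermediate values trade content preservation against editing strength.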


Related Research

- DragonDiffusion: Enabling Drag-style Manipulation on Diffusion Models (07/05/2023)
- 3DDesigner: Towards Photorealistic 3D Object Generation and Editing with Text-guided Diffusion Models (11/25/2022)
- ProSpect: Expanded Conditioning for the Personalization of Attribute-aware Image Generation (05/25/2023)
- Continuous Layout Editing of Single Images with Diffusion Models (06/22/2023)
- SinDDM: A Single Image Denoising Diffusion Model (11/29/2022)
- Unified Concept Editing in Diffusion Models (08/25/2023)
- SKED: Sketch-guided Text-based 3D Editing (03/19/2023)
