Prompt Tuning Inversion for Text-Driven Image Editing Using Diffusion Models

05/08/2023
by   Wenkai Dong, et al.

Recently, large-scale language-image models (e.g., text-guided diffusion models) have considerably improved image generation, producing photorealistic images across various domains. Building on this success, current image editing methods use text to achieve intuitive and versatile modifications of images. To edit a real image with a diffusion model, one must first invert the image to a noisy latent, from which an edited image is sampled with a target text prompt. However, most methods lack one of the following: user-friendliness (e.g., they require additional masks or precise descriptions of the input image), generalization to larger domains, or high fidelity to the input image. In this paper, we design an accurate and fast inversion technique, Prompt Tuning Inversion, for text-driven image editing. Specifically, our proposed editing method consists of a reconstruction stage and an editing stage. In the first stage, we encode the information of the input image into a learnable conditional embedding via Prompt Tuning Inversion. In the second stage, we apply classifier-free guidance to sample the edited image, where the conditional embedding is computed by linearly interpolating between the target embedding and the optimized one obtained in the first stage. This technique ensures a superior trade-off between editability and fidelity to the input image. For example, our method can change the color of a specific object while preserving its original shape and background under the guidance of only a target text prompt. Extensive experiments on ImageNet demonstrate the superior editing performance of our method compared to state-of-the-art baselines.
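To make the two-stage pipeline concrete, here is a minimal, self-contained PyTorch sketch. It is an illustration under assumptions, not the authors' implementation: ToyEpsModel stands in for a frozen, pretrained text-conditioned diffusion U-Net, the random latents list stands in for a real DDIM-inversion trajectory of the input image, and the names and values of eta and guidance_scale are hypothetical.

# Minimal PyTorch sketch of the two-stage procedure described in the abstract.
# ToyEpsModel, the 10-step schedule, the random "inversion trajectory", eta,
# and the guidance scale are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyEpsModel(nn.Module):
    """Stand-in for a frozen, pretrained text-conditioned diffusion U-Net."""
    def __init__(self, channels=4, cond_dim=16):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1)
        self.cond_proj = nn.Linear(cond_dim + 1, channels)

    def forward(self, z, t, c):
        # Inject the timestep and conditional embedding as a per-channel bias.
        h = torch.cat([c, t.float().view(-1, 1)], dim=1)
        bias = self.cond_proj(h).unsqueeze(-1).unsqueeze(-1)
        return self.conv(z) + bias

def ddim_step(z_t, eps, a_t, a_prev):
    """One deterministic DDIM update from alpha_bar(t) to alpha_bar(t-1)."""
    z0 = (z_t - (1 - a_t).sqrt() * eps) / a_t.sqrt()
    return a_prev.sqrt() * z0 + (1 - a_prev).sqrt() * eps

torch.manual_seed(0)
T = 10
alpha_bar = torch.linspace(0.9999, 0.05, T)  # toy noise schedule, alpha_bar[t]
model = ToyEpsModel()
for p in model.parameters():
    p.requires_grad_(False)  # the diffusion model itself stays frozen

# Stand-in for the DDIM-inversion trajectory z_T, ..., z_0 of the input image.
latents = [torch.randn(1, 4, 8, 8) for _ in range(T + 1)]

# --- Stage 1: Prompt Tuning Inversion --------------------------------------
# Optimize a learnable conditional embedding c_opt so that conditional DDIM
# steps reproduce the inversion trajectory (i.e., encode the input image).
c_opt = torch.zeros(1, 16, requires_grad=True)
optim = torch.optim.Adam([c_opt], lr=1e-2)
for _ in range(100):
    loss = torch.tensor(0.0)
    for i in range(T):
        t_idx = T - 1 - i
        a_t = alpha_bar[t_idx]
        a_prev = alpha_bar[t_idx - 1] if t_idx > 0 else torch.tensor(1.0)
        eps = model(latents[i], torch.tensor([t_idx]), c_opt)
        loss = loss + F.mse_loss(ddim_step(latents[i], eps, a_t, a_prev),
                                 latents[i + 1])
    optim.zero_grad()
    loss.backward()
    optim.step()

# --- Stage 2: edit with classifier-free guidance ---------------------------
# c_edit = eta * c_target + (1 - eta) * c_opt, then sample with CFG.
c_target = torch.randn(1, 16)  # embedding of the target text prompt (assumed)
c_null = torch.zeros(1, 16)    # unconditional ("null") embedding
eta, guidance_scale = 0.7, 7.5
c_edit = eta * c_target + (1 - eta) * c_opt.detach()
z = latents[0]                 # start sampling from the inverted z_T
with torch.no_grad():
    for t_idx in reversed(range(T)):
        a_t = alpha_bar[t_idx]
        a_prev = alpha_bar[t_idx - 1] if t_idx > 0 else torch.tensor(1.0)
        t = torch.tensor([t_idx])
        eps_cond = model(z, t, c_edit)
        eps_uncond = model(z, t, c_null)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)  # CFG
        z = ddim_step(z, eps, a_t, a_prev)
print("edited latent:", z.shape)

The interpolation weight (eta above, an assumed name) is what trades editability against fidelity: at 0 the sampler simply reconstructs the input image, while larger values pull the sample toward the target prompt.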


