Training-free Style Transfer Emerges from h-space in Diffusion models

03/27/2023
by   Jaeseok Jeong, et al.

Diffusion models (DMs) synthesize high-quality images across various domains. However, controlling their generative process remains difficult because the intermediate variables of the process have not been rigorously studied. Recently, StyleCLIP-like editing of DMs was found in the bottleneck of the U-Net, named h-space. In this paper, we discover that DMs inherently have disentangled representations for the content and style of the resulting images: h-space contains the content, while the skip connections convey the style. Furthermore, we introduce a principled way to inject the content of one image into another, accounting for the progressive nature of the generative process. Briefly, given the original generative process, 1) the feature of the source content should be gradually blended, 2) the blended feature should be normalized to preserve its distribution, and 3) the change in the skip connections caused by content injection should be calibrated. The resulting image then carries the source content with the style of the original image, much like image-to-image translation. Interestingly, injecting content into styles of unseen domains produces harmonization-like style transfer. To the best of our knowledge, our method is the first training-free feed-forward style transfer that requires only an unconditional pretrained frozen generative network. The code is available at https://curryjung.github.io/DiffStyle/.
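The three steps above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function names, the linear blending schedule, and the uniform skip-connection rescaling are all assumptions made for clarity, and real h-space features would be tensors extracted from the U-Net bottleneck at each denoising step.

```python
import numpy as np

def inject_content(h_style, h_content, t, T, gamma=0.3):
    """Blend the content h-space feature into the style feature at
    denoising step t of T (step 1: gradual blending).

    Hypothetical schedule: the blending weight grows as sampling
    progresses, so content is introduced gradually rather than at once.
    """
    w = gamma * (1.0 - t / T)  # weak injection early (large t), stronger later
    h_blend = (1.0 - w) * h_style + w * h_content

    # Step 2: renormalize the blended feature so it keeps the original
    # mean and standard deviation, preserving the feature distribution
    # that the frozen network expects.
    mu, sigma = h_style.mean(), h_style.std()
    h_norm = (h_blend - h_blend.mean()) / (h_blend.std() + 1e-8)
    return h_norm * sigma + mu

def calibrate_skips(skips, scale):
    """Step 3 (hypothetical form): rescale the skip-connection
    activations to compensate for the magnitude change introduced
    by content injection in the bottleneck."""
    return [s * scale for s in skips]
```

In an actual pipeline, `inject_content` would run inside the sampling loop on the U-Net's bottleneck activation, with the skip calibration applied to the corresponding encoder activations before decoding.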
