Bridging the Gap between Label- and Reference-based Synthesis in Multi-attribute Image-to-Image Translation

10/11/2021
by Qiusheng Huang, et al.

An image-to-image translation (I2IT) model takes a target label or a reference image as input and translates a source image into the specified target domain. The two types of synthesis, label-based and reference-based, differ substantially: label-based synthesis reflects the common characteristics of the target domain, while reference-based synthesis carries the specific style of the reference. This paper aims to bridge the gap between them in the task of multi-attribute I2IT. We design label- and reference-based encoding modules (LEM and REM) to compare the domain differences. They first map the source image and the target label (or reference) into a common embedding space, using the attribute difference vector to provide opposite translation directions. The two embeddings are then fused into a latent code S_rand (or S_ref) that reflects the style difference between domains, and this code is injected into each layer of the generator by SPADE. To link LEM and REM so that the two types of results benefit each other, we encourage the two latent codes to be close and impose cycle consistency between the forward and backward translations on them. Moreover, interpolation between S_rand and S_ref is used to synthesize an extra image. Experiments show that label- and reference-based synthesis indeed promote each other, so that we obtain diverse results from LEM and high-quality results matching the style of the reference.
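The pipeline sketched in the abstract — encoding source and target into a shared space, fusing them into S_rand / S_ref, aligning and interpolating the two codes, and injecting the result via SPADE-style modulation — can be illustrated with a toy numpy sketch. All dimensions, weight matrices, and helper names (`encode`, `fuse`, `spade`) are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and random projection weights (assumptions).
img_dim, lbl_dim, emb_dim, feat_dim = 16, 5, 8, 4
W_img = rng.normal(size=(emb_dim, img_dim))
W_lbl = rng.normal(size=(emb_dim, lbl_dim))
W_gamma = rng.normal(size=(feat_dim, emb_dim))
W_beta = rng.normal(size=(feat_dim, emb_dim))

def encode(x, W):
    # Toy encoder projecting an input into the common embedding space.
    return np.tanh(W @ x)

def fuse(e_a, e_b):
    # Simple fusion of two embeddings into one style latent code.
    return 0.5 * (e_a + e_b)

src_img = rng.normal(size=img_dim)
ref_img = rng.normal(size=img_dim)
src_lbl = np.array([1., 0., 0., 1., 0.])
tgt_lbl = np.array([0., 1., 0., 1., 0.])

# Attribute difference vector; forward and backward translations
# would move along +d and -d respectively.
d = tgt_lbl - src_lbl

e_src = encode(src_img, W_img)
s_rand = fuse(e_src, encode(d, W_lbl))       # LEM path (label-based)
s_ref = fuse(e_src, encode(ref_img, W_img))  # REM path (reference-based)

# Alignment term encouraging the two latent codes to be close.
align_loss = float(np.mean((s_rand - s_ref) ** 2))

def interpolate(alpha):
    # Latent interpolation used to synthesize an extra image.
    return alpha * s_rand + (1.0 - alpha) * s_ref

def spade(feat, s, eps=1e-5):
    # SPADE-style modulation: normalize the feature map, then scale
    # and shift it with parameters predicted from the style code s.
    norm = (feat - feat.mean()) / (feat.std() + eps)
    gamma, beta = W_gamma @ s, W_beta @ s
    return norm * (1.0 + gamma) + beta

feat = rng.normal(size=feat_dim)
out = spade(feat, interpolate(0.5))
```

In the real model the encoders, fusion, and SPADE parameters are learned networks and the alignment and cycle-consistency terms are trained jointly; this sketch only shows how the label path and the reference path produce interchangeable style codes for the same generator.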


