StEP: Style-based Encoder Pre-training for Multi-modal Image Synthesis

04/14/2021
by Moustafa Meshry, et al.

We propose a novel approach for multi-modal image-to-image (I2I) translation. To tackle the one-to-many relationship between input and output domains, previous works use complex training objectives to learn a latent embedding, jointly with the generator, that models the variability of the output domain. In contrast, we directly model the style variability of images, independent of the image synthesis task. Specifically, we pre-train a generic style encoder using a novel proxy task to learn an embedding of images, from arbitrary domains, into a low-dimensional style latent space. The learned latent space introduces several advantages over previous approaches to multi-modal I2I translation. First, it is not dependent on the target dataset and generalizes well across multiple domains. Second, the resulting latent space is more powerful and expressive, which improves the fidelity of style capture and transfer. The proposed style pre-training also simplifies the training objective and speeds up training significantly. Furthermore, we provide a detailed study of the contribution of different loss terms to the task of multi-modal I2I translation, and propose a simple alternative to VAEs to enable sampling from unconstrained latent spaces. Finally, we achieve state-of-the-art results on six challenging benchmarks with a simple training objective that includes only a GAN loss and a reconstruction loss.
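To make the two-stage recipe in the abstract concrete, the PyTorch sketch below pairs a pre-trained, frozen style encoder with a generator objective that contains only a GAN term and a reconstruction term. This is a minimal illustration under stated assumptions, not the paper's implementation: the encoder architecture, the 8-dimensional style code, the non-saturating GAN formulation, and the weight lambda_rec are hypothetical choices, and the proxy task used for pre-training is not specified in the abstract.

    # Minimal sketch of the two-stage idea: (1) a style encoder is pre-trained on a
    # proxy task and frozen; (2) the I2I generator is trained with GAN + reconstruction
    # losses only, conditioned on style codes from the frozen encoder.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class StyleEncoder(nn.Module):
        """Maps an image to a low-dimensional style code (hypothetical architecture)."""
        def __init__(self, style_dim=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.fc = nn.Linear(128, style_dim)

        def forward(self, x):
            return self.fc(self.net(x).flatten(1))

    def generator_loss(G, D, style_enc, x_src, x_tgt, lambda_rec=10.0):
        """Simplified generator objective: GAN loss + reconstruction loss."""
        with torch.no_grad():                      # style encoder is pre-trained and frozen
            z_style = style_enc(x_tgt)
        x_fake = G(x_src, z_style)
        loss_gan = F.softplus(-D(x_fake)).mean()   # non-saturating GAN loss (one common choice)
        loss_rec = F.l1_loss(x_fake, x_tgt)        # reconstruct the target that supplied the style
        return loss_gan + lambda_rec * loss_rec

Here G stands for any conditional generator taking a source image and a style code, and D for a discriminator on generated images. Because the style encoder is trained independently of the synthesis task, the same frozen encoder could, in principle, be reused across target datasets, which is the generalization property the abstract highlights.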
