Improving Inversion and Generation Diversity in StyleGAN using a Gaussianized Latent Space

09/14/2020
by Jonas Wulff, et al.

Modern Generative Adversarial Networks (GANs) are capable of creating artificial, photorealistic images from latent vectors living in a low-dimensional learned latent space. It has been shown that a wide range of images can be projected into this space, including images outside of the domain that the generator was trained on. However, while the generator reproduces the pixels and textures of such images, the reconstructed latent vectors are unstable, and small perturbations result in significant image distortions. In this work, we propose to explicitly model the data distribution in latent space. We show that, under a simple nonlinear operation, the data distribution can be modeled as Gaussian and therefore expressed using sufficient statistics. This yields a simple Gaussian prior, which we use to regularize the projection of images into the latent space. The resulting projections lie in smoother and better-behaved regions of the latent space, as shown by interpolation performance for both real and generated images. Furthermore, the Gaussian model of the distribution in latent space allows us to investigate the origins of artifacts in the generator output, and it provides a method for reducing these artifacts while maintaining diversity of the generated images.
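
The abstract does not spell out the nonlinear operation or the optimization details, so the following is a minimal sketch of the idea, not the authors' implementation: sample latent codes, apply an assumed element-wise invertible nonlinearity f (a leaky ReLU here), fit the Gaussian's sufficient statistics (mean and covariance), and use the resulting negative log-likelihood as a prior term when optimizing a latent code to reconstruct a target image. The names mapping, synthesis, f, and lambda_prior are stand-ins invented for this example; in practice the two small networks would be replaced by a pretrained StyleGAN mapping and synthesis network.

```python
# Sketch: Gaussianize a GAN's latent (W) space and use the fitted Gaussian as a
# prior when projecting an image into that space. Stand-in networks keep the
# example self-contained and runnable; they are NOT StyleGAN.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
Z_DIM, W_DIM, IMG_DIM = 64, 64, 256

# Stand-ins for the mapping network (z -> w) and synthesis network (w -> image).
mapping = nn.Sequential(nn.Linear(Z_DIM, W_DIM), nn.LeakyReLU(0.2), nn.Linear(W_DIM, W_DIM))
synthesis = nn.Sequential(nn.Linear(W_DIM, IMG_DIM), nn.Tanh())

def f(w, alpha=5.0):
    # Assumed element-wise invertible nonlinearity applied before fitting the Gaussian.
    return F.leaky_relu(w, negative_slope=alpha)

# 1) Estimate the sufficient statistics (mean, covariance) of f(w) from sampled latents.
with torch.no_grad():
    w_samples = mapping(torch.randn(10_000, Z_DIM))
    v_samples = f(w_samples)
    mu = v_samples.mean(dim=0)
    cov = torch.cov(v_samples.T) + 1e-4 * torch.eye(W_DIM)  # small ridge for stability
    cov_inv = torch.linalg.inv(cov)

def gaussian_prior(w):
    # Negative log-likelihood of f(w) up to a constant: squared Mahalanobis distance.
    d = f(w) - mu
    return d @ cov_inv @ d

# 2) Invert a target image: match pixels while staying close to the Gaussian prior.
target = synthesis(mapping(torch.randn(1, Z_DIM))).detach()   # stand-in for a real image
w_opt = mapping(torch.randn(1, Z_DIM)).detach().requires_grad_(True)
optimizer = torch.optim.Adam([w_opt], lr=0.05)
lambda_prior = 1e-3  # weight of the prior term (assumed; would be tuned in practice)

for step in range(200):
    optimizer.zero_grad()
    recon_loss = ((synthesis(w_opt) - target) ** 2).mean()
    loss = recon_loss + lambda_prior * gaussian_prior(w_opt.squeeze(0))
    loss.backward()
    optimizer.step()

print(f"final reconstruction loss: {recon_loss.item():.4f}")
```

With codes projected under such a prior, the behaviour described in the abstract can be probed by interpolating between two inverted codes and inspecting the intermediate images; a better-behaved region of the latent space should yield smooth, artifact-free transitions.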

Related research

Generalized Latent Variable Recovery for Generative Adversarial Networks (10/09/2018)
The Generator of a Generative Adversarial Network (GAN) is trained to tr...

NeurInt: Learning to Interpolate through Neural ODEs (11/07/2021)
A wide range of applications require learning image generation models wh...

Learning Latent Space Energy-Based Prior Model (06/15/2020)
The generator model assumes that the observed example is generated by a ...

Effect of latent space distribution on the segmentation of images with multiple annotations (04/26/2023)
We propose the Generalized Probabilistic U-Net, which extends the Probab...

A Forest from the Trees: Generation through Neighborhoods (02/04/2019)
In this work, we propose to learn a generative model using both learned ...

Self-Supervised Annotation of Seismic Images using Latent Space Factorization (09/10/2020)
Annotating seismic data is expensive, laborious and subjective due to th...

Learning to Avoid Errors in GANs by Manipulating Input Spaces (07/03/2017)
Despite recent advances, large scale visual artifacts are still a common...
