Improved StyleGAN Embedding: Where are the Good Latents?

by Peihao Zhu et al.

StyleGAN is able to produce photorealistic images that are almost indistinguishable from real ones. The reverse problem of finding an embedding for a given image poses a challenge. Embeddings that reconstruct an image well are not always robust to editing operations. In this paper, we address the problem of finding an embedding that both reconstructs images and also supports image editing tasks. First, we introduce a new normalized space to analyze the diversity and the quality of the reconstructed latent codes. This space can help answer the question of where good latent codes are located in latent space. Second, we propose an improved embedding algorithm using a novel regularization method based on our analysis. Finally, we analyze the quality of different embedding algorithms. We compare our results with the current state-of-the-art methods and achieve a better trade-off between reconstruction quality and editing quality.
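The core idea of optimization-based embedding (GAN inversion) can be illustrated with a toy example: optimize a latent code to reconstruct a target image while a regularizer keeps the code near a well-behaved region of latent space. The sketch below is a minimal, hypothetical illustration only: a random linear map stands in for StyleGAN's generator, and a simple pull toward the mean latent `w_avg` stands in for the paper's regularizer (which operates in its proposed normalized space).

```python
import numpy as np

# Toy illustration of optimization-based embedding with latent regularization.
# NOTE: hypothetical stand-ins, not the paper's method:
#   - G is a random linear "generator" in place of StyleGAN
#   - the regularizer pulls w toward the mean latent w_avg
rng = np.random.default_rng(0)
G = rng.normal(size=(64, 8))       # toy linear generator: image = G @ w
w_true = rng.normal(size=8)        # latent that produced the target
x = G @ w_true                     # target "image" to embed
w_avg = np.zeros(8)                # mean latent (regularization anchor)
lam = 0.01                         # regularization weight

# Gradient descent on ||G w - x||^2 + lam * ||w - w_avg||^2
w = w_avg.copy()
lr = 0.001
for _ in range(2000):
    grad = 2 * G.T @ (G @ w - x) + 2 * lam * (w - w_avg)
    w -= lr * grad

recon_err = np.linalg.norm(G @ w - x)   # reconstruction quality
```

With a small regularization weight the recovered latent stays close to the true one while the regularizer discourages drifting into poorly behaved regions; increasing `lam` trades reconstruction fidelity for editability, which is the trade-off the paper analyzes.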


