TunaGAN: Interpretable GAN for Smart Editing

08/16/2019
by Weiquan Mao, et al.

In this paper, we introduce a tunable generative adversarial network (TunaGAN) that uses an auxiliary network on top of an existing generator network (Style-GAN) to modify high-resolution face images according to users' high-level instructions, with good qualitative and quantitative performance. To optimize for feature disentanglement, we also investigate two different latent spaces that can be traversed for modification. The problem of mode collapse is characterized in detail to assess model robustness. This work can be easily extended to content-aware image editors built on other GANs and provides insight into mode collapse in more general settings.
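
The editing mechanism sketched in the abstract, an auxiliary network that maps a high-level instruction to a direction in the generator's latent space, which is then traversed to re-synthesize the edited face, can be illustrated with a short, hedged example. The sketch below is not the paper's code: the toy generator (standing in for a pretrained Style-GAN), the AuxiliaryTuner architecture, the one-hot instruction encoding, and all dimensions are assumptions made purely for illustration.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pretrained Style-GAN generator: maps a
# latent code w (e.g. from a disentangled latent space) to an image.
class ToyGenerator(nn.Module):
    def __init__(self, latent_dim=512, img_pixels=64 * 64 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, img_pixels), nn.Tanh(),
        )

    def forward(self, w):
        return self.net(w).view(-1, 3, 64, 64)

# Hypothetical auxiliary network: given a latent code and a high-level
# instruction (here a one-hot attribute vector), predict an offset in
# latent space that realizes the requested edit.
class AuxiliaryTuner(nn.Module):
    def __init__(self, latent_dim=512, n_attributes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + n_attributes, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, w, instruction):
        return self.net(torch.cat([w, instruction], dim=-1))

generator = ToyGenerator().eval()
tuner = AuxiliaryTuner().eval()

w = torch.randn(1, 512)                       # latent code of the source face
instruction = torch.zeros(1, 10)
instruction[0, 3] = 1.0                       # e.g. "add smile" as attribute 3
strength = 0.8                                # user-controlled edit strength

with torch.no_grad():
    delta = tuner(w, instruction)             # auxiliary net predicts a direction
    edited = generator(w + strength * delta)  # traverse latent space, re-synthesize
```

In this framing, the scalar `strength` is what makes the edit "tunable": scaling the predicted direction moves the image gradually along the requested attribute, and the choice of which latent space `w` lives in governs how well attributes stay disentangled during the traversal.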


