
StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN

by Min Jin Chong, et al.
University of Illinois at Urbana-Champaign
Snap Inc.

Recently, StyleGAN has enabled a variety of image manipulation and editing tasks thanks to its high-quality generation and disentangled latent space. However, additional architectures or task-specific training paradigms are usually required for different tasks. In this work, we take a deeper look at the spatial properties of StyleGAN. We show that a pretrained StyleGAN, together with some simple operations and without any additional architecture, can perform comparably to state-of-the-art methods on various tasks, including image blending, panorama generation, generation from a single image, controllable and local multimodal image-to-image translation, and attribute transfer. The proposed method is simple, effective, efficient, and applicable to any existing pretrained StyleGAN model.
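The core idea above is operating directly on the spatial intermediate features of a pretrained generator rather than adding new architecture. As a generic illustration only (not the authors' code; the feature tensors and mask here are hypothetical stand-ins for a StyleGAN layer's activations), spatially blending two images' features with a mask might look like:

```python
import numpy as np

def spatial_blend(feat_a, feat_b, mask):
    """Blend two (C, H, W) feature maps using a (H, W) mask in [0, 1].

    Regions where mask == 1 take features from feat_a; regions where
    mask == 0 take features from feat_b.
    """
    assert feat_a.shape == feat_b.shape
    assert mask.shape == feat_a.shape[1:]
    return mask[None] * feat_a + (1.0 - mask[None]) * feat_b

# Stand-ins for intermediate activations of a pretrained generator
# at some resolution (8 channels, 16x16 spatial grid).
rng = np.random.default_rng(0)
feat_a = rng.standard_normal((8, 16, 16))
feat_b = rng.standard_normal((8, 16, 16))

# Take the left half from image A and the right half from image B.
# A hard mask is used here for clarity; in practice a smoothed mask
# avoids visible seams after the remaining generator layers.
mask = np.zeros((16, 16))
mask[:, :8] = 1.0

blended = spatial_blend(feat_a, feat_b, mask)
print(blended.shape)  # (8, 16, 16)
```

In an actual StyleGAN pipeline, the blended tensor would be fed back into the remaining synthesis layers, which smooth the transition and produce a coherent output image.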




Related Research

StyleGAN2 Distillation for Feed-forward Image Manipulation

StyleGAN2 is a state-of-the-art network in generating realistic images. ...

FEAT: Face Editing with Attention

Employing the latent space of pretrained generators has recently been sh...

SpaceEdit: Learning a Unified Editing Space for Open-Domain Image Editing

Recently, large pretrained models (e.g., BERT, StyleGAN, CLIP) have show...

StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation

We explore and analyze the latent style space of StyleGAN2, a state-of-t...

AE-StyleGAN: Improved Training of Style-Based Auto-Encoders

StyleGANs have shown impressive results on data generation and manipulat...

Disentangled Unsupervised Image Translation via Restricted Information Flow

Unsupervised image-to-image translation methods aim to map images from o...

Network Fusion for Content Creation with Conditional INNs

Artificial Intelligence for Content Creation has the potential to reduce...