Image Processing Using Multi-Code GAN Prior
Despite the success of Generative Adversarial Networks (GANs) in image synthesis, applying trained GAN models to real image processing remains challenging. Because the generator in a GAN typically maps the latent space to the image space, there is no natural way for it to take a real image as input. To make a trained GAN handle real images, existing methods attempt to invert a target image back to the latent space, either by back-propagation or by learning an additional encoder; however, the reconstructions produced by both approaches are far from ideal. In this work, we propose a new inversion approach that incorporates well-trained GANs as an effective prior for a variety of image processing tasks. In particular, to invert a given GAN model, we employ multiple latent codes to generate multiple feature maps at some intermediate layer of the generator, then compose them with adaptive channel importance to produce the final image. Such an over-parameterization of the latent space significantly improves reconstruction quality, outperforming existing GAN inversion methods. The resulting high-fidelity reconstruction enables trained GAN models to serve as priors for many real-world applications, such as image colorization, super-resolution, image inpainting, and semantic manipulation. We further analyze the properties of the layer-wise representations learned by GAN models and shed light on what knowledge each layer is capable of representing.
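The composition step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the generator is split into a first half G1 (latent code to intermediate feature map) and a second half G2 (feature map to image), both stood in for here by random linear maps, and the latent codes and channel-importance weights are sampled rather than optimized against a target image.

```python
import numpy as np

# Hypothetical stand-ins for the two halves of a pretrained generator.
rng = np.random.default_rng(0)
N, Z_DIM, C, H, W = 5, 64, 8, 4, 4  # number of codes; latent dim; feature shape

W1 = rng.standard_normal((Z_DIM, C * H * W)) * 0.1      # stand-in for G1
W2 = rng.standard_normal((C * H * W, 3 * H * W)) * 0.1  # stand-in for G2

def g1(z):
    """Map one latent code to an intermediate feature map of shape (C, H, W)."""
    return (z @ W1).reshape(C, H, W)

def g2(f):
    """Map a composed feature map to an RGB image of shape (3, H, W)."""
    return (f.reshape(-1) @ W2).reshape(3, H, W)

# N latent codes and their adaptive channel-importance weights alpha;
# in the paper both are optimized jointly to reconstruct a target image,
# but here they are simply sampled for illustration.
z = rng.standard_normal((N, Z_DIM))
alpha = rng.random((N, C))

# Compose: scale each code's feature map channel-wise by its importance
# weights, sum over the codes, then decode with the rest of the generator.
feats = np.stack([g1(z[n]) for n in range(N)])            # (N, C, H, W)
composed = (alpha[:, :, None, None] * feats).sum(axis=0)  # (C, H, W)
image = g2(composed)                                      # (3, H, W)
```

In an actual inversion, `z` and `alpha` would be updated by gradient descent to minimize a reconstruction loss between `image` and the target photo.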