Guided Image Generation with Conditional Invertible Neural Networks

07/04/2019 ∙ by Lynton Ardizzone et al.

In this work, we address the task of natural image generation guided by a conditioning input. We introduce a new architecture called conditional invertible neural network (cINN). The cINN combines the purely generative INN model with an unconstrained feed-forward network, which efficiently preprocesses the conditioning input into useful features. All parameters of the cINN are jointly optimized with a stable, maximum likelihood-based training procedure. By construction, the cINN does not experience mode collapse and generates diverse samples, in contrast to, e.g., cGANs. At the same time, our model produces sharp images, since no reconstruction loss is required, in contrast to, e.g., VAEs. We demonstrate these properties for the tasks of MNIST digit generation and image colorization. Furthermore, we take advantage of our bi-directional cINN architecture to explore and manipulate emergent properties of the latent space, such as changing the image style in an intuitive way.
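To make the training procedure described above concrete, the following is a minimal PyTorch sketch of one conditional affine coupling block and the maximum-likelihood objective a cINN is trained with. It is an illustration only, not the authors' released code: the class and function names are hypothetical, and the conditioning input is assumed to have already been mapped to a feature vector by the feed-forward conditioning network.

    import torch
    import torch.nn as nn

    class ConditionalCouplingBlock(nn.Module):
        # Affine coupling layer whose scale/shift subnetwork also receives the
        # conditioning features, making the invertible transform conditional.
        def __init__(self, dim, cond_dim, hidden=128):
            super().__init__()
            self.split = dim // 2
            out_dim = dim - self.split
            self.subnet = nn.Sequential(
                nn.Linear(self.split + cond_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, 2 * out_dim),
            )

        def forward(self, x, cond):
            x1, x2 = x[:, :self.split], x[:, self.split:]
            s, t = self.subnet(torch.cat([x1, cond], dim=1)).chunk(2, dim=1)
            s = torch.tanh(s)                      # bounded log-scales for stability
            z2 = x2 * torch.exp(s) + t
            log_jac = s.sum(dim=1)                 # log |det J| of this block
            return torch.cat([x1, z2], dim=1), log_jac

        def inverse(self, z, cond):
            z1, z2 = z[:, :self.split], z[:, self.split:]
            s, t = self.subnet(torch.cat([z1, cond], dim=1)).chunk(2, dim=1)
            s = torch.tanh(s)
            x2 = (z2 - t) * torch.exp(-s)
            return torch.cat([z1, x2], dim=1)

    def nll_loss(z, log_jac):
        # Maximum-likelihood objective under a standard normal latent prior
        # (up to an additive constant): 0.5 * ||z||^2 - log |det J|.
        return (0.5 * (z ** 2).sum(dim=1) - log_jac).mean()

Training minimizes this loss jointly over the coupling blocks and the conditioning network; at test time, one draws z from a standard normal and runs the inverse pass with new conditioning features to obtain diverse samples, and manipulating the latent code yields the style changes mentioned above.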

Code Repositories

conditional_INNs

Code for the paper "Guided Image Generation with Conditional Invertible Neural Networks" (2019)



FrEIA

Framework for Easily Invertible Architectures



Conditional-Normalizing-Flow

Conditional generative model (normalizing flow) and experiments in style transfer using this model

