From A to Z: Supervised Transfer of Style and Content Using Deep Neural Network Generators

03/07/2016
by Paul Upchurch, et al.

We propose a new neural network architecture for solving single-image analogies: the generation of an entire set of stylistically similar images from just a single input image. Solving this problem requires separating image style from content. Our network is a modified variational autoencoder (VAE) that supports supervised training of single-image analogies and in-network evaluation of outputs with a structured similarity objective that captures pixel covariances. On the challenging task of generating a 62-letter font from a single example letter, we produce images with 22.4% lower dissimilarity to the ground truth than the state of the art.
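The structured similarity objective the abstract refers to scores two images through local means, variances, and the pixel covariance term, as in the standard SSIM index. Below is a minimal NumPy sketch of such a loss; the 7-pixel uniform window and the C1/C2 stabilizing constants are conventional SSIM defaults, not values taken from the paper.

```python
# Minimal structured-similarity (SSIM) loss sketch. Local means, variances,
# and the pixel covariance cov_xy all enter the score, which is what lets
# this objective capture pixel covariances rather than per-pixel error alone.
import numpy as np
from scipy.ndimage import uniform_filter

def ssim_loss(x, y, win=7, c1=0.01**2, c2=0.03**2):
    """Return 1 - mean SSIM between two images in [0, 1] (lower is better)."""
    x = x.astype(np.float64)
    y = y.astype(np.float64)
    mu_x = uniform_filter(x, win)
    mu_y = uniform_filter(y, win)
    # Local (co)variances: E[ab] - E[a]E[b] under the same uniform window.
    var_x = uniform_filter(x * x, win) - mu_x**2
    var_y = uniform_filter(y * y, win) - mu_y**2
    cov_xy = uniform_filter(x * y, win) - mu_x * mu_y  # pixel covariance term
    ssim_map = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x**2 + mu_y**2 + c1) * (var_x + var_y + c2)
    )
    return 1.0 - ssim_map.mean()

# Example: an identical copy gives ~0 loss; a noisy copy gives a positive loss.
rng = np.random.default_rng(0)
img = rng.random((28, 28))
print(ssim_loss(img, img))
print(ssim_loss(img, np.clip(img + 0.1 * rng.standard_normal(img.shape), 0, 1)))
```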


Related research

10/13/2021 - Multiple Style Transfer via Variational AutoEncoder
Modern works on style transfer focus on transferring style from a single...

05/21/2021 - An Optical physics inspired CNN approach for intrinsic image decomposition
Intrinsic Image Decomposition is an open problem of generating the const...

04/27/2016 - Image Colorization Using a Deep Convolutional Neural Network
In this paper, we present a novel approach that uses deep learning techn...

09/30/2017 - Variational Grid Setting Network
We propose a new neural network architecture for automatic generation of...

11/29/2018 - Learning to Separate Multiple Illuminants in a Single Image
We present a method to separate a single image captured under two illumi...

03/13/2016 - Learning Typographic Style
Typography is a ubiquitous art form that affects our understanding, perc...

07/02/2020 - Deep Single Image Manipulation
Image manipulation has attracted much research over the years due to the...
