Perceptual Losses for Real-Time Style Transfer and Super-Resolution

03/27/2016 ∙ by Justin Johnson, et al.

We consider image transformation problems, where an input image is transformed into an output image. Recent methods for such problems typically train feed-forward convolutional neural networks using a per-pixel loss between the output and ground-truth images. Parallel work has shown that high-quality images can be generated by defining and optimizing perceptual loss functions based on high-level features extracted from pretrained networks. We combine the benefits of both approaches and propose the use of perceptual loss functions for training feed-forward networks for image transformation tasks. We show results on image style transfer, where a feed-forward network is trained to solve the optimization problem proposed by Gatys et al. in real time. Compared to the optimization-based method, our network gives similar qualitative results but is three orders of magnitude faster. We also experiment with single-image super-resolution, where replacing a per-pixel loss with a perceptual loss gives visually pleasing results.
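The perceptual losses the abstract refers to have two standard components in this line of work: a feature (content) reconstruction loss, which penalizes differences between feature maps of the output and target images, and a style reconstruction loss, which penalizes differences between their Gram matrices. Below is a minimal NumPy sketch of those two terms; it is an illustration only, with arbitrary arrays standing in for the feature maps that the paper extracts from a pretrained VGG-16 network, and the exact normalization constants are an assumption.

```python
import numpy as np

def gram_matrix(feats):
    # feats: (C, H, W) feature map -> (C, C) Gram matrix.
    # Normalizing by C*H*W keeps the loss scale independent of layer size.
    C, H, W = feats.shape
    F = feats.reshape(C, H * W)
    return F @ F.T / (C * H * W)

def feature_loss(f_out, f_target):
    # Mean squared error between feature maps of the output
    # and target images at one network layer.
    return np.mean((f_out - f_target) ** 2)

def style_loss(f_out, f_target):
    # Squared Frobenius norm of the Gram-matrix difference;
    # the Gram matrix captures feature correlations (style),
    # discarding spatial layout.
    G_out = gram_matrix(f_out)
    G_target = gram_matrix(f_target)
    return np.sum((G_out - G_target) ** 2)

# Demo with random stand-in features (C=4 channels, 8x8 spatial grid).
rng = np.random.default_rng(0)
f_a = rng.random((4, 8, 8))
f_b = rng.random((4, 8, 8))
print("feature loss:", feature_loss(f_a, f_b))
print("style loss:", style_loss(f_a, f_b))
```

During training, the total objective is a weighted sum of such terms over several layers; the feed-forward transformation network is then optimized with gradient descent against this fixed loss network.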



Code Repositories

fast-neural-style

Fast neural style in TensorFlow based on http://arxiv.org/abs/1603.08155


Style-Tranfer

Implementation of the original style transfer paper (Gatys et al.)


Brouhaha

A Deep Learning toolkit based on iOS Metal


Artistic-Style-Transfer-using-Keras-Tensorflow

Art-to-image style transfer using Keras and TensorFlow.


MeuralPaint

TensorFlow implementation of CNN fast neural style transfer ⚡️
