The recent work of Gatys et al. [4, 5], which used deep neural networks for texture synthesis and image stylization to great effect, has created a surge of interest in this area. Following the earlier work of Portilla and Simoncelli [16], they generate an image by matching the second-order moments of the responses of certain filters applied to a reference texture image. The innovation of Gatys et al. is to use non-linear convolutional neural network filters for this purpose. Despite the excellent results, however, the matching process is based on local optimization and generally requires a considerable amount of time (tens of seconds to minutes) to generate a single texture or stylized image.
In order to address this shortcoming, Ulyanov et al. [19] and Johnson et al. [8] suggested replacing the optimization process with feed-forward generative convolutional networks. In particular, [19] introduced texture networks to generate textures of a certain kind, as in [4], or to apply a certain texture style to an arbitrary image, as in [5]. Once trained, such texture networks operate in a feed-forward manner, three orders of magnitude faster than the optimization methods of [4, 5].
The price to pay for such speed is reduced performance. For texture synthesis, the neural network of [19] generates good-quality samples, but these are not as diverse as the ones obtained from the iterative optimization method of [4]. For image stylization, the feed-forward results of [19, 8] are qualitatively and quantitatively worse than iterative optimization. In this work, we address both limitations by means of two contributions, both of which extend beyond the applications considered in this paper.
Our first contribution (section 4) is an architectural change that significantly improves the generator networks. The change is the introduction of an instance-normalization layer which, particularly for the stylization problem, greatly improves the performance of the deep network generators. This advance significantly reduces the gap in stylization quality between the feed-forward models and the original iterative optimization method of Gatys et al., both quantitatively and qualitatively.
Our second contribution (section 3) addresses the limited diversity of the samples generated by texture networks. In order to do so, we introduce a new formulation that learns generators that uniformly sample the Julesz ensemble [21]. The latter is the equivalence class of images that match certain filter statistics. Uniformly sampling this set guarantees diverse results, but traditionally doing so required slow Monte Carlo methods [20]; Portilla and Simoncelli, and hence Gatys et al., cannot sample from this set, but only find individual points in it, and possibly just one point. Our formulation minimizes the Kullback-Leibler divergence between the generated distribution and a quasi-uniform distribution on the Julesz ensemble. The learning objective decomposes into a loss term similar to that of Gatys et al. minus the entropy of the generated texture samples, which we estimate in a differentiable manner using a non-parametric estimator.
We validate our contributions by means of extensive quantitative and qualitative experiments, including comparing the feed-forward results with the gold-standard optimization-based ones (section 5). We show that, combined, these ideas dramatically improve the quality of feed-forward texture synthesis and image stylization, bringing them to a level comparable to the optimization-based approaches.
2 Background and related work
Informally, a texture is a family of visual patterns, such as checkerboards or slabs of concrete, that share certain local statistical regularities. The concept was first studied by Julesz [9], who suggested that the visual system discriminates between different textures based on the average responses of certain image filters.
The work of [21] formalized Julesz' ideas by introducing the concept of a Julesz ensemble. There, an image is a real function $x : \Omega \to \mathbb{R}$ defined on a discrete lattice $\Omega$, and a texture is a distribution over such images. The local statistics of an image are captured by a bank of (non-linear) filters $F_l$, $l = 1, \dots, L$, where $F_l(x, u)$ denotes the response of filter $F_l$ at location $u$ on image $x$. The image $x$ is characterized by the spatial averages of the filter responses $\mu_l(x) = \frac{1}{|\Omega|} \sum_{u \in \Omega} F_l(x, u)$. The image is perceived as a particular texture if these responses match certain characteristic values $\hat\mu_l$. Formally, given the loss function
$$\mathcal{L}(x) = \sum_{l=1}^{L} \left(\mu_l(x) - \hat\mu_l\right)^2, \qquad (1)$$
the Julesz ensemble is the set of all texture images
$$\mathcal{T}_\epsilon = \{x \in \mathcal{X} : \mathcal{L}(x) \le \epsilon\}$$
that approximately satisfy such constraints. Since all textures in the Julesz ensemble are perceptually equivalent, it is natural to require the texture distribution $p(x)$ to be uniform over this set. In practice, it is more convenient to consider the exponential distribution
$$p(x) = \frac{e^{-\mathcal{L}(x)/T}}{\int e^{-\mathcal{L}(y)/T}\, dy}, \qquad (2)$$
where $T$ is a temperature parameter. This choice is motivated as follows [21]: since statistics are computed from spatial averages of filter responses, one can show that, in the limit of infinitely large lattices, the distribution $p(x)$ is zero outside the Julesz ensemble and uniform inside. In this manner, eq. 2 can be thought of as a uniform distribution over images that have certain characteristic filter responses $\hat\mu = (\hat\mu_1, \dots, \hat\mu_L)$.
Note also that the texture is completely described by the filter bank $F = (F_1, \dots, F_L)$ and the characteristic responses $\hat\mu$. As discussed below, the filter bank is generally fixed, so in this framework different textures are given by different characteristics $\hat\mu$.
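To make the ensemble definition concrete, the following sketch uses two hand-crafted non-linear filters (squared image differences) as a toy filter bank $F$, rather than the wavelet or CNN filters used in practice, and checks membership of an image in the Julesz ensemble via the loss of eq. 1; all function names here are illustrative:

```python
import numpy as np

def filter_bank(x):
    # Two toy non-linear filters: squared horizontal and vertical differences.
    fh = (x[:, 1:] - x[:, :-1]) ** 2
    fv = (x[1:, :] - x[:-1, :]) ** 2
    return [fh, fv]

def statistics(x):
    # Spatial average of each filter response: mu_l(x).
    return np.array([f.mean() for f in filter_bank(x)])

def julesz_loss(x, mu_hat):
    # L(x) = sum_l (mu_l(x) - mu_hat_l)^2, as in eq. 1.
    return float(np.sum((statistics(x) - mu_hat) ** 2))

rng = np.random.default_rng(0)
reference = rng.standard_normal((64, 64))
mu_hat = statistics(reference)

# The reference image trivially belongs to the Julesz ensemble T_eps.
eps = 1e-6
assert julesz_loss(reference, mu_hat) <= eps
```

Any other image whose filter averages match $\hat\mu$ to within $\epsilon$ belongs to the same ensemble, regardless of how different it looks pixel-wise.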
For any interesting choice of the filter bank $F$, sampling from eq. 2 is rather challenging and classically addressed by Monte Carlo methods [20]. In order to make this framework more practical, Portilla and Simoncelli [16] proposed instead to heuristically sample from the Julesz ensemble by the optimization process
$$x^* = \operatorname*{argmin}_{x \in \mathcal{X}} \mathcal{L}(x). \qquad (3)$$
If this optimization problem can be solved, the minimizer $x^*$ is by definition a texture image. However, there is no reason why this process should generate fair samples from the distribution $p(x)$. In fact, the only reason why eq. 3 may not always return the same image is that the optimization algorithm is randomly initialized, the loss function is highly non-convex, and the search is local. Only because of this may eq. 3 land on different samples on different runs.
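The generation-by-minimization process of eq. 3 can be illustrated with a minimal sketch. Here the statistics are simple mean and energy moments rather than the wavelet or CNN statistics used in practice, and the gradient of the loss is derived by hand; different random initializations land on different minimizers, which, as noted above, is the only source of diversity:

```python
import numpy as np

def stats(x):
    # Two simple filter statistics: mean intensity and mean energy.
    return np.array([x.mean(), (x ** 2).mean()])

def loss_and_grad(x, target):
    # L(x) = sum_l (mu_l(x) - target_l)^2 and its analytic gradient.
    d = stats(x) - target
    n = x.size
    grad = (2 * d[0] + 4 * d[1] * x) / n  # d mu1/dx = 1/n, d mu2/dx = 2x/n
    return float(d @ d), grad

def synthesize(target, seed, steps=500, lr=50.0):
    # Eq. 3: gradient descent on L(x) from a random initialization.
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((32, 32))
    for _ in range(steps):
        _, g = loss_and_grad(x, target)
        x -= lr * g
    loss, _ = loss_and_grad(x, target)
    return x, loss

target = np.array([0.2, 1.5])
x1, l1 = synthesize(target, seed=1)
x2, l2 = synthesize(target, seed=2)
assert l1 < 1e-3 and l2 < 1e-3     # both are points in the ensemble...
assert not np.allclose(x1, x2)     # ...but the minimizer is not unique
```

Both runs reach the constraint set, yet nothing in the objective encourages the two solutions to be fair, independent samples of $p(x)$.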
Deep filter banks.
Constructing a Julesz ensemble requires choosing a filter bank $F$. Originally, researchers considered the obvious candidates: Gaussian derivative filters, Gabor filters, wavelets, histograms, and similar [20, 16, 21]. More recently, the work of Gatys et al. [4, 5] demonstrated that much superior filters are automatically learned by deep convolutional neural networks (CNNs), even when trained for apparently unrelated problems such as image classification. In this paper, in particular, we choose for $\mathcal{L}(x)$ the style loss proposed by [4]. The latter is the distance between the empirical correlation matrices of deep filter responses in a CNN.¹

¹Note that such matrices are obtained by averaging local non-linear filters: these are the outer products of filters in a certain layer of the neural network. Hence, the style loss of Gatys et al. is in the same form as eq. 1.
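A minimal sketch of this correlation-matrix (Gram) style loss follows. Random arrays stand in for the deep CNN feature maps that Gatys et al. compute with VGG, and the helper names are illustrative:

```python
import numpy as np

def gram(features):
    # features: C x H x W responses of one CNN layer.
    # The Gram matrix is the spatial average of the outer products of the
    # channel responses, i.e. an average of local non-linear filters.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)

def style_loss(feats_x, feats_ref):
    # Squared Frobenius distance between Gram matrices, summed over layers.
    return float(sum(((gram(a) - gram(b)) ** 2).sum()
                     for a, b in zip(feats_x, feats_ref)))

rng = np.random.default_rng(0)
ref = [rng.standard_normal((8, 16, 16)), rng.standard_normal((4, 8, 8))]
assert style_loss(ref, ref) == 0.0  # matching statistics give zero loss
```

Because each Gram entry is a spatial average, the loss depends only on the statistics of the feature maps, not on where patterns occur, which is exactly the form of eq. 1.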
The texture generation method of Gatys et al. [4] can be considered a direct extension of the texture generation-by-minimization technique (3) of Portilla and Simoncelli [16]. Later, Gatys et al. [5] demonstrated that the same technique can be used to generate an image that mixes the statistics of two other images, one used as a texture template and one used as a content template. Content is captured by introducing a second loss $\mathcal{L}_{\text{cont}}(x, x_0)$ that compares the responses of deep CNN filters extracted from the generated image $x$ and a content image $x_0$. Minimizing the combined loss
$$x^* = \operatorname*{argmin}_{x} \mathcal{L}(x) + \alpha\, \mathcal{L}_{\text{cont}}(x, x_0) \qquad (4)$$
yields impressive artistic images, where a texture $\hat\mu$, defining the artistic style, is fused with the content image $x_0$.
Feed-forward generator networks.
For all its simplicity and efficiency compared to Markov sampling techniques, generation-by-optimization (3) is still relatively slow, and certainly too slow for real-time applications. Therefore, in the past few months several authors [8, 19] have proposed to learn generator neural networks $g(z, \theta)$ that can directly map random noise samples $z$ to a local minimizer of eq. 3. Learning the neural network amounts to minimizing the objective
$$\theta^* = \operatorname*{argmin}_{\theta} E_{z}\left[\mathcal{L}(g(z, \theta))\right]. \qquad (5)$$
While this approach works well in practice, it shares the same important limitation as the original work of Portilla and Simoncelli: there is no guarantee that samples generated by $g$ would be fair samples of the texture distribution (2). In practice, as we show in this paper, such samples tend, in fact, not to be diverse enough.
Alternative neural generator methods.
There are many other techniques for image generation using deep neural networks; here we mention the most relevant ones. Both FRAME [21] and Maximum Mean Discrepancy (MMD) [7] make the observation that a probability distribution can be described by the expected values of a sufficiently rich set of statistics. Building on these ideas, [14, 3] construct generator neural networks with the goal of minimizing the discrepancy between the statistics averaged over a batch of generated images and the statistics averaged over a training set. The resulting networks are called Moment Matching Networks (MMN).
An important alternative methodology is based on the concept of Generative Adversarial Networks (GANs) [6]. This approach trains, together with the generator network $g$, a second adversarial network that attempts to distinguish between generated samples and real samples. The adversarial model can be used as a measure of the quality of the generated samples and used to learn a better generator $g$. GANs are powerful but notoriously difficult to train. Much recent research has focused on improving or extending GANs. For instance, LAPGAN [2] combines GANs with a Laplacian pyramid and DCGAN [17] optimizes GANs for large datasets.
3 Julesz generator networks
This section describes our second contribution, namely a method to learn networks that draw samples from the Julesz ensemble modeling a texture (section 2), which is an intractable problem usually addressed by slow Monte Carlo methods [21, 20]. Generation-by-optimization, popularized by Portilla and Simoncelli and by Gatys et al., is faster, but can only find one point in the ensemble rather than sample from it, yielding scarce sample diversity, particularly when used to train feed-forward generator networks [8, 19].
Here, we propose a new formulation that makes it possible to train generator networks that sample the Julesz ensemble, generating images with high visual fidelity as well as high diversity.
A generator network [6] $g$ maps an i.i.d. noise vector $z$ to an image $x = g(z, \theta)$ in such a way that $x$ is ideally a sample from the desired distribution $p(x)$. Such generators have been adopted for texture synthesis in [19], but without guarantees that the learned generator would indeed sample a particular distribution.

Here, we would like to sample from the Gibbs distribution (2) defined over the Julesz ensemble. This distribution can be written compactly as $p(x) = Z^{-1} e^{-\mathcal{L}(x)/T}$, where $Z$ is an intractable normalization constant.
Denote by $q(x)$ the distribution induced by a generator network $g(\cdot, \theta)$. The goal is to make the target distribution $p$ and the generator distribution $q$ as close as possible by minimizing their Kullback-Leibler (KL) divergence:
$$KL(q \,\|\, p) = E_{x \sim q}\left[\frac{\mathcal{L}(x)}{T} + \ln q(x)\right] + \text{const} = \frac{1}{T}\, E_{x \sim q}\left[\mathcal{L}(x)\right] - H(q) + \text{const}. \qquad (6)$$
Hence, the KL divergence is the sum of the expected value of the style loss $\mathcal{L}$ and the negative entropy $-H(q)$ of the generated distribution $q$.
The first term can be estimated by taking the expectation over generated samples:
$$E_{x \sim q}\left[\mathcal{L}(x)\right] \approx \frac{1}{N} \sum_{i=1}^{N} \mathcal{L}(g(z_i, \theta)), \qquad (7)$$
where $z_1, \dots, z_N$ are i.i.d. noise samples.
The second term, the negative entropy, is harder to estimate accurately, but simple estimators exist. One which is particularly appealing in our scenario is the Kozachenko-Leonenko estimator [12]. This estimator considers a batch of $N$ samples $x_1, \dots, x_N \sim q$. Then, for each sample $x_i$, it computes the distance $\rho_i$ to its nearest neighbour in the batch:
$$\rho_i = \min_{j \neq i} \|x_i - x_j\|. \qquad (8)$$
The distances $\rho_i$ can be used to approximate the entropy as follows:
$$H(q) \approx \frac{D}{N} \sum_{i=1}^{N} \ln \rho_i + \text{const}, \qquad (9)$$
where $D$ is the number of components of the images $x$.
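The nearest-neighbour entropy estimator (8)-(9) can be sketched directly; the additive constant is dropped, since only relative entropy values matter for the learning objective, and the function name is illustrative:

```python
import numpy as np

def kl_entropy_surrogate(batch):
    # batch: N x D array of vectorized images sampled from q.
    # rho_i = distance from x_i to its nearest neighbour in the batch (eq. 8);
    # the entropy is approximated, up to an additive constant, by
    # (D / N) * sum_i log rho_i (eq. 9).
    n, d = batch.shape
    dists = np.linalg.norm(batch[:, None, :] - batch[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)  # exclude each sample from its own search
    rho = dists.min(axis=1)
    return d / n * np.log(rho).sum()

rng = np.random.default_rng(0)
tight = rng.standard_normal((64, 10)) * 0.1    # low-diversity batch
spread = rng.standard_normal((64, 10)) * 10.0  # high-diversity batch
assert kl_entropy_surrogate(tight) < kl_entropy_surrogate(spread)
```

Note that the estimate is differentiable in the samples (away from ties in the nearest-neighbour choice), which is what allows it to be used inside a gradient-based learning objective.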
An energy term similar to (6) was recently proposed in [10] for improving the diversity of a generator network in an adversarial learning scheme. While the idea is superficially similar, both the application (sampling the Julesz ensemble) and the instantiation (the way the entropy term is implemented) are very different.
We are now ready to define an objective function to learn the generator network $g(\cdot, \theta)$. This is given by substituting the expected loss (7) and the entropy estimator (9), computed over a batch of $N$ generated images, into the KL divergence (6):
$$\theta^* = \operatorname*{argmin}_{\theta} \frac{1}{N} \sum_{i=1}^{N} \left[\frac{\mathcal{L}(g(z_i, \theta))}{T} - \lambda D \ln \min_{j \neq i} \|g(z_i, \theta) - g(z_j, \theta)\|\right]. \qquad (10)$$
The batch itself is obtained by drawing $N$ samples $z_1, \dots, z_N$ from the noise distribution of the generator. The first term in eq. 10 measures how close the generated images are to the Julesz ensemble. The second term quantifies the lack of diversity in the batch by mutually comparing the generated images.
The loss function (10) is in a form that allows optimization by means of Stochastic Gradient Descent (SGD). The algorithm samples a batch $z_1, \dots, z_N$ at a time and then descends the gradient:
$$\frac{1}{N} \sum_{i=1}^{N} \left[\frac{1}{T} \frac{\partial \mathcal{L}}{\partial x}\big(g(z_i, \theta)\big) - \lambda D\, \frac{g(z_i, \theta) - g(z_{j(i)}, \theta)}{\|g(z_i, \theta) - g(z_{j(i)}, \theta)\|^2}\right]^{\!\top} \frac{\partial g}{\partial \theta}(z_i, \theta), \qquad (11)$$
where $\theta$ is the vector of parameters of the neural network, the tensor image $g(z_i, \theta)$ has been implicitly vectorized, and $j(i)$ is the index of the nearest neighbour of image $i$ in the batch.
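The batch objective (10) can be assembled in a few lines. In this sketch the texture loss is a toy two-moment stand-in for the actual style loss, and the temperature and diversity weight are both set to one; the point is that near-identical batch members make the nearest-neighbour distances collapse, which the negative-log diversity term penalizes heavily:

```python
import numpy as np

def texture_loss(x, target):
    # Toy stand-in for L(x): match mean intensity and mean energy.
    mu = np.array([x.mean(), (x ** 2).mean()])
    return float(((mu - target) ** 2).sum())

def objective(batch, target, T=1.0, lam=1.0):
    # Batch version of eq. 10: average texture loss over the batch minus
    # lam times the Kozachenko-Leonenko entropy surrogate.
    n = len(batch)
    d = batch[0].size
    flat = np.stack([x.ravel() for x in batch])
    dists = np.linalg.norm(flat[:, None] - flat[None, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    rho = dists.min(axis=1)                 # nearest-neighbour distances
    tex = sum(texture_loss(x, target) for x in batch) / (n * T)
    ent = d / n * np.log(rho).sum()         # entropy estimate (eq. 9)
    return tex - lam * ent

rng = np.random.default_rng(0)
target = np.array([0.0, 1.0])
diverse = [rng.standard_normal((8, 8)) for _ in range(4)]
collapsed = [diverse[0] + 1e-6 * rng.standard_normal((8, 8)) for _ in range(4)]
# Near-identical samples are heavily penalized by the diversity term.
assert objective(collapsed, target) > objective(diverse, target)
```

In the actual method this quantity is backpropagated through the generator, as in eq. 11, rather than merely evaluated.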
4 Stylization with instance normalization
The work of [19] showed that it is possible to learn high-quality texture networks that generate images in a Julesz ensemble. They also showed that it is possible to learn good-quality stylization networks that apply the style of a fixed texture to an arbitrary content image $x_0$.
Nevertheless, the stylization problem was found to be harder than texture generation. For the stylization task, they found that learning the model from too many example content images $x_0$, say more than 16, yielded poorer qualitative results than using a smaller number of such examples. Some of the most significant errors appeared along the borders of the generated images, probably due to padding and other boundary effects in the generator network. We conjectured that these are symptoms of a learning problem too difficult for their choice of neural network architecture.
A simple observation that may make learning simpler is that the result of stylization should not, in general, depend on the contrast of the content image, but rather should match the contrast of the texture being applied to it. Thus, the generator network should discard contrast information in the content image $x_0$. We argue that learning to discard contrast information using standard CNN building blocks is unnecessarily difficult, and is best done by adding a suitable layer to the architecture.
To see why, let $x \in \mathbb{R}^{T \times C \times W \times H}$ be an input tensor containing a batch of $T$ images. Let $x_{tijk}$ denote its $tijk$-th element, where $j$ and $k$ span spatial dimensions, $i$ is the feature channel (i.e. the color channel if the tensor is an RGB image), and $t$ is the index of the image in the batch. Then, contrast normalization is given by:
$$y_{tijk} = \frac{x_{tijk} - \mu_{ti}}{\sqrt{\sigma_{ti}^2 + \epsilon}}, \quad \mu_{ti} = \frac{1}{WH} \sum_{j=1}^{W} \sum_{k=1}^{H} x_{tijk}, \quad \sigma_{ti}^2 = \frac{1}{WH} \sum_{j=1}^{W} \sum_{k=1}^{H} \left(x_{tijk} - \mu_{ti}\right)^2. \qquad (12)$$
It is unclear how such a function could be implemented as a sequence of standard operators such as ReLU and convolution.
On the other hand, the generator network of [19] does contain normalization layers, namely batch normalization (BN) ones. The key difference between eq. 12 and batch normalization is that the latter applies the normalization to a whole batch of images instead of single ones:
$$y_{tijk} = \frac{x_{tijk} - \mu_{i}}{\sqrt{\sigma_{i}^2 + \epsilon}}, \quad \mu_{i} = \frac{1}{TWH} \sum_{t=1}^{T} \sum_{j=1}^{W} \sum_{k=1}^{H} x_{tijk}, \quad \sigma_{i}^2 = \frac{1}{TWH} \sum_{t=1}^{T} \sum_{j=1}^{W} \sum_{k=1}^{H} \left(x_{tijk} - \mu_{i}\right)^2. \qquad (13)$$
We argue that, for the purpose of stylization, the normalization operator of eq. 12 is preferable, as it can normalize each individual content image $x_0$.
While some authors call the layer of eq. 12 contrast normalization, here we refer to it as instance normalization (IN), since we use it as a drop-in replacement for batch normalization, operating on individual instances instead of the batch as a whole. Note in particular that this means that instance normalization is applied throughout the architecture, not just at the input image; fig. 2 shows the benefit of doing so.
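The two normalizers differ only in the axes over which the statistics are pooled. A minimal sketch (assuming a T×C×H×W tensor layout and omitting the learned scaling and bias) makes the contrast-invariance property explicit:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    # x: (T, C, H, W). BN: statistics pooled over batch and spatial axes,
    # so every image shares the same per-channel mean and variance.
    mu = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def instance_norm(x, eps=1e-5):
    # IN: statistics pooled per image and per channel, so each content
    # image is normalized independently of the rest of the batch.
    mu = x.mean(axis=(2, 3), keepdims=True)
    var = x.var(axis=(2, 3), keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3, 16, 16))
x2 = x.copy()
x2[0] *= 5.0  # rescale the contrast of the first image only
# IN output for that image is (nearly) unchanged by its own contrast...
assert np.allclose(instance_norm(x2)[0], instance_norm(x)[0], atol=1e-3)
# ...while under BN, even the OTHER images' outputs are affected by it.
assert not np.allclose(batch_norm(x2)[1], batch_norm(x)[1], atol=1e-3)
```

The second assertion also highlights a side effect of BN: one high-contrast image in the batch perturbs the normalization of every other image, which IN avoids by construction.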
Another similarity with BN is that each IN layer is followed by a learned scaling and bias operator. A difference is that the IN layer is applied at test time as well, unchanged, whereas BN is usually switched to use accumulated mean and variance instead of computing them over the batch.
IN appears to be similar to the layer normalization method introduced in [1] for recurrent networks, although it is not clear how the latter handles spatial data. Like theirs, IN is a generic layer, so we tested it in classification problems as well. In such cases, it still works surprisingly well, but not as well as batch normalization (e.g. AlexNet [13] with IN has 2-3% worse top-1 accuracy on ILSVRC [18] than AlexNet with BN).
5 Experiments

In this section, after discussing the technical details of the method, we evaluate our new texture network architectures using instance normalization, and then investigate the ability of the new formulation to learn diverse generators.
5.1 Technical details
Of the two generator network architectures proposed previously in [19, 8], we choose the residual architecture of [8] for all our style transfer experiments. We also experimented with the architecture of [19] and observed a similar improvement with our method, but use the one of [8] for convenience. We call it StyleNet, with a postfix BN if it is equipped with batch normalization or IN for instance normalization.
For texture synthesis we compare two architectures: the multi-scale fully-convolutional architecture of [19] (TextureNetV1) and the one we designed to have a very large receptive field (TextureNetV2). TextureNetV2 takes a noise vector and first transforms it with two fully-connected layers. The output is then reshaped into a small image and repeatedly upsampled with fractionally-strided convolutions, similar to [17]. More details can be found in the supplementary material.
In practice, the entropy loss and texture loss in eq. 10 should be weighted properly. As only the relative weight of the two terms is important for optimization, we fix the coefficient of the texture term and choose the entropy weight from a set of three values for texture synthesis (picking the highest value among those not leading to artifacts; see our discussion below); we fix it for the style transfer experiments. For texture synthesis, similarly to [19], we found it useful to normalize the gradient of the texture loss as it passes back through the VGG-19 network. This allows rapid convergence of the stochastic optimization, but implicitly alters the objective function and requires the temperature to be adjusted. We observe that for textures with flat lighting, a high entropy weight results in brightness variations over the image (fig. 7). We hypothesize that this issue could be solved either by using a more clever distance in the entropy estimator or by introducing an image prior.
5.2 Effect of instance normalization
In order to evaluate the impact of replacing batch normalization with instance normalization, we consider first the problem of stylization, where the goal is to learn a generator that applies a certain texture style to a content image $x_0$ using noise $z$ as a "random seed". We set the entropy weight to zero, for which the generator is most likely to discard the noise.
StyleNet IN and StyleNet BN are compared in fig. 3. Panel fig. 3.a shows the training objective (5) of the networks as a function of the SGD training iteration. The objective function is the same, but StyleNet IN converges much faster, suggesting that it can solve the stylization problem more easily. This is confirmed by the stark difference in the qualitative results in panels (d) and (g). Since the StyleNets are trained to minimize in one shot the same objective as the iterative optimization of Gatys et al., they can be used to initialize the latter algorithm. Panel (b) shows the result of applying the Gatys et al. optimization starting either from random initialization or from the outputs of the two StyleNets. Clearly both networks start much closer to an optimum than random noise, and IN closer than BN. The difference is qualitatively large: panels (e) and (h) show the change in the StyleNets' outputs after fine-tuning by iterative optimization of the loss, which has a small effect for the IN variant and a much larger one for the BN variant.
Similar results apply in general. Other examples are shown in fig. 4, where the IN variant is far superior to BN and much closer to the results obtained by the much slower iterative method of Gatys et al. StyleNets are trained on images of a fixed size, but since they are convolutional, they can be applied to images of arbitrary size. In the figure, the top three images are processed at a lower resolution and the bottom two at a higher one. In general, we found that higher-resolution images yield visually better stylization results.
While instance normalization works much better than batch normalization for stylization, for texture synthesis the two normalization methods perform equally well. This is consistent with our intuition that IN helps in normalizing the information coming from the content image $x_0$, which is highly variable, whereas it is not important to normalize the texture information, since each model learns only one texture style.
5.3 Effect of the diversity term
Having validated the IN-based architecture, we now evaluate the effect of the entropy-based diversity term in the objective function (10).
The experiment in fig. 5 starts by considering the problem of texture generation. We compare the new high-capacity TextureNetV2 and the low-capacity TextureNetV1 texture synthesis networks. The low-capacity model is the same as in [19]. This network was used there in order to force the network to learn a non-trivial dependency on the input noise, thus generating diverse outputs even though the learning objective of [19], which is the same as eq. 10 with a zero diversity coefficient, tends to suppress diversity. The results in fig. 5 are indeed diverse, but sometimes of low quality. This should be contrasted with TextureNetV2, the high-capacity model: its visual fidelity is much higher, but, with the same objective function, the network learns to generate a single image, as expected. TextureNetV2 with the new diversity-inducing objective is the best of both worlds, being both high-quality and diverse.
The experiment in fig. 6 assesses the effect of the diversity term in the stylization problem. The results are similar to the ones for texture synthesis and the diversity term effectively encourages the network to learn to produce different results based on the input noise.
One difficulty with texture and stylization networks is that the entropy loss weight must be tuned for each learned texture model. Choosing it too small may fail to yield a diverse generator, and setting it too high may create artifacts, as shown in fig. 7.
6 Summary

This paper advances feed-forward texture synthesis and stylization networks in two significant ways. It introduces instance normalization, an architectural change that makes training stylization networks easier and allows the training process to achieve much lower loss levels. It also introduces a new learning formulation for training generator networks to sample uniformly from the Julesz ensemble, thus explicitly encouraging diversity in the generated outputs. We showed that both changes lead to noticeably better stylized images and textures, while keeping the generation run-times intact.
References

-  L. J. Ba, R. Kiros, and G. E. Hinton. Layer normalization. CoRR, abs/1607.06450, 2016.
-  E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a laplacian pyramid of adversarial networks. In NIPS, pages 1486–1494, 2015.
-  G. K. Dziugaite, D. M. Roy, and Z. Ghahramani. Training generative neural networks via maximum mean discrepancy optimization. In UAI, pages 258–267. AUAI Press, 2015.
-  L. Gatys, A. S. Ecker, and M. Bethge. Texture synthesis using convolutional neural networks. In Advances in Neural Information Processing Systems, NIPS, pages 262–270, 2015.
-  L. A. Gatys, A. S. Ecker, and M. Bethge. A neural algorithm of artistic style. CoRR, abs/1508.06576, 2015.
-  I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, and Y. Bengio. Generative adversarial nets. In Advances in Neural Information Processing Systems (NIPS), pages 2672–2680, 2014.
-  A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. J. Smola. A kernel method for the two-sample-problem. In Advances in Neural Information Processing Systems, NIPS, pages 513–520, 2006.
-  J. Johnson, A. Alahi, and L. Fei-Fei. Perceptual losses for real-time style transfer and super-resolution. In Computer Vision - ECCV 2016 - 14th European Conference, Amsterdam, The Netherlands, October 11-14, 2016, Proceedings, Part II, pages 694–711, 2016.
-  B. Julesz. Textons, the elements of texture perception, and their interactions. Nature, 290(5802):91–97, 1981.
-  T. Kim and Y. Bengio. Deep directed generative models with energy-based probability estimation. arXiv preprint arXiv:1606.03439, 2016.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. CoRR, abs/1312.6114, 2013.
-  L. F. Kozachenko and N. N. Leonenko. Sample estimate of the entropy of a random vector. Probl. Inf. Transm., 23(1-2):95–101, 1987.
-  A. Krizhevsky, I. Sutskever, and G. E. Hinton. Imagenet classification with deep convolutional neural networks. In NIPS, pages 1106–1114, 2012.
-  Y. Li, K. Swersky, and R. S. Zemel. Generative moment matching networks. In Proc. International Conference on Machine Learning, ICML, pages 1718–1727, 2015.
-  J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. IJCV, 40(1):49–70, 2000.
-  J. Portilla and E. P. Simoncelli. A parametric texture model based on joint statistics of complex wavelet coefficients. IJCV, 2000.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. CoRR, abs/1511.06434, 2015.
-  O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
-  D. Ulyanov, V. Lebedev, A. Vedaldi, and V. S. Lempitsky. Texture networks: Feed-forward synthesis of textures and stylized images. In Proceedings of the 33nd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1349–1357, 2016.
-  S. C. Zhu, X. W. Liu, and Y. N. Wu. Exploring texture ensembles by efficient Markov chain Monte Carlo: toward a "trichromacy" theory of texture. PAMI, 2000.
-  S. C. Zhu, Y. Wu, and D. Mumford. Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling. IJCV, 27(2), 1998.