Code to reproduce the results from the paper "Controlling Perceptual Factors in Neural Style Transfer" (https://arxiv.org/abs/1611.07865).
Neural Style Transfer has shown very exciting results enabling new forms of image manipulation. Here we extend the existing method to introduce control over spatial location, colour information and across spatial scale. We demonstrate how this enhances the method by allowing high-resolution controlled stylisation and helps to alleviate common failure cases such as applying ground textures to sky regions. Furthermore, by decomposing style into these perceptual factors we enable the combination of style information from multiple sources to generate new, perceptually appealing styles from existing ones. We also describe how these methods can be used to more efficiently produce large size, high-quality stylisation. Finally we show how the introduced control measures can be applied in recent methods for Fast Neural Style Transfer.
Example-based style transfer is a major way to create new, perceptually appealing images from existing ones. It takes two images as input, a content image $\vec{x}_C$ and a style image $\vec{x}_S$, and produces a new image that applies the style of $\vec{x}_S$ to the content of $\vec{x}_C$. The concepts of “style” and “content” are both expressed in terms of image statistics; for example, two images are said to have the same style if they embody the same correlations of specific image features. To provide intuitive control over this process, one must identify ways to access perceptual factors in these statistics.
In order to identify these factors, we observe some of the different ways that one might describe an artwork such as Vincent van Gogh’s A Wheatfield with Cypresses (Fig. 2(c)). First, one might separately describe different styles in different regions, such as in the sky as compared to the ground. Second, one might describe the colour palette, and how it relates to the underlying scene, separately from factors like image composition or brush stroke texture. Third, one might describe fine-scale spatial structures, such as brush stroke shape and texture, separately from coarse-scale structures like the arrangements of strokes and the swirly structure in the sky of the painting. These observations motivate our hypothesis: image style can be perceptually factorised into style in different spatial regions, colour and luminance information, and across spatial scales, making them meaningful control dimensions for image stylisation.
Here we build on this hypothesis to introduce meaningful control to a recent image stylisation method known as Neural Style Transfer 
in which the image statistics that capture content and style are defined on feature responses in a Convolutional Neural Network (CNN). Namely, we introduce methods for controlling image stylisation independently in different spatial regions (Fig. 1(b)), for colour and luminance information (Fig. 1(c)), as well as on different spatial scales (Fig. 1(d)). We show how they can be applied to improve Neural Style Transfer and to alleviate some of its common failure cases. Moreover, we demonstrate how the factorisation of style into these aspects can gracefully combine style information from multiple images and thus enable the creation of new, perceptually interesting styles. We also show a method for efficiently rendering high-resolution stylisations using a coarse-to-fine approach that substantially reduces optimisation time. Finally, we show that in addition to the original optimisation-based style transfer, these control methods can also be applied to recent fast approximations of Neural Style Transfer [13, 23].
There is a large body of work on image stylisation techniques. The first example-based technique was Image Analogies, which built on patch-based texture synthesis techniques [4, 26]. This method introduced stylisation based on an example painting, as well as ways to preserve colour, and to control stylisation of different regions separately. The method used a coarse-to-fine texture synthesis procedure for speed. Since then, improvements to the optimisation method and new applications [20, 6] have been proposed. Patch-based methods have also been used with CNN features [16, 2], leading to improved texture representations and stylisation results. Scale control has been developed for patch-based texture synthesis, and many other techniques have been developed for transferring colour style. There are also many procedural stylisation techniques that provide extensive user control in the non-photorealistic rendering literature, e.g., [1, 15, 18]. These procedural methods provide separate controls for adjusting spatial variation in styles, colour transformation, and brush stroke style, but cannot work from training data.
More recently, Neural Style Transfer has demonstrated impressive results in example-based image stylisation. The method is based on a parametric texture model [14, 10, 19] defined by summary statistics on CNN responses and appears to have several advantages over patch-based synthesis. Most prominently, during the stylisation it displays a greater flexibility to create new image structures that are not already present in the source images.
However, the representation of image style within the parametric neural texture model allows far less intuitive control over the stylisation outcome than patch-based methods. The texture parameters can be used to influence the stylisation, but their interplay is extremely complex due to the complexity of the deep representations they are defined on. Therefore it is difficult to predict their perceptual effect on the stylisation result. Our main goal in this work is to introduce intuitive ways to control Neural Style Transfer to combine the advantages of that method with the more fine-grained user control of earlier stylisation methods. Note that concurrent work independently developed a similar approach for spatial control as presented here.
The Neural Style Transfer method works as follows. We define a content image $\vec{x}_C$ and a style image $\vec{x}_S$ with corresponding feature representations $F_\ell(\vec{x}_C)$ and $F_\ell(\vec{x}_S)$ in layer $\ell$ of a CNN. Each column of $F_\ell(\vec{x})$ is a vectorised feature map, and thus $F_\ell(\vec{x}) \in \mathbb{R}^{M_\ell(\vec{x}) \times N_\ell}$, where $N_\ell$ is the number of feature maps in layer $\ell$ and $M_\ell(\vec{x})$ is the product of height and width of each feature map. Note that while $N_\ell$ is independent of the input image, $M_\ell(\vec{x})$ depends on the size of the input image.
Neural Style Transfer generates a new image $\hat{\vec{x}}$ that depicts the content of image $\vec{x}_C$ in the style of image $\vec{x}_S$ by minimising the following loss function with respect to $\vec{x}$:

$\mathcal{L}_{total}(\vec{x}) = \alpha \mathcal{L}_{content}(\vec{x}, \vec{x}_C) + \beta \mathcal{L}_{style}(\vec{x}, \vec{x}_S)$

where the content term compares feature maps at a single layer $\ell_c$:

$\mathcal{L}_{content}(\vec{x}, \vec{x}_C) = \frac{1}{2} \sum_{i,j} \left( F_{\ell_c}(\vec{x}) - F_{\ell_c}(\vec{x}_C) \right)_{ij}^2$

and the style term compares a set of summary statistics:

$\mathcal{L}_{style}(\vec{x}, \vec{x}_S) = \sum_\ell \frac{w_\ell}{4 N_\ell^2 M_\ell^2} \sum_{i,j} \left( G_\ell(\vec{x}) - G_\ell(\vec{x}_S) \right)_{ij}^2$
where $G_\ell(\vec{x}) = F_\ell(\vec{x})^T F_\ell(\vec{x})$ is the Gram Matrix of the feature maps in layer $\ell$ in response to image $\vec{x}$. As in the original work, we use the VGG-19 network and include “conv4_2” as the layer $\ell_c$ for the image content and Gram Matrices from layers “conv1_1”, “conv2_1”, “conv3_1”, “conv4_1” and “conv5_1” as the image statistics that model style.
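As a concrete illustration, the Gram Matrix $G_\ell = F_\ell^T F_\ell$ can be computed from a vectorised feature map in a few lines. This is a minimal numpy sketch, independent of any particular CNN framework; normalisation conventions (e.g. dividing by $M_\ell$) vary between implementations.

```python
import numpy as np

def gram_matrix(F):
    """Gram Matrix G = F^T F of a vectorised feature map.

    F has shape (M, N): M = height * width spatial positions,
    N = number of feature maps. The resulting (N, N) matrix holds
    the correlations between feature maps, which serve as the
    summary statistics that model style.
    """
    return F.T @ F

# Toy example: 2 feature maps over 4 spatial positions.
F = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0]])
G = gram_matrix(F)
assert G.shape == (2, 2)
assert np.allclose(G, [[2.0, 1.0], [1.0, 2.0]])
assert np.allclose(G, G.T)  # Gram Matrices are symmetric
```

Note that $G_\ell$ discards all spatial arrangement: it only records which features co-occur, which is what makes it a texture (style) representation.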
We first introduce ways to spatially control Neural Style Transfer. Our goal is to control which region of the style image is used to stylise each region in the content image. For example, we would like to apply one style to the sky region and another to the ground region of an image to either avoid artefacts (Fig. 2(d),(e)) or to generate new combinations of styles from multiple sources (Fig. 2(f)). We take as input $R$ spatial guidance channels for both the content and style image (small insets in Fig. 2(a)-(c)). Each of these is an image map of values in $[0, 1]$ specifying which styles should be applied where: regions where the content guidance channel is equal to $1$ should get the style from regions where the style guidance channel is $1$. When there are multiple style images, the $R$ regions index over all the example images. The guidance channels are propagated to the CNN to produce guidance channels $T_\ell^r$ for each layer $\ell$. This can be done by simple re-sampling or more involved methods, as we explain later in this section. We first discuss algorithms for synthesis given the guidance maps.
In the first method we propose, we multiply the feature maps of each layer included in the style features with the guidance channels and compute one spatially guided Gram Matrix for each of the $R$ regions in the style image. Formally, we define a spatially guided feature map as

$F_\ell^r(\vec{x})[:, i] = T_\ell^r \circ F_\ell(\vec{x})[:, i]$
Here $F_\ell(\vec{x})[:, i]$ is the $i$th column vector of $F_\ell(\vec{x})$, and $\circ$ denotes element-wise multiplication. The guidance channel $T_\ell^r$ is vectorised and can be either a binary mask for hard guidance or real-valued for soft guidance. We normalise $T_\ell^r$ such that $\sum_i (T_\ell^r)_i^2 = 1$. The guided Gram Matrix is then

$G_\ell^r(\vec{x}) = F_\ell^r(\vec{x})^T F_\ell^r(\vec{x})$
Each guided Gram Matrix is used as the optimisation target for the corresponding region of the content image. The contribution of layer $\ell$ to the style loss is then:

$E_\ell = \sum_{r=1}^{R} \frac{\lambda_r}{4 N_\ell^2} \sum_{i,j} \left( G_\ell^r(\vec{x}) - G_\ell^r(\vec{x}_S) \right)_{ij}^2$
where $\lambda_r$ is a weighting factor that controls the stylisation strength in the corresponding region $r$.
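The guided Gram Matrix computation can be sketched as follows. This is a minimal numpy illustration (the function name is ours, not from the paper); a real implementation would operate on CNN feature tensors layer by layer.

```python
import numpy as np

def guided_gram(F, T):
    """Spatially guided Gram Matrix.

    F: (M, N) vectorised feature maps; T: (M,) guidance channel,
    binary (hard guidance) or real-valued (soft guidance).
    T is normalised so that sum(T**2) == 1, then every feature map
    (column of F) is weighted element-wise before computing the
    Gram Matrix, so only the guided region contributes.
    """
    T = T / np.sqrt(np.sum(T ** 2))
    Fr = F * T[:, None]   # weight each column by the guidance channel
    return Fr.T @ Fr

F = np.arange(8, dtype=float).reshape(4, 2)
sky = np.array([1.0, 1.0, 0.0, 0.0])     # top region
ground = np.array([0.0, 0.0, 1.0, 1.0])  # bottom region
G_sky = guided_gram(F, sky)
G_ground = guided_gram(F, ground)
# Different regions yield different style statistics.
assert G_sky.shape == (2, 2)
assert not np.allclose(G_sky, G_ground)
```

One guided Gram Matrix per region is then matched against the corresponding region statistics of the style image in the loss $E_\ell$.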
An important use for guidance channels is to ensure that style is transferred between regions of similar scene content in the content and style image. For example, Figure 2 shows an example in which the sky in the content image has bright clouds, whereas the sky in the style image has greyish clouds; as a result, the original style transfer stylises the sky with a bright part of the ground that does not match the appearance of the sky. We address this by dividing both images into a sky and a ground region (Fig. 2(a),(b) small insets) and require that the sky and ground regions from the painting are used to stylise the respective regions in the photograph (Fig. 2(e)).
Given the input guidance channel $T^r$, we need to first propagate this channel to produce guidance channels $T_\ell^r$ for each layer $\ell$. The most obvious approach would be to down-sample $T^r$
to the dimensions of each layer’s feature map. However, we often find that doing so fails to keep the desired separation of styles by region, e.g., ground texture still appears in the sky. This is because neurons near the boundaries of a guidance region can have large receptive fields that overlap into the other region. Instead we use an eroded version of the spatial guiding channels. We enforce spatial guidance only on the neurons whose receptive field is entirely inside the guidance region and add another global guidance channel that is constant over the entire image. We found that this soft spatial guidance usually yields better results. For further details on the creation of guidance channels, see the Supplementary Material, section 1.1.
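The erosion-plus-down-sampling step can be sketched as follows. This is an illustrative stand-in only: the `margin` parameter is a crude proxy for the receptive-field overlap at a given layer, the erosion is a naive loop rather than a morphology library call, and the nearest-neighbour down-sampling stands in for proper resampling.

```python
import numpy as np

def erode_and_downsample(mask, layer_size, margin):
    """Sketch of propagating a guidance channel to a CNN layer.

    mask: 2-D binary guidance channel at image resolution.
    layer_size: (h, w) of the target layer's feature maps.
    margin: pixels to erode, a proxy for how far a neuron's
    receptive field may reach across the region boundary.
    """
    h, w = mask.shape
    # Naive binary erosion: a pixel survives only if its whole
    # (2*margin+1)^2 neighbourhood lies inside the region.
    eroded = np.zeros_like(mask)
    for i in range(margin, h - margin):
        for j in range(margin, w - margin):
            if mask[i - margin:i + margin + 1,
                    j - margin:j + margin + 1].all():
                eroded[i, j] = 1
    # Nearest-neighbour down-sampling to the layer resolution.
    ys = (np.arange(layer_size[0]) * h) // layer_size[0]
    xs = (np.arange(layer_size[1]) * w) // layer_size[1]
    return eroded[np.ix_(ys, xs)]

mask = np.ones((8, 8), dtype=int)
mask[:, :4] = 0                        # left half is another region
guide = erode_and_downsample(mask, (4, 4), margin=1)
assert guide.shape == (4, 4)
# Erosion removed neurons whose receptive field crosses the boundary.
assert guide.sum() < mask[::2, ::2].sum()
```

In the full method, an additional constant global guidance channel covers the neurons excluded by the erosion, giving the soft guidance described above.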
Another application of this method is to generate a new style by combining the styles from multiple example images. Figure 2(f) shows an example in which the region guidance is used to use the sky style from one image and the ground style from another. This example demonstrates the potential of spatial guidance to combine many example styles together to produce new stylisations.
Alternatively, instead of computing a Gram Matrix for each guidance channel, we can also just stack the guidance channels with the feature maps, as has been done to spatially guide neural patches. The feature representation of image $\vec{x}$ in layer $\ell$ is then $\tilde{F}_\ell(\vec{x}) = \left[ F_\ell(\vec{x}), T_\ell^1, \ldots, T_\ell^R \right]$ and $\tilde{G}_\ell(\vec{x}) = \tilde{F}_\ell(\vec{x})^T \tilde{F}_\ell(\vec{x})$. Now the Gram Matrix includes correlations of the image features with the non-zero entries of the guidance channels and therefore encourages that the features in region $r$ of the style image are used to stylise region $r$ in the content image. The contribution of layer $\ell$ to the style loss is simply

$E_\ell = \frac{1}{4 N_\ell^2} \sum_{i,j} \left( \tilde{G}_\ell(\vec{x}) - \tilde{G}_\ell(\vec{x}_S) \right)_{ij}^2$
This is clearly more efficient than the method presented in Section 4.1. Instead of computing and matching $R$ Gram Matrices, one only has to compute a single Gram Matrix with $R$ additional channels. Nevertheless, this gain in efficiency comes at the expense of texture quality. The additional channels in the new Gram Matrix are the sums over each feature map spatially weighted by the guidance channel.
Hence this method actually interpolates between matching the original global Gram Matrix stylisation and the spatially weighted sums over the feature maps. While the feature map sums also give a non-trivial texture model, their capacity to model complex textures is limited. In practice we find that this method can often give decent results but also does not quite capture the texture of the style image – as would be expected from the inferior texture model. Results and comparisons can be found in the Supplementary Material, section 1.2.
The colour information of an image is an important perceptual aspect of its style. At the same time it is largely independent of other style aspects such as the type of brush strokes used or dominating geometric shapes. Therefore it is desirable to independently control the colour information in Neural Style Transfer. A prominent use case for such control is colour preservation during style transfer. When stylising an image using Neural Style Transfer, the output also copies the colour distribution of the style image, which might be undesirable in many cases (Fig. 3(c)). For example, the stylised farmhouse has the colours of the original van Gogh painting (Fig. 3(c)), whereas one might prefer the output painting to preserve the colours of the farmhouse photograph. In particular, one might imagine that the artist would have used the colours of the scene if they were to paint the farmhouse. Here we present two simple methods to preserve the colours of the source image during Neural Style Transfer — in other words, to transfer the style without transferring the colours. We compare two different approaches to colour preservation: colour histogram matching and luminance-only transfer (Fig. 3(d,e)).
In the first method we perform style transfer only in the luminance channel, as done in Image Analogies. This is motivated by the observation that visual perception is far more sensitive to changes in luminance than in colour.
The modification is simple. The luminance channels $L_S$ and $L_C$ are first extracted from the style and content images. Then the Neural Style Transfer algorithm is applied to these images to produce an output luminance image $\hat{L}$. Using a colour space that separates luminance and colour information, the colour information of the content image is combined with $\hat{L}$ to produce the final colour output image (Fig. 3(d)).
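The recombination step can be sketched in a few lines, assuming a YIQ colour space as one common choice of luminance/colour separation (the function name is illustrative, not from the paper):

```python
import numpy as np

# Approximate RGB <-> YIQ matrices (NTSC); Y is the luminance channel.
RGB2YIQ = np.array([[0.299, 0.587, 0.114],
                    [0.596, -0.274, -0.322],
                    [0.211, -0.523, 0.312]])
YIQ2RGB = np.linalg.inv(RGB2YIQ)

def combine_luminance(lum_out, content_rgb):
    """Combine a stylised luminance image with the content's colours.

    lum_out: (H, W) stylised luminance; content_rgb: (H, W, 3).
    The content image is converted to YIQ, its Y channel is replaced
    by the stylised luminance, and the result converted back to RGB.
    """
    yiq = content_rgb @ RGB2YIQ.T
    yiq[..., 0] = lum_out
    return yiq @ YIQ2RGB.T

content = np.random.rand(4, 4, 3)
lum = content @ RGB2YIQ.T[:, 0]   # the content's own luminance
out = combine_luminance(lum, content)
# Re-inserting the unmodified luminance must reproduce the image.
assert np.allclose(out, content, atol=1e-6)
```

In the actual pipeline, `lum_out` would be the luminance image $\hat{L}$ produced by running style transfer on $L_C$ and $L_S$.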
If there is a substantial mismatch between the luminance histogram of the style and the content image, it can be helpful to match the histogram of the style luminance channel to that of the content image
before transferring the style. For that we simply match mean and variance of the content luminance. Let $\mu_S$ and $\mu_C$ be the mean luminances of the two images, and $\sigma_S$ and $\sigma_C$ be their standard deviations. Then each luminance pixel in the style image is updated as:

$L_S' = \frac{\sigma_C}{\sigma_S} \left( L_S - \mu_S \right) + \mu_C$
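This mean-and-variance matching is a one-liner in practice (`match_luminance` is an illustrative name):

```python
import numpy as np

def match_luminance(lum_style, lum_content):
    """Match mean and variance of the style luminance to the content.

    Implements L_S' = (sigma_C / sigma_S) * (L_S - mu_S) + mu_C.
    """
    mu_s, sigma_s = lum_style.mean(), lum_style.std()
    mu_c, sigma_c = lum_content.mean(), lum_content.std()
    return (sigma_c / sigma_s) * (lum_style - mu_s) + mu_c

style = np.random.rand(8, 8) * 0.5 + 0.4   # brighter style luminance
content = np.random.rand(8, 8) * 0.2       # darker content luminance
matched = match_luminance(style, content)
assert np.isclose(matched.mean(), content.mean())
assert np.isclose(matched.std(), content.std())
```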
The second method we present works as follows. Given the style image $\vec{x}_S$ and the content image $\vec{x}_C$, the style image’s colours are transformed to match the colours of the content image. This produces a new style image $\vec{x}_S'$ that replaces $\vec{x}_S$ as input to the Neural Style Transfer algorithm. The algorithm is otherwise unchanged.
The one choice to be made is the colour transfer procedure. There are many colour transformation algorithms to choose from; see the literature for a survey. Here we use linear methods, which are simple and effective for colour style transfer.
Given the style image, each RGB pixel $\vec{p}$ is transformed as:

$\vec{p}' = A \vec{p} + \vec{b}$
where $A$ is a $3 \times 3$ matrix and $\vec{b}$ is a 3-vector. This transformation is chosen so that the mean and covariance of the RGB values in the new style image match those of the content image (Appendix B). In general, we find that the colour matching method works reasonably well with Neural Style Transfer (Fig. 3(e)), whereas it gave poor synthesis results for Image Analogies. Furthermore, the colour histogram matching method can also be used to better preserve the colours of the style image. This can substantially improve results for cases in which there is a strong mismatch in colour but one rather wants to keep the colour distribution of the style image (for example with pencil drawings or line art styles). Examples of this application can be found in the Supplementary Material, section 2.2.
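The linear transform can be sketched as follows. This is a minimal illustration of one common choice of $A$, built from Cholesky factors of the pixel covariances, not necessarily the exact construction used in the paper's Appendix B (a matrix square root is another valid choice).

```python
import numpy as np

def match_colour(style_rgb, content_rgb):
    """Linear colour transfer p' = A p + b.

    A is chosen so that the transformed style pixels have the mean
    and covariance of the content pixels: with covariances
    Sigma = L L^T (Cholesky), A = L_C L_S^{-1} gives
    A Sigma_S A^T = Sigma_C exactly.
    """
    Xs = style_rgb.reshape(-1, 3)
    Xc = content_rgb.reshape(-1, 3)
    mu_s, mu_c = Xs.mean(axis=0), Xc.mean(axis=0)
    Ls = np.linalg.cholesky(np.cov(Xs, rowvar=False))
    Lc = np.linalg.cholesky(np.cov(Xc, rowvar=False))
    A = Lc @ np.linalg.inv(Ls)
    b = mu_c - A @ mu_s
    return (Xs @ A.T + b).reshape(style_rgb.shape)

rng = np.random.default_rng(0)
style = rng.random((16, 16, 3))
content = rng.random((16, 16, 3)) * 0.5
out = match_colour(style, content)
X = out.reshape(-1, 3)
assert np.allclose(X.mean(axis=0), content.reshape(-1, 3).mean(axis=0))
assert np.allclose(np.cov(X, rowvar=False),
                   np.cov(content.reshape(-1, 3), rowvar=False))
```

Because the transform is linear, the match of mean and covariance is exact; what it cannot fix is a mismatch in the higher-order shape of the colour distributions, which is the limitation discussed below.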
In conclusion, both methods give perceptually-interesting results but have different advantages and disadvantages. The colour-matching method is naturally limited by how well the colour transfer from the content image onto the style image works. The colour distribution often cannot be matched perfectly, leading to a mismatch between the colours of the output image and that of the content image.
In contrast, the luminance-only transfer method preserves the colours of the content image perfectly. However, dependencies between the luminance and the colour channels are lost in the output image. While we found that this is usually very difficult to spot, it can be a problem for styles with prominent brushstrokes since a single brushstroke can change colour in an unnatural way. In comparison, when using full style transfer and colour matching, the output image really consists of strokes which are blotches of paint, not just variations of light and dark. For a more detailed discussion of colour preservation in Neural Style Transfer we refer the reader to the Supplementary Material, section 2.1.
In this section, we describe methods for mixing different styles at different scales and efficiently generating high-resolution output with style at desired scales.
First we introduce a method to control the stylisation independently on different spatial scales. Our goal is to pick separate styles for different scales. For example, we want to combine the fine-scale brushstrokes of one painting (Fig. 4(b), Style I) with the coarse-scale angular geometric shapes of another image (Fig. 4(b), Style II).
We define the style of an image at a certain scale as the distribution of image structures in image neighbourhoods of a certain size. In that sense, the colour separation introduced in the previous section can be thought of as a special case of scale separation, since image colours are “structures” on one-pixel neighbourhoods. To model image style on larger scales, we use the Gram Matrices from different layers in the CNN. In particular, a Gram Matrix computed at layer $\ell$ represents the second-order statistics of image neighbourhoods whose size corresponds to the receptive field size of that layer.
Unfortunately, this representation is not factorised over scale. In general, a Gram Matrix at a given spatial scale also captures much of the image information on smaller spatial scales and thus shares a lot of information with the Gram Matrix at a lower layer in the CNN (see Supplementary Material, section 3.1 for more details). Therefore, simply combining Gram Matrices from different scales of different images does not give independent control over the different scales.
Here we show a way to combine scales that avoids this problem. We first create a new style image that combines fine-scale information from one image with coarse-scale information from another (Fig. 4(c)). We then use the new style image in the original Neural Style Transfer. We do this by applying Neural Style Transfer from the fine-scale style image to the coarse-scale style image, using only the Gram Matrices from lower layers in the CNN (e.g., only layers “conv1_1” and “conv2_1” in Fig. 4). We initialise the optimisation procedure with the coarse-style image and omit the content loss entirely, so that the fine-scale texture from the coarse-style image will be fully replaced. This is based on the observation that the optimisation leaves image structures intact when they are of larger scale than the style features. While this is not guaranteed, as it depends on the optimiser, we empirically find it to be effective for the L-BFGS method typically used in Neural Style Transfer. The resulting images (Fig. 4(c)) are used as the input to the original Neural Style Transfer to generate new stylisations of the cityscape photograph. For example, we combine the fine scale of Style I with the coarse scale of Style II to re-paint the angular cubistic shapes in Fig. 4(d) with pronounced brushstrokes (Fig. 4(e)). Or we combine the fine scale of Style II with the coarse scale of Style III to replace the angular shapes by round structures, giving the image a completely different “feel” (compare Fig. 4(d) with Fig. 4(f)).
This method enables the creation of a large set of perceptually appealing, new styles by recombining existing ones in a principled way. It also allows for interesting new ways to interpolate between styles by interpolating across spatial scales. For more examples of new styles and results of interpolating between styles, we refer the reader to the Supplementary Material, sections 3.2 and 3.3.
The existing Neural Style Transfer method does not work well for high-resolution outputs. Since the receptive fields in a CNN have a fixed size, the stylisation outcome depends on the resolution of the input images: stylisation happens only up to the scale of the receptive fields in the output. In practice, we find that for the VGG-19 network there is a sweet spot for the size of the input images, such that the stylisation is appealing but the content is well-preserved (Fig. 5(b)). For a high-resolution image, however, the receptive fields are typically very small compared to the image, and so only very small-scale structures are stylised (Fig. 5(d)).
Here we show that the same scale separation principle from the previous section can be used in order to produce high-resolution outputs with large-scale stylisation. We are given high-resolution content and style images $\vec{x}_C$ and $\vec{x}_S$, both having the same size with $N$ pixels in total. We down-sample each image by a factor $k$ such that the down-sampled size corresponds to the desired stylisation resolution (e.g., the sweet spot for VGG described above), and then perform stylisation. The output is now a low-resolution image of size $N/k^2$. We can then produce high-resolution output from this image by up-sampling the low-resolution output to $N$ pixels, and use this as initialisation for Neural Style Transfer with the original input images $\vec{x}_C$ and $\vec{x}_S$. The style features now capture and can fill in the high-resolution information from the style image while leaving the coarse-scale stylisation intact (Fig. 5(c)).
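The two-stage procedure can be sketched as follows, with a placeholder `stylise` callable standing in for the full Neural Style Transfer optimisation (which is far beyond a short example); the resize helper is a crude nearest-neighbour stand-in for proper resampling.

```python
import numpy as np

def naive_resize(img, h, w):
    """Nearest-neighbour resize, standing in for proper resampling."""
    ys = (np.arange(h) * img.shape[0]) // h
    xs = (np.arange(w) * img.shape[1]) // w
    return img[np.ix_(ys, xs)]

def coarse_to_fine(content, style, stylise, k=2):
    """Two-stage high-resolution stylisation sketch.

    `stylise(content, style, init)` stands in for a Neural Style
    Transfer routine (not implemented here). Stage 1 stylises at
    low resolution; stage 2 is initialised with the up-sampled
    result, so it only needs to fill in fine-scale detail.
    """
    H, W = content.shape[:2]
    small = stylise(naive_resize(content, H // k, W // k),
                    naive_resize(style, H // k, W // k),
                    init=None)
    init = naive_resize(small, H, W)
    return stylise(content, style, init=init)

# Identity "styliser" just to exercise the pipeline's shapes.
fake = lambda c, s, init: c if init is None else init
content = np.random.rand(8, 8)
style = np.random.rand(8, 8)
out = coarse_to_fine(content, style, fake)
assert out.shape == (8, 8)
```

Because the second stage starts close to the desired solution, it converges in far fewer iterations than stylising the high-resolution images from scratch.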
This coarse-to-fine procedure has the additional advantage of requiring fewer iterations in the high-resolution optimisation and thus increasing efficiency. In our experiments we used several times fewer iterations for the high-resolution optimisation. We also noticed that this technique effectively removes the low-level noise that is typical for neural image synthesis. In fact, all figures shown in this paper, except for Fig. 6, were enhanced to high resolution in this way. The low/high-resolution pairs can be found in the Supplement. Applying this technique iteratively also enables the generation of very high-resolution images, limited only by the size of the input images and the available memory.
A major drawback of Neural Style Transfer is that image generation is relatively slow. Recently, a number of works have shown that one can train a feed-forward CNN to perform stylisation [13, 23, 17]. We now show how to apply the spatial and colour control described above to these Fast Neural Style Transfer methods. Applying scale control to Fast Neural Style Transfer is trivial, as it entails simply training on the new style image that combines multiple scales. We use Johnson’s excellent publicly available implementation of Fast Neural Style Transfer (github.com/jcjohnson/fast-neural-style). The networks we train all use the well-tuned default parameters in that implementation, including Instance Normalization (for details see Supplementary Material, section 4). For comparability, and to stay in the domain of styles that give good results with Fast Neural Style Transfer, we use the styles published with that implementation.
The simplest way to preserve the colour of the input image is to just use an existing feed-forward stylisation network, and then combine the luminance channel of the stylisation with the colour channels of the content image (Fig. 6(c)). An alternative is to train the feed-forward network exclusively with the luminance channels of the style and content images. This network then produces a luminance image that can be combined with the colour channels from the input content image (Fig. 6(d)). For both methods we match the mean luminance of the output image to that of the content image. In general, we find that colour preservation with the luminance network better combines stylisation with structures in the content image (Fig. 6(c),(d)).
We now describe training a feed-forward network to apply different styles to different regions. We show that this can be done with a surprisingly small modification to Johnson’s training procedure, which we illustrate with the following example. We create the style image by vertically concatenating the Candy and Feathers images shown in Fig. 6(b). Two additional binary guidance channels are added to the style image, i.e., one for the top of the image and one for the bottom. The style loss function is based on the guided Gram Matrices (Eq. 7). During training, the feed-forward network takes as input the content image and two guidance channels. The input guidance channels are passed to the loss network to evaluate the spatially guided losses. Surprisingly, we find that the guidance channels can be kept constant during training: we required the feed-forward network to always stylise the lower half of the image with one style and the upper half with another. However, the network robustly learns the correspondence between guidance channels and styles, so that at test time we can pass arbitrary masks to the feed-forward network to spatially guide the stylisation (Fig. 6(f)-(h)). By providing an automatically generated figure-ground segmentation, we can create an algorithm that performs fast spatially-varying stylisation automatically (Fig. 6(g),(h)).
In this work, we introduce intuitive ways to control Neural Style Transfer. We hypothesise that image style includes factors of space, colour, and scale, and present ways to access these factors during stylisation to substantially improve the quality and flexibility of the existing method.
One application of the control methods we present is to combine styles in an interpretable fashion. This contrasts with the alternative approach of combining styles by linearly interpolating in the style representation as, for example, is done in the concurrent work of Dumoulin et al. A possible concern with that approach is that if the directions in the style representation do not correspond to perceptual variables, it becomes difficult to generate appealing new styles. Still, even with our methods the selection of which inputs to combine for aesthetically pleasing results can be challenging. An exciting open research question is to predict what combinations of styles will combine nicely into new, perceptually pleasing styles.
Neural Style Transfer is particularly appealing because it can create new image structures based on the source images. This flexibility arises from the representation of style in terms of spatial summary statistics, in contrast to patch-based methods [12, 20, 6]. However, because it is not clear how the perceptual aspects of style are represented in the summary statistics, it is hard to achieve meaningful parametric control over the stylisation. For that it may be necessary to encourage appropriate factorisations of the CNN representations during network training, for example, to learn representations that factorise the image information over spatial scales. In fact, this touches a fundamental research question in machine vision: to obtain interpretable yet powerful image representations that decompose images into the independent factors of human visual perception.
Perceptual Losses for Real-Time Style Transfer and Super-Resolution. In B. Leibe, J. Matas, N. Sebe, and M. Welling, editors, Computer Vision – ECCV 2016, Lecture Notes in Computer Science, vol. 9906, pages 694–711. Springer International Publishing, Oct. 2016. DOI: 10.1007/978-3-319-46475-6_43.