Style transfer renders the content of a real photograph in the style of an artist using either a single style sample [gatys2016] or a set of images [sanakoyeu2018styleaware]. Initial work on style transfer by Gatys et al. [gatys2015neural] proposed a method which exploits a deep CNN (Convolutional Neural Network) pretrained on a large dataset of natural images. Their costly optimization process has since been replaced by encoder-decoder architectures [johnson, sanakoyeu2018styleaware, dumoulin2016learned, Babaeizadeh2018AdjustableRS, Gupta2017CharacterizingAI] that generate the stylized output in a single, efficient feed-forward pass. While [johnson] has shown that an encoder-decoder architecture is both fast and effective for transferring style, it acts as a black-box model, lacking interpretability and accurate control of style injection: content transformation is performed indirectly, so there is no explicit control over which part of the network carries out the stylization of photos and to what extent. To address this issue, [sanakoyeu2018styleaware] introduced a fixpoint loss which ensures that the stylization has converged and reached a fixpoint after one feed-forward pass. This style-aware content loss forces the stylization to take place in the decoder. However, the main issue remains: the decoder simultaneously alters style, synthesizes the stylized image, and upsamples it, and these individual tasks cannot be learned or controlled individually.
As a remedy, we introduce a novel content transformation block between encoder and decoder allowing control over stylization and achieving a style-aware editing of content images. We force the encoder to explicitly extract content information; the content transformation block then modifies the content information in a manner appropriate to the artist’s style. Eventually the decoder superimposes the style on the altered content representation. Our approach measures the content similarity between the content target image and stylized image before and after the transformation.
In contrast to previous work, stylization should be object-specific: the style transformation needs to adapt to the underlying object. The Cubist style of Picasso, for example, tends to reduce the human nose to a simple triangle or distorts the location of the eyes. Therefore, we further investigate whether we can achieve an object-specific alteration. We utilize similar content appearing in photographs and style samples to learn how style alters content details. We show that by using a prominent, complex, and diverse object class, i.e., persons, our model can learn how details are to be altered in a content- and style-aware manner. Moreover, the model learns to generalize beyond this one particular object class to diverse content. This is crucial for stylizing modern objects, such as computers, which an artist like Monet never painted. In addition, we propose a local feature normalization layer to reduce the number of artifacts in stylized images, significantly improving results when moving to other image collections (i.e., from Places365 [zhou2017places] to ImageNet [ImageNet]) and when increasing the image resolution. To validate the performance of our approach, we perform various qualitative and quantitative evaluations of stylized images and also demonstrate the applicability of our method to videos. Additional results can be found on the project page.
2 Related Work
Texture synthesis Neural networks have long been used for texture synthesis [gatys2015texture]; feed-forward networks later enabled fast synthesis, but these methods often lack diversity and quality [johnson, ulyanov2016texture]. To circumvent this issue, [li2017diversified] propose a deep generative feed-forward network which can synthesize multiple textures within a single network. [gatys2017controlling] demonstrated how control over spatial location, color, and spatial scale leads to enhanced stylized images in which different regions are altered by different styles; control over style transfer has further been extended to stroke sizes [jing2018stroke]. [risser2017stable] used a multiscale synthesis pipeline for spatial control and to improve texture quality and stability.
Separating content and style The integration of localized style losses improved the separation of content and style. In order to separate and recombine style and content in an image, prior work has utilized low-level features for texture transfer and high-level information to represent content using neural networks [gatys2016]. [collomosse2017sketching, bautista2016cliquecnn, patrick_esser, wilber2017bam] focused on distinguishing between different contents, styles, and techniques in the latent space. Translating an image into another image is a vision problem whose mapping between input and output typically relies on aligned pairs. To avoid the need for paired examples, [zhu2017unpaired] presented an adversarial loss coupled with a cycle consistency loss to learn a mapping between two unpaired image collections. On the basis of [zhu2017unpaired], [sanakoyeu2018styleaware] proposed an approach where a style-aware content loss helps to focus on those content details relevant for a style. A combination of generative Markov random field (MRF) models and deep convolutional neural networks has been used for the task of synthesizing content of photographs and artworks [li2016combining].
Real-time and super-resolution
The processing time of style transfer and the resolution of stylized images have also been addressed. Prior work achieves stylization in real time and in super-resolution using an unsupervised training approach, where the loss function is computed from neural network features and statistics [johnson], or where a multiscale network is employed [ulyanov2016texture]. To achieve better quality for stylized images in high resolution, [wang2017multimodal] propose a multimodal convolutional network which performs hierarchical stylization by utilizing multiple losses of increasing scales.
Stylizing videos While these works have approached the task of style transfer for input photographs, others have concentrated on transferring artistic style to videos [ruder2016artistic, huang2017real, sanakoyeu2018styleaware, ruder2018artistic], using feed-forward style transfer networks [chen2017stylebank] or networks that do not rely on optical flow at test time [huang2017real] to improve the consistency of the stylization.
3 Method
Let $Y$ be a collection of images that defines a style. We extract the very essence of the artistic style presented in $Y$ and learn to transfer it onto images from a different dataset $X$, such as photos. This formulation resembles a typical unsupervised image translation problem, which requires a generator $G$ (usually consisting of an encoder $E$ and a decoder $D$) and a discriminator $D_s$ trained against each other: one mimics the target distribution $Y$, the other one distinguishes between an authentic sample $y \in Y$ and a stylized sample $D(E(x))$ for $x \in X$. Hence, we can extract the style by solving the min-max optimization task for the standard adversarial loss:
$$\min_{E,D}\max_{D_s} \; \mathcal{L}_{adv} = \mathbb{E}_{y \sim p_Y}\big[\log D_s(y)\big] + \mathbb{E}_{x \sim p_X}\big[\log\big(1 - D_s(D(E(x)))\big)\big].$$
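The standard adversarial loss referred to above can be evaluated numerically. A minimal numpy sketch (function and argument names are ours; discriminator outputs are assumed to be probabilities in (0, 1)):

```python
import numpy as np

def adversarial_loss(d_real, d_fake, eps=1e-8):
    """Value of the standard GAN objective.

    d_real: discriminator outputs D_s(y) on real style samples.
    d_fake: discriminator outputs D_s(D(E(x))) on stylized photographs.
    The discriminator maximizes this value; encoder and decoder minimize it.
    """
    d_real = np.asarray(d_real, dtype=float)
    d_fake = np.asarray(d_fake, dtype=float)
    # eps guards the logarithm against inputs of exactly 0 or 1
    return np.mean(np.log(d_real + eps)) + np.mean(np.log(1.0 - d_fake + eps))
```

A perfect discriminator (confident on real, dismissive on fake) drives the value toward 0, while a maximally confused one (0.5 everywhere) yields $2\log 0.5 \approx -1.386$.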
Let $c$ be additional content information that is easily available; i.e., we utilize a simple coarse scene label of the image $x$. Now the discriminator should not only discern real from synthesized art, it should also enforce that the scene information is retained in $D(E(x))$ by the stylization process,
$$\min_{E,D}\max_{D_s} \; \mathcal{L}_{cadv} = \mathbb{E}_{(y,c) \sim p_Y}\big[\log D_s(c \mid y)\big] + \mathbb{E}_{(x,c) \sim p_X}\big[\log\big(1 - D_s(c \mid D(E(x)))\big)\big].$$
In contrast to a GAN framework that generates an image from a random vector, style transfer not only requires stylizing a real input photograph but also retaining the content of the input image after stylization. The simplest solution would be to enforce a per-pixel similarity between the input $x$ and the stylized image $D(E(x))$:
$$\mathcal{L}_{px} = \mathbb{E}_{x \sim p_X}\big[\lVert D(E(x)) - x \rVert_2^2\big].$$
However, this loss alone would counter the task of stylization, since the image should not remain the same on a per-pixel basis. Previous work [johnson, gatys2015neural] has utilized a pretrained perceptual loss [vgg]. Since this loss is pretrained on an image dataset unrelated to any specific style, it cannot account for the characteristic way in which an artist alters content. Rather, we enforce the stylization to have reached a fixpoint, meaning that another round of stylization should not further alter the content. The resulting fixpoint loss measures the residual in the style-specific encoding space $E$,
$$\mathcal{L}_{FP} = \mathbb{E}_{x \sim p_X}\big[\lVert E(D(E(x))) - E(x) \rVert_2^2\big].$$
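The fixpoint idea can be made concrete with a short sketch: the encoding of the stylized image should match the encoding of the input, so a second stylization pass would not further alter the content. Here `encode` and `decode` are stand-ins for the trained networks $E$ and $D$ (an assumption: any callables on arrays):

```python
import numpy as np

def fixpoint_loss(encode, decode, x):
    """Residual between the encoding of the stylized image and the
    encoding of the input, measured in the style-specific space E."""
    z = encode(x)          # content encoding E(x)
    stylized = decode(z)   # stylized image D(E(x))
    residual = encode(stylized) - z
    return float(np.mean(residual ** 2))
```

If `decode` exactly inverts `encode`, one more encode pass reproduces $E(x)$ and the loss vanishes; any mismatch between the two shows up as a positive residual.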
3.1 Content Transformation Block
While a painting of an artist is associated with one style, it is noticeable that style affects image regions differently: to emphasize the importance of an individual object, artists use a more expressive brushstroke or deform it to a higher degree. Therefore, we do not only want to learn a simple stylization but a content-specific stylization: each content detail must be stylized in a manner specific to its particular content category. This means that a stylized human figure should resemble how the artist has painted a figure in his specific style, and not an arbitrary object such as a vase or a chair. We enforce this capability by pulling images of similar content – but from different domains (art and photographs) – closer to each other in the latent space, while keeping images of dissimilar content apart. To be more specific, we force the content representation of an input photograph belonging to a specific class to become more similar to the content representation of paintings of the same class. To achieve this, we introduce a content transformation block $T$ transforming the output representation of the encoder $E$. We train this block in an adversarial fashion: a discriminator $D_c$ has to distinguish between the content representation of real artworks and the transformed content representation of input photographs. Since we strive to obtain a content-specific stylization, the discriminator $D_c$ also has to classify the content class $c_y$ of the artwork $y$ and the content class $c_x$ of the input photograph $x$. Supplied with the content information, the discriminator becomes more sensitive to content-specific visual clues and enforces the content transformation block to mimic them in an artistic way.
In terms of neural architecture, $T$ is a concatenation of nine residual blocks. Each block consists of six consecutive layers with a skip connection: conv-layer, LFN-layer, LReLU-activation, conv-layer, LFN-layer, LReLU-activation.
3.2 Local Feature Normalization Layer
Many approaches using convolutional networks for image synthesis suffer from a change of domain (e.g., from photos of landscapes to faces) or a change of the synthesis resolution. As a result, the inference size is often identical to the training size, or the visual quality of the results deteriorates when switching to another domain. The reason is that instance normalization layers overfit to image statistics, so the layer cannot generalize to other images. We improve the ability to generalize by enforcing stronger normalization through our local feature normalization layer. This layer normalizes the input tensor across a group of channels and also acts locally, not seeing the whole tensor but only the vicinity of each spatial location. Formally, for an input tensor $T \in \mathbb{R}^{N \times H \times W \times C}$, where $N$ stands for the number of samples, $H$ for the height, $W$ for the width, and $C$ for the number of channels, we define a Local Feature Normalization layer (LFN) with parameters $WS$, denoting the spatial resolution of the normalization window, and $G$, the number of channels across which we normalize.
To simplify the notation, we first define a subset of the tensor around position $(x, y, c)$ with a spatial window of size $WS \times WS$ and across a group of $G$ neighbouring channels:
$$T_{sub}(n, x, y, c) := \Big\{ T_{n, h, w, k} \;\Big|\; |h - y| \le \tfrac{WS}{2},\; |w - x| \le \tfrac{WS}{2},\; \big\lfloor \tfrac{c}{G} \big\rfloor G \le k < \big(\big\lfloor \tfrac{c}{G} \big\rfloor + 1\big) G \Big\}.$$
Finally, we can write out the expression for the Local Feature Normalization layer applied to tensor $T$ as:
$$LFN(T)_{n, y, x, c} := \gamma_c \, \frac{T_{n, y, x, c} - \mathrm{mean}\big(T_{sub}(n, x, y, c)\big)}{\mathrm{std}\big(T_{sub}(n, x, y, c)\big)} + \beta_c.$$
In this equation, similar to the Instance Normalization layer [instancenorm], $\gamma$ and $\beta$ denote vectors of trainable parameters which represent how to scale and shift each channel; they are learned jointly with the other weights of the network via back-propagation. In practice, however, the computation of the mean and std of a large tensor is laborious, so we compute these values only at selected locations and interpolate for the others.
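A minimal numpy sketch of the LFN layer described above, under our assumptions: NHWC layout, zero-size windows clipped at the borders, and exact statistics computed at every location (the interpolation from a sparse grid used for speed is omitted); function and parameter names are ours:

```python
import numpy as np

def local_feature_norm(t, ws, g, gamma=None, beta=None, eps=1e-5):
    """Normalize each entry of t (N, H, W, C) by the mean/std of a
    ws x ws spatial window around it, taken across its group of g
    neighbouring channels. gamma/beta are per-channel scale/shift."""
    n_, h, w, c = t.shape
    gamma = np.ones(c) if gamma is None else gamma
    beta = np.zeros(c) if beta is None else beta
    out = np.empty_like(t, dtype=float)
    r = ws // 2
    for n in range(n_):
        for y in range(h):
            for x in range(w):
                for ch in range(c):
                    grp = (ch // g) * g  # start of the channel group
                    sub = t[n,
                            max(0, y - r):y + r + 1,
                            max(0, x - r):x + r + 1,
                            grp:grp + g]  # local neighbourhood
                    mu, sd = sub.mean(), sub.std()
                    out[n, y, x, ch] = gamma[ch] * (t[n, y, x, ch] - mu) / (sd + eps) + beta[ch]
    return out
```

Because the statistics are local, a constant region is mapped to the shift $\beta$ regardless of its absolute intensity, which is exactly the stronger normalization the layer is meant to provide.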
3.3 Training Details
The training dataset is the union of the Places365 dataset [zhou2017places] and the COCO dataset [COCO]: we form tuples $(x, c)$ where $x$ is a photograph and $c$ is a scene class if $x$ is from the Places365 dataset, or a content class if $x$ is from the COCO dataset. The second dataset contains tuples $(y, c)$ where $y$ is an artwork and $c$ is its content class. We focus on the content class “person” and a negative class “non-person”. The generator network consists of the encoder $E$, the transformation block $T$, and the decoder $D$. We utilize two conditional discriminators, $D_s$ and $D_c$: the former is applied to the input images and stylized outputs, the latter to the content representations obtained by the encoder $E$. Given this notation the losses become
Training procedure Let $\theta_E$, $\theta_T$, and $\theta_D$ denote the parameters of the blocks $E$, $T$, and $D$. Training is performed in two alternating optimization steps.
The first step is designated to obtain an accurate content extraction in the encoder $E$ and to learn a convincing style injection by the decoder $D$.
The second step is aimed at learning style-specific content editing by the block $T$.
Please see Figure 3 illustrating the alternating steps of the training.
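The alternation between the two optimization steps can be sketched as a simple schedule (the 1:1 interleaving ratio is our assumption; block names follow the text: encoder E, decoder D, transformation block T):

```python
def training_schedule(num_iters):
    """Yield the parameter groups updated at each iteration:
    step 1 trains E and D (content extraction and style injection),
    step 2 trains T (style-specific content editing)."""
    for it in range(num_iters):
        yield ['E', 'D'] if it % 2 == 0 else ['T']
```

Keeping the two steps separate means the gradients for style injection never directly update the content-editing block, and vice versa, which is what makes the roles of the blocks controllable.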
4 Experiments and Discussion
4.1 Stylization Assessment
To measure the quality of the generated stylizations we provide qualitative results of our approach and perform several quantitative experiments which we describe below.
Deception rate. This metric was introduced in [sanakoyeu2018styleaware] to assess how well the target style characteristics are preserved in the generated stylizations. A network pretrained for artist classification should predict the artist whose style was used to generate the stylization. The deception rate is then calculated as the fraction of times the network predicted the correct artist. We report the deception rate for our method and for competing methods in the first column of Tab. 1, which shows that our approach outperforms the other methods by a significant margin.
Expert and non-expert score. We also perform human evaluation studies to highlight the quality of our stylization results. Given a content image patch, we stylize it with different methods and show results alongside a patch from a real painting to experts and non-experts. Both are asked to guess which one of the shown patches is real. The score is the fraction of times the stylization generated by this method was selected as the real patch. This experiment is performed with experts from art history and people without art education. Results are reported in Tab. 1.
Expert preference score. In addition, we asked art historians to choose which of the stylized images resembles the style of the target artist most closely. The expert preference score (see Tab. 1) is then calculated as the fraction of times a method's stylization was selected as the best. The quantitative results in Tab. 1 show that both experts and non-experts prefer our stylizations over images obtained by the other methods.
Content retention evaluation. To quantify how well the content of the original image is preserved, we stylize the ImageNet [ImageNet] validation dataset with different methods and compute the accuracy using pretrained VGG-16 [vgg] and ResNet-152 [resnet] networks averaged across artists. Results presented in Tab. 2 show that the best classification score is achieved on stylizations by CycleGAN [cyclegan] and Gatys et al. [gatys2016], since both methods barely alter the content of the image. However, our main contribution is that we significantly outperform the state-of-the-art AST [sanakoyeu2018styleaware] model on the content preservation task, while still providing more convincing stylization results, measured by the deception rate in Tab. 1.
Qualitative comparison. We compare our method qualitatively with existing approaches in Fig. 5. The reader may also try to guess between real and fake patches generated by our model in Fig.4. More qualitative comparisons between our approach and other methods are available in the supplementary material.
Tab. 1 (excerpt):
| Method | Deception rate | Expert score | Non-expert score | Expert preference |
| Johnson et al. [johnson] | 0.087 | 0.013 | 0.001 | 0.010 |
| Gatys et al. [gatys2016] | 0.221 | 0.088 | 0.068 | 0.118 |
4.2 Ablation Study
4.2.1 Content Transformation
Relative style-specific content distance. To verify that the image content is transformed in a style-specific manner, we introduce a quantitative measure, called relative style-specific content distance (RSSCD). It measures the ratio between the average distance of the generated stylizations to their closest artworks and the average distance between the artworks. Distances are computed using the features $\phi$ of a classification CNN pretrained on ImageNet. Then, RSSCD is defined as
$$\mathrm{RSSCD} := \frac{\frac{1}{|S^+|}\sum_{s \in S^+} \min_{y \in Y^+} \lVert \phi(s) - \phi(y) \rVert_2}{\frac{1}{|Y^+|\,|Y^-|}\sum_{y^+ \in Y^+}\sum_{y^- \in Y^-} \lVert \phi(y^+) - \phi(y^-) \rVert_2},$$
where $S^+$ denotes the set of stylizations of the positive content class (e.g., person), $Y^+$ denotes the set of artworks of the positive content class, and $Y^-$ denotes all other artworks (see Fig. 6 for an illustration).
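A numpy sketch of the RSSCD computation under our reading of the text: the average distance from each positive-class stylization to its nearest positive-class artwork, divided by the average distance between positive-class and other artworks. Feature extraction (e.g., from a pretrained CNN) is assumed to have happened already; inputs are (n, d) feature arrays:

```python
import numpy as np

def rsscd(stylized_feats, pos_art_feats, other_art_feats):
    """Relative style-specific content distance (sketch)."""
    def pdist(a, b):
        # pairwise Euclidean distances between rows of a and rows of b
        return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

    nearest = pdist(stylized_feats, pos_art_feats).min(axis=1).mean()
    spread = pdist(pos_art_feats, other_art_feats).mean()
    return float(nearest / spread)
```

A lower value means the stylizations sit closer to the real artworks of the same content class, relative to the overall spread of the artwork features.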
We report the RSSCD for our model with and without $T$. For comparison, we also evaluate the state-of-the-art approach AST [sanakoyeu2018styleaware]. Here, we use the class “person” as the positive content class and two pretrained networks as content feature extractors, namely VGG-16 and VGG-19 [vgg]. As can be seen in Tab. 3, the content transformation block significantly decreases the distance between the stylized images and original van Gogh paintings, proving its effectiveness.
We measure how well our model retains the information present in the selected “person” class and compare it both to the model without $T$ and to AST [sanakoyeu2018styleaware]. We run the Mask-RCNN detector [matterport_maskrcnn_2017] on images from the COCO [COCO] dataset stylized by the different methods and compute the accuracy, precision, recall, and F1-score. From the results in Tab. 4 we conclude that the proposed block helps to retain visual details relevant for the “person” class.
In Fig. 7 we show stylizations of our method with and without content transformation block. We recognize that applying the content transformation block alters the shape of the human figures in a manner appropriate to van Gogh’s style resulting in curved forms (cf. the crop-outs from original paintings by van Gogh provided in the 4th column of Fig. 7). For small persons, the artist preferred to paint homogeneous regions with very little texture. This is apparent, for example, in the stylized patches in row one and six. Lastly, while van Gogh’s self-portraits display detailed facial features, in small human figures he tended to remove them (see our stylizations in 3rd and 4th rows of Fig. 7). This might be due to his abstract style, which included a fast-applied and coarse brushstroke.
4.2.2 Generalization Ability
The transformation block $T$ learns to transform the content representation of photographs of the class “person” in such a way that it becomes indistinguishable from the content representation of artworks of the same class. Though the transformation has been learned for only this one class, it still generalizes to other classes. To measure this generalization ability, we compute the deception rate [sanakoyeu2018styleaware] and non-expert deception scores on stylized patches for the classes “person” and “non-person” separately. The evaluation results are provided in Tab. 5 and indicate an improvement of the stylization quality for unseen content.
4.2.3 Artifacts Removal
To verify the effectiveness of the local feature normalization layer (LFN layer), we perform a visual inspection of learned models and notice prominent artifacts illustrated in Fig. 8. We can observe that especially for plain regions with little structure, the model without a LFN layer often produces unwanted artifacts. In comparison, results obtained with an LFN layer show no artifacts in the same regions. Notably, for a model without an LFN layer the number of artifacts increases proportionally to the resolution of the stylized image.
5 Conclusion
We introduced a novel content transformation block designed as a dedicated part of the network to alter an object in a content- and style-specific manner. We utilize objects from the same class in content and style target images to learn how content details need to be transformed. Experiments show that from only one complex object category, our model learns how to stylize details of content in general and thus improves the stylization quality for other objects as well. In addition, we proposed a local feature normalization layer, which significantly reduces the number of artifacts in stylized images, especially when increasing the image resolution or applying our model to previously unseen image types (photos of faces, road scenes, etc.). The experimental evaluation showed that both art experts and persons without specific art education preferred our method over others. Our model outperforms existing state-of-the-art methods in terms of stylization quality in both objective and subjective evaluations, also enabling real-time, high-definition stylization of videos.
This work has been supported by a hardware donation from NVIDIA Corporation.
Answers to the real vs. fake guessing game of Fig. 4:
Cezanne: fake, real, fake, fake, real
van Gogh: real, fake, real, fake, fake
Monet: fake, real, fake, real, fake
Kirchner: real, fake, fake, fake, real
Morisot: fake, real, fake, real, fake.
6 Additional Visual Comparison
In this supplementary material, we present additional comparisons with existing style transfer methods for the following artists: Berthe Morisot, Claude Monet, Ernst Ludwig Kirchner, Pablo Picasso, Paul Cezanne, Paul Gauguin, Vincent van Gogh, and Wassily Kandinsky. Comparisons are presented in Fig. 9 and Fig. 10. We observe that while providing better stylization than the state-of-the-art AST [sanakoyeu2018styleaware] method, we also retain the content of images better and produce no artifacts; please zoom in for details. All results are generated in high resolution, measured by the minimal side of the image.
We also stylized two videos from the internet to show that our method produces real-time, high-definition stylization of videos that is free of flickering. As input we took two fragments from the video Provence: Legendary Light, Wind, and Wine at timepoints 7:02 and 10:10, and the entire video Chaplin Modern Times – Factory Scene (late afternoon). For the best viewing experience, please watch all videos in 4K resolution, since the quality otherwise drops significantly due to YouTube's compression algorithms. For video stylization, we provide a comparison between our method and AST [sanakoyeu2018styleaware]. In addition, to visualize the necessity of the content transformation block $T$, we run a side-by-side stylization of our model with and without the extra training of the $T$ block. We notice a difference in how content is retained and in how parts of the image are highlighted: our model with the $T$ block achieves better preservation of human figures, especially at smaller scales. The links: playlist 1, fragment 2, and fragment 3.
7 Implementation Details
7.1 Network Architecture Notation
Our generator network consists of three consecutive blocks: the encoder $E$, the content transformation block $T$, and the decoder $D$. Besides that, we have two discriminators: $D_s$ and $D_c$. For brevity we use the following naming conventions:
conv-$k$-stride-$s$ denotes a convolutional layer with kernel size $k \times k$ and stride $s$;
LFN-G-$g$-W-$w$ denotes a Local Feature Normalization layer with group size $g$ and window size $w$;
upscale-$k$ denotes an upscaling layer that consists of nearest-neighbour upscaling by a factor of 2, followed by a convolutional layer with kernel size $k \times k$;
ResBlock-$k$ denotes a residual block that consists of two convolutional layers with kernel size $k \times k$, each followed by LFN-G-32-W-32;
cs-$k$-LFN-$g$-$w$-LReLU denotes a convolution with $k$ filters, followed by an LFN-G-$g$-W-$w$ layer and a LReLU activation;
All convolutional layers use reflection padding.
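The nearest-neighbour upscaling used in the upscale layer above is simple to state precisely; a numpy sketch (the subsequent convolution is omitted, NHWC layout assumed):

```python
import numpy as np

def nn_upscale(t, factor=2):
    """Nearest-neighbour upscaling of a (N, H, W, C) tensor:
    every pixel is repeated factor times along height and width."""
    return t.repeat(factor, axis=1).repeat(factor, axis=2)
```

Pairing nearest-neighbour upsampling with a convolution, instead of using transposed convolutions, is a common choice to avoid checkerboard artifacts in the synthesized image.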
We describe the architecture of the encoder and the decoder in Tab. 6.
Description of the encoder and the decoder architecture. ReLU layers are omitted for brevity.
7.1.1 Content Transformation Block
The content transformation block $T$ consists of nine consecutive residual blocks ResBlock-$k$ (cf. Sec. 3.1).
7.1.2 Architecture of the Discriminators and
Both discriminators, described in Tab. 7, have a double purpose: predicting the class of the input and predicting its domain (real painting or not). On the one hand, the discriminator $D_s$ takes images as input and predicts the scene class and the domain. The discriminator $D_c$, on the other hand, takes a feature tensor as input and predicts the content class (person/non-person) and the domain. Both predictions are given as the two values in the last line of the architecture in Tab. 7.
For the discriminator $D_s$ the conditioning is more involved: the scene class of the image is predicted in the final layer, see Tab. 7. To obtain domain predictions, we attach a convolutional layer with one single kernel to the outputs of two intermediate convolutional layers of the discriminator and compute the average of its outputs. This value is used as the domain prediction.
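The domain prediction described above reduces an intermediate feature map to a single score; a numpy sketch of one such head (the exact arithmetic is our assumption: a 1x1 convolution with a single kernel followed by spatial averaging):

```python
import numpy as np

def domain_prediction(feat_map, w, b=0.0):
    """feat_map: intermediate discriminator activations, shape (H, W, C).
    w: the single 1x1 kernel, shape (C,). A 1x1 convolution is a
    per-pixel dot product over channels; the mean of the resulting
    score map is the domain score."""
    score_map = feat_map @ w + b
    return float(score_map.mean())
```

Averaging a dense score map instead of predicting one global value makes the domain decision depend on every local patch, which penalizes locally unconvincing stylizations.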
7.2 Training Details
The training process consists of two stages. A randomly initialized network is first trained on small patches cropped from the real paintings and on patches cropped from the photographs with scene class labels. Afterwards, we continue the training procedure on larger patches from the two aforementioned datasets; at this stage, we also train on patches of the person and non-person classes extracted from both the paintings and the photographs dataset. At each training stage we use two different Adam [adam] optimizers with the same learning rate: one for the discriminators and another for the encoder $E$, the transformation block $T$, and the decoder $D$. To avoid a generator that is incapable of fooling the discriminator, we impose the constraint that the discriminator wins in a fixed fraction of the cases [gans_overwiev, sanakoyeu2018styleaware]. To achieve this, we compute a running average of the discriminator's accuracy; if the accuracy is below this target we update the discriminator, otherwise we update the generator.
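The win-rate constraint on the discriminator can be sketched as a simple gating rule (the target win rate and momentum values are our illustrative assumptions; the text keeps the actual fraction fixed but unspecified here):

```python
def gated_updates(acc_history, target=0.8, momentum=0.95):
    """Keep a running average of the discriminator's per-batch accuracy
    and update the discriminator only while it is below the target win
    rate, otherwise update the generator. Returns which network was
    updated at each step."""
    updates, running = [], 0.5
    for acc in acc_history:
        running = momentum * running + (1 - momentum) * acc
        updates.append('discriminator' if running < target else 'generator')
    return updates
```

This keeps the two players balanced: a dominant discriminator stops receiving updates until the generator catches up, and vice versa.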