Neural style transfer (Gatys et al., 2015), which seeks to render the content of one image in the style of another, provides impressive results, as it takes advantage of the rich hierarchical representation of images produced by convolutional neural networks (CNNs) to quantify the style and content of images. The many ways to manipulate these complex feature maps, as well as their increasing ease of implementation, have hence underpinned the development of a plethora of successful methods in this area of computational artistic rendering.
Several evaluation techniques have been developed to compare the different methods. On the one hand, many methods focus on quantifying how well a neural style transfer method attains a numerical objective. These are good engineering indicators, but we highlight that they are not necessarily relevant for measuring the quality of the outputs of style transfer algorithms. On the other hand, qualitative evaluation methods typically consist of collecting a large number of subjective impressions of the algorithms' outputs. This provides average scores on the content preservation and style quality of the algorithms' outputs, but does not reveal the specificity of one algorithm compared to another.
To obtain a more precise characterization of these algorithms, we introduce a new evaluation methodology based on the predictivity of neural style transfer algorithms and gather a set of paired painting-photograph images for this evaluation. Predictivity consists of assessing whether or not the algorithms' outputs are close to an existing painting when using this painting as the style image and the associated photograph as the content image. This is also a crucial point from the computational creativity perspective, as some outputs are deemed interesting while bearing little resemblance to the initial painting, i.e. to what the painter did.
In addition, when showing artists some outputs of style transfer algorithms that use their own paintings as style images, they often do not recognize their practice. However, they sometimes identify inspiring aspects in the various outputs of different algorithms, implicitly acknowledging their computational creativity. This naturally led us to painting processes with artists, who could not only edit groups of style transfer outputs, but also use them as basic elements to widen their style. This constructively complicates the attribution of creative agency to the algorithms in the creative process.
We further encouraged this complexity by exploring these algorithms in the real world, where the outputs are projected onto a real canvas, the classical space for human painters. Human and machine contributions are then mingled in a single canvas. Interestingly, to help the observer who seeks to untangle the contribution of each agent, the canvas can then be shown together with the various computational suggestions. In such creative processes, the algorithms were experienced as computational catalysts to human creativity, a middle ground between autonomously creative agents and mere technical tools.
We first describe the new methodology for qualitative evaluation of different style transfer algorithms. We then question the relevance of existing quantitative evaluations, giving a simple example where improving the quantitative criterion of an algorithm does not improve the stylization quality of the outputs. We then show that some approaches to neural style transfer do not satisfy a basic property, which leads to an instability behaviour that can ultimately be used to reinforce the diversity of style transfer outputs.
Based on all these observations, we present various interactive painting experiments between human and style transfer outputs. This leads us to the notion of computational catalysts that help to characterize the algorithms’ contribution in our specific settings.
Evaluating Neural Style Transfer Methods
Neural methods for style transfer originated with the optimization-based technique of Gatys et al. (2015), which leverages image features extracted from convolutional neural networks (CNNs). To speed up the process, as well as to have access to representations of a particular painting style, several works (Li and Wand, 2016; Ulyanov, 2016; Johnson et al., 2016) then proposed to train a neural network dedicated to a particular style, enabling neural style transfer of an image with a single forward pass instead of a full optimization procedure. Later on, universal neural style transfer methods were developed to transfer any kind of style to a content image, again with a single forward pass (Ghiasi et al., 2017; Li et al., 2017; Huang and Belongie, 2017). These approaches are much faster than optimization-based approaches, but they suffer from the well-documented instabilities of neural networks (Szegedy et al., 2013). We show that a specific instability that, to the best of our knowledge, has not been pointed out yet can notably be beneficial, as it enlarges the creative possibilities of neural style transfer.
Alternatively, to explore other creative opportunities of such algorithms, several user control methods have been developed, for example using semantic correspondences (Lu et al., 2017; Gatys et al., 2017; Kolkin et al., 2019), allowing users to hand-tune the colour histograms (Gatys et al., 2017) or the scale of the patterns (Risser et al., 2017). It is also possible to transfer multiple styles at once (Mroueh, 2019; Cheng et al., 2019).
Many works are still exploring different neural style transfer approaches, for instance working with histogram losses (Risser et al., 2017), using various relaxations of optimal transport (Kolkin et al., 2019; Mroueh, 2019; Kotovenko et al., 2019) or trying to match semantic patterns in the content and style images (Zhang et al., 2019a). All these methods achieve impressive plastic results, but they are hard to characterize with respect to one another. They may not yet actually stylize an image in the many ways a human would. We thus study the question of evaluation methods for style transfer.
A natural way of evaluating a neural style transfer method is to measure the content preservation and the stylization quality of the outputs. The variety of possible content and style input images makes this task difficult in general. For example, the method of Gatys et al. (2015) succeeds in transferring the style of Van Gogh's Starry Night, but the examples shown in Figures 1 and 2 exhibit notable artefacts. Such an evaluation can still be done by gathering a large number of responses, as Kolkin et al. (2019) did to measure the content or style preservation of their method compared to others. Results showed that their method (STROTSS) offered on average the best trade-off between content and style preservation, but this does not say in what sense the style and content are better preserved.
To obtain a systematic and more refined comparison, we propose to study the predictivity of style transfer algorithms: does an algorithm stylize the image in a way a painter would have done? Precisely, when considering a photograph as a content image and a figurative painting of this photograph as a style image, one can compare the output of the neural style transfer algorithm with the figurative painting and further judge whether the style transfer technique succeeds in predicting the painting; if not, one can try to characterize how it differs from it.
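As a minimal sketch of such a comparison, assuming the output and the painting are aligned arrays with values in [0, 1], predictivity can be scored with any image distance between the two; the snippet below uses plain RMSE purely for illustration (the function names are ours, and a perceptual metric would be a more faithful choice than a pixel-wise one).

```python
import numpy as np

def rmse(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square error between two images in [0, 1] of the same shape."""
    return float(np.sqrt(np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)))

def predictivity_score(output: np.ndarray, painting: np.ndarray) -> float:
    """Lower is better: distance from the stylized output to the real painting."""
    assert output.shape == painting.shape, "resize images to a common shape first"
    return rmse(output, painting)

# Toy check: an output identical to the painting scores 0.
painting = np.random.default_rng(0).random((64, 64, 3))
print(predictivity_score(painting, painting))  # 0.0
```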
Such pairs of photographs and content-preserving paintings are not readily available; landscapes are constantly changing, face portraits are rarely faithful to the original, and we rarely possess the photograph of the model. Paintings of buildings, however, form a good class of paintings for the proposed study. We thus construct a set of photograph-painting pairs, see Figures 2-3 for instance, focusing on two famous buildings that have inspired numerous artists: the Notre-Dame de Paris Cathedral and the Notre-Dame de Rouen Cathedral. We gathered pictures of paintings of a three-quarter view of the facade of the Notre-Dame de Paris Cathedral by Utrillo, Matisse, Luce, Marquet, Barthold, Guillaumin, and Hassam. Particularly interesting for our study, Claude Monet made a series of about forty paintings capturing the facade of the Notre-Dame de Rouen Cathedral from nearly the same viewpoint at different times of the day and year and under different meteorological and lighting conditions (Kleiner, 2009, p. 656). As some methods can use semantic masks to specify corresponding regions of the content and style images, we add a semantic mask to each pair.
With this set, qualitative evaluation can be done more systematically and less arbitrarily; in the example shown in Figure 3, the STROTSS output is qualitatively the closest to the Monet painting, especially for the lighting effect on the door and to the left of the portal. Gatys and WCT suffer from spatial inconsistency, as the blue sky is replaced by a sunlight halo in the first and the background is hardly distinguishable in the second. We release this set together with the outputs of the style transfer algorithms to facilitate and systematize the qualitative evaluation of neural style transfer techniques.
Numerical evaluation methods have the benefit of being more systematic and objective. However, we point out here that most neural style transfer evaluation methods are specific to certain algorithms and are not always relevant to the stylization quality of the output.
Other numerical evaluation techniques have been proposed; Sanakoyeu et al. (2018) test whether a neural network pre-trained for artist classification on real paintings succeeds in classifying the artist of the style image based on an algorithm's output. Jing et al. (2017) consider comparing saliency maps between images, since the spatial integrity and coherence of the saliency maps should remain similar after style transfer. Moreover, as neural style transfer relies on a certain quantification of the style based on CNN features, Jing et al. (2017) propose to evaluate how much the optimization objective is achieved in style transfer. We show in the following case that improving the optimization objective is not necessarily related to the visual quality of the output.
Optimization-based neural style transfer methods consist in optimizing the pixels of an image $x$ to minimize a loss $\mathcal{L}(x)$. This loss is usually the sum of a content loss $\mathcal{L}_c(x, c)$ measuring the content similarity between $x$ and the content image $c$, and a style loss $\mathcal{L}_s(x, s)$ measuring the style similarity between $x$ and the style image $s$. In the STROTSS method, Kolkin et al. (2019) define the style loss as the Earth Movers Distance (EMD) between CNN features of the image $x$ and the style image $s$. Given the CNN features $A = (a_i)_{i=1}^m$ of $x$ and $B = (b_j)_{j=1}^n$ of $s$, we compute the cost matrix $C$, whose entry $C_{ij}$ is the distance between $a_i$ and $b_j$, and the EMD is defined as the solution of the following optimization problem
$$\mathrm{EMD}(A, B) = \min_{T \ge 0} \sum_{i,j} T_{ij} C_{ij} \quad \text{s.t.} \quad \sum_j T_{ij} = \frac{1}{m}, \quad \sum_i T_{ij} = \frac{1}{n}.$$
Exact EMD computations are too expensive for neural style transfer applications, and a relaxed EMD (REMD) is used in STROTSS. It consists in taking the maximum of two simple lower bounds of the EMD, each obtained by removing one of the two sets of linear constraints on the transport plan $T$:
$$\mathrm{REMD}(A, B) = \max\Big( \frac{1}{m} \sum_i \min_j C_{ij},\; \frac{1}{n} \sum_j \min_i C_{ij} \Big).$$
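To make the relaxation concrete, the sketch below (our illustration, not the STROTSS code) computes the exact EMD with a generic linear-programming solver and the REMD as the maximum of the two one-sided bounds, on a toy one-dimensional example with absolute-difference costs.

```python
import numpy as np
from scipy.optimize import linprog

def emd(C: np.ndarray) -> float:
    """Exact EMD between two uniform point clouds, given an m x n cost matrix C."""
    m, n = C.shape
    # Equality constraints on the flattened plan: row sums = 1/m, column sums = 1/n.
    A_eq = np.zeros((m + n, m * n))
    for i in range(m):
        A_eq[i, i * n:(i + 1) * n] = 1.0   # sum_j T_ij = 1/m
    for j in range(n):
        A_eq[m + j, j::n] = 1.0            # sum_i T_ij = 1/n
    b_eq = np.concatenate([np.full(m, 1.0 / m), np.full(n, 1.0 / n)])
    res = linprog(C.ravel(), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return float(res.fun)

def remd(C: np.ndarray) -> float:
    """Relaxed EMD: max of the two lower bounds, each dropping one constraint set."""
    return float(max(C.min(axis=1).mean(), C.min(axis=0).mean()))

# Toy 1-D features A = {0, 1}, B = {1, 2} with cost C_ij = |a_i - b_j|.
A, B = np.array([0.0, 1.0]), np.array([1.0, 2.0])
C = np.abs(A[:, None] - B[None, :])
print(emd(C))   # ≈ 1.0
print(remd(C))  # 0.5, a strict lower bound on the EMD here
```

The example shows how loose the relaxation can be: dropping either constraint set lets every feature pick its single nearest neighbour, here halving the reported distance.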
Despite the use of this loose relaxation, the human evaluation done via Amazon Mechanical Turk (AMT) indicates that STROTSS statistically offers the best style/content trade-off compared to the other neural style transfer techniques (Kolkin et al., 2019, §4) in the opinion of the AMT workers. Experiments done with artists confirmed this trend as the artists were mostly impressed by results produced by STROTSS.
The authors mention that a better approximation may yield better style transfer results. The Sinkhorn distance (Cuturi, 2013), in its log-domain stabilized version (Schmitzer, 2019), is a good candidate for this purpose. We thus replaced the relaxed earth movers distance REMD by the Sinkhorn earth movers distance
$$\mathrm{SEMD}(A, B) = \sum_{i,j} T^{\varepsilon}_{ij} C_{ij}, \qquad T^{\varepsilon} = \operatorname*{arg\,min}_{T \in \Pi} \; \sum_{i,j} T_{ij} C_{ij} - \varepsilon H(T),$$
where $H(T) = -\sum_{i,j} T_{ij} (\log T_{ij} - 1)$ is the entropy of the transport plan $T$, $\Pi$ is the set of plans satisfying the marginal constraints of the EMD problem, and $\varepsilon$ is the entropic regularization parameter. The corresponding optimization problem is convex and is solved iteratively with a fixed number of iterations $N$. $\mathrm{SEMD}$ is an upper bound of the EMD and converges to the exact EMD as $\varepsilon$ goes to $0$.
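As a concrete sketch, the plain (not log-domain stabilized) Sinkhorn iterations fit in a few lines of numpy; the code below is our illustrative stand-in, not the released implementation, and reuses the toy cost matrix from the EMD example.

```python
import numpy as np

def semd(C: np.ndarray, eps: float = 0.1, n_iters: int = 200) -> float:
    """Entropy-regularized EMD via plain Sinkhorn iterations (no log stabilization)."""
    m, n = C.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)   # uniform marginals
    K = np.exp(-C / eps)                              # Gibbs kernel
    u, v = np.ones(m), np.ones(n)
    for _ in range(n_iters):                          # alternate marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    T = u[:, None] * K * v[None, :]                   # regularized transport plan
    return float(np.sum(T * C))

# Same toy cost as before: features A = {0, 1}, B = {1, 2}.
C = np.abs(np.array([0.0, 1.0])[:, None] - np.array([1.0, 2.0])[None, :])
print(semd(C))  # ≈ 1.0: every feasible plan has cost 1 in this symmetric example
```

With small $\varepsilon$ the Gibbs kernel underflows, which is precisely why the log-domain stabilization of Schmitzer (2019) is used in practice.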
We release a PyTorch (Paszke et al., 2019) implementation of STROTSS including the SEMD.
Figure 4 shows a comparison of experimental results, suggesting that getting much closer to the mathematical quantification of the style does not necessarily lead to more relevant results, and that numerical evaluation of how much the mathematical objective is achieved is not essential from a visual perspective.
In the same vein, the instability phenomena that are commonly assumed to be detrimental in the neural network literature (e.g. adversarial examples) can qualitatively increase the creative possibilities of neural style transfer.
Neural style transfer instabilities have been pointed out by Risser et al. (2017) and Gupta et al. (2017) in the case of real-time style transfer for videos. The aim is to identify and remove the time-inconsistent artefacts that create unpleasing effects. Here we outline instabilities stemming from another type of inconsistency and propose to take advantage of them.
A style transfer method is simply a function $f$ that takes as input a style image $s$ and a content image $c$ and outputs a stylized version $f(c, s)$ of $c$. It is reasonable, when giving such a method the same image $x$ as content and style, to expect the image itself, i.e. that $f$ satisfies $f(x, x) = x$. Let us now consider the following recursion
$$x_{n+1} = f(x_n, x_n),$$
where $x_0$ is an initial image. Optimization-based methods empirically converge to an equilibrium $x_\infty = f(x_\infty, x_\infty)$ independently of the initialization. In contrast, feed-forward approaches to style transfer (Li and Wand, 2016; Ulyanov, 2016; Johnson et al., 2016; Ghiasi et al., 2017; Li et al., 2017; Huang and Belongie, 2017) lead to oscillating sequences around non-trivial (i.e. not a monochrome image) forms, yet typically bearing absolutely no resemblance to the initial image $x_0$. Since the pixel values are clamped between 0 and 1, colours end up being either saturated or zero, but not uniformly, still revealing specific patterns, as in Figure 5 for instance. Interestingly also, when starting from very simple images $x_0$, like a uniform colour, the sequence still shows the same type of instability in the long run for some methods $f$; see this video (https://youtu.be/WCJNLWb-H2M) for instance.
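This behaviour can be caricatured with a toy map. The snippet below is in no way a neural network: it merely iterates a non-contractive per-pixel map, clamped to valid pixel values in [0, 1], and reproduces the qualitative picture of a bounded sequence that never settles at a fixed point of the recursion.

```python
import numpy as np

def f(x: np.ndarray) -> np.ndarray:
    """Toy stand-in for a feed-forward stylizer fed its own output:
    a non-contractive per-pixel map, clamped to pixel range [0, 1]."""
    return np.clip(3.9 * x * (1.0 - x), 0.0, 1.0)

rng = np.random.default_rng(0)
x = rng.random((8, 8))        # tiny "image" as initialisation x_0
history = [x]
for _ in range(200):          # the recursion x_{n+1} = f(x_n)
    x = f(x)
    history.append(x)

# The sequence stays in [0, 1] yet keeps moving instead of converging.
print(float(np.abs(history[-1] - history[-2]).max()))
```

The clamping plays the role of the pixel-value saturation observed in the real sequences; the point of the toy is only that a map violating $f(x) = x$ at its iterates can remain bounded while never reaching an equilibrium.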
From the perspective of computational creativity, this seeming failure is interesting. In the first iterations, we observe that some methods produce a series of progressively stylized images. Given a style transfer function $f$, the very same effect happens across all sequences we experimented with. For instance, in Figure 7 we see a distinct tessellation effect in the images of the first row. We use this technique in the interactive painting experiments to produce more diverse and computationally creative style transfer outputs, see image (f) in Figure 12 for instance.
Alternatively, the asymptotic regime of the sequences produces surprising animations. The appearing patterns are completely different from one approach to another, but are experimentally the same for different initialisation images and a given method. Sequences are shown in Figure 6; refer to this video (https://youtu.be/gAq1lvb1G1c) or this one (https://youtu.be/s87R-9JITvE) for a livelier visualisation.
Interactive Painting Experiments
In the previous study, we questioned the relevance of neural style transfer evaluation. To go beyond comparing techniques, we propose to take advantage of the diversity of the outputs and to use them as a source of inspiration for artists.
Some painters have recently explored interactive processes with machines in the real world, particularly in the case of painting. For instance, Chung (2015), among others, leveraged artificial intelligence algorithms to paint interactively with humans in the real world, where a machine acts on the real canvas via a robotic arm. Cabannes et al. (2019) also explore such an interaction, where the machine does not act but suggests via projection. However, none of these use the outputs of style transfer algorithms to paint interactively with an artist on the canvas.
We explore that possibility through various series of interactively painted portraits. We describe here various interactive painting experiments inserting outputs of neural style transfer algorithms during the human painting process. In all cases, the algorithms’ creations are projected onto the canvas but never automatically painted, for instance via a robotic arm or a printer. We first describe the experiments on canvas and then motivate the underlying design choices. We finally show how the notion of computational catalyst naturally emerges. Note also that all the paintings revolve around portrait themes.
Editing multiple styles in one portrait.
Neural style transfer outputs are very diverse from one method to another, as outlined in Figure 1 for instance. To edit the creative content of these outputs into a single final artwork, we select a photograph of a person as the content image and transfer the style of paintings by previous artists into this content image using various algorithms. We then show the stylized images to the artist, who never sees the original content image. He then selects the outputs that best resonate with his practice. These style transfer outputs are finally projected alternately onto a canvas, each for a certain amount of time. Figure 8 is an example of a canvas realized according to this process; we complement the final canvas with the various style transfer outputs chosen by the artist. We also explore other variations of this idea, for instance via collage, where the selected outputs are mingled together into a single image which is projected afterwards.
Pixelizing portrait construction.
The motivation of this creative process is to artificially create an interactive loop between the painter and the algorithm. The initial image is projected onto or next to the canvas, which is divided into squares. The painter is then asked to paint each square of the canvas sequentially. Whenever a square is completed, we use it as a new style image to stylize the initial photographic portrait, which is then projected onto the canvas; images (a)-(d) in Figure 11 are some of these projected outputs. At any time, then, the painter only sees an interpretation of the photographic portrait by a style transfer algorithm, using the painter's style from the previously painted square. We show one example of a canvas produced in this way in Figure 11.
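For concreteness, the loop of this process can be sketched as follows; here `style_transfer` is a crude colour-blending placeholder for a real algorithm, the raster traversal of the squares is our assumption, and copying the portrait into each square stands in for the painter's hand.

```python
import numpy as np

def style_transfer(content: np.ndarray, style: np.ndarray) -> np.ndarray:
    """Placeholder for a real neural style transfer call: merely pulls the
    content towards the mean colour of the style patch."""
    return np.clip(0.5 * content + 0.5 * style.mean(axis=(0, 1)), 0.0, 1.0)

def pixelized_session(portrait: np.ndarray, n: int = 4):
    """Yield, square after square, the stylized image projected to the painter."""
    h, w = portrait.shape[0] // n, portrait.shape[1] // n
    canvas = np.zeros_like(portrait)
    for i in range(n):
        for j in range(n):
            # In the real process the painter fills this square by hand;
            # here we copy the portrait as a stand-in for the painted square.
            canvas[i*h:(i+1)*h, j*w:(j+1)*w] = portrait[i*h:(i+1)*h, j*w:(j+1)*w]
            style_patch = canvas[i*h:(i+1)*h, j*w:(j+1)*w]
            yield style_transfer(portrait, style_patch)  # projected onto the canvas

portrait = np.random.default_rng(0).random((64, 64, 3))
projections = list(pixelized_session(portrait))
print(len(projections), projections[0].shape)  # 16 (64, 64, 3)
```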
Note that the style transfer output should be the machine's prediction of what the artist would do, provided at least that the previous square contains all the style information of the painter and that the style transfer method is ideal. This remark then became the basis for a gamification of the painting process, where the artist was asked to foil the machine's prediction as much as possible.
This decomposition of the painting process produces artworks that are sequential objects. Not only is the final canvas interesting, but so are all its parts. Actually, algorithms in image computational creativity are far less capable of generating images as sequences of brushstrokes than of generating images all at once, as with GANs (Goodfellow et al., 2014) for instance. This is simply because paintings are usually not sequential objects. Indeed, we very rarely observe all the steps leading to a painting, apart from large collections of simple sketches (Eitz et al., 2012; Jongejan et al., 2018). Alternatively, computationally inferring the steps of a painting from the final canvas (Xie et al., 2013; Ganin et al., 2018; Nakano, 2019) is not yet very successful. This arguably explains why in painting, compared to other domains such as music, whose artworks are sequential by nature, computationally creative algorithms are harder to frame in a fully interactive way with humans, hence limiting the ability of a painter to truly interact with machines.
Interactive series of portraits.
We then considered using neural style transfer outputs for series of portraits. We select a photographic portrait and stylize it with a neural style transfer algorithm, using a painting by a previous artist as the style image. We project the stylized image as inspiration for the painter. When the painter has finished the painting, we stylize the photographic portrait again, using the painting that has just been completed. We project the new stylized image as the next inspiration, and we repeat the process, typically two or three times. Figures 12-13 present two series of canvases in chronological order. In Figure 12 all canvases stem from the same photographic portrait, while in Figure 13 we alternate between two photographic portraits to avoid specialization of the artist to a particular content.
We experimented with the many ways to generate new style images at each iteration. Importantly, at each iteration, the previous painting serves as the style image. This allows the artist to interact with a computational version of his past work, a key aspect computational creativity has to offer.
Also, in Figure 12, the photographic portrait is an input only for the first canvas. The machine subsequently uses only the preceding paintings as content or style images. The observed divergence is hence an intertwined responsibility of the painter and the algorithms.
Neural style transfer algorithms are computationally creative in the sense that they may produce new images with an aesthetic that can significantly differ from what a painter would do. In order to turn this creativity into artworks, we have specified various painting experiments on a real canvas between a painter and outputs from these algorithms. We now report how these attempts shed light on a few aspects of the computational creativity of neural style transfer algorithms and cast them, in this specific setting, as computational catalysts to human creativity. Besides, the interactive painting process itself was designed to embody some questions related to computational creativity and to human-machine interplay, which has arguably become a major societal theme.
Computational Creativity and Catalyst.
The initial motivation for designing human-neural-style-transfer interactive experiments was to create a single object out of many different style transfer outputs, focusing here on a painting instead of a printed version of the digital output. This echoes other creative works with machines where some artists playfully worded themselves as editors of the machine's creativity; see for instance the rationale surrounding the A.I.-assisted musical album Chain Tripping by the band YACHT (2019). During our painting experiments, though, the intertwining between the machine's outputs and the painter's interpretation was non-trivial, since the painter was altering the machine's suggestions. The painter felt the outputs were giving new style directions, wording them as computational catalysts to his own creativity.
In these interactions with algorithms, we exploit the ability of style transfer methods to produce outputs based on the previous works of the painter. This is a simple yet powerful idea that allows an artist to interact with computationally influenced versions of his own (past) work. This was felt by the painter as a semi-extraneous interpretation of his past techniques, allowing him to rediscover some elements of his old practice in a surprising way. Beyond our specific framework, this seems to be another major benefit and specificity of computational creativity.
Importantly also, in these portrait paintings the artist could not see, except at the beginning, the real photographic portrait. We purposely designed it this way so that the painting practice could embody the fact of perceiving the world only through the machine's lens. This has important societal echoes; for instance, the issues raised by so-called fake news stem, from a technical point of view, from the capacities of generative algorithms, but, from a societal point of view, from our increasing reliance on digital information as a way to perceive the world. Here we hence implicitly explore what a painter feels when relying only on machine outputs to see the portraits.
Alternatively, it also gives another perspective on computationally creative algorithms, as offering new inspirational spaces to portray. Indeed, we need not explore algorithms' outputs only through printed versions, much as we do not capture nature only through photography. Computationally creative outputs may hence be thought of as new types of landscapes for painters to capture.
Note also that the transient essence of these computational landscapes has very different rules than that of Nature; by erasing the content files or algorithms outputs, the painting could remain the only imprint of the machine outputs. This again is a specificity of computational creativity, when framed as a theme creator for artists, that is worth exploring.
Designing Human-Machine painting processes.
A major aspect of these human-machine interactive processes is that we brought the digital outputs out into the real world, rather than having the painter interact with machines on a digital tablet for instance.
Indeed, when the painting process materializes on a digital tablet, it strongly constrains the painter's sensations; he does not feel the brushstrokes' gesture, the canvas is not perceived in full-dimensional space, etc. Even with interactive experiments on a real canvas, the painter felt some processes as being too intrusive or constraining, like the experiment reported in Figure 12, which forces the artist to follow unusual rules for creating. This highlights that computational creativity, when considered in such a human-machine interplay, is notably conditioned by the current state of engineering of such interactive systems. For instance, how much less intrusive is a projection than a robotic arm?
This level of the machine's intrusion is inherently linked with how the computational creativity of the algorithms is perceived, notably concerning the creative agency that is attributed to the machine's outputs. Part of the discussion around computational creativity may hence be tightly related to some artists' feeling of losing a share of creative agency when algorithms become more than a disposable tool.
It thus appears that when engineering such systems, there are typically two directions for the interfacing: either the machine goes out into the physical world or, conversely, the human interacts with the machine in the digital world. As we described previously, the interfacing puts the human artist in very different situations. However, this should not only be considered a limitation, as each constraint forces the painter to embody what we may feel in our daily interactions with machines. Each type of interfacing thus echoes, and may advocate for, a different societal relation between humans and machines; in the era of machines, it is essential to explore many such experiments.
Plastic point of view.
These interactive painting experiments were also designed to explore pictorial aspects.
For instance, the photographic portraits that initiate the series in Figures 12-13 were in black and white. However, the style transfer algorithm and the painter were not constrained to the grey-scale space. The painter could observe the emergence of colours in the projected outputs of the machine, or conversely initiate them on the real canvas. For instance, in Figure 13, red appears on the eyebrows, while in Figure 12 the colours are intended as variations of shade, which exist only through the machine.
While this is interesting from the creation point of view, it is also interesting for the observer who is concerned with agency attribution. For a given aspect of the painting, like colours, did the painter simply repeat the machine's colourization outputs, re-interpret them, or even initiate them? This reinforces the importance, in an exhibition, of showing the algorithms' outputs as testimonies alongside the final artworks.
We presented interactive painting experiments between neural style transfer outputs and a painter. They reveal many potential benefits of leveraging computational creativity in this type of interactive framework, and question some computational aspects of neural style transfer specifically.
The authors would like to thank Tomas Angles for accidentally giving a new direction to our work; Fabienne Colin for her enthusiasm in exploring machine style transfer for editing her own painting style, as well as for pointing us to the series of Monet; Thibault Séjourné for helpful discussions on optimal transport; Vivien Cabannes and John Zarka for rereading, and Stephane Mallat for an interesting discussion. Finally, Thomas would like to thank Sebastian Pokutta for hosting him at the Zuse Institute in Berlin, where part of this work was carried out.
References

- Cabannes et al. (2019). Dialog on a canvas with a machine. NeurIPS 2019 Workshop on Machine Learning for Creativity and Design. https://neurips2019creativity.github.io/doc/Dialog%20on%20a%20canvas%20with%20a%20machine.pdf
- Cheng et al. (2019). Structure-preserving neural style transfer. IEEE Transactions on Image Processing 29, pp. 909–920.
- Chung (2015). Drawing Operations.
- Cuturi (2013). Sinkhorn distances: lightspeed computation of optimal transport. In Advances in Neural Information Processing Systems, pp. 2292–2300.
- Eitz et al. (2012). How do humans sketch objects? ACM Transactions on Graphics (Proc. SIGGRAPH) 31 (4), pp. 44:1–44:10.
- Ganin et al. (2018). Synthesizing programs for images using reinforced adversarial learning. arXiv preprint arXiv:1804.01118.
- Gatys et al. (2017). Controlling perceptual factors in neural style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 3985–3993.
- Gatys et al. (2015). A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576.
- Ghiasi et al. (2017). Exploring the structure of a real-time, arbitrary neural artistic stylization network. arXiv preprint arXiv:1705.06830.
- Goodfellow et al. (2014). Generative adversarial nets. In Advances in Neural Information Processing Systems, pp. 2672–2680.
- Gupta et al. (2017). Characterizing and improving stability in neural style transfer. In Proceedings of the IEEE International Conference on Computer Vision, pp. 4067–4076.
- Huang and Belongie (2017). Arbitrary style transfer in real-time with adaptive instance normalization. In Proceedings of the IEEE International Conference on Computer Vision, pp. 1501–1510.
- Jing et al. (2017). Neural style transfer: a review. CoRR abs/1705.04058.
- Johnson et al. (2016). Perceptual losses for real-time style transfer and super-resolution. In European Conference on Computer Vision, pp. 694–711.
- Jongejan et al. (2018). The Quick, Draw! A.I. Experiment. Mountain View, CA, accessed Feb 17.
- Kleiner (2009). Gardner's Art Through the Ages: The Western Perspective, volume 2.
- Kolkin et al. (2019). Style transfer by relaxed optimal transport and self-similarity. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
- Kotovenko et al. (2019). A content transformation block for image style transfer. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10032–10041.
- Li and Wand (2016). Precomputed real-time texture synthesis with Markovian generative adversarial networks. In European Conference on Computer Vision, pp. 702–716.
- Li et al. (2017). Universal style transfer via feature transforms. In Advances in Neural Information Processing Systems, pp. 386–396.
- Lu et al. (2017). Decoder network over lightweight reconstructed feature for fast semantic style transfer. In Proceedings of the IEEE International Conference on Computer Vision, pp. 2469–2477.
- Mroueh (2019). Wasserstein style transfer.
- Nakano (2019). Neural painters: a learned differentiable constraint for generating brushstroke paintings. arXiv preprint arXiv:1904.08410.
- Paszke et al. (2019). PyTorch: an imperative style, high-performance deep learning library. In Advances in Neural Information Processing Systems, pp. 8024–8035.
- Risser et al. (2017). Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv preprint arXiv:1701.08893.
- Sanakoyeu et al. (2018). A style-aware content loss for real-time HD style transfer. In Proceedings of the European Conference on Computer Vision (ECCV), pp. 698–714.
- Schmitzer (2019). Stabilized sparse scaling algorithms for entropy regularized transport problems. SIAM Journal on Scientific Computing 41 (3), pp. A1443–A1481.
- Szegedy et al. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
- Ulyanov (2016). Texture networks: feed-forward synthesis of textures and stylized images.
- Xie et al. (2013). Artist agent: a reinforcement learning approach to automatic stroke generation in oriental ink painting. IEICE Transactions on Information and Systems 96 (5), pp. 1134–1144.
- YACHT (2019). Chain Tripping.
- Zhang et al. (2018). The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 586–595.
- Zhang et al. (2019a). Multimodal style transfer via graph cuts. In Proceedings of the IEEE International Conference on Computer Vision, pp. 5943–5951.