1 Introduction

In recent years, neural networks (NNs) have become the state of the art in processing various kinds of data, such as high-dimensional numerical data, images, time series, or language data [1, 2, 3, 4, 5], only to name a few. Standard applications are classification or regression tasks, and in many cases NNs outperform classical approaches significantly. Also in the fields of anomaly detection [6, 7, 8, 9] and object recognition [10, 11, 12, 13, 14, 15, 16], NNs have proven to be powerful approaches.
A further important development is the concept of generative adversarial networks (GANs). Therein, two networks – the generator and the discriminator – compete with each other in such a way that the generator learns to generate synthetic data that exhibits the specific properties and characteristics of the training data. A similar task can also be performed by variational autoencoders.
Although the mathematical background of NNs has been known for decades, some of the biggest development steps and most successful applications have been presented only in recent years. These breakthroughs are mainly due to two reasons: On the one hand, we are today equipped with the required computational power, especially in the form of GPUs, to perform network training on a reasonable time scale. On the other hand, substantial knowledge about network architectures has been developed, e. g. concerning convolutional (CNN) or recurrent (RNN) neural networks.
A special field of application which would not be conceivable without the progress mentioned above is that of style transfer networks [23, 24, 25, 26, 27, 28, 29, 30, 31, 32]. These algorithms have been developed with special regard to digital images. Their task is to translate an image of a certain domain (e. g. a photo) into the style of a different domain (e. g. an artistic painting). This problem can be approached from different perspectives, and in the literature one can find methods using direct optimization procedures, methods working on paired images, and approaches performing unpaired image translations [25, 26, 27] (see also Sec. 2). Unpaired frameworks have the advantage that they do not require one-to-one training examples from both domains, which are often not easily available or do not even exist. Instead, they use a special kind of NN arrangement in an extended GAN setting which makes it possible to train the translation on unpaired image examples.
Unpaired domain translation settings are also the subject of this paper, and we focus on the special case of high-resolution images, by which we understand the regime of several to many megapixels. This is in contrast to many previous machine learning papers on image tasks, which have demonstrated applications on publicly available data sets. Typical representatives of these image data sets are MNIST, CIFAR10, Caltech101, or others with typical image resolutions on the order of some hundred to some ten thousand pixels altogether.
However, these data sets do not come close to the resolutions typical of today’s camera systems. Even simple smartphone cameras easily reach the double-digit megapixel regime, and they can record videos in Full HD (about 2 megapixels). Video systems with 4K resolution (about 8 megapixels) are commercially available, and today’s standard DSLR cameras are on the order of 20 megapixels and above. There is a clear demand for such highly resolved images to capture relevant image details. As an example, video systems developed for the use case of autonomous driving work on high-resolution images to provide a sufficiently detailed view of the car’s surroundings. These numbers demonstrate the clear need to develop machine learning algorithms that are capable of handling today’s high-resolution image data.
In this paper, we address the problem of unpaired domain translation with special regard to the ability to process high-resolution images. We discuss that current methods suffer from a high peak memory consumption during training and translation, which sets a natural limit to the largest processable image size on a given GPU. To solve this issue, we introduce a scalable method which is able to work on arbitrarily high resolutions without increasing the peak memory consumption of the NN. We achieve this goal by the simple idea of not processing the whole image at once, but training and applying the domain translation on the level of small, overlapping image subsamples. For the training of the underlying generators, each of the existing methods can be applied; we use a Unit-like framework for our investigations in this paper.
A question arising with the task of image translation is how the styles of the two domains are defined. Since there is a high variability in real-world data samples, NNs are usually trained on huge data sets representing this high variance. By contrast, there can also be the need to handle the opposite case of low-variance data: An example is the case in which one of the domains corresponds to simulated images. Such simulated images often work on textures with low spatial variability. As a consequence, different images in the domain change their content but hardly vary in their appearance. This question is closely related to our goal of developing a high-resolution style transfer, in the sense that one large image with all its details may contain a variance that is similar to that of a big data set of small images. This reflects the fact that the actual relevant measure of the data set used for training is less its size in terms of megapixels but rather its Shannon entropy.
Our paper is organized as follows: In Sec. 2, we provide a brief overview of selected style transfer algorithms and discuss some of their advantages and shortcomings. In Sec. 3, we introduce our method and explain how we work on the level of image subsamples. Results on high-resolution images are presented in Sec. 4, with examples covering the range from similar to very different domains. We demonstrate that our method works well even for “single-shot” translations, where the target style is defined by only one image, and we present results obtained from images with more than 50 megapixels. After concluding in Sec. 5, we provide additional information such as the NN’s architectural design and image details in the appendix.
2 Related Work
As mentioned in the introduction, there are several approaches in the literature to perform domain translation on images. Each of them has its advantages and shortcomings, and we provide a brief overview of a selection of methods in this section. (We refer the reader to the respective publications and references therein for details.)
One of the very early approaches to unpaired style transfer is the method of Gatys et al. [23]. Their approach is based on a single, pretrained multi-layer CNN which takes the image to be transferred as input. Going deeper into this CNN, the filters in each layer are activated by different image properties, such as colors, structures, and local and global features. Based on the idea that images with a similar style should lead to similar activation patterns across the layers of the CNN, they proceed as follows: Given a target style image and its corresponding activations, the layer activations are also determined for the image to be translated. From the differences in the activations, they determine the gradient with respect to the input image and apply this information to change the image. In this way, the input image comes to resemble the desired target more and more. An advantage is that this method yields high-quality images and that – by setting different gradient weights with respect to different layer depths of the CNN – the style can be adjusted to cover more local or global features. A drawback of this method, however, is that it requires an optimization procedure for each image transformation, which makes it computationally very expensive.
A second approach to style transfer of images is the Pix2Pix framework by Isola et al. [24]. Their method is based on a NN in an encoder-decoder configuration whose latent space covers the relevant features of the images which are translated. Compared to the method of Gatys et al., the advantage of this approach is that, once the network is trained, it can be applied directly to new images without additional optimization steps. This makes the evaluation phase significantly less expensive in terms of computational requirements. However, a shortcoming of this method is that for the network training, paired images of both domains are required, which are often not available in real-world applications.
The much more challenging problem of domain translation in unpaired image settings has been addressed by the CycleGAN [25] and Unit [26] frameworks. Both of them are based on extended GAN settings and apply the crucial requirement of cycle consistency for training. In the unpaired setting, a direct transformation from one domain to the other cannot be learned from corresponding image pairs, since for an image in one domain there is no counterpart in the other. Instead, round-trip translations (from one domain to the other and back) are performed with the goal of reproducing the respective original images. In both frameworks, each of the two translation directions is handled by a generator made of a deep CNN in an encoder-decoder arrangement, and there are two separate discriminators, one for each domain, which distinguish real and fake images. The most important difference between the two frameworks is that the generators in CycleGAN are completely independent of each other, while they share part of their latent space in Unit. Both frameworks have been shown to yield very good results. A challenge, however, is the huge overall network size, which comprises four different CNNs (two generators and two discriminators; see Tab. 1 in the appendix). This results in a substantial computational effort and long computation times for training and evaluation. In addition, all four networks and at least some intermediate network results have to be stored on the GPU for training, which sets additional hardware requirements.
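As a toy illustration of the cycle-consistency requirement (our own minimal sketch, not code from either framework), consider two exactly inverse “generators” acting on image batches: translating to the other domain and back must reproduce the input, and the mean L1 deviation from this ideal is the cycle-consistency loss.

```python
import numpy as np

def cycle_consistency_loss(g_ab, g_ba, batch_a, batch_b):
    """L1 cycle-consistency loss in the spirit of CycleGAN/Unit: translating
    a batch to the other domain and back should reproduce the original batch."""
    rec_a = g_ba(g_ab(batch_a))  # domain A -> B -> A
    rec_b = g_ab(g_ba(batch_b))  # domain B -> A -> B
    return np.abs(batch_a - rec_a).mean() + np.abs(batch_b - rec_b).mean()

# Toy "generators": an invertible brightness shift between the two domains.
g_ab = lambda x: x + 0.5
g_ba = lambda x: x - 0.5

a = np.random.rand(4, 3, 8, 8)  # stand-in batch of domain-A images
b = np.random.rand(4, 3, 8, 8)  # stand-in batch of domain-B images
print(cycle_consistency_loss(g_ab, g_ba, a, b))  # close to 0.0 for inverse generators
```

In real trainings this loss is added to the adversarial losses of the discriminators; imperfect generators yield a positive value that pushes them towards consistent round trips.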
The latter point is directly related to our main goal of processing high-resolution images, so we go into it in more detail: We have discovered from our investigations of, e. g., the Unit framework that storing an image in the MB range on the GPU is, of course, not a problem. However, when propagating the image as float tensors through the network and backing up intermediate results for loss functions or gradients for backpropagation, the peak memory consumption of the whole network is significantly higher, with an especially large contribution from the deep hidden layers. We illustrate this observation in Fig. 1, which shows, above a threshold, a linear increase of the peak memory consumption with the input image’s number of pixels. From the vertical lines, which indicate the GPU memory hardware limits of two GPUs, it becomes clear that this framework cannot be executed, e. g., on a standard Nvidia Quadro M2000 for images with more than about one megapixel, and the limit on a Nvidia GTX 1080Ti is less than three megapixels. Even if code improvements might reduce the memory consumption, this basic limit to the processable image resolution remains, and even special high-priced computing GPUs which provide more GPU memory can only push this boundary but cannot overcome it.
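To make the scaling argument concrete, the following back-of-the-envelope estimator (our own simplification, not the actual framework’s numbers; layer channel counts and the stride are illustrative) shows why the activation memory grows linearly with the number of input pixels. Real trainings additionally store gradients, optimizer state, and all four networks, multiplying these figures further.

```python
def activation_memory_mb(width, height, channels_per_layer, stride=2, bytes_per_value=4):
    """Rough estimate of the memory needed to keep the intermediate activations
    of a small convolutional encoder for a single image in float32.
    Each down-convolution is assumed to halve the spatial resolution."""
    total_values = 0
    w, h = width, height
    for channels in channels_per_layer:
        w, h = w // stride, h // stride
        total_values += w * h * channels
    return total_values * bytes_per_value / 1024**2

# Hypothetical 4-layer encoder on a 1-megapixel image: 120 MB of activations
# for a single forward pass -- before gradients, discriminators, and batching.
layers = [64, 128, 256, 512]
print(activation_memory_mb(1024, 1024, layers))  # 120.0
```

Doubling the number of input pixels doubles the estimate, reproducing the linear trend of Fig. 1.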
3 Method

It is the purpose of this paper to introduce an approach by which arbitrarily high-resolution images can be processed on today’s standard GPUs. The basic idea is simple and can be stated in one sentence: Instead of processing the whole image at once, perform the network training as well as its evaluation on small subsamples of the image. From a mathematical point of view, the justification of this procedure is the analogous functional principle of the CNN’s filters which stride over the input image: With a usual size of three to seven pixels, these filters are very small compared to the actual image, and they only see a very small part of it in every step. Our procedure of extracting subsamples can, therefore, be regarded as an abstract intermediate interface to the NN which works on a level between the whole image size and the filter size.
In more detail, the procedure is described in the following and illustrated in Fig. 2.
To train an unpaired domain translation network for high-resolution images, we start from a training set consisting of one or several such images and extract a batch out of a high-resolution image as shown in Fig. 2(a): According to the desired batch size B, we extract B image subsamples of different sizes out of the original image. Both position and size of each subsample can be chosen arbitrarily, and all extracted samples are then scaled down to a common resolution of s × s pixels.
Including the three color channels, the respective batch tensor then has the size B × 3 × s × s.
With this tensor, we perform a single training iteration of the underlying unpaired domain translation network as illustrated in Fig. 2(b) (e. g. a CycleGAN [25] or Unit [26] framework). This step of sample extraction and update iteration is repeated until the algorithm has converged or another stopping criterion is reached.
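The extraction step can be sketched as follows; this is a minimal NumPy illustration under our own naming, with nearest-neighbor rescaling standing in for the proper interpolation a real implementation would use. Random positions and random crop sizes (between the common output resolution and the full image) realize the different zoom levels.

```python
import numpy as np

def extract_batch(image, batch_size, sample_size, rng=None):
    """Extract a training batch of random, differently sized crops from one
    high-resolution image of shape (C, H, W) and rescale each crop to a
    common resolution, yielding a (batch_size, C, sample_size, sample_size)
    tensor. Rescaling is nearest-neighbor subsampling to stay dependency-free."""
    rng = np.random.default_rng() if rng is None else rng
    c, h, w = image.shape
    batch = np.empty((batch_size, c, sample_size, sample_size), dtype=image.dtype)
    for i in range(batch_size):
        s = rng.integers(sample_size, min(h, w) + 1)   # random crop size (zoom level)
        y = rng.integers(0, h - s + 1)                 # random vertical position
        x = rng.integers(0, w - s + 1)                 # random horizontal position
        crop = image[:, y:y + s, x:x + s]
        idx = (np.arange(sample_size) * s) // sample_size  # nearest-neighbor grid
        batch[i] = crop[:, idx[:, None], idx[None, :]]
    return batch

image = np.random.rand(3, 512, 768)  # stand-in for a high-resolution photo
batch = extract_batch(image, batch_size=8, sample_size=128)
print(batch.shape)  # (8, 3, 128, 128)
```

One such batch would then be fed into a single update step of the underlying translation framework.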
Analogously to the training procedure, the evaluation phase is performed on the level of small image subsamples. As illustrated in Fig. 2(c), in a first step, samples are extracted from the image that is to be transformed. Second, each of them is translated to the other domain by the generator separately. Finally, the translated samples are merged into the translated high-resolution image.
The mechanism to train and evaluate NNs for the domain translation task described above leaves some freedom in the actual application. Therefore, we want to extend this scheme by some remarks:
First, for high-resolution images of W × H pixels and extracted subsamples of s × s pixels, the number of possible different image subsamples at a fixed sample size is
(W − s + 1) · (H − s + 1).     (3)
For image and subsample sizes as used in this paper, Eq. (3) yields almost 14 million. Further taking into account the different sizes of the extracted samples and possible horizontal or vertical image flips, this number becomes even larger. This makes clear that, by this procedure, one single high-resolution image can effectively act as a huge training set. Using random positions and sizes, it is also very unlikely that the NN sees exactly the same subsample more than once during the whole training process, which prevents overfitting.
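A one-line helper makes the combinatorial claim of Eq. (3) easy to check for any concrete sizes (the 4000 × 3000-pixel example below is hypothetical, not necessarily one of the images used in our experiments):

```python
def num_subsamples(width, height, s):
    """Number of distinct axis-aligned s-by-s crops of a width-by-height image,
    counting positions only; different crop sizes and flips multiply this further."""
    return (width - s + 1) * (height - s + 1)

# A hypothetical 12-megapixel image with 256-pixel subsamples already offers
# more than ten million distinct crop positions.
print(num_subsamples(4000, 3000, 256))  # 10280025
```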
Second, in the extraction phase during training, we use random positions over the whole image and random sizes in the range between the small subsample size and the whole image. This corresponds to different zoom levels of the image, and hence we cover different length scales of the image as well as help the generator to learn both global and local image properties. The different zoom levels of the extracted samples can also be interpreted as effectively changing the distance from camera to object, which helps the generator to generalize better along the optical axis of the camera in addition to the axes perpendicular to it.
Third, an advantage of only processing small subsamples is that the peak memory consumption during the training and evaluation phases is set by the subsample size, not by the size of the full high-resolution image. Larger images “only” lead to larger computation times during the evaluation phase, because more subsamples need to be processed, but they do not increase the GPU memory requirements. Nevertheless, it is, of course, possible to fully parallelize the processing of the image subsamples if more computational resources are available.
Each training batch can be extracted from only one or, of course, also from several independent images.
In the translation phase, we use overlapping subsamples and average, for each pixel, the color values obtained from the different samples. Our experiments have shown that this improves the image quality by reducing noise and by avoiding discontinuities of objects at the borders of neighboring samples.
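The merging step can be sketched as follows; this is a minimal NumPy version of the overlap-and-average strategy, where tile size and stride are free parameters and `generator` can be any callable mapping a tile to a translated tile of the same shape.

```python
import numpy as np

def translate_tiled(image, generator, tile, stride):
    """Translate a large image of shape (C, H, W) tile by tile, with
    overlapping tiles, and average the per-pixel results. The averaging
    suppresses noise and seams between neighboring tiles."""
    c, h, w = image.shape
    out = np.zeros((c, h, w), dtype=np.float64)
    weight = np.zeros((h, w), dtype=np.float64)
    ys = list(range(0, h - tile, stride)) + [h - tile]  # cover the bottom border
    xs = list(range(0, w - tile, stride)) + [w - tile]  # cover the right border
    for y in ys:
        for x in xs:
            out[:, y:y + tile, x:x + tile] += generator(image[:, y:y + tile, x:x + tile])
            weight[y:y + tile, x:x + tile] += 1.0
    return out / weight  # per-pixel average over all overlapping tiles

# Sanity check with an identity "generator": stitching must reproduce the input.
img = np.random.rand(3, 300, 400)
restored = translate_tiled(img, lambda t: t, tile=128, stride=64)
print(np.abs(restored - img).max())  # ~0.0
```

Since each pixel is covered by several tiles, peak memory stays bounded by the tile size while the output covers the full image.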
Let us finally remark that, since training and evaluation operate on subsamples, the image sizes in the two domains can be chosen independently. It is also possible to train models on different image sizes than those on which they are evaluated later. Of course, a trained generator can also be applied to further images.
4 Results and Discussion
In this section, we present results of the domain translation obtained with different high-resolution image sizes and styles. In all cases, the original and transformed images are too large to be properly presented in a PDF document, so we have downscaled them to a reasonable file size. To provide a detailed view of some images, we present characteristic image details in the appendix.
Concerning the style of the images, we present different domain adaptation ranges, meaning that in some images the two domains are very different and in others they are very similar. All domain translations in this paper have been performed in a low-variance “one-shot” setting, in which both domains are defined by only one high-resolution image each.
All images in this paper have been translated by the procedure described in Sec. 3 with a fixed subsample size. Moreover, a Unit-like GAN setting [26] with shared latent space and a configuration as listed in Tab. 1 in the appendix has been used. To underline the small memory consumption of the procedure, we emphasize that we have trained and evaluated all the NNs for the images shown in this paper on a “small” standard desktop GPU, an Nvidia Quadro M2000 with only 4 GB of GPU memory.
As a first example to demonstrate the performance of the presented method, we show in Fig. 3 the domain translation of a panorama image taken in the Swiss Alps towards the style of the Scottish Highlands, which are very different-looking image domains. The original image has a resolution of more than 50 megapixels, and the target style is defined by an image of about 20 megapixels. We see that the target style is well adopted by the translation on both local and global length scales. The clear blue sky is transformed throughout into a cloudy one, and grass as well as rocks receive the style of the brownish Scottish Highland landscape. Interestingly, only some snow fields are interpreted as water while others are translated to brown earth, which nicely demonstrates that the NN does not simply repaint the different areas but takes their surroundings and meaning into account. Also, the image details are very well preserved, as we show by the subsamples presented in the appendix (see Fig. 8). Even single trees, small paths, and blades of grass keep their structure.
A second panorama image style transfer is presented in Fig. 4, showing a street scenery. With this example, we address the situation of very similar domains. Both the original as well as the target image show the same street, but the images have been taken from different positions and under different weather and lighting conditions. The original image has a resolution of about 45 megapixels. Also in this case of similar styles, the procedure performs well, and it preserves both local and global image information. Image details are, again, provided in Fig. 8 in the appendix.
Figure 5 shows a street scene as well, but with more complex image content. Beyond the street, grass, and trees, this image also contains a sidewalk, more complex road markings, a car, traffic lights and signs, and some houses in the background. Here, both the original and the target image have the same resolution of about 20 megapixels. Again, we focus on the case of rather similar domains and take as target an image showing the same street, but photographed shortly before sunset and from another position, whereas the source was shot in bright daylight. Again, the image is well transformed to the target style and contains all details which define the scene. (See Fig. 8 in the appendix for image details.)
In Fig. 6, we demonstrate once more that our method is capable of dealing with larger domain differences. To this end, we show the cross-translation of a street and a dirt road, both images having a size of about 20 megapixels. The top row of this figure shows the original photographs, and the bottom row shows the translated images in the domain that is defined by the respective other photograph. The generator has learned to transform the asphalt street into a dirty and stony surface, while grass and sky keep their structure and are only translated in color. (See Fig. 8 in the appendix for image details.)
We finish the results by focusing on cross-translations of traffic signs in Fig. 7. Here, we show two groups of traffic sign images, on the left and on the right, respectively. The top row shows the original images and the bottom row the translated ones. The images in the top row are assigned pairwise to the two domains, and in the bottom row, the same traffic signs are shown in the domain defined by the respective other image. Each of the translations has been performed on images of about 6 megapixels. In all cases, the structure and the content of the images are well translated. The only exception is that the colors yellow and green of the traffic light symbol are not preserved (left column). However, this is expected, since the color yellow does not occur in the target domain (second column). From this perspective, this observation is in agreement with the goal of reaching the target domain, but it also makes clear how important the choice of the target data set is for the translation task.
We note that, despite the huge resolutions that were processed, the training time of the whole GAN setting was on the order of only one day, even on the small Nvidia Quadro GPU. One of our general observations is that using smaller image subsamples helps to improve the local microstructure in the translated image, while larger subsamples rather preserve global information. Finally, we note that we have verified the generalization of our procedure by training the NN only on a smaller part of the high-resolution image but finally transforming the whole one. By this procedure, there were areas in the big image which the NN had never seen during training. In all our test cases, the procedure generalized very well, with no visible artifacts or mis-translations, as long as no crucial image content had been cut off from the training process.
5 Conclusion

In this paper, we have introduced a method which makes it possible to perform unpaired domain translation on high-resolution images. It is based on the idea of not processing the whole image at once but applying the training and evaluation to subsamples of random size and position which are downscaled to a fixed small image size. With this method, we were able to create high-quality domain translations which fulfill micro- and macro-consistency and preserve image details well. We performed training and evaluation of the GAN setting on a “small” standard desktop GPU, which underlines that high-resolution domain transfer does not require large and expensive GPU hardware or clusters. We have applied the method to various domain translations covering the range from very similar to very different domains.
We see potential applications of this procedure especially in the case of high-quality and high-resolution (test) data generation for different use cases as, e. g., autonomous driving.
We generally propose to apply the idea of processing data with NNs not as a whole but on (random) overlapping subsamples also to other kinds of data types and in other application fields. For example, one might use similar approaches in the field of three-dimensional objects or meshes [33, 34, 35], where parts of the object could be processed separately. Another field is graph data [36, 37, 38, 39], where one could work on single subgraphs instead of the whole big graph.
Neural Network Architecture
|Generator||Filter size (num), Norm, Activ.|
|Down-Convolution||(64), –, LeakyReLU|
|Down-Convolution||(128), –, LeakyReLU|
|Down-Convolution||(256), –, LeakyReLU|
|Down-Convolution||(512), –, LeakyReLU|
|Residual ()||(512), Inst.-Norm, ReLU|
|Residual (, shared)||(512), Inst.-Norm, ReLU|
|Residual ()||(512), Inst.-Norm, ReLU|
|Up-Convolution||(256), –, LeakyReLU|
|Up-Convolution||(128), –, LeakyReLU|
|Up-Convolution||(64), –, LeakyReLU|
|Up-Convolution||(3), –, Tanh|
|Discriminator||Filter size (num), Norm, Activ.|
|Down-Convolution||(64), –, LeakyReLU|
|Down-Convolution||(128), –, LeakyReLU|
|Down-Convolution||(256), –, LeakyReLU|
|Down-Convolution||(512), –, LeakyReLU|
|Down-Convolution||(1), –, LeakyReLU|
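To clarify how the generator layers of Tab. 1 fit together, the following sketch walks an input tensor’s shape through the network, assuming stride-2 down- and up-convolutions and resolution-preserving residual blocks. The strides and the residual block counts are our assumptions for illustration; they are not recorded in the table.

```python
def generator_shapes(h, w):
    """Track the (channels, height, width) shape of a 3-channel input of size
    (h, w) through the generator of Tab. 1, assuming each down-convolution
    halves and each up-convolution doubles the spatial resolution."""
    shapes = [(3, h, w)]
    for c in (64, 128, 256, 512):  # down-convolutions
        h, w = h // 2, w // 2
        shapes.append((c, h, w))
    shapes.append((512, h, w))     # residual blocks (shared and non-shared)
    for c in (256, 128, 64, 3):    # up-convolutions; the last layer uses Tanh
        h, w = h * 2, w * 2
        shapes.append((c, h, w))
    return shapes

for shape in generator_shapes(256, 256):
    print(shape)  # (3, 256, 256) down to a (512, 16, 16) bottleneck and back
```

The encoder-decoder symmetry makes the output shape equal the input shape, as required for image-to-image translation.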
References

-  K. Jones and J. Galliers. Evaluating natural language processing systems: An analysis and review. Comput. Linguistics, 24, 1995.
-  R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, 2011.
-  H. Martinez, Y. Bengio, and G. Yannakakis. Learning deep physiological models of affect. IEEE Comput. Intell. Mag., 8:20–33, 2013.
-  E. Cambria and A. Hussain. Sentic Computing: Techniques, Tools, and Applications. Springer-Verlag, 2012.
-  S. Lawrence, C. Giles, and S. Fong. Natural language grammatical inference with recurrent neural networks. IEEE Trans. Knowledge Data Eng., 12:126–140, 2000.
-  A. Taylor, S. Leblanc, and N. Japkowicz. Anomaly detection in automobile control network data with long short-term memory networks. In 2016 IEEE International Conference on Data Science and Advanced Analytics (DSAA), pages 130–139, 2016.
-  Mayu Sakurada and Takehisa Yairi. Anomaly detection using autoencoders with nonlinear dimensionality reduction. In Proceedings of the MLSDA 2014 2Nd Workshop on Machine Learning for Sensory Data Analysis, MLSDA’14, pages 4:4–4:11, New York, NY, USA, 2014. ACM.
-  Arthur Zimek, Erich Schubert, and Hans-Peter Kriegel. A survey on unsupervised outlier detection in high-dimensional numerical data. Statistical Analysis and Data Mining: The ASA Data Science Journal, 5(5):363–387, 2012.
-  Sarah M. Erfani, Sutharshan Rajasegarar, Shanika Karunasekera, and Christopher Leckie. High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning. Pattern Recognition, 58:121–134, 2016.
-  P. F. Felzenszwalb, R. B. Girshick, D. Mcallester, and D. Ramanan. Object detection with discriminatively trained part-based models. IEEE Trans. Pattern Anal. Mach. Intell., 32:1627, 2010.
-  P. Viola and M. J. Jones. Robust real-time object detection. Int. J. of Comput. Vision, 57:87, 2001.
-  K. K. Sung and T. Poggio. Example-based learning for view-based human face detection. IEEE Trans. Pattern Anal. Mach. Intell., 20:39–51, 2002.
-  E. Ohn-Bar and M. M. Trivedi. To boost or not to boost? on the limits of boosted trees for object detection. ICPR, 2016.
-  C. Wojek, P. Dollar, B. Schiele, and P. Perona. Pedestrian detection: An evaluation of the state of the art. IEEE Trans. Pattern Anal. Mach. Intell., 34:743, 2012.
-  H. Kobatake and Y. Yoshinaga. Detection of spicules on mammogram based on skeleton analysis. IEEE Trans. Med. Imag., 15:235–245, 1996.
-  X. Bai, X. Wang, L. J. Latecki, W. Liu, and Z. Tu. Active skeleton for non-rigid object detection. ICCV, 2010.
-  Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. abs/1406.2661, 2014.
-  E. L. Denton, S. Chintala, A. Szlam, and R. Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. NIPS, pages 1486–1494, 2015.
-  A. Radford, L. Metz, and S. Chintala. Unsupervised representation learning with deep convolutional generative adversarial networks. 2015.
-  T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen. Improved techniques for training gans. 2016.
-  J. Zhao, M. Mathieu, and Y. LeCun. Energy-based generative adversarial network. 2016.
-  D. P. Kingma and M. Welling. Auto-encoding variational bayes. ICLR, 2014.
-  Leon A. Gatys, Alexander S. Ecker, and Matthias Bethge. Image style transfer using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 2414–2423, 2016.
-  Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, and Alexei A. Efros. Image-to-image translation with conditional adversarial networks. CoRR, abs/1611.07004, 2016.
-  Jun-Yan Zhu, Taesung Park, Phillip Isola, and Alexei A Efros. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Computer Vision (ICCV), 2017 IEEE International Conference on, 2017.
-  Ming-Yu Liu, Thomas Breuel, and Jan Kautz. Unsupervised image-to-image translation networks. CoRR, abs/1703.00848, 2017.
-  Xun Huang, Ming-Yu Liu, Serge Belongie, and Jan Kautz. Multimodal Unsupervised Image-to-image Translation. In ECCV, 2018.
-  Xun Huang and Serge J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. CoRR, abs/1703.06868, 2017.
-  Artsiom Sanakoyeu, Dmytro Kotovenko, Sabine Lang, and Björn Ommer. A style-aware content loss for real-time hd style transfer. abs/1807.10201, 2018.
-  Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, and Bryan Catanzaro. High-resolution image synthesis and semantic manipulation with conditional gans. CoRR, abs/1711.11585, 2017.
-  Yijun Li, Ming-Yu Liu, Xueting Li, Ming-Hsuan Yang, and Jan Kautz. A closed-form solution to photorealistic image stylization. CoRR, abs/1802.06474, 2018.
-  Christian Ledig, Lucas Theis, Ferenc Huszar, Jose Caballero, Andrew P. Aitken, Alykhan Tejani, Johannes Totz, Zehan Wang, and Wenzhe Shi. Photo-realistic single image super-resolution using a generative adversarial network. CoRR, abs/1609.04802, 2016.
-  Jiajun Wu, Chengkai Zhang, Tianfan Xue, William T Freeman, and Joshua B Tenenbaum. Learning a probabilistic latent space of object shapes via 3d generative-adversarial modeling. In Advances in Neural Information Processing Systems, pages 82–90, 2016.
-  Chiyu ‘Max’ Jiang and Philip Marcus. Hierarchical detail enhancing mesh-based shape generation with 3d generative adversarial network. 2017.
-  Angel X. Chang, Thomas A. Funkhouser, Leonidas J. Guibas, Pat Hanrahan, Qi-Xing Huang, Zimo Li, Silvio Savarese, Manolis Savva, Shuran Song, Hao Su, Jianxiong Xiao, Li Yi, and Fisher Yu. Shapenet: An information-rich 3d model repository. CoRR, abs/1512.03012, 2015.
-  Hanjun Dai, Bo Dai, and Le Song. Discriminative embeddings of latent variable models for structured data. CoRR, abs/1603.05629, 2016.
-  Hanjun Dai, Elias B. Khalil, Yuyu Zhang, Bistra Dilkina, and Le Song. Learning combinatorial optimization algorithms over graphs. CoRR, abs/1704.01665, 2017.
-  William L. Hamilton, Rex Ying, and Jure Leskovec. Representation learning on graphs: Methods and applications. CoRR, abs/1709.05584, 2017.
-  Aditya Grover and Jure Leskovec. Node2vec: Scalable feature learning for networks. In Proceedings of the 22Nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 855–864, New York, NY, USA, 2016. ACM.