The problem of generating photo-realistic images, either from sampled noise or conditioned on other inputs such as images, texts or labels, has been heavily investigated. Despite the recent progress of deep generative models such as PixelCNN [2], VAE [3] and GANs [4], generating high-resolution images remains a difficult task. This is mainly because modeling the distribution of pixels is difficult, and the trained models easily introduce blurry components and artifacts when the dimensionality becomes high. Several approaches have been proposed to alleviate the problem, usually by leveraging multi-scale training [5, 6] or incorporating prior information [7].
In addition to the general image synthesis problem, the task of image inpainting can be described as follows: given an incomplete image as input, fill in the missing parts with semantically and visually plausible contents. We are interested in this problem for several reasons. First, it is a well-motivated task for a common scenario where we may want to remove unwanted objects from pictures or restore damaged photographs. Second, while purely unsupervised learning may be challenging for large inputs, we show in this work that the problem becomes more constrained and tractable when we train in a multi-stage, self-supervised manner and leverage the high-frequency information in the known region.
Context-encoder [8] is one of the first works that apply deep neural networks to image inpainting. It trains a deep generative model that maps an incomplete image to a complete image using a reconstruction loss and an adversarial loss. While the adversarial loss significantly improves the inpainting quality, the results are still quite blurry and contain notable artifacts. In addition, we found that it fails to produce reasonable results for larger inputs like 512x512 images, showing that it is unable to generalize to the high-resolution inpainting task. More recently, [1] improved the result by using dilated convolution and an additional local discriminator. However, it is still limited to relatively small images and holes due to the spatial support of the model.
Yang et al. [9] propose to use style transfer for image inpainting. More specifically, they initialize the hole with the output of context-encoder, and then improve the texture by using style transfer techniques [10] to propagate the high-frequency textures from the boundary into the hole. This shows that matching neural features not only transfers artistic styles, but can also synthesize real-world images. The approach is optimization-based and applicable to images of arbitrary sizes. However, the computation is costly and it takes a long time to inpaint a large image.
Our approach overcomes the limitations of the aforementioned methods. Similar to [9], we decouple the inpainting process into two stages: inference and translation. In the inference stage, we train an Image2Feature network that initializes the hole with a coarse prediction and extracts its features. The prediction is blurry but contains high-level structure information about the hole. In the translation stage, we train a Feature2Image network that transforms the feature map back into a complete image. It refines the contents in the hole and outputs a complete image with sharp and realistic texture. The main difference with [9] is that, instead of relying on optimization, we model texture refinement as a learning problem. Both networks can be trained end-to-end and, with the trained models, inference can be done in a single forward pass, which is much faster than iterative optimization.
To ease the difficulty of training the Feature2Image network, we design a “patch-swap” layer that propagates the high-frequency texture details from the boundary to the hole. The patch-swap layer takes the feature map as input, and replaces each neural patch inside the hole with the most similar patch on the boundary. We then use the new feature map as the input to the Feature2Image network. By re-using the neural patches on the boundary, the feature map contains sufficient details, making the high-resolution image reconstruction feasible.
We note that dividing the training into the two stages of Image2Feature and Feature2Image greatly reduces the dimensionality of possible mappings between input and output. Injecting prior knowledge with patch-swap further guides the training process such that it is easier to find the optimal transformation. Compared with GL inpainting [1], we generate sharper and better inpainting results at size 256x256. Our approach also scales to higher resolution (i.e. 512x512), which GL inpainting fails to handle. Compared with neural inpainting [9], our results have comparable or better visual quality in most examples. In particular, our synthesized content blends with the boundary more seamlessly. Our approach is also much faster.
The main contributions of this paper are: (1) We design a learning-based inpainting system that is able to synthesize the missing parts of a high-resolution image with high-quality contents and textures. (2) We propose a novel and robust training scheme that addresses the issue of feature manipulation and avoids under-fitting. (3) We show that our trained model can achieve performance comparable with the state of the art and generalize to other tasks like style transfer.
2 Related Work
Image generation with generative adversarial networks (GANs) has made remarkable progress recently. The vanilla GAN [4] has shown promising performance in generating sharp images, but training instability makes it hard to scale to higher-resolution images. Several techniques have been proposed to stabilize the training process, including DCGAN [11], energy-based GAN [12], Wasserstein GAN (WGAN) [13, 14], WGAN-GP [15], BEGAN [16], LSGAN [17] and the more recent Progressive GANs [18]. A task more relevant to inpainting is conditional image generation. For example, Pix2Pix [24], Pix2Pix HD [35] and CycleGAN [25] translate images across different domains using paired or unpaired data. Using deep neural networks for image inpainting has also been studied in [26, 8, 9, 27, 1].
Our patch-swap can be related to recent work on neural style transfer. Gatys et al. [28] first formulate style transfer as an optimization problem that combines texture synthesis with content reconstruction. As an alternative, [29, 30, 2] use neural-patch-based similarity matching between the content and style images for style transfer. Li and Wand [10] optimize the output image such that each neural patch matches a similar neural patch in the style image. This enables arbitrary style transfer at the cost of expensive computation. [31] proposes a fast approximation to [10] in which it constructs the feature map directly and uses an inverse network to synthesize the image in a feed-forward manner.
3.1 Problem Description
We formalize the task of image inpainting as follows: suppose we are given an incomplete input image I_0, with R and R̄ representing the missing region (the hole) and the known region (the boundary) respectively. We would like to fill in R with plausible contents and combine them with R̄ into a new, complete image I. Evaluating the quality of inpainting is mostly subject to human perception, but ideally the result should meet the following criteria: 1. It has sharp and realistic-looking textures; 2. It contains meaningful content and is coherent with the boundary; and 3. It looks like what appears in the ground truth image (if available). In our context, R can be either a single hole or multiple holes. It may also come in arbitrary shapes, placed at random locations of the image.
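The compositing of predicted hole contents with the known boundary can be sketched with a binary mask (a minimal numpy sketch of the idea; the function name and mask layout are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def composite(incomplete, prediction, hole_mask):
    """Blend predicted hole contents with the known boundary.

    incomplete : H x W x 3 image with arbitrary values inside the hole
    prediction : H x W x 3 image predicted by the model
    hole_mask  : H x W binary mask, 1 inside the hole, 0 on the boundary
    """
    m = hole_mask[..., None].astype(incomplete.dtype)
    # keep the known boundary pixels, take the prediction inside the hole
    return (1.0 - m) * incomplete + m * prediction

# toy example: 4x4 image with a 2x2 hole in the center
img = np.ones((4, 4, 3))
pred = np.full((4, 4, 3), 0.5)
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1
out = composite(img, pred, mask)
```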
3.2 System Overview
Our system divides the image inpainting task into three steps:
Inference: We use an Image2Feature network to fill the incomplete image with coarse contents and extract a feature map from the inpainted image.
Matching: We use patch-swap on the feature map to match neural patches from the high-resolution boundary to the coarsely inferred hole.
Translation: We use a Feature2Image network to translate the feature map to a complete image.
The entire pipeline is illustrated in Fig. 3.
We introduce the steps of training the Image2Feature and Feature2Image networks separately. For illustration purposes, we assume the size of the input image is 256x256x3 and the hole has size 128x128.
Inference: Training Image2Feature Network
The goal of the Image2Feature network is to fill in the hole with a coarse prediction. During training, the input to the Image2Feature translation network is the 256x256x3 incomplete image I_0 and the output is a feature map F_1 of size 64x64x256. The network consists of an FCN-based module G1, which comprises a down-sampling front end, multiple intermediate residual blocks and an up-sampling back end. G1 is followed by the initial layers of the 19-layer VGG network [34]. Here we use the filter pyramid of the VGG network as a higher-level representation of images, similar to [28]. At first, I_0 is given as input to G1, which produces a coarse prediction of size 128x128 for the hole. This prediction is then embedded into I_0, forming a complete image I_1, which again passes through the VGG19 network to get the activation of relu3_1 as F_1. F_1 has size 64x64x256. We also use an additional PatchGAN discriminator [24] for adversarial training.
Within the FCN-based module, the down-sampling front end consists of three convolutional layers, each with stride 2. The intermediate part has nine residual blocks stacked together. The up-sampling back end is the reverse of the front end and consists of three transposed convolutions with stride 2. Every convolutional layer is followed by batch normalization [ioffe2015batch] and a ReLU activation, except for the last layer, which outputs the image. We also use dilated convolution in all residual blocks. A similar architecture has been used in [35] for image synthesis and in [1] for inpainting. Different from [35], we use dilated layers to increase the size of the receptive field. Compared with [1], our receptive field is also larger, given that we have more down-sampling blocks and more dilated layers in the residual blocks.
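The receptive-field claim can be checked with the standard recursion for stacked convolutions. The configuration below (three stride-2 convolutions, then nine residual blocks assumed to contain two dilated 3x3 convolutions each) is a hypothetical reading of the architecture described above, not the exact published layer list:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    layers: list of (kernel, stride, dilation) tuples.
    Standard recursion: rf += (k - 1) * d * jump; jump *= s.
    """
    rf, jump = 1, 1
    for k, s, d in layers:
        rf += (k - 1) * d * jump
        jump *= s
    return rf

# hypothetical front end: three 3x3 stride-2 convolutions
front = [(3, 2, 1)] * 3
# nine residual blocks, two 3x3 convs each; dilation 2 is an assumption
rf_dilated = receptive_field(front + [(3, 1, 2)] * 18)
rf_plain = receptive_field(front + [(3, 1, 1)] * 18)
```

With the deep, dilated stack, the receptive field grows from 303 to 591, illustrating why dilation enlarges the spatial support cheaply.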
During training, the overall loss function is defined as the weighted sum of a reconstruction loss on the feature map and an adversarial loss:

L_G1 = λ1 L_recon + λ2 L_adv.

Weighted masks restrict the reconstruction loss to be computed only on the hole of the feature map. We also assign a higher weight to the overlapping pixels between the hole and the boundary to ensure the composite is coherent. The weights of the VGG19 network are loaded from the ImageNet pre-trained model and are fixed during training.
The adversarial loss is based on Generative Adversarial Networks (GANs) [4] and is defined as:

L_adv = max_D E[ log(D(I_0, I_gt)) + log(1 − D(I_0, I_1)) ].

We use a pair of images as input to the discriminator. Under the setting of adversarial training, the real pair is the incomplete image I_0 and the original image I_gt, while the fake pair is I_0 and the prediction I_1.
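The pair-based adversarial objective can be sketched as follows. The toy discriminator and tensor shapes are illustrative assumptions; the condition and image are stacked along the channel axis before scoring:

```python
import numpy as np

def bce(pred, target, eps=1e-7):
    """Binary cross-entropy between discriminator scores and labels."""
    p = np.clip(pred, eps, 1 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def discriminator_loss(d, incomplete, real, fake):
    """Score (condition, image) pairs: the real pair should be judged
    real, the fake pair fake."""
    real_pair = np.concatenate([incomplete, real], axis=-1)
    fake_pair = np.concatenate([incomplete, fake], axis=-1)
    return bce(d(real_pair), 1.0) + bce(d(fake_pair), 0.0)

# toy discriminator (hypothetical): sigmoid of mean intensity
toy_d = lambda x: 1.0 / (1.0 + np.exp(-float(x.mean())))
loss = discriminator_loss(toy_d,
                          np.zeros((2, 2, 3)), np.ones((2, 2, 3)),
                          np.zeros((2, 2, 3)))
```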
To align the absolute value of each loss, we set the weights λ1 and λ2 accordingly. We use the Adam optimizer for training, with separate learning rates for the generator and the discriminator, and the momentum set to 0.5.
Match: Patch-swap Operation
Patch-swap is an operation which transforms a feature map F into a new feature map F′. The idea is that the coarse prediction inside the hole is blurry and lacks many of the high-frequency details. Intuitively, we would like to propagate the textures from the boundary onto the hole while still preserving the high-level information of the prediction. Instead of operating on the image directly, we use the feature map F as a surrogate for texture propagation. Similarly, we use r and r̄ to denote the regions on F corresponding to the hole R and the boundary R̄ on the image. For each 3x3 neural patch p_i of F overlapping with r, we find the closest-matching neural patch in r̄ based on the following cross-correlation metric:

d(p, p′) = <p, p′> / (||p|| · ||p′||).
Suppose the closest-matching patch of p_i is q_i; we then replace p_i with q_i. After each patch inside the hole region of the feature map has been swapped with its most similar patch on the boundary, overlapping patches are averaged and the output is the new feature map F′. We illustrate the process in Fig. 3.
Measuring the cross-correlation for all neural-patch pairs between the hole and the boundary is computationally expensive. To address this issue, we follow an implementation similar to [31] and speed up the computation using parallel convolution. We summarize the algorithm in the following steps. First, we normalize and stack the neural patches on the boundary, and view the stacked vector as a convolution filter. Next, we apply this convolution filter to the hole region. As a result, at each location of the hole region we obtain a vector of values giving the cross-correlation between the neural patch centered at that location and all patches on the boundary. Finally, we replace each patch in the hole with the boundary patch of maximum cross-correlation. Since the whole process can be parallelized, the amount of time is significantly reduced. In practice, it only takes about 0.1 seconds to process a 64x64x256 feature map.
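A compact numpy sketch of patch-swap, with all pairwise normalized cross-correlations computed in one matrix product (the dense analogue of the parallel convolution described above). Patch extraction is done naively here for clarity; this is an illustration of the operation, not the GPU implementation:

```python
import numpy as np

def extract_patches(fmap, size=3):
    """All overlapping size x size patches of a C x H x W feature map,
    flattened to rows of length C*size*size, plus their top-left coords."""
    c, h, w = fmap.shape
    rows, coords = [], []
    for i in range(h - size + 1):
        for j in range(w - size + 1):
            rows.append(fmap[:, i:i+size, j:j+size].ravel())
            coords.append((i, j))
    return np.array(rows), coords

def patch_swap(fmap, hole_mask, size=3):
    """Replace each neural patch overlapping the hole with its most
    cross-correlated boundary patch, then average overlapping patches.

    fmap: C x H x W feature map; hole_mask: H x W binary (1 = hole).
    """
    c = fmap.shape[0]
    patches, coords = extract_patches(fmap, size)
    in_hole = np.array([hole_mask[i:i+size, j:j+size].any()
                        for i, j in coords])
    boundary = patches[~in_hole]
    # normalized cross-correlation for every (hole, boundary) pair at once
    bnorm = boundary / (np.linalg.norm(boundary, axis=1, keepdims=True) + 1e-8)
    scores = patches[in_hole] @ bnorm.T
    best = boundary[np.argmax(scores, axis=1)]
    out = np.zeros_like(fmap)
    cnt = np.zeros(fmap.shape[1:])
    k = 0
    for (i, j), inside, patch in zip(coords, in_hole, patches):
        p = best[k] if inside else patch
        if inside:
            k += 1
        out[:, i:i+size, j:j+size] += p.reshape(c, size, size)
        cnt[i:i+size, j:j+size] += 1
    return out / cnt  # average overlapping contributions
```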
Translate: Training Feature2Image Translation Network
The goal of the Feature2Image network is to learn a mapping from the swapped feature map to a complete and sharp image. It has a U-Net-style generator similar to [24], except that the number of hidden layers is different. The input to the generator is a feature map of size 64x64x256. The generator has seven convolution blocks and eight deconvolution blocks, and the first six deconvolutional layers are connected with the convolutional layers using skip connections. The output is a complete 256x256x3 image. The network also includes a PatchGAN-based discriminator for adversarial training. However, different from the Image2Feature network, whose discriminator takes a pair of images as input, the input here is a pair consisting of an image and a feature map.
A straightforward training paradigm is to use the output of the Image2Feature network as input to the patch-swap layer, and then use the swapped feature map to train the Feature2Image model. In this way, the feature map is derived from the coarse prediction and the whole system can be trained end-to-end. However, in practice, we found that this leads to poor-quality reconstruction with notable noise and artifacts (Sec. 4). We further observed that using the ground truth as training input gives rise to results of significantly improved visual quality. That is, we use the feature map extracted from the ground truth image as input to the patch-swap layer, and then use the swapped feature map to train the Feature2Image model. Since the ground truth is not accessible at test time, we still use the feature map predicted by the Image2Feature network as input for inference. Note that the Feature2Image model now trains and tests with different types of input, which is not a usual practice for training a machine learning model.
Here we provide some intuition for this phenomenon. Essentially, by training the Feature2Image network we are learning a mapping from the feature space to the image space. Since the predicted feature map is the output of the Image2Feature network, it inherently contains a significant amount of noise and ambiguity. Therefore the feature space made up of such predictions has much higher dimensionality than the feature space made up of ground-truth features. The outcome is that the model easily under-fits, making it difficult to learn a good mapping. Alternatively, by using the ground truth, we are selecting a clean, compact subset of features such that the space of the mapping is much smaller, making it easier to learn. Our experiments also show that the model trained with ground truth generalizes well to noisy input at test time. Similar to [40], we can further improve the robustness by sampling from both the ground truth and the Image2Feature prediction.
The overall loss for the Feature2Image translation network is likewise a weighted sum of a reconstruction loss and an adversarial loss:

L_G2 = λ1 L_recon + λ2 L_adv.

The reconstruction loss is defined on the entire image, between the final output I and the ground truth I_gt.
The adversarial loss is given by the discriminator and is defined as:

L_adv = max_D E[ log(D(F′, I_gt)) + log(1 − D(F′, I)) ],

where F′ is the swapped feature map. The real and fake pairs for adversarial training are (F′, I_gt) and (F′, I) respectively.
When training the Feature2Image network, we again set the loss weights λ1 and λ2 and use separate learning rates for the generator and the discriminator. As with the Image2Feature network, the momentum is set to 0.5.
3.4 Multi-scale Inference
Given the trained models, inference is straightforward and can be done in a single forward pass. The input image successively passes through the Image2Feature network to get the coarse prediction and its feature map, then the patch-swap layer, and finally the Feature2Image network. We then take the hole region of the output and blend it with the known boundary of the input to produce the final result.
Our framework can easily be adapted to multiple scales. The key is that we directly upsample the output of the lower scale and use it as the input to the Feature2Image network of the next scale (after using the VGG network to extract features and applying patch-swap). In this way, we only need the Image2Feature network at the smallest scale to obtain the initial coarse prediction and feature map. At higher scales, we simply take the upsampled output of the previous scale as the coarse prediction and extract its VGG features as input to patch-swap (Fig. 4). Training the Image2Feature network can be challenging at high resolution, but by using the multi-scale approach we can initialize from lower scales instead, allowing us to handle large inputs effectively. We use multi-scale inference in all our experiments.
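The multi-scale inference loop can be sketched as below. The model arguments are placeholders for the trained networks (identity stand-ins are used in the demo), and nearest-neighbor upsampling stands in for whatever resizing the pipeline actually uses:

```python
import numpy as np

def upsample2x(a):
    """Nearest-neighbor 2x upsampling (a stand-in for bilinear resizing)."""
    return a.repeat(2, axis=0).repeat(2, axis=1)

def multiscale_inpaint(incomplete, mask, image2feature, extract, swap,
                       feature2image, scales=2):
    """Coarse-to-fine inference: only the smallest scale runs the
    Image2Feature network; each higher scale upsamples the previous
    output, re-extracts features, patch-swaps, and translates again."""
    _, feat = image2feature(incomplete)          # smallest scale only
    image = feature2image(swap(feat, mask))
    for _ in range(scales - 1):
        image = upsample2x(image)
        mask = upsample2x(mask)
        feat = extract(image)                    # e.g. VGG relu3_1 features
        image = feature2image(swap(feat, mask))
    return image

# shape trace with identity stand-ins (hypothetical models)
result = multiscale_inpaint(
    np.zeros((4, 4)), np.zeros((4, 4)),
    image2feature=lambda x: (x, x), extract=lambda x: x,
    swap=lambda f, m: f, feature2image=lambda f: f, scales=3)
```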
4.1 Experiment Setup
We train and test separately on two public datasets: COCO [41] and ImageNet CLS-LOC [42]. The number of training images is 118,287 for COCO and 1,281,167 for ImageNet CLS-LOC. We compare with content-aware fill (CAF), context encoder (CE) [8], neural patch synthesis (NPS) [9] and global-local inpainting (GLI) [1]. For CE, NPS, and GLI, we used the publicly available trained models. CE and NPS are trained to handle fixed holes, while GLI and CAF can handle arbitrary holes. For a fair evaluation, we experimented with both settings: fixed hole and random hole. For fixed hole, we compare with CAF, CE, NPS, and GLI on images of size 512x512 from the ImageNet test set, with a 224x224 hole located at the image center. For random hole, we compare with CAF and GLI, using COCO test images resized to 256x256; the hole size ranges from 32 to 128 and the hole is placed anywhere on the image. We observed that for small holes on 256x256 images, refining with patch-swap and the Feature2Image network is optional, as our Image2Feature network already generates satisfying results most of the time. For 512x512 images, however, it is necessary to apply multi-scale inpainting, starting from size 256x256. To address both sizes and to apply the multi-scale scheme, we train the Image2Feature network at 256x256 and train the Feature2Image network at both 256x256 and 512x512. During training, we use early stopping, meaning we terminate the training when the loss on the held-out validation set converges. On our NVIDIA GeForce GTX 1080Ti GPU, training typically takes one day per model, and test time is around 400ms for a 512x512 image.
Quantitative Comparison Table 1 shows the numerical comparison between our approach, CE [8], GLI [1] and NPS [9]. We adopt three quality measurements: mean error, SSIM, and inception score [13]. Since the context encoder only inpaints 128x128 images and we failed to train the model for larger inputs, we directly use its 128x128 results and bilinearly upsample them to 512x512. Here we also compute the SSIM over the hole area only. We see that although our mean error is higher, we achieve the best SSIM and inception score among all the methods, showing our results are closer to the ground truth in terms of human perception. Besides, mean error is not an optimal measure for inpainting, as it favors averaged colors and blurry results and does not directly account for the end goal of perceptual quality.
| Method | Mean Error | SSIM | Inception Score |
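Restricting a metric to the hole area, as done for SSIM above, can be sketched for the mean error (a minimal numpy example; the exact error definition used in Table 1 is not specified here, so mean absolute error is an assumption):

```python
import numpy as np

def hole_mean_error(output, ground_truth, hole_mask):
    """Mean absolute pixel error restricted to the hole region.
    (SSIM over the hole can be computed analogously, e.g. with
    skimage.metrics.structural_similarity on the cropped region.)"""
    m = hole_mask.astype(bool)
    return np.abs(output[m] - ground_truth[m]).mean()

out = np.array([[0.0, 0.5], [1.0, 1.0]])
gt = np.array([[0.0, 1.0], [1.0, 0.0]])
mask = np.array([[0, 1], [0, 1]])
err = hole_mean_error(out, gt, mask)  # (|0.5-1| + |1-0|) / 2 = 0.75
```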
Visual Results Fig. 9 shows our comparison with GLI [1] on random-hole cases. Our method handles multiple situations better, such as object removal, object completion and texture generation, while GLI's results are noisier and less coherent. From Fig. 10, we also find that our results are better than GLI's most of the time for large holes. This shows that directly training a network for large-hole inpainting is difficult, and it is where our patch-swap can be most helpful. In addition, our results have significantly fewer artifacts than GLI. Compared with CAF, we better predict the global structure and fill in contents more coherent with the surrounding context. Compared with CE, we can handle much larger images and the synthesized contents are much sharper. Compared with NPS, whose results mostly depend on CE, we have similar or better quality most of the time, and our algorithm also runs much faster. Meanwhile, our final results improve over the intermediate output of Image2Feature, demonstrating that the patch-swap and Feature2Image transformation are beneficial and necessary.
User Study To better evaluate and compare with other methods, we randomly select 400 images from the COCO test set and randomly distribute them to 20 users. Each user is given 20 images with holes, together with the inpainting results of NPS, GLI, and our method, and is asked to rank the results in non-increasing order (meaning two results may be judged to have similar quality). We collected 399 valid votes in total and found that our results are ranked best most of the time: in 75.9% of the rankings our result receives the highest score. In particular, our results are overwhelmingly better than GLI's, receiving a higher score 91.2% of the time, largely because GLI does not handle large holes well. Our results are also comparable with NPS's, ranking higher or the same 86.2% of the time.
Comparison Compared with [9], our approach is not only much faster but also has several other advantages. First, the Feature2Image network synthesizes the entire image, while [9] only optimizes the hole part. By aligning the color of the boundary between the output and the input, we can slightly adjust the tone so that the hole blends with the boundary more seamlessly and naturally (Fig. 10). Second, our model is trained to directly model the statistics of real-world images and works well at all resolutions, while [9] is unable to produce sharp results when the image is small. Compared with other learning-based inpainting methods, our approach is more general, as we can handle larger inputs like 512x512. In contrast, [8] can only inpaint 128x128 images, while [1] is limited to 256x256 images and to holes smaller than 128x128.
Ablation Study For the Feature2Image network, we observed that replacing the deconvolutional layers in the decoder with resize-convolution layers resolves the checkerboard patterns described in [43] (Fig. 5, left). We also tried using only a plain pixel-wise loss instead of the perceptual loss, which gives blurrier inpainting (Fig. 5, middle). Additionally, we experimented with different activation layers of VGG19 for feature extraction and found that relu3_1 works better than relu2_1 and relu4_1.
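The resize-convolution alternative to deconvolution can be sketched as nearest-neighbor upsampling followed by an ordinary "same" convolution (a single-channel numpy sketch; the kernel and padding choices are illustrative). Because every output pixel sees the kernel with the same overlap, the uneven-overlap pattern that causes checkerboard artifacts in strided deconvolution does not arise:

```python
import numpy as np

def resize_conv(img, kernel):
    """Nearest-neighbor upsample by 2, then a stride-1 'same' convolution."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    kh, kw = kernel.shape
    up = np.pad(up, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode="edge")
    h, w = up.shape[0] - kh + 1, up.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (up[i:i+kh, j:j+kw] * kernel).sum()
    return out

smooth = np.full((3, 3), 1.0 / 9.0)
res = resize_conv(np.ones((4, 4)), smooth)  # constant input stays constant
```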
We can also use iterative inference by running the Feature2Image network multiple times. At each iteration, the final output is fed to the VGG network and patch-swap, and then given to the Feature2Image network again for inference. We found that iteratively applying Feature2Image improves the sharpness of the texture but sometimes accumulates artifacts near the boundary.
For the Image2Feature network, an alternative is to use a vanilla context encoder [8] to generate the initial coarse inference. However, we found that our model produces better results, as it is much deeper and leverages a fully convolutional architecture with dilated layers.
As discussed in Sec. 3.3, an important practice for guaranteeing successful training of the Feature2Image network is to use the ground truth image as input rather than the output of the Image2Feature network. Fig. 5 also shows that training with the prediction from the Image2Feature network gives very noisy results, while the models trained with ground truth, or further fine-tuned with a mixture of ground truth and predictions, produce satisfying inpainting.
Our framework can easily be applied to real-world tasks. Fig. 6 shows examples of using our approach to remove unwanted objects from photographs. Since our network is fully convolutional, it is straightforward to apply it to photos of arbitrary sizes. It is also able to fill in holes of arbitrary shapes, and can handle much larger holes than [1].
The Feature2Image network essentially learns a universal function to reconstruct an image from a swapped feature map, and can therefore also be applied to other tasks. For example, by first constructing a swapped feature map from a content image and a style image, we can use the network to reconstruct a new image for style transfer. Fig. 7 shows examples of applying our Feature2Image network, trained on COCO, to arbitrary style transfer. Although the network is agnostic to the styles being transferred, it is still capable of generating satisfying results and runs in real time. This demonstrates the strong generalization ability of our learned model, given that it is trained only on the COCO dataset, unlike other style transfer methods.
Our approach is very good at recovering a partially missing object like a plane or a bird (Fig. 10). However, it can fail if the image has overly complicated structures and patterns, or if a major part of an object is missing such that the Image2Feature network is unable to provide a good inference (Fig. 8).
We propose a learning-based approach to synthesizing missing contents in high-resolution images. Our model inpaints an image with realistic and sharp contents in a feed-forward manner. We show that training can be simplified by breaking the task into multiple stages, where the mapping function in each stage has smaller dimensionality. It is worth noting that our approach is a meta-algorithm, and we could naturally explore a variety of network architectures and training techniques to improve the inference and the final result. We also expect that a similar idea of multi-stage, multi-scale training could be used to directly synthesize high-resolution images from sampled noise.
This work was supported in part by the ONR YIP grant N00014-17-S-FO14; the CONIX Research Center, one of six centers in JUMP, a Semiconductor Research Corporation (SRC) program sponsored by DARPA; the Andrew and Erna Viterbi Early Career Chair; the U.S. Army Research Laboratory (ARL) under contract number W911NF-14-D-0005; and Adobe. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
-  Iizuka, S., Simo-Serra, E., Ishikawa, H.: Globally and Locally Consistent Image Completion. ACM Transactions on Graphics (Proc. of SIGGRAPH 2017) 36(4) (2017) 107:1–107:14
-  van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., et al.: Conditional image generation with pixelcnn decoders. In: Advances in Neural Information Processing Systems. (2016) 4790–4798
-  Kingma, D.P., Welling, M.: Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114 (2013)
-  Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., Bengio, Y.: Generative adversarial nets. In: Advances in neural information processing systems. (2014) 2672–2680
-  Zhang, H., Xu, T., Li, H., Zhang, S., Huang, X., Wang, X., Metaxas, D.: Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. arXiv preprint arXiv:1612.03242 (2016)
-  Denton, E.L., Chintala, S., Fergus, R., et al.: Deep generative image models using a laplacian pyramid of adversarial networks. In: Advances in neural information processing systems. (2015) 1486–1494
-  Nguyen, A., Yosinski, J., Bengio, Y., Dosovitskiy, A., Clune, J.: Plug & play generative networks: Conditional iterative generation of images in latent space. arXiv preprint arXiv:1612.00005 (2016)
-  Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., Efros, A.A.: Context encoders: Feature learning by inpainting.
-  Yang, C., Lu, X., Lin, Z., Shechtman, E., Wang, O., Li, H.: High-resolution image inpainting using multi-scale neural patch synthesis. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR). (July 2017)
-  Li, C., Wand, M.: Combining markov random fields and convolutional neural networks for image synthesis. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 2479–2486
-  Radford, A., Metz, L., Chintala, S.: Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434 (2015)
-  Zhao, J., Mathieu, M., LeCun, Y.: Energy-based generative adversarial network. arXiv preprint arXiv:1609.03126 (2016)
-  Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X.: Improved techniques for training gans. In: Advances in Neural Information Processing Systems. (2016) 2234–2242
-  Arjovsky, M., Chintala, S., Bottou, L.: Wasserstein gan. arXiv preprint arXiv:1701.07875 (2017)
-  Gulrajani, I., Ahmed, F., Arjovsky, M., Dumoulin, V., Courville, A.: Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028 (2017)
-  Berthelot, D., Schumm, T., Metz, L.: Began: Boundary equilibrium generative adversarial networks. arXiv preprint arXiv:1703.10717 (2017)
-  Mao, X., Li, Q., Xie, H., Lau, R.Y., Wang, Z., Smolley, S.P.: Least squares generative adversarial networks. arXiv preprint ArXiv:1611.04076 (2016)
-  Karras, T., Aila, T., Laine, S., Lehtinen, J.: Progressive growing of gans for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196 (2017)
-  GANs comparison without cherry-picking. https://github.com/khanrc/tf.gans-comparison (Accessed: 2017-10-29)
-  Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., Lee, H.: Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396 (2016)
-  Kim, J., Kwon Lee, J., Mu Lee, K.: Accurate image super-resolution using very deep convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 1646–1654
-  Dong, C., Loy, C.C., He, K., Tang, X.: Learning a deep convolutional network for image super-resolution. In: European Conference on Computer Vision, Springer (2014) 184–199
-  Ledig, C., Theis, L., Huszár, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., et al.: Photo-realistic single image super-resolution using a generative adversarial network. arXiv preprint arXiv:1609.04802 (2016)
-  Isola, P., Zhu, J.Y., Zhou, T., Efros, A.A.: Image-to-image translation with conditional adversarial networks. arXiv preprint arXiv:1611.07004 (2016)
-  Zhu, J.Y., Park, T., Isola, P., Efros, A.A.: Unpaired image-to-image translation using cycle-consistent adversarial networks. arXiv preprint arXiv:1703.10593 (2017)
-  Yeh, R.A., Chen, C., Lim, T.Y., Schwing, A.G., Hasegawa-Johnson, M., Do, M.N.: Semantic image inpainting with deep generative models. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 5485–5493
-  Wang, W., Huang, Q., You, S., Yang, C., Neumann, U.: Shape inpainting using 3d generative adversarial network and recurrent convolutional networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2017) 2298–2306
-  Gatys, L.A., Ecker, A.S., Bethge, M.: A neural algorithm of artistic style. arXiv preprint arXiv:1508.06576 (2015)
-  Elad, M., Milanfar, P.: Style transfer via texture synthesis. IEEE Transactions on Image Processing 26(5) (2017) 2338–2351
-  Frigo, O., Sabater, N., Delon, J., Hellier, P.: Split and match: Example-based adaptive patch sampling for unsupervised style transfer. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 553–561
-  Chen, T.Q., Schmidt, M.: Fast patch-based style transfer of arbitrary style. arXiv preprint arXiv:1612.04337 (2016)
-  Barnes, C., Shechtman, E., Finkelstein, A., Goldman, D.B.: Patchmatch: A randomized correspondence algorithm for structural image editing. ACM Trans. Graph. 28(3) (2009) 24–1
-  Barnes, C., Shechtman, E., Goldman, D.B., Finkelstein, A.: The generalized patchmatch correspondence algorithm. In: European Conference on Computer Vision, Springer (2010) 29–43
-  Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
-  Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., Catanzaro, B.: High-resolution image synthesis and semantic manipulation with conditional gans. arXiv preprint arXiv:1711.11585 (2017)
-  Zhang, R., Isola, P., Efros, A.A., Shechtman, E., Wang, O.: The unreasonable effectiveness of deep features as a perceptual metric. arXiv preprint arXiv:1801.03924 (2018)
-  Gatys, L.A., Ecker, A.S., Bethge, M.: Image style transfer using convolutional neural networks. In: Computer Vision and Pattern Recognition (CVPR), 2016 IEEE Conference on, IEEE (2016) 2414–2423
-  Johnson, J., Alahi, A., Fei-Fei, L.: Perceptual losses for real-time style transfer and super-resolution. In: European Conference on Computer Vision, Springer (2016) 694–711
-  Dosovitskiy, A., Brox, T.: Generating images with perceptual similarity metrics based on deep networks. In: Advances in Neural Information Processing Systems. (2016) 658–666
-  Zheng, S., Song, Y., Leung, T., Goodfellow, I.: Improving the robustness of deep neural networks via stability training. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. (2016) 4480–4488
-  Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision, Springer (2014) 740–755
-  Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: Imagenet large scale visual recognition challenge. International Journal of Computer Vision 115(3) (2015) 211–252
-  Odena, A., Dumoulin, V., Olah, C.: Deconvolution and checkerboard artifacts. Distill (2016)