
An Efficient Explorative Sampling Considering the Generative Boundaries of Deep Generative Neural Networks

Deep generative neural networks (DGNNs) have achieved realistic and high-quality data generation. In particular, the adversarial training scheme has been applied to many DGNNs and has exhibited powerful performance. Despite recent advances in generative networks, identifying the image generation mechanism still remains challenging. In this paper, we present an explorative sampling algorithm to analyze the generation mechanism of DGNNs. Our method efficiently obtains samples with identical attributes from a query image from the perspective of the trained model. We define generative boundaries which determine the activation of nodes in the internal layer and probe inside the model with this information. To handle a large number of boundaries, we obtain the essential set of boundaries using optimization. By gathering samples within the region surrounded by generative boundaries, we can empirically reveal the characteristics of the internal layers of DGNNs. We also demonstrate that our algorithm can find more homogeneous, model-specific samples compared to variants of the ϵ-based sampling method.





1 Introduction

The primary objective of a generative model is to generate realistic data. Recently proposed adversarial training schemes, such as generative adversarial networks (GANs), have exhibited remarkable performance not only in the quality of each instance but also in the diversity of the generated data. Despite these improvements, the generation mechanism inside generative models remains poorly understood.

In general, a generative model maps a point in the latent space to a sample in the data space. In other words, data instances are embedded as latent vectors from the perspective of the trained generative model. The latent space is divided by boundaries derived from the structure of the model, and each vector in the space carries generation information according to which side of the boundaries it is placed on. We utilize these characteristics to examine the generation mechanism of the model.

When we select an internal layer and a latent vector in the DGNN, there exists a corresponding region established by a set of boundaries. Samples in this region have the same activation pattern and deliver similar generation information to the next layer. The details of the delivered information can be identified indirectly by comparing the generated outputs from these samples. Given a DGNN trained to generate human faces, for example, if we identify a region in which samples share a certain hair color but vary in other characteristics (eyes, mouth, etc.), such a region would be related to the generation of that hair color.

However, it is non-trivial to obtain samples from a region with the desired properties of DGNNs because (1) thousands of generative boundaries are involved in the generation mechanism and (2) a linear modification in the input dimension may cause a highly non-linear change in the internal units and the output. Revisiting the previous example, there may exist regions with different hair colors, distinct attributes, or their combinations. Furthermore, a small linear modification of the vector in the latent space may change the entire output [23]. To overcome this difficulty, an efficient algorithm to identify the appropriate region and explore the space is necessary.

In this paper, we propose an efficient, explorative sampling algorithm to reveal the characteristics of the internal layers of DGNNs. Our algorithm consists of two steps: (1) to handle the large number of boundaries in DGNNs, it approximates the set of critical boundaries of the query, i.e., the given latent vector, using the Bernoulli dropout approach [3]; (2) it then efficiently obtains samples which share the same attributes as the query from the perspective of the trained DGNN by expanding a tree-like exploration structure [10] until it reaches the boundaries of the region.

The advantages of our algorithm are twofold: (1) it guarantees sample acceptance in high-dimensional spaces, where rejection sampling based on the Monte Carlo method easily fails when the region area is unknown; (2) it samples from the perspective of the model, whereas the commonly used ϵ-based sampling [4] is not precise enough to obtain samples bounded by complex, non-spherical generative boundaries [9]. We experimentally verify that our algorithm obtains more consistent samples than ϵ-based sampling methods on deep convolutional GANs (DCGAN) [18] and progressive growing of GANs (PGGAN) [7].

2 Related Work

Generative Adversarial Networks

The adversarial training between a generator and a discriminator has greatly improved the quality and diversity of samples generated by DGNNs [6]. Many generative models have been proposed to generate room images [18] and realistic human face images [7, 8]. Despite these improvements, the generation mechanisms of GANs are not clearly analyzed yet. Recent studies revealed the relationship between the input latent space and the output data space of a trained GAN by showing that a manipulation of the latent vectors changes attributes in the generated data [18, 27]. The generation roles of some neural nodes in a trained GAN have been identified with an intervention technique [2].

Explaining deep neural networks

One can explain an output of neural networks by sensitivity analysis, which aims to figure out which portion of an input contributes to the output. The sensitivity can be calculated from class activation probabilities [26], relevance scores [14] or gradients [22]. DeconvNet [25], LIME [21] and SincNet [20] train a new model to explain the trained model. Geometric analysis can also reveal the internal structure indirectly [15, 12, 5]. Activation maximization [4] and GANs [17] have been used to explain neural networks by examples. Our method is an example-based explanation which brings a new geometric perspective to the analysis of DGNNs.

Geometric analysis on the inside of deep neural networks

Geometric analysis attempts to explain the internal working process by relating geometric properties, such as the boundaries dividing the input space or the manifolds along those boundaries, to the output of the model. The depth of a network with nonlinear activations was shown to contribute to the formation of the boundary shape [15]. This property yields complex, non-convex regions surrounded by boundaries derived from internal layers. Although such regions are complicated, each region for a single classification in DNN classifiers is shown to be topologically connected [5]. It has also been shown that the manifolds learned by DNNs, and the distributions over them, are highly related to the representation capability of a network [12].

Example-based explanation of the decision of the model

Activation maximization is one of the example-based methods used to visualize the preferred inputs of neurons in a layer and the corresponding patterns in hidden layers [4]. The learned deep neural representation can be described by preferred inputs because it is related to the activation of specific neurons [17]. The reliability of examples for explanation has also been questioned by examining the connectivity among the justified samples [9].

3 Generative Boundary Aware Sampling in Deep Generative Neural Networks

This section presents our main contribution, the explorative generative boundary aware sampling (E-GBAS) algorithm, which can obtain samples sharing identical attributes from the perspective of the DGNN. First, we define the terms used in our algorithm. Then we explain E-GBAS, which comprises (1) an approximate representation of generative boundaries and (2) an efficient stochastic exploration to obtain samples in the complex, non-convex generative region.

3.1 Deep Generative Neural Networks

Although there are various architectures of DGNNs, we represent them in a unified form. Given a DGNN with $L$ layers, the model is decomposed into $G = g_L \circ g_{L-1} \circ \cdots \circ g_1$, where $z$ is a vector in the latent space $\mathcal{Z}$. We write $f^l = g_l \circ \cdots \circ g_1$ for the feature map up to the $l$-th layer, and $f^l_i(z)$ denotes the value of its $i$-th element, with $N_l$ elements in total. In general, the operation $g_l$ includes linear transformations and a non-linear activation function.

3.2 Generative Boundary and Region

The latent space of the DGNN is divided by hypersurfaces learned during training. The network makes the final generation based on these boundaries. We refer to these boundaries as the generative boundaries.

Definition 1 (Generative Boundary (GB)).

The $i$-th generative boundary at the $l$-th layer is defined as
$$B^l_i = \{ z \in \mathcal{Z} \mid f^l_i(z) = 0 \}.$$

In general, there are numerous boundaries in the $l$-th layer of the network, and the configuration of the boundaries comprises the region. Because we are mainly interested in the region created by a set of boundaries, we state the definition of a halfspace, which is a basic component of the region.

Definition 2 (Halfspace).

Let $s \in \{-1, 0, +1\}^{N_l}$ be a halfspace indicator for the $l$-th layer. Each element $s_i$ indicates either or both of the two sides of the halfspace divided by the $i$-th GB. We define the halfspace as
$$H^l_i(s) = \{ z \in \mathcal{Z} \mid s_i \cdot f^l_i(z) \ge 0 \}.$$

The region can be represented by the intersection of the halfspaces in the $l$-th layer. For the case where $s_i = 0$, the halfspace $H^l_i(s)$ is defined as the entire latent space $\mathcal{Z}$, so the $i$-th GB does not contribute to the region.

Definition 3 (Generative Region (GR)).

Given a halfspace indicator $s$ in the $l$-th layer, let the set of corresponding halfspaces be $\mathcal{H}^l_s = \{ H^l_i(s) \mid s_i \neq 0 \}$. Then the generative region is defined as
$$GR^l_s = \bigcap_{H \in \mathcal{H}^l_s} H.$$

For a network with a single layer ($l = 1$), the generative boundaries are linear hyperplanes. The generative region is constructed by those boundaries and appears as a convex polytope. However, if the layers are stacked with nonlinear activation functions ($l \ge 2$), the generative boundaries are bent, so the generative region will have a complicated non-convex shape [15, 19].
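To make the single-layer polytope picture concrete, here is a minimal sketch in Python. The weights `W` and biases `b` are hypothetical toy values, not a trained model: each unit induces a linear generative boundary, and the sign pattern of the pre-activations identifies which generative region a latent vector falls in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-D latent space and a single linear layer with 4 units.
# Each unit i induces a generative boundary {z : w_i . z + b_i = 0};
# the sign pattern of the pre-activations identifies the generative region.
W = rng.normal(size=(4, 2))   # hypothetical layer weights
b = rng.normal(size=4)        # hypothetical biases

def sign_pattern(z):
    """Halfspace indicator of z at the first layer (identifies a convex polytope)."""
    return tuple(np.sign(W @ z + b).astype(int))

z_query = np.array([0.3, -0.7])
pattern = sign_pattern(z_query)

# A tiny perturbation usually stays in the same region; a distant point usually does not.
nearby = sign_pattern(z_query + 1e-3)
print(pattern, nearby, pattern == nearby)
```

With stacked nonlinear layers the same sign-pattern idea applies to $f^l_i(z)$, but the regions are no longer convex polytopes.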

3.3 Smallest Supporting Generative Boundary Set

Decision boundaries play an important role in the classification task, as samples in the same decision region have the same class label. In the same context, we manipulate the generative boundaries and regions of DGNNs.

Specifically, we want to collect samples that are placed in the same generative region and have identical attributes from the perspective of the DGNN. To define this property formally, we first state the condition under which samples share the neural representation.

Definition 4 (Neural Representation Sharing (NRS)).

A pair of latent vectors $(z_1, z_2)$ satisfies the neural representation sharing condition in the $l$-th layer if
$$\mathrm{sign}(f^l_i(z_1)) = \mathrm{sign}(f^l_i(z_2)), \quad \forall i \in \{1, \dots, N_l\}.$$

It is practically challenging to find samples that satisfy the above condition because a large number of generative boundaries exist in the latent space, as shown in Figure 2(a). The variety of information represented by thousands of generative boundaries makes it difficult to identify which boundary is in charge of which piece of information. We therefore relax the neural representation sharing condition by considering only a set of significant boundaries.
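As an illustration of the (relaxed) NRS condition, the following sketch checks sign-pattern agreement for a toy two-layer ReLU feature map. The weights are hypothetical; `shares_nrs` compares sign patterns either over all units (Definition 4) or over a chosen subset of boundaries (the relaxation below).

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer feature map with hypothetical weights:
# f^1(z) = relu(W1 @ z), f^2(z) = relu(W2 @ f^1(z)).
W1 = rng.normal(size=(6, 2))
W2 = rng.normal(size=(5, 6))

def f(z, layer):
    h = np.maximum(W1 @ z, 0.0)
    return h if layer == 1 else np.maximum(W2 @ h, 0.0)

def shares_nrs(z1, z2, layer, subset=None):
    """True if sign(f^l_i(z1)) == sign(f^l_i(z2)) for all i (or all i in subset)."""
    s1, s2 = np.sign(f(z1, layer)), np.sign(f(z2, layer))
    idx = range(len(s1)) if subset is None else subset
    return all(s1[i] == s2[i] for i in idx)

z = np.array([0.5, 0.2])
print(shares_nrs(z, z + 1e-4, layer=2))          # a tiny step usually shares NRS
print(shares_nrs(z, -z, layer=2, subset=[0, 1])) # relaxed NRS over two boundaries
```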

Definition 5 (Relaxed NRS).

Given a subset $S \subseteq \{1, \dots, N_l\}$, a pair of latent vectors $(z_1, z_2)$ satisfies the relaxed neural representation sharing condition if
$$\mathrm{sign}(f^l_i(z_1)) = \mathrm{sign}(f^l_i(z_2)), \quad \forall i \in S.$$

Then, we must select the important boundaries for the relaxed NRS in the $l$-th layer. We believe that not all nodes deliver important information to the final output of the model, as some nodes may carry information of low relevance [16]. Furthermore, it has been shown that a subset of features mainly contributes to the construction of outputs in GANs [2]. We define the smallest supporting generative boundary set (SSGBS), which minimizes the influence of minor (non-critical) generative boundaries.

Definition 6 (Smallest Supporting Generative Boundary Set (SSGBS)).

Given the generator $G$ and a query $z_0$, for the $l$-th layer and any real value $\delta > 0$, let $\hat{G}(z_0; s) = g_L(\cdots g_{l+1}(|s| \odot f^l(z_0)))$ denote the generation with the features of the removed boundaries masked out. If there exists an indicator $s$ such that
$$\| G(z_0) - \hat{G}(z_0; s) \| \le \delta,$$
and there is no $s'$ with $\| s' \|_0 < \| s \|_0$ such that
$$\| G(z_0) - \hat{G}(z_0; s') \| \le \delta,$$
then we denote the set $\{ B^l_i \mid s_i \neq 0 \}$ as the smallest supporting generative boundary set (SSGBS).

In the same context, we denote the generative region that corresponds to the SSGBS as the smallest supporting generative region (SSGR).

(a) Original generative boundaries and the generated digit image of the query
(b) Generative boundaries in the SSGBS and the generated digit image of the query
Figure 2: Results of optimizing the Bernoulli parameter for a given arbitrary query in a DCGAN trained on MNIST with a 2-D latent space [18, 11]. The red box denotes the generated digit image and the blue box denotes the magnified area near the query. (a) shows all generative boundaries in the first hidden layer ($l = 1$). (b) shows the SSGBS after optimization with the sparsity constraint.

It is impractical to explore all combinations of boundaries to determine the optimal SSGBS, owing to the exponential combinatorial search space. (For example, a simple fully connected layer with $N_l$ outputs generates up to $2^{N_l}$ generative boundary sets.) To avoid this problem, we use the Bernoulli dropout approach [3] to obtain the SSGBS. We define this dropout generation as $\hat{G}(z; m) = g_L(\cdots g_{l+1}(m \odot f^l(z)))$, where $m \sim \mathrm{Ber}(\theta)$ and $\odot$ is an element-wise multiplication. We optimize the Bernoulli parameter $\theta$ to minimize the loss function
$$\mathcal{L}(\theta) = \mathbb{E}_{m \sim \mathrm{Ber}(\theta)} \big[ \| G(z_0) - \hat{G}(z_0; m) \| \big] + \lambda \| \theta \|_1,$$
which quantifies the degradation of the generated image together with the sparsity of the Bernoulli mask.


We iteratively update the parameter $\theta$ using gradient descent to minimize the loss function above. Then we obtain the SSGBS from the optimized Bernoulli parameter $\theta$ with a proper threshold in the $l$-th layer. At each iteration, we apply an element-wise multiplication between the feature $f^l(z_0)$ and the sampled mask $m$ to obtain the masked feature value, and feed it forward to obtain the modified output.

Input: $z_0$: a query, $G$: a DGNN model, $l$: a target layer
Output: $\theta$: the optimized Bernoulli parameter for the SSGBS

1:  Initialize $\theta$
2:  while not converged do
3:     Sample $m \sim \mathrm{Ber}(\theta)$
4:     $\tilde{f}^l \leftarrow m \odot f^l(z_0)$, $\hat{G}(z_0; m) \leftarrow g_L(\cdots g_{l+1}(\tilde{f}^l))$
5:     Compute loss $\mathcal{L}(\theta)$
6:     Update $\theta$ with $\nabla_\theta \mathcal{L}$
7:  return $\theta$
Algorithm 1 Bernoulli Mask Optimization (BerOpt)
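The following is a minimal numpy sketch of the idea behind BerOpt, under simplifying assumptions: a toy linear "generator", the sampled Bernoulli mask replaced by a continuous relaxation (the parameter used directly as a soft mask), and a numerical gradient instead of backpropagation. All weights and the sparsity coefficient are hypothetical, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy "generator": latent (2) -> hidden features (8) -> output (4); hypothetical weights.
A = rng.normal(size=(8, 2))
B = 0.3 * rng.normal(size=(4, 8))
z0 = rng.normal(size=2)

def g_masked(theta):
    # Continuous relaxation: use theta itself as a soft mask on the hidden
    # features instead of sampling m ~ Ber(theta) (a simplification of BerOpt).
    return B @ (theta * (A @ z0))

target = g_masked(np.ones(8))   # unmasked output G(z0)
lam = 0.05                      # sparsity weight (assumed value)

def loss(theta):
    # Reconstruction error of the masked generation plus an L1 sparsity penalty.
    return np.sum((target - g_masked(theta)) ** 2) + lam * np.sum(theta)

theta = np.full(8, 0.5)
lr = 0.05
for _ in range(300):            # plain projected gradient descent on the relaxation
    grad = np.zeros(8)
    for j in range(8):          # central-difference gradient, adequate for a toy
        e = np.zeros(8); e[j] = 1e-5
        grad[j] = (loss(theta + e) - loss(theta - e)) / 2e-5
    theta = np.clip(theta - lr * grad, 0.0, 1.0)

print(np.round(theta, 2))       # low entries mark minor (removable) boundaries
```

Features whose masking barely changes the output are pushed toward zero by the L1 term, which is the role the minor generative boundaries play in the SSGBS selection.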

After obtaining the optimal Bernoulli parameter $\theta$, we define an optimal halfspace indicator $s^*$ with a proper probability threshold $\eta$. We set the elements of $s^*$ to zero to remove GBs which have minor contributions to the generation mechanism. That is,
$$s^*_i = \mathbb{1}[\theta_i \ge \eta] \cdot \mathrm{sign}(f^l_i(z_0)),$$
where $\mathbb{1}[\cdot]$ is the indicator function. Representing the SSGBS and SSGR is straightforward from Definition 6 with $s^*$. Figure 2 shows the generative boundaries and the generated digit of the SSGBS without and with the optimized Bernoulli parameter $\theta$. The generated digits indicate that removing the minor generative boundaries has no significant effect on the output.
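A one-line illustration of this thresholding rule; the optimized parameters, feature values, and threshold below are hypothetical:

```python
import numpy as np

theta = np.array([0.97, 0.12, 0.88, 0.40, 0.99])   # hypothetical optimized parameters
feat  = np.array([1.3, -0.2, -0.7, 0.5, 2.1])      # hypothetical f^l(z0) values
eta = 0.5                                          # probability threshold (assumed)

# Keep the query's side of each surviving boundary; zero out minor boundaries.
s = np.where(theta >= eta, np.sign(feat).astype(int), 0)
print(s)  # -> [ 1  0 -1  0  1]
```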

3.4 Explorative Generative Boundary Aware Sampling

After obtaining the SSGR, we gather samples in the region and compare their generated outputs. Because the SSGR possesses a complicated shape, simple neighborhood sampling methods such as ϵ-based sampling cannot guarantee exploration inside the SSGR. To guarantee the relaxed NRS, we apply a GB-constrained exploration algorithm inspired by the rapidly-exploring random tree (RRT) algorithm [10], which was invented for robot path planning in complex configuration spaces. We refer to the modified exploration algorithm as generative boundary constrained RRT (GB-RRT). Figure 3(a) depicts the explorative trajectories of GB-RRT.

Input: $z_0$: a query, $GR$: the SSGR
Parameters: $\mathcal{I}$: a sampling interval, $N$: a maximum number of iterations, $\gamma$: a step size
Output: $\mathcal{X}$: samples in the SSGR

1:  Initialize queue $\mathcal{X} \leftarrow \{z_0\}$
2:  for $i = 1, \dots, N$ do
3:     Sample $z_{rand}$ from $\mathcal{I}$
4:     $z_{near} \leftarrow \mathrm{NearestNeighbor}(\mathcal{X}, z_{rand})$, $z_{new} \leftarrow z_{near} + \gamma \, (z_{rand} - z_{near}) / \| z_{rand} - z_{near} \|$
5:     if $z_{new} \in GR$ and $z_{new} \notin \mathcal{X}$
6:     then $\mathcal{X} \leftarrow \mathcal{X} \cup \{z_{new}\}$
7:  return $\mathcal{X}$
Algorithm 2 Generative boundary constrained rapidly-exploring random tree (GB-RRT)
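A compact sketch of the GB-RRT idea in a toy 2-D latent space, with three hypothetical linear boundaries standing in for the SSGBS. As in Algorithm 2, each candidate node grows from its nearest tree node toward a random point, and is kept only if it stays in the query's region.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy SSGBS in 2-D: three linear boundaries w_i . z + b_i = 0 (hypothetical).
W = np.array([[1.0, 0.3], [-0.4, 1.0], [0.2, -1.0]])
b = np.array([0.1, 0.2, 0.9])

z0 = np.array([0.0, 0.0])
s0 = np.sign(W @ z0 + b)                 # halfspace indicator of the query

def in_region(z):
    return bool(np.all(np.sign(W @ z + b) == s0))

def gb_rrt(n_iter=2000, step=0.05, bound=2.0):
    nodes = [z0]
    for _ in range(n_iter):
        z_rand = rng.uniform(-bound, bound, size=2)          # sample the interval
        near = min(nodes, key=lambda n: np.linalg.norm(n - z_rand))
        d = z_rand - near
        z_new = near + step * d / (np.linalg.norm(d) + 1e-12)
        if in_region(z_new):                                 # GB constraint
            nodes.append(z_new)
    return np.array(nodes)

samples = gb_rrt()
print(len(samples), all(in_region(z) for z in samples))
```

Rejected candidates mark steps that would cross a generative boundary, so the accepted tree stays inside the (possibly non-convex) region by construction.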
Figure 3: (a) Visualization of the explorative trajectories of GB-RRT for a given query (red dot) in the first hidden layer ($l = 1$) of DCGAN-MNIST and (b) generated outputs from uniformly randomly chosen samples (blue dots in (a)). The red box denotes the generated output of the query.

We name the entire sample exploration process for DGNNs, which comprises finding the SSGBS in an arbitrary layer and efficiently sampling in the SSGR, explorative generative boundary aware sampling (E-GBAS).

Input: $z_0$: a query, $G$: a DGNN model, $l$: a target layer, $\eta$: a threshold for SSGBS selection
Output: $\mathcal{X}$: a set of samples in the same SSGR as $z_0$

1:  Optimize $\theta \leftarrow \mathrm{BerOpt}(z_0, G, l)$
2:  Compute $s^*$, where $s^*_i = \mathbb{1}[\theta_i \ge \eta] \cdot \mathrm{sign}(f^l_i(z_0))$
3:  Derive the SSGR from $s^*$
4:  Sample a set $\mathcal{X} \leftarrow \text{GB-RRT}(z_0, \mathrm{SSGR})$
5:  return $\mathcal{X}$
Algorithm 3 Explorative generative boundary aware sampling (E-GBAS)

4 Experimental Evaluations

This section presents analytical results of our algorithm and empirical comparisons with variants of the ϵ-based sampling method. We select three different DGNNs: (1) DCGAN [18] with the Wasserstein distance [1] trained on MNIST, (2) PGGAN [7] trained on the church dataset of LSUN [24], and (3) PGGAN trained on the celebA dataset [13].

The ϵ-based sampling collects samples based on a distance metric; as baselines, we sample in the ϵ-ball centered at the query under two distance metrics. In practice, the value of ϵ is manually selected. For a fair comparison, we use the sets of accepted and rejected samples, $\mathcal{X}_{acc}$ and $\mathcal{X}_{rej}$, obtained by E-GBAS to set ϵ. We take the average $\bar{z}$ of the accepted samples, which can represent the middle point of the SSGR, and then calculate ϵ from the minimum and maximum distances between $\bar{z}$ and $\mathcal{X}_{rej}$.

Figure 4 visualizes the calculation of ϵ in DCGAN-MNIST. After ϵ is set for one metric, the ϵ for the other metric is determined so that the two ϵ-balls have the same volume. Figure 5 shows a geometric comparison of each sampling method in the first hidden layer ($l = 1$) of DCGAN-MNIST.
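One plausible reading of this ϵ selection, sketched on synthetic accepted/rejected sets (all data below is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical E-GBAS outputs: accepted samples inside the SSGR and
# rejected samples outside it.
X_acc = rng.normal(loc=0.0, scale=0.3, size=(50, 2))
X_rej = rng.normal(loc=1.5, scale=0.3, size=(20, 2))

z_bar = X_acc.mean(axis=0)                 # center of the SSGR (proxy)
d_rej = np.linalg.norm(X_rej - z_bar, axis=1)

eps_min = d_rej.min()                      # tightest ball touching the rejects
eps_max = d_rej.max()
eps_avg = 0.5 * (eps_min + eps_max)        # the min/max compromise for comparison
print(round(eps_min, 3), round(eps_avg, 3), round(eps_max, 3))
```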

Figure 4: (a) The accepted samples (black dots), rejected samples (red dots) and the average of the accepted samples (blue dot) from E-GBAS. (b) Visualization of selecting ϵ to make the ball's area close to that of the SSGR. The ϵ-balls for each distance: minimum (red), maximum (orange) and the average of the min/max distances (blue).
(a) E-GBAS
Figure 5: Geometric comparison of (a) E-GBAS and (b), (c) the two ϵ-based sampling methods. Although the two ϵ-balls cover some area of the GR, they cannot cover all of it and may include infeasible area. In contrast, E-GBAS samples only from the feasible area of the GR.

4.1 Qualitative Comparison of E-GBAS and -based Sampling

We first demonstrate how the generated samples vary depending on whether they are inside or outside the obtained GR. As shown in Figure 1, we mainly compare the samples generated by E-GBAS (blue region) to the samples from ϵ-based sampling (red region). Given a query and a target layer, E-GBAS explores the SSGR and obtains samples that satisfy the relaxed NRS. Figure 7 depicts the images generated from E-GBAS and ϵ-based sampling. We observe that the images generated by E-GBAS share more consistent attributes (e.g., composition of view and hair color), which is an expected property of NRS. For example, in the first row of the celebA results, the sampled images share the hair color and the angle of the face while differing in other characteristics such as hair style. In the LSUN dataset, the second row of results shares the composition of the buildings (right-aligned) and the weather (cloudy).

We also analyze the generative mechanism of DGNNs along the depth of the network by changing the target layer. Figure 6 shows examples and the standard deviations of the images generated by E-GBAS for each target layer. From the results, we discover that the variation of images becomes more localized as the target layer is set deeper. We argue that the GBs in the lower layers attempt to maintain abstract and generic information (e.g., the angle of a scene or the entire shape of a face), while those in the deeper layers tend to retain concrete and localized information (e.g., the edge of a wall or a mustache).

(b) PGGAN-celebA
Figure 6: Examples of variations of generated images for each target layer. The first row shows the standard deviations of generated images for each target layer.
Model:                        DCGAN-MNIST                   | PGGAN-LSUN             | PGGAN-celebA
Layer #:                      1      2      3      4        | 2      4      6        | 2      4      6
ϵ-based sampling (metric 1):  0.0819 0.0711 0.0718 0.0343   | 0.4951 0.4971 0.4735   | 0.5150 0.4994 0.4892
ϵ-based sampling (metric 2):  0.0834 0.0722 0.0720 0.0344   | 0.4641 0.4322 0.3365   | 0.4859 0.4799 0.3384
E-GBAS:                       0.0781 0.0694 0.0675 0.0323   | 0.3116 0.2558 0.1748   | 0.2980 0.2789 0.1446
Table 1: Standard deviations of the generated images for each sampling method. The layer number indicates the index of the layer where the GB constraint is applied in each DGNN; a higher number is closer to the output. E-GBAS shows the lowest standard deviations compared to the ϵ-based sampling methods.

4.2 Quantitative Results

The Similarity of Activation Values in Discriminator

A DGNN with adversarial training has a discriminator that measures how realistic the output created by the generator is. During training, the discriminator learns features to judge the quality of generated images. From this perspective, we expect the generated outputs of samples which satisfy NRS to have similar feature values in the internal layers of the discriminator. We use the cosine similarity between the feature values of the samples and the query; the relative evaluation of NRS for each sampling method is the average of these similarities. Denoting the discriminator $D = d_L \circ \cdots \circ d_1$, the query $z_0$ and the obtained set of samples $\mathcal{X}$, the similarity of feature values in the $l$-th layer is defined as
$$\mathrm{Sim}^l(z_0, \mathcal{X}) = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \cos\!\big( d^l(G(z_0)),\, d^l(G(x)) \big),$$
where $d^l = d_l \circ \cdots \circ d_1$ and each operation $d_l$ consists of linear transformations and a non-linear activation function.
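A toy illustration of this metric, with a fixed random projection standing in for the discriminator's feature map (the weights and data are hypothetical, not a trained discriminator):

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical discriminator feature map at layer l: random projection + ReLU.
Wd = rng.normal(size=(16, 8))

def d_feat(x):
    return np.maximum(Wd @ x, 0.0)

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

x_query = rng.normal(size=8)                  # generated output of the query
X = x_query + 0.1 * rng.normal(size=(30, 8))  # outputs of NRS-sharing samples

# Average cosine similarity between sample features and the query feature.
sim = np.mean([cos(d_feat(x), d_feat(x_query)) for x in X])
print(round(sim, 3))
```

Sample sets that stay close to the query in feature space score near 1, which is how Table 2 ranks the sampling methods.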


Table 2 shows the results of measuring the similarity for each internal layer in the discriminator.

Layer #                1      2      3      4

DCGAN-MNIST
ϵ-based (metric 1)     0.722  0.819  0.864  0.991
ϵ-based (metric 2)     0.727  0.823  0.867  0.991
E-GBAS                 0.747  0.838  0.878  0.992

PGGAN-LSUN
ϵ-based (metric 1)     0.578  0.602  0.957  0.920
ϵ-based (metric 2)     0.551  0.613  0.960  0.946
E-GBAS                 0.578  0.637  0.967  1.000

PGGAN-celebA
ϵ-based (metric 1)     0.678  0.718  0.785  0.963
ϵ-based (metric 2)     0.684  0.720  0.789  0.965
E-GBAS                 0.702  0.733  0.804  0.970

Table 2: Comparison of the average cosine similarity of the feature values in the discriminator. The number indicates the index of the layer in the discriminator.

Variations of Generated Image

To quantify the consistency in the attributes of the generated images, we calculate the standard deviation of the generated images sampled by E-GBAS and by the variants of ϵ-based sampling:
$$\sigma(\mathcal{X}) = \sqrt{ \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} \big( G(x) - \bar{G} \big)^2 }, \qquad \bar{G} = \frac{1}{|\mathcal{X}|} \sum_{x \in \mathcal{X}} G(x),$$
computed per pixel and averaged over all pixels. The experimental results are shown in Table 1. We randomly select 10 query samples and compute the average standard deviation of the generated sets. Table 1 indicates that E-GBAS yields lower standard deviations (i.e., samples more consistent with the input query) than ϵ-based sampling across all three models and target layers.
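The per-pixel standard deviation used here can be sketched as follows, on synthetic image sets (the arrays are hypothetical; a more consistent set yields a smaller value):

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical generated image sets (40 images of 28x28 pixels each),
# one tightly clustered set and one widely varying set.
imgs_egbas = 0.5 + 0.02 * rng.normal(size=(40, 28, 28))
imgs_eps   = 0.5 + 0.10 * rng.normal(size=(40, 28, 28))

def image_std(imgs):
    # Per-pixel standard deviation over the set, averaged over all pixels.
    return float(imgs.std(axis=0).mean())

# A more consistent (homogeneous) image set gives a smaller value.
print(image_std(imgs_egbas) < image_std(imgs_eps))
```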

5 Conclusion

In this study, we propose an explorative algorithm for analyzing the GR to identify the generation mechanism in DGNNs. In particular, we probe the internal layers of a trained DGNN without additional training by introducing the GBs of the DGNN. To gather samples which satisfy the NRS condition in the complicated, non-convex GR, we apply GB-RRT. We empirically show that the collected samples in the latent space under the NRS condition share the same generative properties. We also qualitatively show that the NRS in distinct layers implies different generative attributes. Furthermore, the concept of the proposed algorithm is general and can also be used to probe the decision boundaries of a classifier, so we believe that our method can be extended to different types of deep neural networks.


This work was supported by the Institute for Information & communications Technology Planning & Evaluation (IITP) grant funded by the Ministry of Science and ICT (MSIT), Korea (No. 2017-0-01779, XAI) and the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (NRF-2017R1A1A1A05001456).

Figure 7: Generated samples from a query input (left), ϵ-based sampling (middle) and E-GBAS sampling (right) for the three DGNNs (DCGAN-MNIST, PGGAN-LSUN and PGGAN-celebA). We confirm that the images generated by E-GBAS carry more consistent information compared to ϵ-based sampling.


  • [1] M. Arjovsky, S. Chintala, and L. Bottou (2017) Wasserstein generative adversarial networks. In International Conference on Machine Learning, pp. 214–223. Cited by: §4.
  • [2] D. Bau, J. Zhu, H. Strobelt, Z. Bolei, J. B. Tenenbaum, W. T. Freeman, and A. Torralba (2019) GAN dissection: visualizing and understanding generative adversarial networks. In International Conference on Learning Representations, Cited by: §2, §3.3.
  • [3] C. Chang, E. Creager, A. Goldenberg, and D. Duvenaud (2018) Explaining image classifiers by adaptive dropout and generative in-filling. arXiv preprint arXiv:1807.08024. Cited by: §1, §3.3.
  • [4] D. Erhan, A. Courville, and Y. Bengio (2010) Understanding representations learned in deep architectures. Département d'Informatique et Recherche Opérationnelle, University of Montreal, QC, Canada, Tech. Rep. 1355, pp. 1. Cited by: §1, §2, §2.
  • [5] A. Fawzi, S. Moosavi-Dezfooli, P. Frossard, and S. Soatto (2018) Empirical study of the topology and geometry of deep networks. In IEEE Conference on Computer Vision and Pattern Recognition. Cited by: §2, §2.
  • [6] I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio (2014) Generative adversarial nets. In Conference on Neural Information Processing Systems, pp. 2672–2680. Cited by: §2.
  • [7] T. Karras, T. Aila, S. Laine, and J. Lehtinen (2017) Progressive growing of GANs for improved quality, stability, and variation. arXiv preprint arXiv:1710.10196. Cited by: §1, §2, §4.
  • [8] T. Karras, S. Laine, and T. Aila (2019) A style-based generator architecture for generative adversarial networks. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 4401–4410. Cited by: §2.
  • [9] T. Laugel, M. Lesot, C. Marsala, X. Renard, and M. Detyniecki (2019) The dangers of post-hoc interpretability: unjustified counterfactual explanations. arXiv preprint arXiv:1907.09294. Cited by: §1, §2.
  • [10] S. M. LaValle (1998) Rapidly-exploring random trees: a new tool for path planning. Technical report Vol. TR 98–11, Computer Science Department, Iowa State University. Cited by: §1, §3.4.
  • [11] Y. LeCun, C. Cortes, and C. Burges (2010) MNIST handwritten digit database. AT&T Labs [Online]. Available: http://yann.lecun.com/exdb/mnist. Cited by: Figure 2.
  • [12] N. Lei, Z. Luo, S. Yau, and D. X. Gu (2018) Geometric understanding of deep learning. arXiv preprint arXiv:1805.10451. Cited by: §2, §2.
  • [13] Z. Liu, P. Luo, X. Wang, and X. Tang (2015) Deep learning face attributes in the wild. In International Conference on Computer Vision, Cited by: §4.
  • [14] G. Montavon, S. Lapuschkin, A. Binder, W. Samek, and K. Müller (2017) Explaining nonlinear classification decisions with deep taylor decomposition. Pattern Recognition 65, pp. 211–222. Cited by: §2.
  • [15] G. F. Montufar, R. Pascanu, K. Cho, and Y. Bengio (2014) On the number of linear regions of deep neural networks. In Conference on Neural Information Processing Systems, pp. 2924–2932. Cited by: §2, §2, §3.2.
  • [16] A. S. Morcos, D. G. Barrett, N. C. Rabinowitz, and M. Botvinick (2018) On the importance of single directions for generalization. arXiv preprint arXiv:1803.06959. Cited by: §3.3.
  • [17] A. Nguyen, A. Dosovitskiy, J. Yosinski, T. Brox, and J. Clune (2016) Synthesizing the preferred inputs for neurons in neural networks via deep generator networks. In Conference on Neural Information Processing Systems, pp. 3387–3395. Cited by: §2, §2.
  • [18] A. Radford, L. Metz, and S. Chintala (2015) Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv preprint arXiv:1511.06434. Cited by: §1, §2, Figure 2, §4.
  • [19] M. Raghu, B. Poole, J. Kleinberg, S. Ganguli, and J. S. Dickstein (2017) On the expressive power of deep neural networks. In International Conference on Machine Learning, pp. 2847–2854. Cited by: §3.2.
  • [20] M. Ravanelli and Y. Bengio (2018) Interpretable convolutional filters with sincnet. arXiv preprint arXiv:1811.09725. Cited by: §2.
  • [21] M. T. Ribeiro, S. Singh, and C. Guestrin (2016) Why should i trust you?: explaining the predictions of any classifier. In ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. Cited by: §2.
  • [22] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, and D. Batra (2017) Grad-cam: visual explanations from deep networks via gradient-based localization. In IEEE International Conference on Computer Vision, pp. 618–626. Cited by: §2.
  • [23] C. Szegedy, W. Zaremba, I. Sutskever, J. Bruna, D. Erhan, I. Goodfellow, and R. Fergus (2013) Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199. Cited by: §1.
  • [24] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao (2015) Lsun: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365. Cited by: §4.
  • [25] M. D. Zeiler and R. Fergus (2014) Visualizing and understanding convolutional networks. In European conference on computer vision, pp. 818–833. Cited by: §2.
  • [26] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, and A. Torralba (2016) Learning deep features for discriminative localization. In IEEE Conference on Computer Vision and Pattern Recognition, pp. 2921–2929. Cited by: §2.
  • [27] J. Zhu, P. Krähenbühl, E. Shechtman, and A. A. Efros (2016) Generative visual manipulation on the natural image manifold. In European Conference on Computer Vision, pp. 597–613. Cited by: §2.