Out-of-distribution (OoD) detection is an important topic in machine learning. In applications such as medical diagnosis, fraud detection, failure detection, and security, being able to detect OoD samples is crucial for the safe, automated deployment of machine learning. In practice, the abnormal distributions are often not known a priori, which makes OoD detection a difficult problem. Since models are typically trained to give accurate predictions on a specific dataset, they are susceptible to making false predictions on OoD data. Moreover, deep neural networks tend to make false predictions with high confidence, due to well-known calibration issues Guo et al. (2017). This makes OoD detection a challenging task for deep learning.
Deep generative models aim to learn the underlying distribution of training data and generate images that mimic the training data. These two aspects of deep generative models bring forth two possible metrics for OoD detection: likelihood and reconstruction loss.
The underlying expectation of using reconstruction loss for OoD detection is that since generative models learn to generate images based on the in-distribution data, they should be able to reconstruct in-distribution data well. Thus, the reconstruction loss of in-distribution images should be smaller than that of OoD images. Reconstruction-loss-based OoD detection methods have been explored with auto-encoders Zhou and Paffenroth (2017), variational auto-encoders An and Cho (2015), and generative adversarial networks (GANs) Zenati et al. (2018); Li et al. (2018). A weakness of using reconstruction loss is that the underlying expectation does not always hold: because of their large capacity, deep generative models are able to reconstruct image sets with smaller variance, even when those images are OoD. Thus, reconstruction loss is usually a poor metric for OoD detection.
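As a concrete illustration of reconstruction-based scoring, the sketch below uses a rank-k linear autoencoder (PCA) standing in for a deep model; the function names and the choice of a linear encoder are ours, not taken from the works cited above.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a rank-k linear autoencoder (PCA) to training data X of shape (n, d)."""
    mu = X.mean(axis=0)
    # Top-k principal directions via SVD of the centered data.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    W = Vt[:k].T  # (d, k): shared encoder/decoder weights
    return mu, W

def reconstruction_score(x, mu, W):
    """Squared reconstruction error; higher suggests OoD under the usual assumption."""
    z = (x - mu) @ W       # encode
    x_hat = mu + z @ W.T   # decode
    return float(np.sum((x - x_hat) ** 2))
```

The weakness discussed above shows up directly in this sketch: any input close to the learned subspace scores low, whether or not it is semantically in-distribution.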
Likelihood-based methods for OoD detection use the learned density and hence only apply to generative models that provide likelihood estimates. In this sense, flow-based models are particularly attractive because of the tractability of exact log-likelihood for both input data and latent variables. A well-calibrated model should assign higher likelihood to training data and lower likelihood to OoD data. However, with deep generative models, this is not always the case. A previous study by Nalisnick et al. (2018) showed that deep generative models such as Glow Kingma and Dhariwal (2018) and the variational auto-encoder (VAE) assign higher likelihood to SVHN when trained on CIFAR-10 (Figure 3 (c, d)). This counter-intuitive behavior of the likelihood of deep generative models is a fundamental issue, based on the second-order analysis in Nalisnick et al. (2018): in general, the model assigns higher likelihood to OoD data whose variance is lower than that of the training data, and there is no straightforward solution to this issue.
The weakness of both likelihood-based and reconstruction-based methods in the setting where OoD samples have lower variance than in-distribution samples calls for rethinking the choice of deep generative model for OoD detection. We explore the use of the neural rendering model (NRM), a recently proposed deep generative model Ho et al. (2018) that attempts to reverse the feed-forward process of a convolutional neural network (CNN). In the NRM, uncertainty in the generation process is controlled by latent variables with a structured prior distribution that captures the dependencies across the network layers. Unlike other generative models, the NRM has an architecture that mirrors a CNN, making its latent variables potentially more informative about the image distribution. We can also employ state-of-the-art CNN architectures such as ResNet and DenseNet to obtain good performance. In this paper, our contributions are as follows:
We unify reconstruction loss and likelihood based OoD methods into one framework.
The observed likelihood in NRM can be decomposed into two terms: reconstruction loss and joint likelihood of latent variables. Similarly, the latent likelihood can be decomposed into a reconstruction loss and joint likelihood of latent variables in the layers above. Thus, we have a wide choice of metrics that can be used for OoD detection and other downstream tasks.
We investigate the use of data likelihood, reconstruction loss, and latent variable likelihood as OoD detection metrics and compare their performance.
We propose using the joint likelihood of latent variables of the NRM as a robust metric for OoD detection.
The latent variables of the NRM decide whether to render a pixel and determine the translations of the rendering templates at each layer. Their optimal values are provided by the ReLU and max-pooling operations in the feed-forward CNN. A structured prior is introduced to capture the dependencies of the latent variables across all layers. Our experimental results show that the joint likelihood of the latent variables is lower for OoD samples with smaller variance, as shown in Figure 1 (a), indicating that it is a robust metric for OoD detection across different datasets. To the best of our knowledge, this is the first likelihood metric for OoD detection in deep generative models that consistently assigns lower likelihood to lower-variance OoD images. Additionally, we observe that the distributions of visually similar image categories tend to overlap, suggesting that our method captures the underlying structure of image data.
We show that reconstruction loss at certain latent layers is able to capture the difference between distributions.
We obtain the reconstruction loss at each level of the NRM and find that at some intermediate layers it is higher for OoD data than for training data. However, the relevant layer may vary across datasets and model architectures, making reconstruction loss difficult to use as a robust OoD detection metric in practice.
We first introduce the NRM by drawing a close connection to the CNN. We then show that the NRM unifies likelihood- and reconstruction-loss-based OoD detection methods through its likelihood decomposition. Finally, we propose and compare the performance of three OoD detection metrics derived from the NRM, showing that the joint likelihood of the latent variables is a robust measure for OoD samples.
2.1 Neural Rendering Model
In the feed-forward process of a standard CNN, information is gradually reduced to obtain the predicted label. Given the success of CNNs in image classification tasks, we assume that a generative model with a similar architecture is also suitable for fitting high-dimensional distributions such as images. The NRM does so by generating images from rough to fine detail, using features learned by the CNN as rendering templates. It introduces latent variables to model the uncertainty of the rendering process and a structured prior to capture the dependencies between latent variables across layers. The graphical model of the NRM is shown in Figure 2 (a).
Let x denote the generated image and y the object category. The latent variables at each layer consist of a translation variable, which defines the translation of the rendering templates based on the position of the local maximum found by max-pooling, and a rendering variable, which decides whether or not to render a pixel based on whether that pixel is activated by the ReLU in the feed-forward CNN. An intermediate rendered image is produced at each layer; a pixel subscript denotes the value of the corresponding vector at that pixel.
The rendering process from one layer to the layer below is shown in Figure 2 (b). First, the intermediate rendered image is multiplied elementwise by the rendering variables, so that only pixels activated in the forward CNN are rendered in the NRM. Then, for each pixel, we perform the following operations:
Multiply the rendering template by the pixel value. The templates come from the layer's weight matrix in the CNN, which contains the features learned from the data.
Pad the template with zeros to match the size of the intermediate rendered image at the layer below, using a padding matrix.
Translate the template to the position of the local maximum found by max-pooling, using a translation matrix.
Finally, add all padded and translated templates together to obtain the intermediate rendered image at the layer below.
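The per-pixel steps above can be sketched as follows, simplified to a single feature channel; the array shapes and the function signature are our own illustrative choices, not the NRM implementation.

```python
import numpy as np

def render_layer(activations, switches, translations, template):
    """One simplified NRM rendering step for a single feature channel.

    activations:  (h, w) pixel values from the layer above
    switches:     (h, w) 0/1 rendering decisions (the ReLU pattern in the CNN)
    translations: (h, w, 2) integer template offsets (the max-pool positions)
    template:     (k, k) rendering template (a learned CNN filter)
    """
    h, w = activations.shape
    k = template.shape[0]
    canvas = np.zeros((h + k - 1, w + k - 1))  # zero-padded intermediate image
    active = activations * switches            # render only activated pixels
    for i in range(h):
        for j in range(w):
            if active[i, j] == 0:
                continue
            di, dj = translations[i, j]
            # scale the template by the pixel value, translate it, accumulate
            canvas[i + di:i + di + k, j + dj:j + dj + k] += active[i, j] * template
    return canvas
```

With zero translations and all switches on, the output is simply the sum of scaled templates placed at each pixel.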
Mathematically, the generation process in the NRM Ho et al. (2018) composes these per-layer operations from the coarsest layer down to the pixel level. The dependencies among latent variables across different layers are captured by a structured prior whose parameters correspond to the biases after the convolutions in the CNN.
Given all the ingredients of the NRM, we now formally draw the connection between the NRM and the CNN in Theorem 1.
Theorem 1 (MAP Inference Ho et al. (2018)).
Given that the intermediate rendered images are nonnegative, the joint maximum a posteriori (JMAP) inference of the latent variables in the NRM is the feed-forward step of the CNN.
The main takeaway from this result is that, given an input image, we can obtain its optimal latent representation in the NRM by performing the feed-forward process of the CNN. ReLU nonlinearities indicate the optimal values of the rendering variables: the ReLU detects whether a feature exists in the image in the CNN, and the corresponding rendering variable determines whether or not to render a pixel in the NRM. Max-pooling operators indicate the optimal values of the translation variables: max-pooling locates features in the CNN and determines the positions of the rendering templates in the NRM. Intuitively, the generation process with optimal latent variables in the NRM is very similar to a reversed CNN. Similar feedback and feed-forward connections have been observed in the brain Friston (2018). The close connection between the NRM and the CNN stems from the structured joint distribution of the latent variables, which we find to be a strong metric for OoD detection.
2.2 Unifying Likelihood and Reconstruction Loss Based Approaches
As a generative model, the NRM can also be used to estimate the probability distribution of the input data. In this section, we first show that the log-likelihood of the NRM decomposes into two terms: a reconstruction loss and the joint likelihood of the latent variables. We then propose several quantities that can be extracted from the NRM and discuss their usefulness in the downstream task of OoD detection.
Consider an input image. The image rendered at the pixel level using the most likely label and the optimal latent variables is its reconstruction. The following theorem describes the likelihood decomposition for the NRM.
Theorem 2 (Likelihood decomposition Ho et al. (2018)).
A lower bound on the log probability density of the input can be approximated, up to a constant, by the sum of a reconstruction term and the log joint likelihood of the latent variables, in the limit of small pixel-noise variance.
The main takeaways from this theorem are as follows:
The first term is proportional to the reconstruction loss between the input image and the image generated using the optimal latent variables.
The second term is the log joint likelihood of the latent variables and the most likely label, evaluated at their optimal values from the feed-forward CNN.
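Under a Gaussian pixel-noise model, the two terms of the decomposition combine numerically as sketched below; this shows only the form of the bound (up to its additive constant), with names of our own choosing.

```python
import numpy as np

def log_likelihood_lower_bound(x, x_hat, log_p_latent, sigma2):
    """Lower bound on log p(x), up to an additive constant:
    a Gaussian reconstruction term plus the log joint likelihood of the latents.

    x            : input image (any array shape)
    x_hat        : image rendered with the optimal label and latent variables
    log_p_latent : log joint likelihood of the optimal latent variables
    sigma2       : pixel noise variance
    """
    recon = -np.sum((x - x_hat) ** 2) / (2.0 * sigma2)
    return recon + log_p_latent
```

A perfect reconstruction makes the first term vanish, leaving only the latent term; any reconstruction error lowers the bound.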
The likelihood decomposition in Theorem 2 is performed at the pixel level. In fact, we can also obtain latent likelihoods for the intermediate feature maps of the CNN: the log-likelihood of the feature map at a given layer can be lower bounded, up to a constant, by a reconstruction term at that layer plus the log joint likelihood of the latent variables in the layers above.
As stated in Theorem 2, the NRM provides estimates of three quantities related to the input data: the data likelihood, the reconstruction loss, and the joint likelihood of the latent variables. We now discuss how each can be used for OoD detection.
Log likelihood of data As a direct model of the training data distribution, the likelihood seems to be the most straightforward metric for OoD detection. However, previous work has shown that higher likelihood can be assigned to OoD samples. We analyze why this is the case in our experiments (Section 3.2).
Reconstruction loss at all layers Because the NRM closely mirrors the architecture of the CNN, we can obtain the reconstruction loss for the feature maps at each layer, in addition to the input image layer. Since CNNs have rich representations at intermediate layers Zeiler and Fergus (2013), reconstruction loss at those layers may be a useful OoD metric. We analyze this metric in Section 3.4.
Log joint likelihood of latent variables This is our proposed method for OoD detection. From the forward pass of the CNN, we obtain the most likely label and the most likely set of latent variables used to compute this value. Intuitively, the set of latent variables defines a "rendering path" for an input image: it specifies which pixels to render and the locations of the rendering templates. We evaluate this metric in Section 3.3.
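In practice, any of the three metrics is turned into a detector by thresholding against in-distribution validation scores. The sketch below is our own construction (the paper does not specify a decision rule): it fixes a false-positive budget alpha on held-out in-distribution data.

```python
import numpy as np

def ood_threshold(in_dist_scores, alpha=0.05):
    """Threshold below which a fraction alpha of in-distribution
    validation scores fall (the allowed false-positive rate)."""
    return float(np.quantile(in_dist_scores, alpha))

def is_ood(score, threshold):
    """Flag a sample as OoD when its score (e.g. its joint latent
    log-likelihood) falls below the threshold."""
    return score < threshold
```

The same rule applies to any of the three metrics, as long as lower scores indicate OoD.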
3.1 Experimental Set-Up
The NRM can be used with diverse networks in the ConvNet family. In our experiments, we use the NRM architecture whose inference corresponds to the All Convolutional Net introduced in Springenberg et al. (2014). We preprocess the data by scaling the images into the range [-1, 1]. We use CIFAR-10 Krizhevsky and Hinton (2009) as the in-distribution dataset. For OoD datasets, we use SVHN Netzer et al. (2011), CIFAR-100 Krizhevsky and Hinton (2009), and CelebA Liu et al. (2015). For CelebA, we use the first 10k aligned and cropped images and multiply the data by a constant factor so that its variance is smaller than that of CIFAR-10. We perform experiments using the three OoD detection metrics described in Section 2.2 on these datasets. A summary of the metrics is shown in Table 1.
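The preprocessing can be sketched as below; the shrink factor used for CelebA is a hypothetical placeholder, since the exact constant is not stated above.

```python
import numpy as np

def preprocess(images, shrink=1.0):
    """Scale uint8 images into [-1, 1]; shrink < 1 pulls values toward 0,
    reducing variance (as done for CelebA; the factor here is an assumption)."""
    x = images.astype(np.float64) / 127.5 - 1.0
    return shrink * x
```

Shrinking toward zero scales the standard deviation by the same factor, producing the lower-variance OoD setting studied in this paper.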
3.2 Problems with Likelihood
We first show that likelihood is not a strong metric for OoD detection because it does not capture image structure. We find that, in practice, the likelihood is dominated by the reconstruction term in the decomposition. From Table 1, we can see that the mean likelihood is lower for SVHN than for CIFAR-10. However, observing the spread of the likelihood in Figure 3, we see that the histograms for SVHN and CIFAR-10 overlap heavily, and the peak for SVHN lies within the peak for CIFAR-10 despite SVHN being OoD. To understand why this occurs, we visualize the 25 images with the highest and lowest likelihood for an NRM trained on CIFAR-10 in Figure 3 (a). We notice that the brightness of the background contributes substantially to the likelihood: images with duller backgrounds tend to have higher likelihood than those with high contrast. Intuitively, this is because images with low variance and pixel values close to the mean image are easy to reconstruct, resulting in a smaller reconstruction term in the likelihood decomposition. Therefore, likelihood estimation based on pixel-space distributions is unreliable for OoD detection, as it tends to distinguish data points by their low-dimensional statistics (mean and variance of pixels) rather than their semantic content.
3.3 Joint Distribution of Latent Variables
We show that our proposed metric, the joint likelihood of the latent variables, is a reliable metric for OoD detection. The label and latent variables entering this likelihood are the most likely values, obtained through the feed-forward CNN by Theorem 1. Since this theorem assumes non-negativity of the intermediate rendered images, we use a modified training process that reduces negativity in them. More details about the training process are given in Appendix B.
Performance on Dissimilar Datasets
To show the consistency of our method, we plot histograms of the joint latent likelihood (Figure 1) for in-distribution and OoD data. Unlike the likelihood histograms in Figure 3, the peaks of these distributions are visibly separated. We note that the mean of the joint latent likelihood is consistently lower for the OoD datasets, while the histograms for the in-distribution data, the CIFAR-10 train and test sets, lie on top of each other.
Unlike the likelihood histograms in Figure 3, the peaks of the OoD histograms lie to the left of the in-distribution peaks. This is intuitive because the metric is the likelihood of using a specific combination of latent variables when rendering from a label. Even if the model can reconstruct an OoD image well, the chosen combination of latent variables is unlikely to occur naturally under the original distribution, which makes this method less sensitive to image variance. This behavior is consistent across OoD distributions of smaller variance (SVHN, CelebA). To the best of our knowledge, this is the first time this has been achieved with a likelihood-based method.
We contrast the distribution of our joint latent likelihood with the distribution of the latent likelihood observed in Glow by Nalisnick et al. (2018), as shown in Figure 1. The latent variables in Glow's likelihood correspond only to the last layer. In Glow's distribution, we observe some separation between the peaks of the in-distribution images (CIFAR-10) and the OoD images (SVHN), but also a similar amount of separation between the peaks of the in-distribution train and test sets. This separation suggests that the final layer of latent variables captures details specific to the training set rather than general features of the whole data distribution. In contrast, our joint latent likelihood does not suffer from this problem because it uses information from the latent variables across all layers, so it is not overly sensitive to features present only in the training set. Thus, it better distinguishes in-distribution from OoD samples.
To show that our method captures image structure within the latent variables, we plot histograms of the joint latent likelihood for similar datasets. Since CIFAR-10 and CIFAR-100 are both subsets of the Tiny Images dataset, we expect them to be more similar to each other than to SVHN or CelebA. From Figure 4 (a), we can see that CIFAR-10 and CIFAR-100 share a much larger area of overlap than SVHN or CelebA do in Figure 1. The area of overlap is indicative of dataset similarity. To further analyze this feature of our metric, we generate category-specific histograms. Histograms of the CIFAR-10 automobile, CIFAR-10 truck, and CIFAR-100 pickup truck categories are shown in Figure 4 (b). Although the pickup truck class of CIFAR-100 is OoD, its histogram overlaps with those of the automobile and truck classes of CIFAR-10. We also observe a large overlap between CelebA and the girl category of CIFAR-100. These large overlaps are indicative of image similarity, suggesting that our metric captures the structure of images well. This contrasts with the traditional likelihood, for which we saw that background color rather than image content drives the estimate.
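The "area of overlap" between two score histograms can be quantified with the overlap coefficient; the estimator below is our own sketch of that measurement, not a procedure specified in the paper.

```python
import numpy as np

def overlap_coefficient(a, b, bins=50):
    """Area of overlap between two normalized histograms over a shared range.
    Returns 1.0 for identical distributions and 0.0 for disjoint supports."""
    lo = min(a.min(), b.min())
    hi = max(a.max(), b.max())
    ha, edges = np.histogram(a, bins=bins, range=(lo, hi), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(lo, hi), density=True)
    width = edges[1] - edges[0]
    # Integrate the pointwise minimum of the two densities.
    return float(np.sum(np.minimum(ha, hb)) * width)
```

Applied to per-category score samples, a large coefficient would correspond to the visually similar categories discussed above.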
3.4 An Evaluation of Reconstruction Loss
To evaluate the performance of the reconstruction loss at different layers of the NRM, we plot histograms of this metric at each layer in Figure 5. At the pixel level, the histograms for all datasets overlap, suggesting that the model is expressive enough to reconstruct any input image. At some intermediate layers, for example layer 6, the OoD distributions (SVHN, CelebA) have larger reconstruction loss than CIFAR-10. Because the intermediate layers of a CNN carry richer representations of the data, it is harder for the model to reconstruct these layers than the extreme ones. However, we usually know neither the OoD distribution nor which layers are sensitive to the difference between in-distribution and OoD samples, which makes reconstruction loss difficult to use in practice. In comparison, our proposed joint latent likelihood requires no prior knowledge of the OoD distribution and consistently shows a meaningful separation between the histogram peaks of in-distribution and OoD images.
A major strength of our method is that it captures the structure within images, making it less influenced by image variance. Overlaps between distributions of the joint latent likelihood are also interpretable: large overlaps indicate high similarity between datasets. In addition, the metric has an intuitive explanation: it can be viewed as the uncertainty in the choice of latent variables during the rendering process. We now compare our method to other existing metrics for deep networks and propose future directions for research.
Comparison with Other OoD Metrics for Deep Networks
A recent study by Hendrycks et al. (2018) shows that OoD detection using deep networks can be improved by feeding the model with OoD samples. However, in practice, the OoD distribution is often unknown. Our method works under the challenging setting where we do not have prior knowledge on the OoD distribution. We hypothesize that our model will perform better if we do have access to OoD samples during training.
Another study by Nalisnick et al. (2019) proposes a neural hybrid model consisting of a linear model defined on features extracted by a flow-based model. Their results show a complete separation between the SVHN and CIFAR-10 score distributions when trained on SVHN. Although our results do not show such a clear-cut separation, we focus on the more difficult direction: assigning lower likelihood to SVHN (OoD samples with smaller variance) when trained on CIFAR-10. In addition, our method for OoD detection works for CNNs in the ConvNet family such as AlexNet, ResNet, and the All Convolutional Net, which are used more widely in practice than flow-based generative models.
Choi and Jang (2018) explore the use of the Watanabe-Akaike Information Criterion (WAIC) for OoD detection. Unlike our method, which is a pure likelihood metric, WAIC is a hybrid metric computed from likelihoods. It measures the gap between training and test distributions and has been shown to be effective for OoD detection with ensembles of GANs; in particular, it assigns lower scores to OoD image distributions with lower variance. As with our method, there is still overlap between the score distributions.
As can be seen from Figure 1, the histograms of our metric still overlap. The size of these overlaps should be reduced for robust OoD detection. A possible solution is to weight this term during training: with a higher weight, the model is penalized more for uncertainty in the latent variables, leading to latent variables that are more specific to the in-distribution classes.
Another possible direction is using a hybrid metric in terms of latent variable likelihood to achieve better separation. For example, we can calculate WAIC in terms of our metric. Since our metric improves on traditional likelihood used by WAIC, this may lead to better separation between OoD samples and training samples.
Since we focused mainly on the difficult situation where the OoD data has lower variance, it would also be meaningful to see how our metric performs in easier settings where the image variance is similar or higher, for instance by comparing it to state-of-the-art OoD detection methods on different categories of MNIST.
We show that using neural rendering models for OoD detection unifies likelihood- and reconstruction-based approaches. While likelihood and reconstruction loss are difficult to use for OoD detection because of their sensitivity to pixel statistics rather than image structure, the joint likelihood of the latent variables captures image structure better and can serve as a metric for OoD data. We find that this metric correctly assigns lower average likelihood to OoD images with lower variance than the in-distribution images and captures similarities between distributions through histogram overlaps. To the best of our knowledge, our metric is the first to do so for deep generative models.
A. Anandkumar is supported in part by Bren endowed chair, Darpa PAI, Raytheon, and Microsoft, Google and Adobe faculty fellowships.
- An and Cho (2015) An, J. and Cho, S. (2015). Variational autoencoder based anomaly detection using reconstruction probability. Special Lecture on IE, 2:1–18.
- Choi and Jang (2018) Choi, H. and Jang, E. (2018). Generative ensembles for robust anomaly detection. arXiv preprint arXiv:1810.01392.
- Friston (2018) Friston, K. (2018). Does predictive coding have a future? Nature neuroscience, 21(8):1019.
- Guo et al. (2017) Guo, C., Pleiss, G., Sun, Y., and Weinberger, K. Q. (2017). On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pages 1321–1330. JMLR. org.
- Hendrycks et al. (2018) Hendrycks, D., Mazeika, M., and Dietterich, T. G. (2018). Deep anomaly detection with outlier exposure. CoRR, abs/1812.04606.
- Ho et al. (2018) Ho, N., Nguyen, T., Patel, A. B., Anandkumar, A., Jordan, M. I., and Baraniuk, R. G. (2018). Neural rendering model: Joint generation and prediction for semi-supervised learning. CoRR, abs/1811.02657.
- Kingma and Dhariwal (2018) Kingma, D. P. and Dhariwal, P. (2018). Glow: Generative flow with invertible 1x1 convolutions. In Advances in Neural Information Processing Systems, pages 10215–10224.
- Krizhevsky and Hinton (2009) Krizhevsky, A. and Hinton, G. (2009). Learning multiple layers of features from tiny images. Technical report, Citeseer.
- Li et al. (2018) Li, D., Chen, D., Goh, J., and Ng, S.-k. (2018). Anomaly detection with generative adversarial networks for multivariate time series. arXiv preprint arXiv:1809.04758.
- Liu et al. (2015) Liu, Z., Luo, P., Wang, X., and Tang, X. (2015). Deep learning face attributes in the wild. In Proceedings of International Conference on Computer Vision (ICCV).
- Nalisnick et al. (2018) Nalisnick, E., Matsukawa, A., Teh, Y. W., Gorur, D., and Lakshminarayanan, B. (2018). Do deep generative models know what they don’t know? arXiv preprint arXiv:1810.09136.
- Nalisnick et al. (2019) Nalisnick, E. T., Matsukawa, A., Teh, Y. W., Görür, D., and Lakshminarayanan, B. (2019). Hybrid models with deep and invertible features. CoRR, abs/1902.02767.
- Netzer et al. (2011) Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., and Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning. In NIPS Workshop on Deep Learning and Unsupervised Feature Learning.
- Springenberg et al. (2014) Springenberg, J. T., Dosovitskiy, A., Brox, T., and Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806.
- Zeiler and Fergus (2013) Zeiler, M. D. and Fergus, R. (2013). Visualizing and understanding convolutional networks. CoRR, abs/1311.2901.
- Zenati et al. (2018) Zenati, H., Foo, C. S., Lecouat, B., Manek, G., and Chandrasekhar, V. R. (2018). Efficient gan-based anomaly detection. arXiv preprint arXiv:1802.06222.
- Zhou and Paffenroth (2017) Zhou, C. and Paffenroth, R. C. (2017). Anomaly detection with robust deep autoencoders. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 665–674. ACM.
Appendix A A closer look at NRM
The notations used to define the NRM are summarized in Table 2. The generation process is described in Section 2.1. Given a class category, the NRM samples latent variables from a structured prior. It then renders images from coarse to fine detail through a sequence of linear transformations defined by the weight, padding, and translation matrices. The finest image is rendered at the bottom of the NRM. Finally, Gaussian pixel noise is added to produce the final image.
Although the above generation process is expressive enough to represent any input image, it is hard to generate natural images in reasonable time because of the huge number of degrees of freedom in the latent variables. Therefore, it is necessary to incorporate the prior knowledge we have about natural images to better structure the NRM. Classification models such as CNNs are known to learn good representations of natural images through their feature maps. Thus, the NRM is designed so that its inference for the optimal latent variables yields the feed-forward process of the CNN. Similarly, the label used to reconstruct an unlabeled image is the most likely label from the CNN.
Table 2 summarizes the notation: all latent variables in a layer (rendering and translation); the rendering latent variables in a layer and their value at a pixel location; the translation latent variables in a layer and their value at a pixel location; the intermediate rendered image in a layer; the most likely label for unlabeled data; the optimal latent variables from the feed-forward CNN; the weight matrix of features learned from the CNN at a layer; the zero-padding matrix at a layer; the bias term after convolutions in the CNN at a layer; and the pixel noise variance.
Appendix B Training procedure
For Theorem 1 to hold, the intermediate rendered images need to be non-negative. Under the non-negativity assumption, the latent variables inferred from the feed-forward CNN are exact. However, this assumption does not necessarily hold in practice. To approximate the optimal latent variables, we minimize a negativity loss on the negative part of each intermediate rendered image during training. We find that this step is crucial for OoD detection using the joint latent likelihood.
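A minimal sketch of the negativity penalty follows, assuming a squared hinge on the negative part (the exact form of the loss is not specified above).

```python
import numpy as np

def negativity_loss(intermediate_images):
    """Sum of squared negative parts of the intermediate rendered images;
    exactly zero when the non-negativity assumption of Theorem 1 holds."""
    return sum(float(np.sum(np.minimum(h, 0.0) ** 2))
               for h in intermediate_images)
```

Added to the training objective, this term pushes the rendered intermediates toward the non-negative regime in which Theorem 1's inference is exact.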
Appendix C A visualization of intermediate layers
Figure C.1 shows the images with the highest activation values for a randomly selected subset of features at layer 6 (the layer that consistently shows higher reconstruction loss for OoD samples). We see grouping within each feature, with different features focusing on different patterns. For instance, feature 1 focuses on striped structures, feature 2 detects the edges between foreground objects and background, and feature 3 responds to blueish backgrounds. In a CNN, the first several layers tend to focus on coarse structure, while the last several layers emphasize fine details. The intermediate layers contain the richest representations of the input data, which is why OoD samples are harder to reconstruct at those layers.
Appendix D NRM’s reconstruction ability
Reconstruction for high likelihood and low likelihood images
We visualize the relationship between likelihood and reconstruction loss by generating the rendered images for the 25 most likely and 25 least likely inputs, shown in Figure D.2. As shown earlier, background color influences the likelihood. When the background color is duller, the rendered images look almost exactly like the original; when the background is brighter, the model fails to render the same pixel colors. Here, higher reconstruction loss leads to lower likelihood.
Reconstruction with the wrong label
As described in Section 2.1, the NRM reconstructs images using the label and the latent variables corresponding to the translations and rendering switches, which revert the max-pooling and ReLU operations of the forward process. We reconstruct an image using the most likely latent values, obtained by passing the image through the forward process and recording the max-pool positions and ReLU states. To understand how much the label contributes to the quality of the reconstructed image, we reconstruct with a false label while keeping the optimal latent variables from the forward process. Two examples are shown in Figure D.3: reconstructing a plane from the false label "cat" and reconstructing a cat from the false label "plane". The original and reconstructed images still share the same structure, suggesting that the information needed for good reconstruction is captured almost entirely by the latent variables.
Appendix E Mean of latent variables
We visualize the mean of the rendering latent variables at all layers for the in-distribution (CIFAR-10) and OoD (SVHN) datasets in Figure E.4. For the CIFAR-10 train and test sets, the means of the rendering latent variables are almost identical. This is not the case for SVHN: while the NRM can reconstruct SVHN well, the rendering path (specified by the latent variables) differs greatly from that of CIFAR-10 images. This shows the potential of latent variables for OoD detection.