Deep learning models provide state-of-the-art performance in various applications such as image classification, caption generation, sequence modeling and machine translation. However, such performance rests on the assumption that the training and testing data are sampled from similar distributions. On out-of-distribution (OOD) samples, deep learning models can fail silently, producing high confidence in incorrect predictions even for completely unrecognizable or irrelevant inputs. For instance, models trained on MNIST can produce 91% confidence on random noise. A similar case is shown in Fig. 1. Facing such distribution mismatch, the prediction probability from a softmax output is likely to correspond poorly to the true certainty. Unfortunately, there is very little control over the test distribution in real-world deployments due to dynamically changing environments or malicious attacks. In fact, well calibrating the predictive uncertainty of DNNs is important for many production systems, authentication devices, medical diagnosis and self-driving vehicles.
Being overconfident on nonsensical inputs has raised concerns about artificial intelligence (AI) safety, which seeks to develop models that can identify whether they have encountered a new kind of input, i.e., OOD samples. Formally, OOD detection can be formulated as a binary classification/verification problem: verify whether a test sample comes from the training distribution (i.e., in-distribution, ID) or is sufficiently different from it (i.e., OOD). Notably, the number of possible OOD samples is virtually infinite.
A widely used baseline takes the maximum value of the posterior softmax probabilities. It can be improved by adding small controlled perturbations to the input or by using temperature scaling in the softmax function. Another possible improvement is to model not only the in-distribution samples but also to introduce OOD samples in the training stage. However, listing all possible OOD distributions is usually intractable. Most prior works on this topic re-train a classification network with a modified structure or an additional optimization objective. This can make it hard to maintain the original classification performance and can be computationally expensive. Their hyper-parameters (e.g., the threshold for verification) also need to be tuned with OOD examples, which are usually not accessible in the real world.
Moreover, the previous methods are essentially based on statistics of the output/feature space of a softmax-based classifier, which is not applicable to structured predictions, e.g., image captioning.
In this paper, we propose to verify the predictions of deep discriminative models by using deep generative models that try to generate the input given the prediction of the discriminative model. We call this concept "deep verifier networks (DVN)".
We provide a concrete algorithm for the deep verifier. The high-level idea is simple: given an input-output pair (x, y) from the predictive model p(y|x), we inversely train a verification model p(x|y), in order to estimate the density of x given the prediction y. To estimate this likelihood, we design a novel model based on a conditional variational autoencoder imposed with disentanglement constraints. To compute p(x|y), we condition on the label y already predicted by the classifier to be verified. We assume the prediction is correct and check whether the input x is consistent with y, following a verification protocol. Although many other kinds of density estimators could in theory be used, we argue that the design of our model is robust to OOD samples and adversarial attacks, due to the use of latent variables with explicit and accurate density estimation.
It not only can be trained without OOD samples, but also eliminates the need for an OOD validation set for hyper-parameter (e.g., threshold) tuning. This is a more realistic setting, since we cannot predict what kind of OOD may occur (e.g., with CIFAR-10 as the in-distribution, the OOD can be SVHN, ImageNet, LSUN or adversarial samples). Moreover, it requires no pre-processing of the input samples, no architecture change, and no re-training with an additional loss function for the original classification model. Therefore, the prediction task of the original DNN is not disturbed, since the classifier can be kept fixed while we train the DVN.
The proposed solution achieves state-of-the-art performance for detecting both OOD and adversarial samples in all tested classification scenarios, and generalizes well to structured prediction tasks (e.g., image captioning). In Sec. 3.4, we analyse why the DVN is useful for both OOD samples (low density in p(x)) and adversarial samples (high density in p(x) but low density in p(x|y)).
2 Related Work
Detecting OOD samples in low-dimensional spaces using density estimation, nearest neighbors or clustering analysis has been well studied. However, these methods are usually unreliable in high-dimensional spaces, e.g., for images.
OOD detection with deep neural networks has been developed recently. It was found that pre-trained DNNs produce a higher maximum softmax probability for in-distribution examples than for anomalous ones. Building on this, it was shown that the maximum softmax probability can be made more separable between in- and out-of-distribution samples by using adversarial perturbations for pre-processing in the training stage. Other works augmented the classifier with a confidence estimation branch and adjusted the softmax distribution using the predicted confidence score during training; trained a classifier jointly with a GAN, with an additional objective for the classifier to produce low confidence on generated samples; proposed to use abundant real images rather than generated OOD samples to train the detector; or applied a margin entropy loss over the softmax output, where part of the training data is labeled as OOD and the partition into in-distribution and OOD changes while training an ensemble classifier. All of these improvements require re-training the model with various modifications.
Another approach obtained class-conditional Gaussian distributions using Gaussian discriminant analysis, with a confidence score defined as the Mahalanobis distance between a sample and the closest class-conditional Gaussian distribution. By modeling each class of in-distribution samples independently, it achieved remarkable results for detecting both OOD samples and adversarial attacks. Note that its reported best performance also requires input pre-processing and model changes. Besides, [21, 31, 20] need OOD examples for hyper-parameter validation and require two forward and one backward passes in the test stage. Another limitation of the aforementioned methods is that they only apply to classifiers with softmax outputs.
Recently, an unsupervised OOD detector was proposed that estimates the Watanabe-Akaike Information Criterion (WAIC), which is in turn estimated using an ensemble of generative models. The goal of our model is essentially different from WAIC: rather than just detecting OOD samples, the DVN aims to verify the predictions of a supervised predictive model. The DVN unifies the detection of three different anomalies in one model: OOD samples, adversarial samples, and incorrect predictions made by the predictive model.
This paper targets the problem of verifying deep predictive models. Let x be an input and y the response to be predicted. In-distribution samples are drawn from the joint data-generating distribution p(x, y). We propose to reverse the order of the prediction process p(y|x) and compute the conditional probability p(x|y) instead. To compute p(x|y), we condition on the label y already predicted by the classifier to be verified. We assume the prediction is correct and check whether the input x is consistent with y.
The predictive model to be verified is trained on a dataset drawn from the in-distribution p_in(x, y), and may encounter samples from both p_in and p_out (i.e., out-of-distribution or adversarial samples) at test time. Note that there is a subtle difference between OOD and adversarial examples: we assume OOD samples have low density in p_in(x), while adversarial samples may have high density in p_in(x) if we admit Gaussian noise injected into the data; however, for the label y predicted by the model from x, such an x should have low density in p_in(x|y).
Our goal is to verify the pair (x, y) predicted by the predictive model given x. The basic idea is to train a verifier network as an approximation to the inverse posterior distribution p(x|y). Modelling p(x|y) instead of p(y|x) for verification has several advantages: (1) usually the marginal p(x) is much more diverse than the conditional distribution p(x|y), so modelling p(x|y) is much easier than modelling p(x); (2) modelling p(x|y) provides a unified framework for verifying OOD samples, adversarial examples, and mis-classifications of the classifier.
3.1 Basic Model
Our basic model is a conditional variational auto-encoder, shown in Fig. 2. The model is composed of two deep neural networks: a stochastic encoder q(z|x), which takes input x and predicts a latent variable z, and a decoder p(x|z, y), which takes both the latent variable z and the label y to reconstruct x. The encoder and decoder are jointly trained to maximize the evidence lower bound (ELBO).
The equality holds if and only if q(z|x) = p(z|x, y), where p(z|x, y) is the ground-truth posterior.
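Written out, the objective takes the standard conditional-VAE form (a reconstruction consistent with the description above; φ and θ denote encoder and decoder parameters):

```latex
\log p(x \mid y)
  \;\ge\;
  \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[\log p_{\theta}(x \mid z, y)\right]
  \;-\;
  D_{\mathrm{KL}}\!\left(q_{\phi}(z \mid x)\,\|\,p(z)\right)
  \;=\; \mathcal{L}_{\mathrm{ELBO}}
```

with equality exactly when q_φ(z|x) matches the true posterior p(z|x, y).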
3.2 Disentanglement constraints for anomaly detection
One problem of training the conditional variational auto-encoder is that the decoder can ignore the effect of the input label y, passing all information through the continuous latent variable z. This is undesirable, as we want the decoder to model the conditional likelihood p(x|z, y), not p(x|z). A simple solution is to add a disentanglement constraint to the model, such that y and z are independent. In other words, we want to minimize the mutual information between y and z. Namely, besides the ELBO loss, we also minimize a mutual information estimator, yielding the combined objective.
In this paper, we use Deep Infomax as the proxy for minimizing the mutual information (MI) between y and z. The mutual information estimator is defined as follows.
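One common form of this estimator, which we reconstruct here following the JSD-based Deep Infomax objective (T is a discriminator over (y, z) pairs, sp the softplus):

```latex
\hat{I}^{(\mathrm{JSD})}(y; z)
  \;=\;
  \mathbb{E}_{p(y, z)}\!\left[-\,\mathrm{sp}\!\left(-T(y, z)\right)\right]
  \;-\;
  \mathbb{E}_{p(y)\,p(z)}\!\left[\mathrm{sp}\!\left(T(y, z)\right)\right]
```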
where sp(a) = log(1 + e^a) is the softplus function and T is a discriminator network. Just as in GANs, T is trained to maximize the estimate, in order to obtain a better estimation of the (JS-version) mutual information, while the encoder is trained to minimize it.
3.3 Measuring the likelihood as anomaly score
Our anomaly verification criterion is to measure the log-likelihood log p(x|y) of test samples. Importance sampling provides an unbiased estimate of p(x|y). Following IWAE, the k-sample importance-weighted estimate L_k of the log-likelihood is a lower bound of the ground-truth log-likelihood log p(x|y). As will be discussed below, we want the decoder to be evaluated on the same input distribution of z as the one on which it was trained. The quantities L_k form a monotonic series of lower bounds of the exact log-likelihood (L_1 ≤ L_2 ≤ ... ≤ log p(x|y)), and L_k → log p(x|y) as k → ∞. We choose a sufficiently large k so that L_k is a good approximation to the exact log-likelihood.
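As a minimal sketch (the helper names and density functions are hypothetical placeholders for the trained encoder/decoder networks), the k-sample bound can be computed with a numerically stable log-mean-exp over importance weights:

```python
import math
import random

def logmeanexp(vals):
    """Numerically stable log of the mean of exp(vals)."""
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals) / len(vals))

def iwae_bound(x, y, encoder_sample, log_p_x_given_zy, log_p_z, log_q_z_given_x, k=100):
    """k-sample importance-weighted lower bound L_k on log p(x|y).

    L_k = log (1/k) sum_i p(x|z_i, y) p(z_i) / q(z_i|x), with z_i ~ q(z|x).
    """
    log_w = []
    for _ in range(k):
        z = encoder_sample(x)  # draw z_i from the encoder q(z|x)
        log_w.append(log_p_x_given_zy(x, z, y) + log_p_z(z) - log_q_z_given_x(z, x))
    return logmeanexp(log_w)
```

In the degenerate case where the proposal equals the prior and the decoder likelihood is constant, the bound is exact for any k, which makes the estimator easy to sanity-check.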
In our algorithm, the distribution of z fed into the decoder during training is q(z|x). However, this distribution can be drastically different from the prior p(z). So, instead of using p(z) as the prior for the decoder network, we use the aggregated posterior q(z) as the prior distribution for z, and estimate the likelihood of x under this directed generative model, where q(z) is the marginal of q(z|x) over the data distribution. To estimate the density q(z), we train an additional discriminator D(z) to distinguish q(z) from p(z): D is trained with the ordinary GAN loss to discriminate the real distribution of the latent variable (samples z ~ q(z|x), with x drawn from the data distribution) from the Gaussian prior p(z). Both q(z) and p(z) are easy to sample, so the discriminator is easy to train. The optimal discriminator then recovers the density ratio between q(z) and p(z).
After D is trained and p(z) is known (e.g., Gaussian), we can compute q(z) from the discriminator output.
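Concretely, with the standard GAN convention that D outputs 1 on samples of q(z) and 0 on samples of p(z), the optimum is D*(z) = q(z) / (q(z) + p(z)), so the density can be recovered as q(z) = p(z) D*(z) / (1 - D*(z)). A minimal sketch under a standard Gaussian prior (helper names are ours, not from the paper):

```python
import math

def log_prior(z, dim):
    """Log-density of the standard Gaussian prior p(z) for a dim-length vector z."""
    return -0.5 * sum(v * v for v in z) - 0.5 * dim * math.log(2 * math.pi)

def log_q_from_discriminator(z, d_out, dim):
    """Recover log q(z) from the known prior and the (assumed optimal)
    discriminator output d_out = D*(z) in (0, 1):
        q(z) = p(z) * D*(z) / (1 - D*(z))
    """
    return log_prior(z, dim) + math.log(d_out) - math.log(1.0 - d_out)
```

When the discriminator cannot tell the two distributions apart (d_out = 0.5), the recovered q(z) coincides with the prior, as expected.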
We classify a sample as OOD if its log-likelihood is below the threshold δ, and as in-distribution otherwise. We set δ to the threshold corresponding to a 95% true positive rate (TPR), where the TPR is the probability that in-distribution validation samples are correctly verified as in-distribution. Therefore, the threshold in our model is tuned only on an in-distribution validation set, whereas most previous methods also need OOD samples for hyper-parameter validation [21, 20]. We note that the distribution of OOD samples is usually not accessible before system deployment.
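This threshold rule needs only in-distribution validation scores; a minimal sketch (function names are ours, not from the paper):

```python
def threshold_at_tpr(in_dist_scores, tpr=0.95):
    """Pick the score threshold delta so that a `tpr` fraction of
    in-distribution validation samples score at or above it.
    No OOD validation set is needed."""
    s = sorted(in_dist_scores)
    # Discard the lowest (1 - tpr) fraction; the next score is delta.
    idx = int((1.0 - tpr) * len(s))
    return s[idx]

def is_in_distribution(score, delta):
    """Verify a test sample: in-distribution iff its score reaches delta."""
    return score >= delta
```

With 100 validation scores, the threshold lands at the 5th-lowest score, so exactly 95% of the validation set is verified as in-distribution.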
3.4 Theoretical Justification
In our problem, the loss function we optimize is the negative ELBO plus the mutual-information penalty described above, where p(x|z, y) is the decoder we are training, q(z|x) is the encoder, q(z) is the aggregated posterior, and p(z) is the prior. We have the following theorem that justifies the two parts of the loss.
(i) The ELBO term is a variational lower bound of log p(x|y). The bound is tight when q(z|x) is expressive enough and y, z are conditionally independent given x.
(ii) Assume x is sampled from p(x) and z is sampled from q(z|x). The condition that y and z are independent random variables is a necessary condition for the generative model p(x|z, y) and the encoder q(z|x) to be perfect, i.e., for the decoder to recover the data distribution and for the encoder to match the true posterior. This justifies the mutual-information loss.
For (i), the bound follows from the standard ELBO derivation. It is tight if q(z|x) = p(z|x, y), which is equivalent to q(z|x) = p(z|x) when y and z are conditionally independent given x.
For (ii), we notice that a perfect decoder and encoder mean the joint distribution of (y, z) induced by the model matches that of the data, under which it is straightforward to verify that y and z are independent.
3.5 Intuitive Justifications
We now present an intuitive justification for the above algorithm. First, consider our training loss, which combines a reconstruction term log p(x|z, y) and a latent-density term log q(z) (Eq. 9).
It is well known that deep neural networks generalize well on in-distribution samples, but their behavior is undefined on out-of-distribution samples. Suppose x is an out-of-distribution sample, with y the corresponding output of the classifier. Then the behavior of the stochastic encoder q(z|x) is undefined. Let q(z) denote the latent distribution used to train the decoder. There are two cases. (1) The encoder maps x to a z with low density in q(z). This case is easily detected because q(z) is easily computable; here the second term in Eq. 9 is a large negative number. (2) The encoder maps x to a z with high density in q(z). Then, since we train the decoder with the input distribution q(z), this z is an in-distribution input for the decoder, so the decoder should map (z, y) to some in-distribution sample with class label y. Since the input x is an OOD sample while the reconstruction is an in-distribution sample, the reconstruction has to be poor; here the first term in Eq. 9 is a large negative number. So in both cases, the log-likelihood score derived from the DVN is a large negative number. This is why our model is robust to both adversarial and OOD samples.
3.6 Using Density Estimators other than VAEs
In theory, density estimators other than the conditional VAE (such as auto-regressive models and flow-based models) could be used to estimate p(x|y). However, these models have drawbacks that make them unsuitable for this task. Auto-regressive models are quite slow and prone to completely ignoring the conditioning code y. Flow-based models are not robust to adversarial examples and sometimes assign higher likelihood to OOD samples than to in-distribution samples. We explained intuitively in Sec. 3.5 why our cVAE-based model does not suffer from the same problem as flow-based models.
4 Experimental results
In this section, we demonstrate the effectiveness of our DVN on several classification benchmarks and show its potential for the image captioning task. We choose DenseNet and ResNet as the backbones of our classifiers.
For evaluation, we measure the following metrics as indicators of how effective the certainty scores are at distinguishing in- and out-of-distribution images. Following the definition in previous works, in-distribution images are positive samples, while OOD images are negative samples. True negative rate (TNR) or false positive rate (FPR) at 95% true positive rate (TPR): let TP, TN, FP, and FN denote true positives, true negatives, false positives and false negatives, respectively; we measure TNR = TN / (FP + TN) or FPR = FP / (FP + TN) when TPR = TP / (TP + FN) is 95%.
Area under the receiver operating characteristic curve (AUROC): the ROC curve plots TPR against the false positive rate FP / (FP + TN) as a threshold is varied; the area under it equals the probability that an in-distribution sample receives a higher certainty score than an OOD sample. Area under the precision-recall curve (AUPR): the PR curve plots precision TP / (TP + FP) against recall TP / (TP + FN) as the threshold is varied. Verification accuracy: the maximum classification accuracy over all possible thresholds δ, where the positive and negative samples are weighted by their probabilities of appearing in the test set.
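These metrics can be sketched directly over raw certainty scores (a minimal illustration under our own naming, assuming higher scores mean more in-distribution):

```python
def tnr_at_tpr(pos_scores, neg_scores, tpr=0.95):
    """TNR at `tpr` TPR: fraction of OOD (negative) scores falling below
    the threshold that keeps a `tpr` fraction of in-distribution
    (positive) scores at or above it."""
    thr = sorted(pos_scores)[int((1.0 - tpr) * len(pos_scores))]
    return sum(1 for s in neg_scores if s < thr) / len(neg_scores)

def auroc(pos_scores, neg_scores):
    """AUROC as the probability that a random positive sample scores
    higher than a random negative sample, counting ties as one half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))
```

The pairwise form of AUROC mirrors the probabilistic interpretation given above, without constructing the ROC curve explicitly.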
We note that AUROC, AUPR and verification accuracy are threshold (δ)-independent evaluation metrics.
Table 1. For each in-distribution/OOD pair, we report TNR at TPR 95%, AUROC, and verification accuracy for ODIN / SUF / Ours, under validation on OOD samples and validation on adversarial samples.
4.1 Verifying out-of-distribution samples for classification
Datasets. The Street View House Numbers (SVHN) dataset consists of color images depicting house numbers from 0 to 9, at a resolution of 32×32. We use the official training split, which contains 73,257 images, and the test split, which has 26,032 images. The CIFAR-10/100 dataset consists of colour images in 10/100 classes; the training set has 50,000 images and the test set has 10,000 images. The TinyImageNet dataset (https://tiny-imagenet.herokuapp.com/) is a subset of the ImageNet dataset; its test set contains 10,000 images from 200 classes, downsampled to 32×32 pixels. The Large-scale Scene UNderstanding (LSUN) dataset has a test set with 10,000 images from 10 classes; LSUN (crop) and LSUN (resize) are created by downsampling in a manner similar to TinyImageNet. The Uniform-noise and Gaussian-noise datasets contain 10,000 samples each, generated by drawing every pixel of a 32×32 image i.i.d. from a uniform or Gaussian distribution, respectively [21].
Setups. For fair comparison, the classifier backbones used here are a 100-layer DenseNet with growth rate 12 [21, 20] and a 34-layer ResNet. They are trained to classify the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets, whose test sets serve as the in-distribution datasets in our testing stage; any dataset different from the training dataset is considered OOD. We use four convolutional or deconvolutional layers for the encoder and decoder, and z is a 128-dimensional vector. In the classification setting, y is a one-hot vector. The discriminator is a two-layer fully-connected network with a sigmoid output and binary cross-entropy loss. The hyper-parameters of previous methods [21, 20] need to be tuned on a validation set of 1,000 images from each in-distribution/OOD pair, whereas the threshold of DVN is tuned on in-distribution data only. This aligns with a more realistic scenario, since the OOD encountered in real-world applications is usually uncontrollable.
Effects of the threshold and performance across datasets. How hyper-parameters (e.g., δ) generalize across different OOD datasets is a challenging aspect of system deployment. Most previous methods target the case where a small set of OOD samples is available, so that δ can be calibrated by evaluating the verification error at different values of δ. However, the more realistic scenario is that we have no access to OOD examples before the testing stage. A promising trend is improving performance on an unknown OOD dataset using a model tuned on a similar OOD dataset [21, 20]. We argue that our DVN is essentially free from such concerns, since it does not need any OOD samples for validation.
To investigate how the threshold affects the FPR and TPR, Fig. 3 shows their relationship when CIFAR-10 is used for training and different OOD datasets are encountered at test time, with the DenseNet backbone. Note that the TPR (red axis) refers to the in-distribution dataset CIFAR-10 (red dashed line), while the FPR refers to the OOD datasets. We observe that the threshold corresponding to 95% TPR produces small FPRs on all OOD datasets. When the OOD images are sampled from simple distributions (e.g., Gaussian or Uniform), the usable window of thresholds is even larger.
Comparison with SOTA. The main results are summarised in Table 1. For each in/out-of-distribution pair, we report the performance of ODIN, SUF and our DVN. Notably, DVN consistently outperforms the previous methods and sets a new state of the art. As shown in Table 2, the pre-processing and model changes in ODIN and SUF unavoidably increase the error rate of the original classifier on in-distribution test data, while DVN does not affect the classification performance.
Since the technical route of DVN is essentially different from ODIN and SUF, we also compare it with the baseline maximum softmax probability (MSP) w.r.t. ROC and PR in Fig. 4. DVN shares the nice properties of MSP, i.e., a fixed classifier and a single forward pass at test time, while outperforming MSP by a large margin.
Ablation study. Disentangling z from y is critical to our model. Table 3 validates the contribution of this design w.r.t. both threshold-dependent and threshold-independent metrics. One can easily see that DVN with disentanglement significantly outperforms its counterpart without disentanglement. This also implies that DVN has successfully learned to minimize the MI between y and z. Since modeling p(x|y) is the core of DVN, we cannot remove the conditioning on y. We therefore give another ablation in which the prior p(z) is not replaced by q(z); the results are shown in Table 4.
4.2 Verifying adversarial samples
Dealing with popular adversarial attacks. To detect adversarial samples, we train our DenseNet- and ResNet-based classification networks and DVN on the training sets of CIFAR-10, CIFAR-100 or SVHN, and use their corresponding test sets as positive samples at test time. Following the setting in prior work, we apply several attack methods to generate the negative samples, such as the basic iterative method (BIM), DeepFool, and Carlini-Wagner (CW). The network structures are the same as for OOD verification.
We compare DVN with KD+PU, LID and SUF in Table 5 and show that DVN achieves state-of-the-art performance in most cases w.r.t. AUROC. Following the "detection of unknown attacks" setting, we cannot access the adversarial examples of the test stage during training or validation. Previous works therefore use another attack, the fast gradient sign method (FGSM), to construct a validation set of adversarial samples. Here, we need no such reference attack, since the threshold of DVN depends only on the in-distribution validation set. Moreover, the pre-processing and model changes required by prior methods are not needed in DVN.
Dealing with adaptive attackers aware of the deep verifier. In fact, our deep verifier cannot be fooled by white-box adversarial attacks, though the reason is somewhat involved. Assume an adversarial example x comes from class y1 but is misclassified as y2. The decoder takes two inputs, the label y2 and the latent variable z. Adversarial samples are generated by modifying a classifier's input into another sample that looks very similar but is not from the training distribution. However, in our verifier model, we have perfect knowledge of the input distribution of the decoder network, so no adversarial attack can fool the decoder network without being detected (e.g., via a low density). Although the encoder network can be fooled, fooling the encoder alone is not enough to fool the entire verifier, because the decoder's reconstruction (an image from class y2) cannot match the encoder's input (an image from class y1) in this case. The verifier will output a low likelihood even if the encoder is fooled, and there is no way to fool the decoder network. This is in fact a core reason why our method works.
Dealing with spatially transformed and unrestricted adversarial examples. The recently developed spatially transformed adversarial examples and unrestricted adversarial examples are not essentially different from norm-bounded attacks from the DVN's perspective. If the attack successfully fools the classifier into predicting the wrong label y, the conditional reconstruction given y will be unrealistic and have a small likelihood. Even though spatial transformations tend to better preserve perceptual quality, the conditional generation with the wrong y cannot preserve it. Following the setup of Sec. 4.2, using DenseNet with CIFAR-10 as the in-distribution, the AUROC (%) against spatially transformed and unrestricted adversarial examples is 84.2 and 87.8, respectively.
4.3 Verifying out-of-distribution samples for image caption
For verifying OOD in the image captioning task, we choose Oxford-102 and CUB-200 as in-distribution datasets. Oxford-102 contains 8,189 images of 102 flower classes; CUB-200 contains 11,788 images of 200 bird species. Each image has 10 descriptions. For these two datasets, we use 80% of the samples to train our captioner and the remaining 20% for testing, in a cross-validation manner. The LSUN and Microsoft COCO datasets are used as OOD.
We adopt a text-conditioned generative model as our decoder's backbone, replacing its Normal-distribution noise vector with the output of the encoder. A character-level CNN-RNN model is used for the text embedding; it produces a 1,024-dimensional vector given the description, which is then projected to a 128-dimensional code y. We configure the encoder and decoder with four convolutional layers, and the latent vector z is 100-dimensional. The input of the discriminator is the concatenation of y and z, a 228-dimensional vector; a two-layer fully-connected network with a sigmoid output unit is used as the discriminator. Table 6 summarizes the performance of DVN on the image captioning task, which can serve as a strong baseline.
Table 6. OOD verification for image captioning: TNR at TPR 95%, AUROC, and verification accuracy for each in-distribution/OOD pair.
5 Conclusion and Future Works
In this paper, we propose to enhance the performance of anomaly detection by verifying predictions of deep discriminative models using deep generative models. The idea is to train a conditional verifier network as an approximation to the inverse posterior distribution. We propose our model Deep Verifier Networks (DVN) which is based on conditional variational auto-encoders with disentanglement constraints. We show our model is able to achieve state-of-the-art performance on benchmark OOD detection and adversarial example detection tasks.
For future work, it would be interesting to integrate DVN into safe AI systems. For instance, ordinary image classifiers such as DenseNet achieve high accuracy on in-distribution queries, but their behavior is undefined on adversarial queries, while robust image classifiers sacrifice some accuracy for robustness to adversarial examples. We can use DVN to form a two-step prediction procedure: first use an ordinary classifier to get an initial prediction, then verify the prediction with DVN; if the prediction does not pass verification, switch to the robust image classifier. We believe our method can provide some insight for AI safety systems.
-  (2016) Concrete problems in ai safety. arXiv preprint arXiv:1606.06565. Cited by: §1, §1.
-  (2015) Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349. Cited by: §3.6.
-  (2015) Importance weighted autoencoders. arXiv preprint arXiv:1509.00519. Cited by: §3.3.
-  (2017) Adversarial examples are not easily detected: bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3–14. Cited by: §4.2.
-  (2018) WAIC, but why? generative ensembles for robust anomaly detection. arXiv preprint arXiv:1810.01392. Cited by: §2.
-  (2009) Imagenet: a large-scale hierarchical image database. In , pp. 248–255. Cited by: §4.1.
-  (2018) Learning confidence for out-of-distribution detection in neural networks. Cited by: §2.
-  (2017) Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410. Cited by: §4.2.
-  (2016) Deep learning. MIT press. Cited by: §1.
-  (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §4.2.
-  (2017) On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning, pp. 1321–1330. Cited by: §1.
-  (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: Table 1, §4.
-  (2017) A baseline for detecting misclassified and out-of-distribution examples in neural networks. ICLR. Cited by: §1, §1, §2, Figure 4, §4.1.
-  (2019) Deep anomaly detection with outlier exposure. ICLR. Cited by: §1, §2.
-  (2018) Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670. Cited by: §3.2.
-  (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §4.
-  (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §4.1.
-  (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Cited by: §4.2.
-  (2018) Training confidence-calibrated classifiers for detecting out-of-distribution samples. ICLR. Cited by: §2.
-  (2018) A simple unified framework for detecting out-of-distribution samples and adversarial attacks. NIPS. Cited by: §1, §2, §3.3, §4.1, §4.1, §4.1, §4.2, §4.2, Table 1.
-  (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. ICLR. Cited by: §1, §2, §2, §2, §3.3, §4.1, §4.1, §4.1, §4.1, Table 1.
-  (2018) Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613. Cited by: §4.2.
-  (2016) Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574–2582. Cited by: §4.2.
-  (2018) Do deep generative models know what they don’t know?. arXiv preprint arXiv:1810.09136. Cited by: §3.6.
-  (2011) Reading digits in natural images with unsupervised feature learning. Cited by: §4.1.
-  (2018) Detecting out-of-distribution samples using low-order deep features statistics. OpenReview. Cited by: §2.
-  (2014) A review of novelty detection. Signal Processing 99, pp. 215–249. Cited by: §2.
-  (2016) Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 49–58. Cited by: §4.3, §4.3.
-  (2016) Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396. Cited by: §4.3.
-  (2018) Constructing unrestricted adversarial examples with generative models. In Advances in Neural Information Processing Systems, pp. 8312–8323. Cited by: §4.2.
-  (2018) Out-of-distribution detection using an ensemble of self supervised leave-out classifiers. ECCV. Cited by: §2, §2.
-  (2018) Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612. Cited by: §4.2.
-  (2015) Show, attend and tell: neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044. Cited by: §4.3.
-  (2015) Lsun: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365. Cited by: §4.1.
-  (2016) Wide residual networks. arXiv preprint arXiv:1605.07146. Cited by: Table 1.