Deep Verifier Networks: Verification of Deep Discriminative Models with Deep Generative Models

11/18/2019 · Tong Che et al. · Université de Montréal, Harvard University

AI safety is a major concern in many deep learning applications such as autonomous driving. Given a trained deep learning model, an important natural problem is how to reliably verify the model's predictions. In this paper, we propose a novel framework, deep verifier networks (DVN), to verify the inputs and outputs of deep discriminative models with deep generative models. Our proposed model is based on conditional variational auto-encoders with disentanglement constraints. We give both intuitive and theoretical justifications of the model. Our verifier network is trained independently of the prediction model, which eliminates the need to retrain the verifier for a new prediction model. We test the verifier network on out-of-distribution detection and adversarial example detection problems, as well as anomaly detection problems in structured prediction tasks such as image caption generation. We achieve state-of-the-art results in all of these problems.


1 Introduction

Deep learning models provide state-of-the-art performance in various applications such as image classification, caption generation, sequence modeling and machine translation. However, such performance rests on the assumption that the training and testing data are sampled from a similar distribution [9]. On out-of-distribution (OOD) samples, deep learning models can fail silently by producing high confidence in their incorrect predictions, even for completely unrecognizable or irrelevant inputs [1]. For instance, models trained on MNIST can produce 91% confidence on random noise [13]. A similar case is shown in Fig. 1. Under such a distribution mismatch, the prediction probability from a softmax output is likely to correspond poorly to the true certainty. Unfortunately, there is very little control over the test distribution in real-world deployments due to dynamically changing environments or malicious attacks [11]. In fact, well-calibrated predictive uncertainty of DNNs is important for many production systems, authentication devices, medical diagnosis and self-driving vehicles [20].

Overconfidence on nonsensical inputs has raised concerns about artificial intelligence (AI) safety, which seeks to develop models that can identify whether they have encountered new kinds of inputs, i.e., OOD samples [1]. Formally, OOD detection can be formulated as a binary classification/verification problem: verify whether a test sample comes from the training distribution (in-distribution, ID) or is sufficiently different from it (OOD). Notably, the number of possible OOD samples is virtually infinite.

Figure 1: A network trained on CIFAR-10 with a ten-class softmax output predicts the resized 32x32x3 CVF logo (an OOD sample w.r.t. CIFAR-10) as "deer" with high confidence.

[13] proposed a baseline using the maximum value of the posterior softmax probabilities. It can be improved by adding small controlled perturbations to the input or using temperature scaling in the softmax function [21]. Another possible improvement is to model not only the in-distribution samples, but also to introduce OOD samples in the training stage [14]. However, listing all possible OOD distributions is usually intractable. Most prior works on this topic typically re-train a classification network with a modified structure or an additional optimization objective. This can make it hard to maintain the original classification performance and can be computationally expensive. Their hyper-parameters (e.g., the verification threshold) also need to be tuned with OOD examples, which are usually not accessible in the real world.

Moreover, the previous methods are essentially based on statistics of the output/feature space of the softmax-based classifier, which is not applicable to structured prediction tasks, e.g., image captioning.

In this paper, we propose to verify the predictions of deep discriminative models by using deep generative models that try to generate the input given the prediction of the discriminative model. We call this concept "deep verifier networks" (DVN).

We provide a concrete algorithm for the deep verifier. The high-level idea is simple: given an input-output pair (x, y) from the predictive model, we inversely train a verification model to estimate the density of the input x given the prediction y, i.e., p(x|y). To estimate this likelihood, we design a novel model based on a conditional variational autoencoder imposed with disentanglement constraints. To compute p(x|y), we condition on the labels y that are already predicted by the classifier to be verified. We assume the prediction is correct and check whether the input x is consistent with y, which follows a verification protocol. Although many different kinds of density estimators could also be used in theory, we argue that the design of our model is robust to OOD samples and adversarial attacks, due to the use of latent variables with explicit and accurate density estimation.

The DVN not only can be trained without OOD samples, but also eliminates the need for an OOD validation set for hyper-parameter (e.g., threshold) tuning. This is a more realistic setting, since we cannot predict what kind of OOD data may occur (e.g., with CIFAR-10 as the in-distribution, the OOD data can be SVHN, ImageNet, LSUN or adversarial samples). Moreover, it requires neither pre-processing of the input samples, nor changes to the architecture, nor re-training of the original classification model with an additional loss function. Therefore, the prediction task of the original DNN is not affected, since the classifier can stay fixed while we train the DVN.

The proposed solution achieves state-of-the-art performance for detecting either OOD or adversarial samples in all tested classification scenarios, and generalizes well to structured prediction tasks (e.g., image captioning). In Sec. 3.4, we analyse why DVN is useful for both OOD samples (low density in p(x)) and adversarial samples (high density in p(x) but low density in p(x|y)).

2 Related Work

Detecting OOD samples in low-dimensional spaces using density estimation, nearest-neighbor and clustering analysis has been well studied [27]. However, these methods are usually unreliable in high-dimensional spaces, e.g., images [21].

OOD detection with deep neural networks has been developed more recently. [13] found that pre-trained DNNs have a higher maximum softmax probability for in-distribution examples than for anomalous ones. Based on this work, [21] proposed that the maximum softmax probability can be made more separable between in-distribution and OOD samples by using adversarial perturbations for pre-processing in the training stage. [7] augmented the classifier with a confidence estimation branch, and adjusted the softmax distribution using the predicted confidence score in the training stage. [19] trained a classifier jointly with a GAN, with an additional objective for the classifier to produce low confidence on generated samples. [14] proposed to use a large collection of real images rather than generated OOD samples to train the detector. [31] applied a margin entropy loss over the softmax output, in which a part of the training data is labeled as OOD and the partition of in-distribution and OOD data is varied to train an ensemble of classifiers. These improvements over [13] require re-training the model with different modifications.

[20, 26] explore the DNNs' feature space rather than the output posterior distribution, which is applicable to pre-trained softmax neural networks. [20] obtains class-conditional Gaussian distributions using Gaussian discriminant analysis, and defines the confidence score as the Mahalanobis distance between the sample and the closest class-conditional Gaussian. By modeling each class of in-distribution samples independently, it shows remarkable results for OOD and adversarial attack detection. Note that its reported best performance also requires input pre-processing and model changes. Besides, [21, 31, 20] need OOD examples for hyper-parameter validation and require two forward passes and one backward pass at test time. Another limitation of the aforementioned methods is that they only target classifiers with softmax outputs.

Recently, [5] proposed an unsupervised OOD detector by estimating the Watanabe-Akaike Information Criterion (WAIC), which is in turn estimated using an ensemble of generative models. The goal of our model is essentially different from WAIC in that, rather than just detecting OOD samples, DVN aims to verify the predictions of a supervised predictive model. DVN unifies the detection of three different kinds of anomalies in one model: OOD samples, adversarial samples, and incorrect predictions made by the predictive model.

3 Methodology

This paper targets the problem of verification of deep predictive models. Let x be an input and y be the response to be predicted. The in-distribution samples are sampled from the joint data-generating distribution p(x, y). We propose to reverse the order of the prediction process p(y|x) and instead compute the conditional probability p(x|y). To compute p(x|y), we condition on the label y already predicted by the classifier to be verified. We assume the prediction is correct and check whether the input x is consistent with y.

The predictive model to be verified is trained on a dataset drawn from p(x, y), and may encounter samples from both p(x, y) and other distributions (e.g., out-of-distribution or adversarial samples) at test time. Note that there is a subtle difference between OOD and adversarial examples: we assume OOD samples have low density in p(x), while adversarial samples may have high density in p(x) if we admit Gaussian noise injected into the data; however, for the label y predicted by the predictive model from an adversarial input x, x should have low density in p(x|y).

Figure 2: The architecture of our proposed Deep Verifier Network (DVN). In the training stage we extract the label embedding from the ground-truth label of an in-distribution example, while in the testing stage we use the prediction of the trained and fixed classifier or captioner on the test image.

Our goal is to verify the pair (x, y), where y is predicted by the predictive model given x. The basic idea is to train a verifier network as an approximation to the inverse posterior distribution p(x|y). Modelling p(x|y) instead of p(y|x) for verification has many advantages: (1) usually the marginal distribution p(x) is much more diverse than the conditional distribution p(x|y), so modelling p(x|y) is much easier than modelling p(x); (2) modelling p(x|y) allows us to provide a unified framework for verifying OODs, adversarial examples, and mis-classifications of the classifier.

3.1 Basic Model

Our basic model is a conditional variational auto-encoder, shown in Fig. 2. The model is composed of two deep neural networks: a stochastic encoder q(z|x), which takes the input x to predict a latent variable z, and a decoder p(x|y, z), which takes both the latent variable z and the label y to reconstruct x. The encoder and decoder are jointly trained to maximize the evidence lower bound (ELBO):

log p(x|y) >= E_{q(z|x)}[ log p(x|y, z) ] - KL( q(z|x) || p(z) ) = ELBO(x, y)    (1)

The equality holds if and only if q(z|x) = p(z|x, y), where p(z|x, y) is the ground-truth posterior.
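To make the objective concrete, the following is a minimal PyTorch sketch of the conditional VAE and of Eq. 1; the fully connected layers, the 128-dimensional latent code and the Bernoulli reconstruction likelihood are illustrative assumptions (the experiments in Sec. 4 use convolutional encoders and decoders).

```python
# Minimal sketch of the conditional VAE behind the DVN (illustrative sizes and likelihood).
import torch
import torch.nn as nn
import torch.nn.functional as F

class StochasticEncoder(nn.Module):          # q(z|x): conditions on the input x only
    def __init__(self, x_dim=3072, z_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(x_dim, 512), nn.ReLU())
        self.mu = nn.Linear(512, z_dim)
        self.logvar = nn.Linear(512, z_dim)

    def forward(self, x):
        h = self.body(x)
        return self.mu(h), self.logvar(h)

class Decoder(nn.Module):                    # p(x|y, z): conditions on the label y and latent z
    def __init__(self, x_dim=3072, y_dim=10, z_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(y_dim + z_dim, 512), nn.ReLU(),
                                  nn.Linear(512, x_dim))

    def forward(self, y_onehot, z):
        return self.body(torch.cat([y_onehot, z], dim=1))    # logits of x

def elbo(enc, dec, x, y_onehot):
    """Eq. 1: E_{q(z|x)}[log p(x|y,z)] - KL(q(z|x) || N(0, I)); x assumed scaled to [0, 1]."""
    mu, logvar = enc(x)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()     # reparameterisation trick
    recon = -F.binary_cross_entropy_with_logits(dec(y_onehot, z), x,
                                                reduction='none').sum(dim=1)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1)
    return (recon - kl).mean()               # maximise this quantity during training
```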

3.2 Disentanglement constraints for anomaly detection

One problem with training the conditional variational auto-encoder is that the decoder can ignore the effect of the input label y, passing all information through the continuous latent variable z. This is not desirable, as we want to use the decoder to model the conditional likelihood p(x|y, z), not p(x|z). A simple solution to this problem is to add a disentanglement constraint to the model, such that y and z are independent. In other words, we want to minimize the mutual information between y and z. Namely, besides the ELBO loss, we also minimize a mutual information estimator Î(y; z), yielding the overall objective:

L = -ELBO(x, y) + Î(y; z)    (2)

In this paper, we use deep InfoMax [15] as the proxy for minimizing the mutual information (MI) between y and z. The mutual information estimator is defined as:

Î(y; z) = E_{p(y,z)}[ -sp( -T(y, z) ) ] - E_{p(y)p(z)}[ sp( T(y, z) ) ]    (3)

where sp(a) = log(1 + e^a) is the softplus function and T(y, z) is a discriminator network. Just as in GANs, T is trained to maximize Î(y; z), in order to obtain a better estimate of the (JS-version) mutual information, while the encoder is trained to minimize Î(y; z).
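As a rough illustration of Eq. 3, the sketch below estimates the JS-style mutual information between y and z with a small statistics network; the network T and the shuffling trick used to sample the product of marginals are implementation assumptions, not details taken from the paper.

```python
# Sketch of the JSD-based MI estimator (deep InfoMax style) between label y and latent z.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StatNet(nn.Module):                     # T(y, z): scores joint pairs vs. shuffled pairs
    def __init__(self, y_dim=10, z_dim=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(y_dim + z_dim, 256), nn.ReLU(),
                                  nn.Linear(256, 1))

    def forward(self, y, z):
        return self.body(torch.cat([y, z], dim=1)).squeeze(1)

def mi_jsd(T, y, z):
    """Eq. 3: E_joint[-sp(-T(y, z))] - E_marginals[sp(T(y, z'))]."""
    z_shuffled = z[torch.randperm(z.size(0), device=z.device)]   # break pairing -> product of marginals
    joint_term = -F.softplus(-T(y, z)).mean()
    marginal_term = F.softplus(T(y, z_shuffled)).mean()
    return joint_term - marginal_term

# T is updated to maximise mi_jsd (a tighter MI estimate), while the encoder is updated to
# minimise it, so that label information is kept out of z.
```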

3.3 Measuring the likelihood as anomaly score

Our anomaly verification criterion is to measure the log-likelihood log p(x|y) of test samples. Importance sampling is a possible solution to provide an unbiased estimate of p(x|y). Following IWAE [3], the k-sample importance-weighted estimate of the log-likelihood is a lower bound of the ground-truth log-likelihood log p(x|y):

L_k(x, y) = E_{z_1, ..., z_k ~ q(z|x)} [ log (1/k) * sum_{i=1}^{k} p(x, z_i | y) / q(z_i | x) ]    (4)

We use the factorization p(x, z|y) = p(x|y, z) p(z) to estimate this likelihood. As will be discussed below, we want the decoder to be evaluated on the same input distribution of z on which it was trained. The quantities L_k form a monotonically increasing series of lower bounds of the exact log-likelihood (L_1 <= L_2 <= ... <= log p(x|y)), and L_k approaches log p(x|y) as k goes to infinity. We choose a sufficiently large k so that L_k is a good approximation to the exact log-likelihood.

In our algorithm, the distribution of z fed into the decoder during training is the aggregated posterior q(z) = E_{x~p(x)}[q(z|x)]. However, this distribution can be drastically different from the prior p(z). So instead of using p(z) as the prior for the decoder network, we use q(z) as the prior distribution for z, and estimate the likelihood of x under this directed generative model, where z ~ q(z) and x ~ p(x|y, z). In order to estimate the density q(z), we propose to train an additional discriminator D to distinguish q(z) from p(z). D is trained to discriminate the real distribution of the latent variable, q(z) (where p(x) is the data distribution of x and q(z|x) is the encoder network), from the Gaussian prior distribution p(z), with the ordinary GAN loss. Both q(z) and p(z) are easy to sample from, so the discriminator is easy to train with such samples. The optimal discriminator is

D*(z) = q(z) / ( q(z) + p(z) )    (5)

After D is trained and p(z) is known (i.e., a Gaussian), we can compute q(z) = p(z) D*(z) / (1 - D*(z)).
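Putting Eq. 4 and Eq. 5 together, the likelihood estimate can be sketched as follows; the Bernoulli decoder likelihood, the standard-normal p(z), and the assumption that the z-space discriminator outputs a sigmoid probability are illustrative choices, not the paper's exact implementation.

```python
# Sketch of the k-sample importance-weighted estimate of log p(x|y) (Eq. 4), with the
# aggregated posterior q(z) recovered from the z-space discriminator via Eq. 5.
import math
import torch
import torch.nn.functional as F
from torch.distributions import Normal

def dvn_score(enc, dec, disc_z, x, y_onehot, k=100):
    mu, logvar = enc(x)                                        # parameters of q(z|x)
    q_zx = Normal(mu, (0.5 * logvar).exp())
    p_z = Normal(torch.zeros_like(mu), torch.ones_like(mu))    # Gaussian prior p(z)
    log_ws = []
    for _ in range(k):
        z = q_zx.rsample()
        log_px = -F.binary_cross_entropy_with_logits(dec(y_onehot, z), x,
                                                     reduction='none').sum(dim=1)  # log p(x|y,z)
        d = disc_z(z).clamp(1e-6, 1 - 1e-6)                    # D*(z) = q(z) / (q(z) + p(z)) at optimum
        log_qz = p_z.log_prob(z).sum(dim=1) + torch.log(d) - torch.log(1 - d)      # log q(z), Eq. 5
        log_qzx = q_zx.log_prob(z).sum(dim=1)
        log_ws.append(log_px + log_qz - log_qzx)               # log importance weight
    return torch.logsumexp(torch.stack(log_ws), dim=0) - math.log(k)   # per-sample log p(x|y)
```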

We classify a sample x as OOD if its log-likelihood is below the threshold δ, and as an in-distribution sample otherwise:

x is in-distribution if log p(x|y) >= δ;  x is OOD if log p(x|y) < δ.    (6)

We set δ to the threshold corresponding to a 95% true positive rate (TPR), where the TPR refers to the probability that in-distribution validation samples are correctly verified as in-distribution. Therefore, the threshold in our model is tuned only on an in-distribution validation set, while most previous methods also need OOD samples for hyper-parameter validation [21, 20]. We note that the distribution of OOD samples is usually not accessible before system deployment.
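A minimal sketch of this calibration step, assuming a 1-D array of DVN log-likelihood scores computed on an in-distribution validation set (no OOD data is touched):

```python
# Pick the threshold delta as the 5th percentile of in-distribution validation scores,
# so that 95% of in-distribution samples are accepted (TPR = 95%).
import numpy as np

def pick_threshold(scores_id_val, tpr=0.95):
    return float(np.quantile(np.asarray(scores_id_val), 1.0 - tpr))

def verify(score, delta):
    return "in-distribution" if score >= delta else "OOD"   # decision rule of Eq. 6
```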

3.4 Theoretical Justification

In our problem, the loss function we optimize can be written as:

L = L_ELBO + L_MI,  with  L_ELBO = -E_{q(z|x)}[ log p(x|y, z) ] + KL( q(z|x) || p(z) )  and  L_MI = Î(y; z)    (7)

where p(x|y, z) is the decoder and q(z|x) the encoder we are training, and Î(y; z) is the mutual information estimator from Eq. 3. We have the following theorem that justifies the two parts of the loss above.

Theorem 1
  • (i) -L_ELBO is a variational lower bound of log p(x|y). The bound is tight when the decoder is expressive enough and y, z are conditionally independent given x.
  • (ii) Assume y is sampled from p(y|x) and z is sampled from q(z|x). The condition that y and z are independent random variables is a necessary condition for the generative model and the encoder to be perfect. This justifies the loss L_MI.

Proof 3.1

For (i), we have:

log p(x|y) = E_{q(z|x)}[ log p(x|y, z) ] - KL( q(z|x) || p(z) ) + KL( q(z|x) || p(z|x, y) ) >= -L_ELBO    (8)

The bound is tight if q(z|x) = p(z|x, y), which is equivalent to q(z|x) = p(z|x) if y and z are conditionally independent given x.

For (ii), we notice that a perfect generative model and encoder mean that the decoder matches the true conditional p(x|y, z) and the aggregated posterior of the encoder matches the prior p(z), which does not depend on y. It is then easy to verify that y and z are independent.

3.5 Intuitive Justifications

We now present an intuitive justification for the above algorithm. First, consider the log-likelihood score computed by our verifier:

score(x, y) = E_{z ~ q(z|x)} [ log p(x|y, z) + log q(z) - log q(z|x) ]    (9)

It is well known that deep neural networks generalize well on in-distribution samples, but their behavior is undefined for out-of-distribution samples. Suppose x is an out-of-distribution sample, with y the corresponding output of the classifier. Then the behavior of the stochastic encoder q(z|x) on x is undefined. Let q(z) denote the distribution of latent codes used to train the decoder. There are two cases: (1) the encoder maps x to a z with low density in q(z). This case can be easily detected because q(z) is easily computable, and the log q(z) term in Eq. 9 is a large negative number. (2) The encoder maps x to a z with high density in q(z). Since we train the decoder network with the input distribution q(z), such a z is an in-distribution input for the decoder, so the decoder should map z to some in-distribution sample with class label y. Since the input x is an OOD sample and the reconstruction is an in-distribution sample, the reconstruction has to be bad, and the reconstruction term log p(x|y, z) in Eq. 9 is a large negative number. So in both cases, the log-likelihood score derived from the DVN is a large negative number. This is why our model is robust to both adversarial and OOD samples.

3.6 Using Density Estimators other than VAEs

In theory, we could also use density estimators other than a conditional VAE (such as auto-regressive models and flow-based models) to estimate p(x|y). However, all of these models have drawbacks that make them less suitable for this task. Auto-regressive models are quite slow and prone to completely ignoring the conditioning code [2]. Flow-based models are not robust to adversarial examples and thus sometimes assign higher likelihoods to OOD samples than to in-distribution samples [24]. We have intuitively explained in Sec. 3.5 why our cVAE-based model does not suffer from the same problem as flow-based models.

4 Experimental results

In this section, we demonstrate the effectiveness of our DVN on several classification benchmarks, and show its potential for the image captioning task. We choose DenseNet [16] and ResNet [12] as the backbones of our classifiers.

For evaluation, we measure the following metrics as indicators of how effective the certainty scores are in distinguishing in- and out-of-distribution images. Following the definition in previous works, in-distribution images are the positive samples, while OOD images are the negative samples. True negative rate (TNR) or false positive rate (FPR) at 95% true positive rate (TPR): let TP, TN, FP, and FN denote true positives, true negatives, false positives and false negatives, respectively. We measure TNR = TN / (FP+TN) or FPR = FP / (FP+TN) when TPR = TP / (TP+FN) is 95%. Area under the receiver operating characteristic curve (AUROC): the ROC curve plots the TPR against the false positive rate FP / (FP+TN) as the threshold varies. The area under the ROC curve is the probability that an in-distribution sample receives a higher certainty score than an OOD sample. Area under the precision-recall curve (AUPR): the PR curve plots the precision TP / (TP+FP) against the recall TP / (TP+FN) as the threshold varies. Verification accuracy: the maximum over all thresholds δ of P(in) P(s >= δ | in) + P(out) P(s < δ | out), where s is the predicted certainty score and P(in), P(out) are the fractions of positive and negative samples in the test set. It corresponds to the maximum classification accuracy over all possible thresholds.

We note that AUROC, AUPR and verification accuracy are threshold (δ)-independent evaluation metrics.
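For reference, a sketch of how these metrics can be computed from two score arrays (higher score meaning "more in-distribution"); the equal-prior form of the verification accuracy and the use of scikit-learn are assumptions made for illustration.

```python
# Threshold-independent OOD metrics; ID samples are the positives (label 1), OOD the negatives.
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score, roc_curve

def ood_metrics(scores_id, scores_ood):
    scores_id, scores_ood = np.asarray(scores_id), np.asarray(scores_ood)
    labels = np.concatenate([np.ones_like(scores_id), np.zeros_like(scores_ood)])
    scores = np.concatenate([scores_id, scores_ood])
    auroc = roc_auc_score(labels, scores)
    aupr = average_precision_score(labels, scores)
    fpr, tpr, _ = roc_curve(labels, scores)
    fpr_at_95tpr = float(np.interp(0.95, tpr, fpr))            # tpr is non-decreasing
    # Verification accuracy: best balanced accuracy over all thresholds (equal priors assumed).
    thresholds = np.unique(scores)
    ver_acc = max(0.5 * ((scores_id >= t).mean() + (scores_ood < t).mean()) for t in thresholds)
    return {"TNR@TPR95": 1.0 - fpr_at_95tpr, "AUROC": auroc, "AUPR": aupr,
            "Verification acc.": float(ver_acc)}
```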

In-Dist (backbone) / OOD | Validation on OOD samples: TNR@TPR95%, AUROC, Verification acc. | Validation on adversarial samples: TNR@TPR95%, AUROC, Verification acc.
Each entry reports ODIN / SUF / Our.
CIFAR-10 SVHN 86.2/90.8/92.4 95.5/98.1/99.0 91.4/93.9/95.1 70.5/89.6/91.2 92.8/97.6/98.1 86.5/92.6/94.2
DenseNet T-ImageN 92.4/95.0/96.2 98.5/98.8/99.0 93.9/95.0/97.3 87.1/94.9/95.6 97.2/98.8/99.1 92.1/95.0/97.4
LSUN 96.2/97.2/98.6 99.2/99.3/99.3 95.7/96.3/96.8 92.9/97.2/97.9 98.5/99.2/99.3 94.3/96.2/97.5
CIFAR-100 SVHN 70.6/82.5/85.2 93.8/97.2/97.3 86.6/91.5/93.4 39.8/62.2/70.5 88.2/91.8/92.2 80.7/84.6/86.3
DenseNet T-ImageN 42.6/86.6/89.0 85.2/97.4/97.4 77.0/92.2/93.8 43.2/87.2/89.1 85.3/97.0/97.8 77.2/91.8/93.0
LSUN 41.2/91.4/93.7 85.5/98.0/98.2 77.1/93.9/94.9 42.1/91.4/93.6 85.7/97.9/98.3 77.3/93.8/95.4
SVHN CIFAR-10 71.7/96.8/97.4 91.4/98.9/99.2 85.8/95.9/96.5 69.3/97.5/97.8 91.9/98.8/99.1 86.6/96.3/97.4
DenseNet T-ImageN 84.1/99.9/100 95.1/99.9/99.9 90.4/98.9/99.2 79.8/99.9/99.9 94.8/99.8/99.9 90.2/98.9/99.4
LSUN 81.1/100/100 94.5/99.9/99.9 89.2/99.3/99.6 77.1/100/100 94.1/99.9/100 89.1/99.2/99.5
CIFAR-10 SVHN 86.6/96.4/98.4 96.7/99.1/99.2 91.1/95.8/97.3 40.3/75.8/78.5 86.5/95.5/96.1 77.8/89.1/92.2
ResNet T-ImageN 72.5/97.1/98.0 94.0/99.5/99.6 86.5/96.3/96.9 96.6/95.5/97.1 93.9/99.0/99.2 86.0/95.4/96.3
LSUN 73.8/98.9/99.0 94.1/99.7/99.7 86.7/97.7/97.9 70.0/98.1/98.9 93.7/99.5/99.5 85.8/97.2/98.0
CIFAR-100 SVHN 62.7/91.9/93.5 93.9/98.4/98.8 88.0/93.7/94.8 12.2/41.9/46.2 72.0/84.4/86.3 67.7/76.5/79.4
ResNet T-ImageN 49.2/90.9/91.2 87.6/98.2/98.5 80.1/93.3/94.3 33.5/70.3/74.6 83.6/87.9/90.3 75.9/84.6/89.8
LSUN 45.6/90.9/92.3 85.6/98.2/98.6 78.3/93.5/95.7 31.6/56.6/63.5 81.9/82.3/85.2 74.6/79.7/81.9
SVHN CIFAR-10 79.8/98.4/99.4 92.1/99.3/99.9 89.4/96.9/97.5 79.8/94.1/94.5 92.1/97.6/98.7 89.4/94.6/94.8
ResNet T-ImageN 82.1/99.9/100 92.0/99.9/99.9 89.4/99.1/99.2 80.5/99.2/99.7 92.9/99.3/99.5 90.1/98.8/99.3
LSUN 77.3/99.9/99.9 89.4/99.9/99.9 87.2/99.5/100 76.3/99.9/99.9 90.7/99.9/99.8 88.2/99.5/99.8
Table 1: OOD verification results for image classification under different validation setups. All metrics are percentages and the best results are bolded. The ResNet used for SUF [20] and ours is ResNet-34 [12], while ODIN [21] uses the more powerful Wide ResNet of depth 40 and width 4 [35].

4.1 Verifying out-of-distribution samples for classification

Datasets. The Street View House Numbers (SVHN) dataset [25] consists of color images depicting house numbers, ranging from 0 to 9, with a resolution of 32x32. We use the official training set split, which contains 73,257 images, and the test set split, which has 26,032 images. The CIFAR-10/100 dataset [17] consists of colour images from 10/100 classes. The training set has 50,000 images, while the test set has 10,000 images. The TinyImageNet dataset (https://tiny-imagenet.herokuapp.com/) is a subset of the ImageNet dataset [6]. Its test set contains 10,000 images from 200 different classes, with the original images downsampled to 32x32 pixels. The Large-scale Scene UNderstanding (LSUN) dataset [34] has a test set with 10,000 images from 10 different classes. The LSUN (crop) and LSUN (resize) sets are created in a similar downsampling manner to the TinyImageNet dataset. The Uniform noise and Gaussian noise datasets contain 10,000 samples each, generated by drawing each pixel of a 32x32 RGB image from an i.i.d. uniform distribution on [0, 1] or an i.i.d. Gaussian distribution with mean 0.5 and variance 1 [21].

Setups. For fair comparisons, the classifier backbones used here are the 100-layer DenseNet with growth rate 12 [21, 20] and the 34-layer ResNet [20]. They are trained to classify the SVHN, CIFAR-10, CIFAR-100 and Tiny-ImageNet datasets, whose test sets are regarded as the in-distribution data in our testing stage. A dataset different from the training dataset is considered OOD. We use four convolutional or deconvolutional layers for the encoder and decoder structures, and z is a 128-dimensional vector. In the classification setting, y is a one-hot vector. The discriminator is a two-layer fully connected network with a sigmoid output and a binary cross-entropy loss. The hyper-parameters of previous methods [21, 20] need to be tuned on a validation set with 1,000 images from each in-distribution and OOD pair. Note that the threshold of DVN is tuned on in-distribution data only. This aligns with a more realistic scenario, since the OOD data in real-world applications is usually uncontrollable.

Figure 3: FPR (for OOD) and TPR (for ID) under different thresholds δ when using CIFAR-10 as the in-distribution dataset, with Tiny-ImageNet (resize), LSUN and Gaussian/Uniform noise as OOD. CIFAR-10 applies only to the TPR, which uses the dashed red line and is indicated by the right axis, while the other OOD datasets use the left FPR axis.

Effects of the threshold and performance across datasets. How the hyper-parameters (e.g., δ) generalize across different OOD datasets is a challenging aspect of system deployment. Most previous methods target the case where a small set of OOD samples is available, so that δ can be calibrated by evaluating the verification error at different values of δ. However, the more realistic scenario is that we do not have access to OOD examples drawn from the OOD distribution encountered at test time.

A promising direction is to improve the performance on an unknown OOD distribution using a model tuned on a similar OOD distribution [21, 20]. We argue that our DVN is essentially free from such concerns, since it does not need any OOD samples for validation.

To investigate how the threshold δ affects the FPR and TPR, Fig. 3 shows their relationship when CIFAR-10 is used for training and different OOD datasets are encountered at test time, with the DenseNet backbone. Note that the TPR (red axis) is used for the in-distribution dataset CIFAR-10 (red dashed line), while the FPR is used for the OOD datasets. We observe that the threshold corresponding to 95% TPR produces small FPRs on all OOD datasets. When the OOD images are sampled from simple distributions (e.g., Gaussian or Uniform noise), the available window of thresholds is even larger.

Comparison with SOTA. The main results are summarised in Table 1. For each in-/out-of-distribution pair, we report the performance of ODIN [21], SUF [20] and our DVN. Notably, DVN consistently outperforms the previous methods and achieves a new state-of-the-art. As shown in Table 2, the pre-processing and model changes in ODIN and SUF unavoidably increase the error rate of the original classification on the in-distribution test set, while DVN does not affect the classification performance.

Figure 4: Comparison with baseline MSP [13] using DenseNet, with Tiny-ImageNet as in-distribution and LSUN as OOD.
CIFAR-10 CIFAR-100
ODIN/SUF 4.81 22.37
DenseNet/DVN 4.51 22.27
Table 2: Test error rate of classification on CIFAR-10/100 using DenseNet as backbone. Our DVN does not re-train or modify the structure of the original trained classifier.

Since the technical route of DVN is essentially different from ODIN and SUF, we also compare it with the baseline maximum softmax probability (MSP) [13] w.r.t. ROC and PR in Fig. 4. DVN shares some nice properties of MSP, i.e., a fixed classifier and a single forward pass at test time, while outperforming MSP by a large margin.

Disentangle TNR@TPR95% AUROC
with 98.4 99.2
without 62.6 84.7
Table 3: The performance of DVN with and without disentanglement of y from z, with the ResNet backbone, using CIFAR-10/SVHN as in-distribution/OOD, respectively.

Ablation study. Disentangling y from z is critical to our model. Table 3 validates the contribution of this component w.r.t. both threshold-dependent and threshold-independent metrics. One can easily see that the DVN with disentanglement significantly outperforms its counterpart without disentanglement. This also implies that the DVN has successfully learned to minimize the MI between y and z.

Since modeling p(x|y) is the core of DVN, we cannot remove the conditioning on y. Here, we give another ablation study in which p(z) is not replaced with q(z). The results are shown in Table 4.

Replace p(z) with q(z) TNR@TPR95% AUROC
with 98.4 99.2
without 95.3 96.7
Table 4: The performance of DVN with and without replacing p(z) with q(z). We use the ResNet backbone, and choose CIFAR-10/SVHN as in-distribution/OOD.
Backbone | Dataset | Method | Negative samples for training | Input pre-processing | DeepFool | CW | BIM

KD+PU FGSM - 68.34 53.21 3.10
CIFAR-10 LID FGSM - 70.86 71.50 94.55
SUF FGSM Yes 87.95 83.42 99.51
Our - - 90.14 86.38 99.42



KD+PU FGSM - 65.30 58.08 66.86
DenseNet CIFAR-100 LID FGSM - 69.68 72.36 68.62
SUF FGSM Yes 75.63 86.20 98.27
Our - - 80.01 88.55 99.04

KD+PU FGSM - 84.38 82.94 83.28
SVHN LID FGSM - 80.14 85.09 92.21
SUF FGSM Yes 93.47 96.95 99.12
Our - - 94.14 97.35 99.12

KD+PU FGSM - 76.80 56.30 16.16
CIFAR-10 LID FGSM - 71.86 77.53 95.38
SUF FGSM Yes 78.06 93.90 98.91
Our - - 82.45 95.51 99.07


KD+PU FGSM - 57.78 73.72 68.85
ResNet CIFAR-100 LID FGSM - 63.15 75.03 55.82
SUF FGSM Yes 81.95 90.96 96.38
Our - - 85.22 93.38 97.72


KD+PU FGSM - 84.30 67.85 43.21
SVHN LID FGSM - 67.28 76.58 84.88
SUF FGSM Yes 72.20 86.73 95.39
Our - - 86.13 89.38 96.10
Table 5: Comparison of AUROC (%) under different validation setups. The best results are bolded.

4.2 Verifying adversarial samples

Dealing with popular adversarial attacks. To detect adversarial samples, we train our DenseNet- and ResNet-based classification networks and DVN using the training sets of the CIFAR-10, CIFAR-100 or SVHN datasets, and the corresponding test sets are used as the positive samples at test time. Following the setting in [20], we apply several attack methods to generate the negative samples, such as the basic iterative method (BIM) [18], DeepFool [23] and Carlini-Wagner (CW) [4]. The network structures are the same as for OOD verification.

We compare DVN with the strategies of KD+PU [8], LID [22] and SUF [20] in Table 5, and show that DVN achieves state-of-the-art performance in most cases w.r.t. AUROC. Following the "detection of unknown attacks" setting, we cannot access the adversarial examples used at test time during training or validation. Therefore, previous works choose another attack generation method, the fast gradient sign method (FGSM) [10], to construct a validation set of adversarial samples. Here, we do not need another attack method as a reference, since the DVN threshold depends only on the validation set of in-distribution samples. Moreover, the pre-processing and model changes required in [20] are not needed by DVN.

Figure 5: Distinguishing clean and garbage samples (as OOD) using an 18-layer ResNet trained on the CIFAR-10 dataset, compared with the state-of-the-art methods w.r.t. TNR at TPR 95%, AUROC, and detection accuracy.

Dealing with adaptive attackers aware of the deep verifier. In fact, our deep verifier cannot be fooled by white-box adversarial attacks. The reason is a bit involved. Assume an adversarial example x comes from class y1 but is misclassified as y2. The decoder takes two inputs, the label y2 and the latent variable z. Adversarial samples are generated by modifying inputs of classifiers into samples that look very similar but are not from the training distribution. However, in our verifier model, we have perfect knowledge about the input distribution of the decoder network, so no adversarial attack can fool the decoder network without being detected (e.g., via a low density). Although the encoder network can be fooled by the adversarial input, fooling the encoder network alone is not enough to fool the entire verifier, because the reconstruction of the decoder (which is an image from class y2) cannot match the input of the encoder (which is an image from class y1) in this case. The verifier will output a low likelihood even if the encoder is fooled by adversarial attacks, and there is no way to fool the decoder network. This is in fact a core reason why our method works.

Dealing with spatially transformed and unrestricted adversarial examples. The recently developed spatially transformed adversarial examples [32] and unrestricted adversarial examples [30] are not essentially different from norm-bounded attacks from the DVN's perspective. If the attack successfully fools the classifier into predicting a wrong label y, the conditional reconstruction will be unrealistic and have a small likelihood. Even though a spatial transformation tends to better preserve perceptual quality, the conditional generation given the wrong y cannot preserve it. Following Sec. 4.2, using DenseNet and CIFAR-10 as the in-distribution, the AUROC (%) against [32] and [30] is 84.2 and 87.8, respectively.

4.3 Verifying out-of-distribution samples for image caption

For verifying OOD in the image captioning task, we choose Oxford-102 and CUB-200 as the in-distribution datasets. Oxford-102 contains 8,189 images of 102 classes of flowers. CUB-200 contains 200 bird species with 11,788 images. Each image has 10 descriptions, provided by [28]. For these two datasets, we use 80% of the samples to train our captioner and the remaining 20% for testing, in a cross-validation manner. The LSUN and Microsoft COCO datasets are used as our OOD data.

The captioner used here is a classical image captioning model [33]. We choose the generator of GAN-INT-CLS [29] as our decoder's backbone, and replace its Normal-distribution noise vector with the output of the encoder. A character-level CNN-RNN model [28] is used for the text embedding, which produces a 1,024-dimensional vector given the description; this is then projected to a 128-dimensional code y. We configure the encoder and decoder with four convolutional layers, and the latent vector z is a 100-dimensional vector. The input of the discriminator is the concatenation of y and z, which results in a 228-dimensional vector. A two-layer fully connected network with a sigmoid output unit is used as the discriminator. Table 6 summarizes the performance of DVN on the image captioning task, which can be regarded as a strong baseline.

In-Dist | OOD | Validation on OOD samples: TNR@TPR95%, AUROC, Verification acc.
CUB-200 55.6 72.3 79.5
Oxford-102 LSUN 50.5 71.8 76.2
COCO 40.3 74.4 73.3
Oxford-102 39.8 68.4 72.5
CUB-200 LSUN 36.3 65.4 69.5
COCO 35.4 60.7 71.0
Table 6: OOD verification results for image captioning under different validation setups. We use CUB-200, LSUN and COCO as the OOD datasets for Oxford-102, and Oxford-102, LSUN and COCO as the OOD datasets for CUB-200.

5 Conclusion and Future Works

In this paper, we propose to enhance the performance of anomaly detection by verifying predictions of deep discriminative models using deep generative models. The idea is to train a conditional verifier network as an approximation to the inverse posterior distribution. We propose our model Deep Verifier Networks (DVN) which is based on conditional variational auto-encoders with disentanglement constraints. We show our model is able to achieve state-of-the-art performance on benchmark OOD detection and adversarial example detection tasks.

For future work, it would be interesting to integrate DVN into safe AI systems. For instance, ordinary image classifiers such as DenseNet have high accuracy on in-distribution queries, but their behavior is undefined on adversarial queries. Robust image classifiers sacrifice some accuracy for robustness to adversarial examples. We can use DVN to form a two-step prediction procedure: first use an ordinary classifier to get an initial prediction, then verify the prediction with DVN. If the prediction does not pass the verification, we switch to the robust image classifier. We believe our method can provide useful guidance for AI safety systems.

References

  • [1] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané (2016) Concrete problems in ai safety. arXiv preprint arXiv:1606.06565. Cited by: §1, §1.
  • [2] S. R. Bowman, L. Vilnis, O. Vinyals, A. M. Dai, R. Jozefowicz, and S. Bengio (2015) Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349. Cited by: §3.6.
  • [3] Y. Burda, R. Grosse, and R. Salakhutdinov (2015) Importance weighted autoencoders. arXiv preprint arXiv:1509.00519. Cited by: §3.3.
  • [4] N. Carlini and D. Wagner (2017) Adversarial examples are not easily detected: bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pp. 3–14. Cited by: §4.2.
  • [5] H. Choi, E. Jang, and A. A. Alemi (2018) WAIC, but why? generative ensembles for robust anomaly detection. arXiv preprint arXiv:1810.01392. Cited by: §2.
  • [6] J. Deng, W. Dong, R. Socher, L. Li, K. Li, and L. Fei-Fei (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp. 248–255. Cited by: §4.1.
  • [7] T. Devries and G. W. Taylor (2018) Learning confidence for out-of-distribution detection in neural networks. Cited by: §2.
  • [8] R. Feinman, R. R. Curtin, S. Shintre, and A. B. Gardner (2017) Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410. Cited by: §4.2.
  • [9] I. Goodfellow, Y. Bengio, and A. Courville (2016) Deep learning. MIT press. Cited by: §1.
  • [10] I. J. Goodfellow, J. Shlens, and C. Szegedy (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572. Cited by: §4.2.
  • [11] C. Guo, G. Pleiss, Y. Sun, and K. Q. Weinberger (2017) On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, pp. 1321–1330. Cited by: §1.
  • [12] K. He, X. Zhang, S. Ren, and J. Sun (2016) Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770–778. Cited by: Table 1, §4.
  • [13] D. Hendrycks and K. Gimpel (2017) A baseline for detecting misclassified and out-of-distribution examples in neural networks. ICLR. Cited by: §1, §1, §2, Figure 4, §4.1.
  • [14] D. Hendrycks (2019) Deep anomaly detection with outlier exposure. ICLR. Cited by: §1, §2.
  • [15] R. D. Hjelm, A. Fedorov, S. Lavoie-Marchildon, K. Grewal, A. Trischler, and Y. Bengio (2018) Learning deep representations by mutual information estimation and maximization. arXiv preprint arXiv:1808.06670. Cited by: §3.2.
  • [16] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger (2017) Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 4700–4708. Cited by: §4.
  • [17] A. Krizhevsky and G. Hinton (2009) Learning multiple layers of features from tiny images. Technical report Citeseer. Cited by: §4.1.
  • [18] A. Kurakin, I. Goodfellow, and S. Bengio (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533. Cited by: §4.2.
  • [19] K. Lee, H. Lee, K. Lee, and J. Shin (2018) Training confidence-calibrated classifiers for detecting out-of-distribution samples. ICLR. Cited by: §2.
  • [20] K. Lee, K. Lee, H. Lee, and J. Shin (2018) A simple unified framework for detecting out-of-distribution samples and adversarial attacks. NIPS. Cited by: §1, §2, §3.3, §4.1, §4.1, §4.1, §4.2, §4.2, Table 1.
  • [21] S. Liang, Y. Li, and R. Srikant (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. ICLR. Cited by: §1, §2, §2, §2, §3.3, §4.1, §4.1, §4.1, §4.1, Table 1.
  • [22] X. Ma, B. Li, Y. Wang, S. M. Erfani, S. Wijewickrema, G. Schoenebeck, D. Song, M. E. Houle, and J. Bailey (2018) Characterizing adversarial subspaces using local intrinsic dimensionality. arXiv preprint arXiv:1801.02613. Cited by: §4.2.
  • [23] S. Moosavi-Dezfooli, A. Fawzi, and P. Frossard (2016) Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2574–2582. Cited by: §4.2.
  • [24] E. Nalisnick, A. Matsukawa, Y. W. Teh, D. Gorur, and B. Lakshminarayanan (2018) Do deep generative models know what they don’t know?. arXiv preprint arXiv:1810.09136. Cited by: §3.6.
  • [25] Y. Netzer, T. Wang, A. Coates, A. Bissacco, B. Wu, and A. Y. Ng (2011) Reading digits in natural images with unsupervised feature learning. Cited by: §4.1.
  • [26] L. O. Nunes (2018) Detecting out-of-distribution samples using low-order deep features statistics. OpenReview. Cited by: §2.
  • [27] M. A. F. Pimentel, D. A. Clifton, C. Lei, and L. Tarassenko (2014) A review of novelty detection. Signal Processing 99 (6), pp. 215–249. Cited by: §2.
  • [28] S. Reed, Z. Akata, H. Lee, and B. Schiele (2016) Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 49–58. Cited by: §4.3, §4.3.
  • [29] S. Reed, Z. Akata, X. Yan, L. Logeswaran, B. Schiele, and H. Lee (2016) Generative adversarial text to image synthesis. arXiv preprint arXiv:1605.05396. Cited by: §4.3.
  • [30] Y. Song, R. Shu, N. Kushman, and S. Ermon (2018) Constructing unrestricted adversarial examples with generative models. In Advances in Neural Information Processing Systems, pp. 8312–8323. Cited by: §4.2.
  • [31] A. Vyas, N. Jammalamadaka, X. Zhu, D. Das, and T. L. Willke (2018) Out-of-distribution detection using an ensemble of self supervised leave-out classifiers. ECCV. Cited by: §2, §2.
  • [32] C. Xiao, J. Zhu, B. Li, W. He, M. Liu, and D. Song (2018) Spatially transformed adversarial examples. arXiv preprint arXiv:1801.02612. Cited by: §4.2.
  • [33] K. Xu, J. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhutdinov, R. Zemel, and Y. Bengio (2015) Show, attend and tell: neural image caption generation with visual attention. arXiv preprint arXiv:1502.03044. Cited by: §4.3.
  • [34] F. Yu, A. Seff, Y. Zhang, S. Song, T. Funkhouser, and J. Xiao (2015) Lsun: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365. Cited by: §4.1.
  • [35] S. Zagoruyko and N. Komodakis (2016) Wide residual networks. arXiv preprint arXiv:1605.07146. Cited by: Table 1.