Table 1: Closed-set accuracy (%) and open-set detection performance (AUC-ROC) of the original network and the proposed method on clean and adversarial images.

|                              | Clean Images     | Adversarial Images |                 |
|                              | Original Network | Original Network   | Proposed Method |
| Closed-set Accuracy          | 96.0             | 31.8               | 88.2            |
| Open-set Detection (AUC-ROC) | 81.2             | 51.5               | 79.1            |
Deep convolutional neural networks (CNNs) have brought significant improvements to image classification. This strong classification performance has in turn enabled many real-world computer vision applications [40, 36, 39, 37, 38, 41, 42, 45, 47, 20, 1, 46]. However, conventional CNNs have several limitations that affect real-world applications. In particular, open-set recognition [2, 9, 25, 30, 48, 32, 31, 33, 50] and adversarial attacks [11, 24, 4, 19, 44] have received considerable interest in the computer vision community in recent years.
A classifier is conventionally trained under the assumption that the classes encountered during testing are identical to those observed during training. In a real-world scenario, however, a trained classifier is likely to encounter open-set samples from classes unseen during training, and will erroneously assign a known-class identity to such a sample. Consider a CNN trained on animal classes. Given an input from an animal class (such as a cat), the network produces the correct prediction, as shown in Figure 1(a, first row). However, when the network is presented with a non-animal image, such as an airplane, it wrongly classifies the image as one of the known classes, as shown in Figure 1(a, second row). On the other hand, it is well known that adding carefully designed, imperceptible perturbations to clean images can alter a classifier's predictions. Such adversarial attacks are easy to deploy and may be encountered in real-world applications. Figure 1(a, third and fourth rows) shows how adversarial attacks affect model predictions for known and open-set images, respectively.
The computer vision community has developed several open-set recognition algorithms [2, 9, 25, 30, 48] to combat the former challenge. These algorithms convert the $K$-class classification problem into a $(K{+}1)$-class problem by treating all open-set classes as an additional class, and produce correct decisions for both known and open-set inputs, as shown in Figure 1(b, first and second rows). In the presence of adversarial attacks, however, these models fail to produce correct predictions, as illustrated in Figure 1(b, third and fourth rows). Conversely, several defense strategies [19, 44, 22, 17] have been proposed to counter the latter challenge. These defense mechanisms are designed under a closed-set testing assumption. Although they work well when this assumption holds (Figure 1(c, first and third rows)), they fail to generalize in the presence of open-set samples, as shown in Figure 1(c, second and fourth rows).
Based on this discussion, it is evident that existing open-set recognition methods do not necessarily complement adversarial defense, and vice versa. This observation motivates us to introduce a new research problem – Open-Set Adversarial Defense (OSAD) – where the objective is to simultaneously detect open-set samples and classify known classes in the presence of adversarial noise. To demonstrate the significance of the proposed problem, we conducted an experiment on the CIFAR10 dataset in which only 6 classes are known to the classifier. Table 1 tabulates both open-set detection performance (expressed as the area under the ROC curve) and closed-set classification accuracy for this experiment. When the network is presented with clean images, it performs better than 80% on both open-set detection and closed-set classification. However, as evident from Table 1, when images are attacked, open-set detection performance drops along with closed-set accuracy by a significant margin; open-set detection in this case is close to random guessing.
This paper proposes an Open-Set Defense Network (OSDN) that learns a noise-free, informative latent feature space with the aim of detecting open-set samples while being robust to adversarial images. We use an autoencoder with a classifier branch attached to its latent space as the backbone of our solution. The encoder is equipped with feature-denoising layers that remove adversarial noise. We use self-supervision and a decoder-based reconstruction process to ensure that the learned feature space is informative enough to detect open-set samples: the decoder generates noise-free images from the obtained latent features, and self-supervision forces the network to perform an auxiliary classification task on the same features. As shown in Table 1, the proposed method provides robustness against adversarial attacks in terms of both classification and open-set detection. The main contributions of our paper can be summarized as follows:
1. This paper proposes a new research problem named Open-Set Adversarial Defense (OSAD) where adversarial attacks are studied under an open-set setting.
2. We propose an Open-Set Defense Network (OSDN) that learns a latent feature space that is robust to adversarial attacks and informative to identify open-set samples.
3. A test protocol is defined for the OSAD problem. Extensive experiments are conducted on three publicly available image classification datasets to demonstrate the effectiveness of the proposed method.
2 Related Work
Adversarial Attack and Defense Methods. Szegedy et al.  reported that carefully crafted imperceptible perturbations can be used to fool a CNN into making incorrect predictions. Since then, various adversarial attacks have been proposed in the literature. The Fast Gradient Sign Method (FGSM)  uses the sign of the loss gradient with respect to the input to generate adversarial images. The Basic Iteration Method (BIM)  and Projected Gradient Descent (PGD)  extend FGSM to stronger attacks using iterative gradient steps. Different from these gradient-based attacks, Carlini and Wagner  proposed the CW attack, which generates adversarial samples via direct optimization. Adversarial training  is one of the most widely used defense mechanisms; it trains the network on adversarially perturbed images generated on-the-fly based on the model's current parameters. Several recent works have proposed denoising-based operations to further improve adversarial training: pixel denoising guided by high-level features , pixel-level denoising restricted to the most influential local image regions identified via class activation map responses , and feature-level noise removal using denoising filters , whose effectiveness was demonstrated with a selection of different filters.
Open-set Recognition. The possibility of open-set samples producing very high probability scores in a closed-set classifier was first brought to attention in , and it was later shown that deep learning models are affected by the same phenomenon . The authors of  proposed a statistical solution to this problem, called OpenMax. They converted the $K$-class classification problem into a $(K{+}1)$-class problem by treating the extra class as the open-set class, apportioning the logits of known classes to the open-set class according to the spatial positioning of a query sample in an intermediate feature space. This formulation was later adopted by  and , which use a generative model to produce the logits of the open-set class. The authors of  argued that generative features contain information that benefits open-set recognition; on these grounds, they concatenated a generative feature with a classifier feature when building the OpenMax layer. A generative approach was also used in , where a class-conditioned decoder detects open-set samples. Both  and  show that incorporating generative features can benefit open-set recognition. Note that open-set recognition is more challenging than novelty detection [27, 28, 31, 34, 29], which only aims to determine whether an image observed during inference belongs to one of the known classes.
Self-supervision is an unsupervised learning technique in which the data itself provides the supervision. Recent work in self-supervision has introduced several techniques that improve performance in classification and detection tasks. For example, in , given an anchor image patch, self-supervision is carried out by asking the network to predict the relative position of a second patch. In , a multi-task prediction framework extended this formulation, forcing the network to predict a combination of relative order and pixel color. In , the image is randomly rotated by a multiple of 90 degrees and the network must predict the rotation applied to the transformed image.
3 Background

Adversarial Attacks. Consider a trained network $f$ parameterized by $\theta$. Given a data and label pair $(x, y)$, an adversarial image $x'$ can be produced as $x' = x + \delta$, where the perturbation $\delta$ is determined by a given white-box attack based on the model's parameters. In this paper, we consider two types of adversarial attacks.
The first attack considered is the Fast Gradient Sign Method (FGSM) , where adversarial images are formed as

$$x' = \Pi\big(x + \epsilon\,\mathrm{sign}(\nabla_x \mathcal{L}(f_\theta(x), y))\big),$$

where $\mathcal{L}$ is a classification loss, $\Pi$ denotes the projection of its elements onto the valid pixel value range, and $\epsilon$ denotes the size of the $\ell_\infty$ $\epsilon$-ball. The second attack considered is the Projected Gradient Descent (PGD) attack . Adversarial images are generated iteratively as

$$x'_{t+1} = \Pi_{x,\epsilon}\big(x'_t + \alpha\,\mathrm{sign}(\nabla_x \mathcal{L}(f_\theta(x'_t), y))\big),$$

where $\Pi_{x,\epsilon}$ denotes the projection of its elements onto the $\epsilon$-ball around $x$ and the valid pixel value range, and $\alpha$ denotes a step size smaller than $\epsilon$. We use the adversarial sample from the final step $T$: $x' = x'_T$.
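To make the two attacks concrete, the following is a minimal numpy sketch of FGSM and PGD against a toy linear softmax classifier; the model, the `grad_loss` helper, and all hyperparameter values are illustrative stand-ins, not the paper's implementation.

```python
import numpy as np

def grad_loss(W, x, y):
    """Gradient of the cross-entropy loss w.r.t. the input x, for a toy
    linear classifier with logits W @ x (a stand-in for any model f_theta)."""
    z = W @ x
    p = np.exp(z - z.max())
    p /= p.sum()                                   # softmax probabilities
    p[y] -= 1.0                                    # dL/dz for cross-entropy
    return W.T @ p                                 # chain rule back to the input

def fgsm(W, x, y, eps):
    """Single-step FGSM: move eps along the sign of the input gradient,
    then project back onto the valid pixel range [0, 1]."""
    x_adv = x + eps * np.sign(grad_loss(W, x, y))
    return np.clip(x_adv, 0.0, 1.0)

def pgd(W, x, y, eps, alpha, steps):
    """PGD: repeated gradient-sign steps of size alpha, projected after
    every step onto the l_inf eps-ball around x and the valid pixel range."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_loss(W, x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)   # eps-ball projection
        x_adv = np.clip(x_adv, 0.0, 1.0)           # valid pixel range
    return x_adv
```

Both attacks need only gradient access to the model, which is why adversarial training can regenerate them on-the-fly from the current parameters at every step.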
OpenMax Classifier. A SoftMax classifier trained on a $K$-class problem typically produces $K$ probability predictions. OpenMax is an extension in which $K{+}1$ probability scores are produced, with the final score corresponding to the open-set class. Given known classes $c \in \{1, \dots, K\}$, OpenMax identifies open-set samples by calibrating the final hidden layer of a classifier as follows:

$$\hat{P}(y = c \mid x) = \frac{e^{\hat{l}_c(x)}}{\sum_{j=1}^{K+1} e^{\hat{l}_j(x)}}, \qquad \hat{l}_c(x) = l_c(x)\, w_c(x), \qquad \hat{l}_{K+1}(x) = \sum_{c=1}^{K} l_c(x)\big(1 - w_c(x)\big),$$

where $l(x)$ denotes the logit vector obtained prior to the SoftMax operation and $w_c(x)$ represents the belief that $x$ belongs to the known class $c$. Here, the $(K{+}1)$-th class corresponds to the open-set class. The belief $w_c(x)$ is calculated from the distance of a given sample to its class mean $\mu_c$ in an intermediate feature space. During training, the distances of all training samples of a given class to the corresponding class mean are evaluated to form a matched score distribution, and a Weibull distribution is fitted to the tail of this distribution. If the feature representation of the input in the same feature space is $v(x)$, $w_c(x)$ is calculated for the top-$\alpha$ scoring classes as

$$w_{s(i)}(x) = 1 - \frac{\alpha - i}{\alpha}\left(1 - e^{-\left(\|v(x) - \mu_{s(i)}\| / \eta_{s(i)}\right)^{m_{s(i)}}}\right),$$

where $(\eta_c, m_c)$ are the parameters of the Weibull distribution corresponding to class $c$, $\alpha$ is a hyperparameter, and $s(i)$ is the index of the $i$-th largest logit when the logits are sorted in descending order.
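As an illustration of this calibration, here is a minimal numpy sketch of OpenMax recalibration; the per-class Weibull parameters, the use of the Weibull CDF as an outlier score, and the 0-indexed top-$\alpha$ weighting are simplifying assumptions of this sketch rather than the exact original formulation.

```python
import numpy as np

def openmax(logits, feat, means, weib_scale, weib_shape, alpha=2):
    """Recalibrate K logits into K+1 OpenMax probabilities.
    weib_scale / weib_shape are per-class Weibull parameters assumed to be
    fitted offline on the tail of each class's distance-to-mean distribution."""
    K = len(logits)
    dists = np.linalg.norm(feat - means, axis=1)   # distance to each class mean
    # Weibull CDF of the distance: belief that the sample is an outlier for class c
    outlier = 1.0 - np.exp(-(dists / weib_scale) ** weib_shape)
    w = np.ones(K)
    order = np.argsort(logits)[::-1]               # classes sorted by logit
    for i, c in enumerate(order[:alpha]):          # revise only the top-alpha classes
        w[c] = 1.0 - ((alpha - i) / alpha) * outlier[c]
    l_hat = logits * w                             # calibrated known-class logits
    l_open = np.sum(logits * (1.0 - w))            # mass moved to the open-set class
    z = np.concatenate([l_hat, [l_open]])
    p = np.exp(z - z.max())
    return p / p.sum()                             # K+1 probabilities
```

A query whose feature lies close to the mean of its predicted class keeps most of its logit mass; a query far from every class mean transfers mass to the $(K{+}1)$-th, open-set, entry.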
4 Proposed Method
The proposed network consists of four CNNs: an encoder, a decoder, an open-set classifier, and a transformation classifier. Figure 2 illustrates the network structure of the proposed method and its computation flow. The encoder contains several feature-denoising layers  between its convolutional layers. The open-set classifier has no structural difference from a regular classifier; however, an OpenMax layer is added to the end of the classifier during inference, as indicated in Figure 2.
Given an input image, the network first generates an adversarial image based on the current network parameters. This image is passed through the encoder to obtain the latent feature. The feature is passed through the open-set classifier via path (1) to evaluate the cross-entropy loss $\mathcal{L}_{cls}$. Then, the image corresponding to the latent feature is generated by passing the feature through the decoder following path (2); the decoded image is compared to the corresponding clean image through the reconstruction loss $\mathcal{L}_{rec}$. Finally, the input image is subjected to a geometric transform, an adversarial image corresponding to the transformed image is obtained, and this image is passed through path (3) to the transformation classifier. The classifier output is used to calculate the cross-entropy loss $\mathcal{L}_{ssd}$ with respect to the transform applied to the image. The network is trained end-to-end using the loss function

$$\mathcal{L} = \mathcal{L}_{cls} + \mathcal{L}_{rec} + \mathcal{L}_{ssd}.$$
In the following subsections, we describe various components and computation involved in all three paths in detail.
Noise-free Feature Encoding. The proposed network uses an encoder to produce noise-free features; the open-set classifier then operates on the learned features to perform classification. During training, the open-set classifier has no structural difference from a standard classifier. Inspired by , we embed denoising layers after the convolutional blocks of the encoder so that feature denoising is carried out explicitly on adversarial samples. We adopt the Gaussian (softmax) non-local means filter  as the denoising layer. Given an input feature map $x$, non-local means  computes a denoised feature map $y$ as a weighted mean over the spatial region:

$$y_p = \frac{1}{\mathcal{C}(x)} \sum_{\forall q} f(x_p, x_q)\, x_q,$$

where $f$ is a feature-dependent weighting function. For the Gaussian (softmax) version, $f(x_p, x_q) = e^{\theta(x_p)^\top \phi(x_q)/\sqrt{d}}$, where $\theta$ and $\phi$ are two convolutional embedding functions and $d$ corresponds to the number of channels. $\mathcal{C}(x)$ is a normalization function, $\mathcal{C}(x) = \sum_{\forall q} f(x_p, x_q)$.
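The non-local operation above can be sketched in a few lines of numpy; here the two embedding functions are plain $(C \times C)$ matrices standing in for learned convolutional embeddings, and the softmax weighting is computed densely over all spatial positions.

```python
import numpy as np

def nonlocal_denoise(x, theta_w, phi_w):
    """Gaussian (softmax) non-local means over a (C, H, W) feature map.
    theta_w and phi_w are (C, C) matrices standing in for the two learned
    embedding convolutions; each output position is a softmax-weighted
    mean of the features at ALL spatial positions."""
    C, H, W = x.shape
    flat = x.reshape(C, H * W)                   # one feature vector per position
    t = theta_w @ flat                           # embedded queries, (C, HW)
    p = phi_w @ flat                             # embedded keys,    (C, HW)
    logits = (t.T @ p) / np.sqrt(C)              # pairwise affinities, (HW, HW)
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)            # the normalization C(x)
    y = flat @ w.T                               # weighted mean of all positions
    return y.reshape(C, H, W)
```

Because every output feature is a convex combination of input features, isolated high-magnitude activations (the typical signature of adversarial noise in feature maps) are smoothed toward the values of semantically similar positions.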
Formally, we denote the encoder embedded with denoising layers as $E$, parameterized by $\theta_E$, and the open-set classifier as $C$, parameterized by $\theta_C$. Given labeled clean data $(x, y)$ drawn from the data distribution of known classes, we generate the corresponding adversarial image $x'$ on-the-fly using either the FGSM or PGD attack based on the current parameters and the true label $y$. The obtained adversarial image is passed through the encoder and classifier (via path (1)) to obtain the cross-entropy loss, defined as

$$\mathcal{L}_{cls} = \mathbb{E}_{(x,y)}\big[\mathcal{L}_{ce}\big(C(E(x')),\, y\big)\big],$$

where $\mathcal{L}_{ce}$ denotes the standard cross-entropy loss.
By minimizing this adversarial training loss, the encoder embedded with denoising layers learns a noise-free latent feature space. During inference, an OpenMax layer is added on top of the classifier. With this formulation, the open-set classifier operating on the noise-free latent features learns to predict the correct class, for both known and open-set samples, even when the input is contaminated with adversarial noise.
Clean Image Generation. In this section, we introduce the image generation branch proposed in our method. The objective of the image generation branch is to generate noise-free images from adversarial images by taking advantage of the decoder network. This is motivated by two factors.
First, autoencoders are widely used in the literature for image denoising. By forcing the autoencoder to produce noise-free images, we provide additional supervision for removing noise in the latent feature space. Second, it is well known that open-set recognition becomes more effective in the presence of more descriptive features . When a classifier is trained, it models the boundary of each class; therefore, the features produced by a classification network contain only the information necessary to model class boundaries. When the network is additionally asked to generate noise-free images from the latent representations, however, it ends up learning generative features, which are more descriptive than those of a pure classifier. In fact, such generative features are used in  and  to boost open-set recognition performance. Therefore, we argue that adding the decoder as an image generation branch mutually benefits both open-set recognition and adversarial defense.
We pass adversarial images through path (2), as illustrated in Figure 2, to generate the decoded images. The decoder network $D$, parameterized by $\theta_D$, and the encoder are jointly optimized to minimize the distance between the decoded images and the corresponding clean images using the loss

$$\mathcal{L}_{rec} = \mathbb{E}_{x}\big[\,\| D(E(x')) - x \|_1\,\big].$$
Self-supervised Denoising. Finally, we use self-supervision as a means to further increase the informativeness and robustness of the latent feature space. Self-supervision is a machine learning technique used to learn representations in the absence of labeled data. In our work we adopt the rotation-based self-supervision proposed in : first, a random rotation from a finite set of possible rotations is applied to an image; then, a classifier trained on top of the latent feature vector recognizes the applied rotation.
In our approach, similar to , we first generate a random number $r \in \{0, 1, 2, 3\}$ as the rotation ground-truth and transform the input clean image $x$ by rotating it by $90r$ degrees. Then, based on the rotated clean image $\tilde{x}$, we generate a rotated adversarial image $\tilde{x}'$ on-the-fly using either the FGSM or PGD attack based on the current network parameters and the rotation ground-truth $r$. This image is passed through the transformation classifier to compute a cross-entropy loss with respect to $r$. Denoting the transformation classifier as $T$, parameterized by $\theta_T$, the adversarial training loss for self-supervised denoising is

$$\mathcal{L}_{ssd} = \mathbb{E}_{x}\big[\mathcal{L}_{ce}\big(T(E(\tilde{x}')),\, r\big)\big].$$
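A minimal sketch of the rotation-based label generation, using clean numpy images and illustrative names; the adversarial perturbation and the transformation classifier itself are omitted.

```python
import numpy as np

def rotate_with_label(x, rng):
    """Rotation-based self-supervision target: rotate an (H, W) image by a
    random multiple of 90 degrees and return (rotated image, rotation label r),
    so a transformation classifier can be trained to predict r in {0, 1, 2, 3}."""
    r = int(rng.integers(4))          # ground-truth rotation class
    return np.rot90(x, k=r), r
```

The label is free: it is derived from the transform itself, so no human annotation is needed, and the same rotated image can be adversarially perturbed before being fed to the transformation classifier.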
There are multiple reasons to use self-supervision in our method. When a classifier learns to differentiate between rotations, it learns to pay attention to the structure and orientation of objects from known classes. As a result, when self-supervision is carried out in addition to classification, the underlying feature space captures additional information that a pure classifier would ignore. Self-supervision therefore enhances the informativeness of the latent feature space, which directly benefits open-set recognition. On the other hand, since we use adversarial images for self-supervision, this operation directly contributes to learning the denoising operator in the encoder. It should also be noted that recent work  has found that self-supervised learning contributes to robustness against adversarial samples. Therefore, we believe that the addition of self-supervision benefits both open-set detection and adversarial defense.
Implementation Details. We adopt the structure of Resnet-18 , which has four main blocks, for the encoder, and embed a denoising layer after each main block. For the decoder, we use the decoder network proposed in , with three transpose-convolution layers for the experiments on SVHN and CIFAR10 and four transpose-convolution layers for the experiments on TinyImageNet. For both the open-set classifier and the transformation classifier, we use a single fully connected layer. We use the Adam optimizer  with a learning rate of 1e-3. Model selection is carried out by choosing the trained model with the best closed-set accuracy on the validation set. We use 5 iterations for the PGD attack, and a fixed perturbation budget $\epsilon$ for the FGSM attack, in both adversarial training and testing.
5 Experimental Results
In order to assess the effectiveness of the proposed method, we carry out experiments on three multi-class classification datasets. In this section, we first describe the datasets, baseline methods, and protocol used in our experiments. We then evaluate our method and the baselines on open-set recognition under adversarial attacks. To further validate the effectiveness of our method, we conduct additional experiments on out-of-distribution detection under adversarial attacks. We conclude the section with an ablation study and various visualizations, accompanied by a brief analysis of the results.
5.1 Datasets

We evaluate our method and other state-of-the-art methods on three standard image classification datasets for open-set recognition:

SVHN and CIFAR10. Both CIFAR10  and SVHN  are classification datasets with 10 classes and images of size 32×32. The Street-View House Numbers dataset (SVHN) contains house number signs extracted from Google Street View; CIFAR10 contains images from four vehicle classes and six animal classes. We randomly split the 10 classes into 6 known classes and 4 open-set classes to simulate an open-set recognition scenario, and consider three randomly selected splits for testing. (Details of the known classes present in each split can be found in the supplementary material.)
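The known/open split protocol described above can be simulated with a short helper; the function name and seed are illustrative, not part of the paper's protocol.

```python
import numpy as np

def make_open_set_split(num_classes=10, num_known=6, seed=0):
    """Randomly partition class ids into known and open-set classes,
    mirroring the 6-known / 4-open CIFAR10 protocol described above."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_classes)
    return sorted(perm[:num_known]), sorted(perm[num_known:])
```

Running the helper with three different seeds yields the three random splits used for testing.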
5.2 Baseline Methods.
We consider the following two recently proposed adversarial defense methods as baselines: Adversarial Training  and Feature Denoising . For both baselines, we add an OpenMax layer at the last hidden layer during testing to facilitate a fair comparison in open-set recognition. Moreover, to evaluate the performance of a classifier without a defense mechanism, we train a Resnet-18 network on clean images from the known classes and add an OpenMax layer during testing. We test this network with clean images (denoted clean) and with adversarial images (denoted adv on clean).
5.3 Quantitative Results
| adv on clean | 41.6±3.2 | 39.3±1.8 | 31.8±4.5 | 13.0±4.0 | 11.2±2.6 | 4.4±0.8 |
Open-set Recognition. In conventional open-set recognition, the model is required to perform two tasks: it should detect open-set samples effectively, and it should correctly classify closed-set samples. We take both factors into account when evaluating open-set defense. In particular, following previous open-set works , we use the area under the ROC curve (AUC-ROC) to evaluate performance in identifying open-set samples under adversarial attacks. To evaluate closed-set accuracy, we calculate prediction accuracy considering only known-class samples in the test set. In our experiments, both known and open-set samples are subjected to adversarial attacks (FGSM and PGD) prior to testing. Adversarial samples for known classes are generated using the ground-truth labels, while adversarial samples for open-set classes are generated based on the model's predictions.
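For reference, AUC-ROC for open-set detection can be computed directly from per-sample scores using the rank-based (Mann-Whitney) formulation; this sketch assumes higher scores indicate known-class samples.

```python
import numpy as np

def auc_roc(known_scores, open_scores):
    """Area under the ROC curve for open-set detection, computed as the
    probability that a randomly drawn known-class sample scores higher
    than a randomly drawn open-set sample (ties count as half)."""
    known = np.asarray(known_scores, dtype=float)[:, None]
    opens = np.asarray(open_scores, dtype=float)[None, :]
    wins = (known > opens).sum() + 0.5 * (known == opens).sum()
    return wins / (known.size * opens.size)
```

An AUC of 1.0 means the two score distributions are perfectly separated; 0.5 corresponds to random guessing, which is roughly where the undefended network lands under attack (Table 1).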
We tabulate the closed-set accuracy and open-set detection performance in Tables 2 and 3, respectively. Networks trained on clean images produce very high recognition performance on clean images under both scenarios. However, when adversarial noise is present, both open-set detection and closed-set classification performance drop significantly. This validates that current adversarial attacks can easily fool an open-set recognition method such as OpenMax, and thus that OSAD is a critical research problem. Both baseline defense mechanisms improve recognition on both known and open-set samples. As can be observed from Tables 2 and 3, the proposed method obtains the best open-set detection performance and closed-set accuracy compared to all considered baselines across all three datasets. In particular, the proposed method achieves a clear improvement in open-set detection on the SVHN dataset compared to the other baselines, with smaller but consistent improvements on the other datasets. The proposed method also consistently outperforms the baselines in closed-set accuracy across datasets.
It is interesting to note that methods involving adversarial training perform better than the clean-image classification baseline under FGSM attacks on the TinyImageNet dataset. This is because only 20 classes of TinyImageNet are selected for training, each with only 500 images. When a small dataset is used to train a model with a large number of parameters, the network easily overfits the training set. With adversarial training, the network observes a greater variety of data, and therefore reaches a more generalizable solution.
| adv on clean | 56.4±1.2 | 54.1±2.9 | 51.5±2.8 | 45.5±0.5 | 47.9±2.7 | 48.6±1.3 |
Out-of-distribution detection. In this sub-section, we evaluate the proposed method on the out-of-distribution (OOD) detection  problem on CIFAR10, using the protocol described in . We consider all CIFAR10 classes as known classes and test images from the ImageNet and LSUN datasets (both cropped and resized) as out-of-distribution images . We evaluate performance under attack by creating PGD adversarial images for both known and OOD data: adversarial samples for known classes are generated using the ground-truth labels, while adversarial samples for OOD data are generated based on the model's predictions. Performance is measured by the macro-averaged F1 score, using an OpenMax layer with a threshold when assigning open-set labels to query images. Table 4 tabulates the OOD detection performance across all four cases for both baselines and the proposed method. As evident from Table 4, the proposed method outperforms the baseline methods in all test cases of the OOD task. This experiment further verifies the effectiveness of our method in identifying samples from open-set classes even under adversarial attacks.
| adv on clean | 4.7 | 4.4 | 7.3 | 3.8 |
5.4 Ablation Study
The proposed network consists of four CNN components. In this sub-section, we investigate the contribution of each component to the overall performance by conducting an ablation study on the CIFAR10 dataset for the task of open-set recognition. The considered cases and corresponding results are tabulated in Table 5 (C-accuracy denotes closed-set accuracy). From Table 5, it can be seen that, compared to plain adversarial training with an encoder, embedding the denoising layers improves open-set classification performance. Moreover, adding feature denoising and adding self-supervision each improve performance on their own. The proposed method, which integrates all of these components, obtains the best performance, showing that the added components complement each other to yield better adversarial defense and open-set recognition.
| adv on clean | 45.9 | 8.6 |
5.5 Qualitative Results
In this section, we visualize the results of the denoising operation and project the obtained features onto a 2D plane to qualitatively analyze the performance of the proposed method. We first consider the SVHN dataset. Figure 4 shows a set of clean images, the corresponding PGD adversarial images, and the images obtained when the latent feature is decoded by the proposed method, with known-class and open-set samples shown in separate columns. From the samples in Figure 4, it can be observed that image noise has been removed for both open-set and known-class images. However, reconstruction quality is superior for known-class samples: reconstructions of open-set samples look blurry and structurally different. For example, the image of the digit 2 shown in the first row looks similar to a digit 9 once reconstructed.
In Figure 5, we use tSNE  to visualize the latent features obtained by the proposed method and by two baselines. As shown in Figure 5, with our method most open-set features lie away from the known-set feature distribution, which is why the proposed method obtains good open-set detection performance; in all baseline methods, there is more overlap between the two types of features. When open-set features lie away from the data manifold of the known classes, reconstructions obtained through the decoder tend to be poor. The tSNE plot therefore explains why our method's reconstructions of open-set samples in Figure 4 were poor. As such, Figures 4 and 5 mutually verify the effectiveness of our method in defending against adversarial samples and identifying open-set samples simultaneously.
Moreover, we visualize randomly selected feature maps from the second residual block of a trained Resnet-18  and of the encoder of the proposed OSDN, applied to clean CIFAR10 images and their adversarially perturbed counterparts. From Figure 4, it can be observed that, compared to Resnet-18, the proposed network significantly reduces adversarial noise in the feature maps of adversarial images for both known and open-set classes. This further demonstrates that the proposed network indeed performs feature denoising through the embedded denoising layers.
6 Conclusion

In this paper, we studied a novel research problem – Open-Set Adversarial Defense (OSAD). We first showed that existing adversarial defense mechanisms do not generalize well to open-set samples, and that open-set classifiers can be easily attacked by existing attack mechanisms. We then proposed the Open-Set Defense Network (OSDN), with the objective of producing a model that detects open-set samples while being robust against adversarial attacks. The proposed network consists of a feature denoising operation, a self-supervision operation, and a denoised image generation function. We demonstrated the effectiveness of the proposed method under both open-set and adversarial attack settings on three publicly available classification datasets, and showed that the proposed method can also be deployed for the out-of-distribution detection task.
Acknowledgments. This work is partially supported by the Research Grants Council (RGC/HKBU12200518), Hong Kong. Vishal M. Patel was supported by the DARPA GARD Program HR001119S0026-GARD-FP-052.
- (2020) Anomaly detection-based unknown face presentation attack detection. In IJCB.
- (2016) Towards open set deep networks. In CVPR.
- (2005) A non-local algorithm for image denoising. In CVPR.
- Towards evaluating the robustness of neural networks. In SP.
- (2009) ImageNet: a large-scale hierarchical image database. In CVPR.
- (2015) Unsupervised visual representation learning by context prediction. In ICCV.
- (2017) Multi-task self-supervised visual learning. In ICCV.
- (2018) Robust physical-world attacks on deep learning models. In CVPR.
- (2017) Generative OpenMax for multi-class open set classification. BMVC.
- (2018) Unsupervised representation learning by predicting image rotations. ICLR.
- (2014) Explaining and harnessing adversarial examples. ICLR.
- CIIDefence: defeating adversarial attacks by fusing class-specific image inpainting and image denoising. In CVPR.
- (2016) Deep residual learning for image recognition. In CVPR.
- (2017) A baseline for detecting misclassified and out-of-distribution examples in neural networks. ICLR.
- (2019) Using self-supervised learning can improve model robustness and uncertainty. In NIPS.
- CIFAR-10 (Canadian Institute for Advanced Research).
- (2019) Adversarial defense via learning to generate diverse attacks. In ICCV.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2017) Adversarial examples in the physical world. ICLR Workshop.
- (2019) Learning modality-consistency feature templates: a robust RGB-infrared tracking system. IEEE Trans. Industrial Electronics 66 (12), pp. 9887–9897.
- (2018) Enhancing the reliability of out-of-distribution image detection in neural networks. ICLR.
- (2018) Defense against adversarial attacks using high-level representation guided denoiser. In CVPR.
- (2008) Visualizing data using t-SNE. Journal of Machine Learning Research 9 (Nov), pp. 2579–2605.
- (2018) Towards deep learning models resistant to adversarial attacks. ICLR.
- (2018) Open set learning with counterfactual images. In ECCV.
- (2011) Reading digits in natural images with unsupervised feature learning.
- (2020) Multiple class novelty detection under data distribution shift. In ECCV.
- (2020) Utilizing patch-level activity patterns for multiple class novelty detection. In ECCV.
- (2018) One-class convolutional neural network. IEEE Signal Processing Letters 26 (2), pp. 277–281.
- (2019) C2AE: class conditioned auto-encoder for open-set recognition. In CVPR.
- Deep transfer learning for multiple class novelty detection. In CVPR.
- (2020) Generative-discriminative feature representations for open-set recognition. In CVPR.
- (2019) OCGAN: one-class novelty detection using GANs with constrained latent representations. In CVPR.
- Learning deep features for one-class classification. IEEE Transactions on Image Processing 28 (11), pp. 5450–5463.
- (2013) Towards open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence (T-PAMI) 35.
- (2019) Multi-adversarial discriminative deep domain generalization for face presentation attack detection. In CVPR.
- (2017) Deep convolutional dynamic texture learning with adaptive channel-discriminability for 3D mask face anti-spoofing. In IJCB.
- (2018) Feature constrained by pixel: hierarchical adversarial deep domain adaptation. In ACM MM.
- (2019) Joint discriminative learning of deep dynamic textures for 3D mask face anti-spoofing. IEEE Trans. Inf. Forens. Security 14 (4), pp. 923–938.
- (2020) Regularized fine-grained meta face anti-spoofing. In AAAI.
- (2019) Adversarial auto-encoder for unsupervised deep domain adaptation. IET Image Processing.
- (2020) Federated face anti-spoofing. arXiv preprint arXiv:2005.14638.
- (2014) Intriguing properties of neural networks. ICLR.
- (2019) Feature denoising for improving adversarial robustness. In CVPR.
- (2020) Deep learning for person re-identification: a survey and outlook. arXiv preprint arXiv:2001.04193.
- (2020) Augmentation invariant and instance spreading feature for softmax embedding. IEEE Transactions on Pattern Analysis and Machine Intelligence.
- (2019) Unsupervised embedding learning via invariant and spreading instance feature. In CVPR.
- (2019) Classification-reconstruction learning for open-set recognition. In CVPR.
- (2015) LSUN: construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365.
- (2016) Sparse representation-based open set recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 39 (8), pp. 1690–1696.