Deep CNN-based Multi-task Learning for Open-Set Recognition

03/07/2019 · Poojan Oza, et al. · Johns Hopkins University

We propose a novel deep convolutional neural network (CNN) based multi-task learning approach for open-set visual recognition. We combine a classifier network and a decoder network with a shared feature extractor network within a multi-task learning framework. We show that this approach results in better open-set recognition accuracy. In our approach, reconstruction errors from the decoder network are utilized for open-set rejection. In addition, we model the tail of the reconstruction error distribution from the known classes using the statistical Extreme Value Theory to improve the overall performance. Experiments on multiple image classification datasets are performed and it is shown that this method can perform significantly better than many competitive open-set recognition algorithms available in the literature. The code will be made available at: github.com/otkupjnoz/mlosr.


1 Introduction

Figure 1: (a) Typical example of open-set recognition. Here rectangles, triangles, and circles correspond to three different classes known during training and cross samples correspond to unknown class examples that appear only during testing. The goal is to learn a decision boundary such that it discriminates among all three known classes and rejects the unknown class samples. (b) Animal classification example under open-set scenario. Sample images corresponding to the known classes (i.e. closed set samples) are shown in the first row. Closed set classes are Cat, Dog, and Horse. The second row shows images corresponding to in-distribution unknown classes i.e., Lion, Zebra, Cow, etc. These animals do not appear during training. The third row shows sample images corresponding to the out-of-distribution unknown classes, i.e., Bridge, Bedroom, House, etc. These images do not contain any animals and are from an entirely different set of categories not seen during training.
Figure 2: Left: Experimental results of testing a CNN in the closed-set setting on the COIL-100 object classification dataset nene1996columbia . For the same number of training and testing classes, a CNN with SoftMax scores achieves a stable accuracy of around 100%. Middle: The set of images on the left shows training examples from known categories. In the closed-set setting, test images are similar to the training images. For open-set testing, however, the test images contain a mix of both known and unknown class samples, shown in the set of images on the right. Right: Experimental results of testing a CNN in an open-set setting. The number of known classes is fixed at 15, while the number of unknown classes is increased from 0 (i.e., the closed-set setting) up to 85. The graph shows that increasing the number of unknown classes during testing significantly degrades the performance of a CNN with SoftMax scores, exposing the vulnerability of the closed-set model to unknown classes. In contrast, the performance of the proposed approach, MLOSR, is much more robust. Jain et al. jain2014multi perform a similar experiment for multi-class SVM.

Recent developments in deep convolutional neural networks (CNNs) have shown impressive performance on various computer vision tasks such as image classification and object recognition alexnet12 , resnet16 , senet17 . The top-5 error rate of recent image classification methods on the ImageNet dataset imagenet15 has dropped from 25% winner11 to 2.25% winner17 . Even though the performance of these systems looks very promising, their setting is not realistic. A vast majority of these algorithms follow a closed-set setting, where knowledge of all the test classes is assumed to be available during training. However, real-world applications contain many more categories than are present in any of the current datasets, e.g., ImageNet (1000 classes). Hence, when deployed, these systems are highly likely to observe test samples belonging to unknown classes, i.e., classes not observed during training. Because of the closed-set assumption, these systems will wrongly recognize a test sample from an unknown class as belonging to one of the known closed-set classes.

Open-set recognition was introduced to tackle this problem by extending the closed-set classification task to a more realistic scenario scheirer2013toward . In open-set recognition, incomplete knowledge of the world is assumed during training, i.e., test samples can come from unknown classes not observed during training (see Fig. 1). The goal of an open-set recognition system is to reject test samples from unknown classes while maintaining performance on the known classes. Since we only have access to the known classes during the training phase, it is challenging for a closed-set model to identify unknown classes. Scheirer et al. scheirer2013toward proposed a new framework combining empirical risk minimization with open space risk minimization for open-set recognition problems. Open space risk is defined as the risk of labeling unknown samples as known. Following this framework, many open-set recognition algorithms have been proposed over the past few years that directly or indirectly minimize open space risk when training open-set recognition models. Jain et al. jain2014multi showed the vulnerability of Support Vector Machine (SVM) based classification (Fig. 1 in jain2014multi ) in the presence of unknown class test samples and proposed an approach to improve the identification of unknown classes. Many other extensions of traditional statistical classification approaches to open-set recognition have also been proposed in the literature scheirer2013toward , jain2014multi , scheirer2014probability , junior2016specialized , rudd2018extreme , junior2017nearest , bendale2015towards , zhang2017sparse .

As mentioned earlier, CNNs are a powerful tool for learning discriminative representations for image classification. However, they are fundamentally limited in identifying unknown samples due to their closed-set training (refer to Fig. 2 for details). Hence, it is important to make CNN-based image classification algorithms capable of performing open-set recognition. Several methods have been proposed over the years to tackle the presence of unknown classes by extending deep neural networks to open-set settings bendale2016towards , shu2017doc , ge2017generative , neal2018open , yoshihashi2018classification . Bendale et al. bendale2016towards proposed to use the penultimate activations of a pre-trained neural network together with extreme value modeling to update the SoftMax probability values for open-set recognition (referred to as OpenMax). Ge et al. ge2017generative used synthetic unknown-class samples generated by a Generative Adversarial Network (GAN) goodfellow2014generative and trained a neural network to classify those samples as unknown. Shu et al. shu2017doc proposed a novel loss function that replaces the SoftMax layer with sigmoid activations (referred to as the one-vs-rest layer) to train a neural network for open-set recognition. Neal et al. neal2018open introduced another GAN-based data augmentation approach which generates synthetic unknown-class images, referred to as counterfactual images, for open-set recognition. Yoshihashi et al. yoshihashi2018classification proposed a novel neural network architecture that combines hierarchical reconstruction blocks with an extreme value model for open-set recognition. Though there has been a fair amount of research on CNN-based open-set recognition algorithms, the performance of these systems on challenging object recognition datasets is still far from optimal.

In this paper, we present a CNN-based multi-task learning algorithm for open-set recognition. The proposed Multi-task Learning Based Open-Set Recognition (MLOSR) method consists of a shared feature extractor network along with a decoder network and a classifier network for reconstruction and classification, respectively. All these networks are trained in a multi-task learning framework caruana1997multitask . We show that such multi-task training yields a better model for open-set recognition by improving the identification of samples from the unknown classes. Additionally, we utilize extreme value theory (EVT) modeling techniques to model the reconstruction error distribution from the network that further enhances the performance. Extensive experiments on multiple image classification datasets show that MLOSR performs better than existing open-set algorithms. In summary, the main contributions of this paper are as follows:

  • We propose a CNN-based multi-task learning algorithm, called MLOSR, for open-set recognition.

  • Extensive experiments on various datasets show that the proposed multi-task training helps to reject out-of-distribution data as well as samples from the in-distribution unknown classes.

This paper is organized as follows. Section 2 gives a brief review of open-set recognition and related problems such as out-of-distribution detection, anomaly detection, and EVT. Section 3 introduces the proposed approach and presents training and testing details of the MLOSR algorithm. Experiments and results are presented in Section 4, and Section 5 concludes the paper with a brief summary and discussion.

2 Related Work

In this section, we review related work on open-set recognition, out-of-distribution detection, EVT, and novelty detection.

2.1 Open-set Recognition

In recent years, a few attempts have been made to create classifiers with a rejection option bartlett2008classification , yuan2010classification , da2014learning . Inspired by these earlier methods, Scheirer et al. scheirer2013toward formally defined the open-set recognition problem and introduced a framework to train and evaluate such algorithms; they also introduced a simple slab model-based approach to address this problem. In follow-up works, Scheirer et al. scheirer2014probability and Jain et al. jain2014multi both proposed to leverage extreme value models on SVM decision scores to extend SVM-based classification to the open-set setting. While Jain et al. jain2014multi utilized the decision scores only from a One-vs-All multi-class SVM, Scheirer et al. scheirer2014probability combined the scores from a multi-class SVM with class-specific one-class RBF-SVMs to obtain a better open-set model. Junior et al. junior2017nearest proposed a nearest neighbor-based classification approach that computes similarity scores from the ratio of distances between nearest neighbors and identifies any test sample with low similarity as unknown. Zhang and Patel zhang2017sparse proposed another approach by extending sparse representation-based classification (SRC) to the open-set setting. They also discovered that the residual errors from SRC contain discriminative information for separating known and unknown classes. These residual errors are modeled using EVT as match and non-match distributions to identify unknown test samples by hypothesis testing.

Following these extensions of traditional classification algorithms for open-set recognition, Bendale et al. bendale2016towards presented the first work to address the open-set recognition problem for deep neural networks. Since thresholding the SoftMax probability does not yield a good model for open-set recognition bendale2016towards (also shown in Fig. 2), an alternative solution was proposed for adapting deep neural networks to open-set settings. Bendale et al. bendale2016towards utilized the activation vectors from the penultimate layer of a pre-trained deep neural network. By modeling the distance of these activation vectors from each class mean with EVT, an updated penultimate vector (referred to as OpenMax) is generated. This updated vector yields a better model for identifying unknown class test samples. Ge et al. ge2017generative introduced the G-OpenMax algorithm, which combines OpenMax with data augmentation using GANs. Ge et al. generated unknown samples from the known class data using GANs and later used them, along with the known classes, to train a CNN. This data augmentation technique was shown to improve unknown class identification. In another approach, Shu et al. shu2017doc argued that OpenMax inherently assumes that hard-to-classify samples are more likely to be from the unknown classes, and proposed a sigmoid activation-based method to overcome that issue. The sigmoid activations replace the SoftMax layer, and the network is trained with a novel loss function. Neal et al. neal2018open introduced another GAN-based data augmentation method. Instead of treating misclassified samples generated by a GAN as unknown classes, as G-OpenMax does, Neal et al. proposed a method to search for such examples, referred to as counterfactual images. These counterfactual samples are then added to the original dataset as unknown class samples and used to fine-tune the classification network. This technique was shown to be a better data augmentation approach than G-OpenMax for open-set recognition. Recently, Yoshihashi et al. yoshihashi2018classification proposed a novel neural network architecture for open-set recognition which consists of hierarchical reconstruction modules combined with extreme value modeling. To the best of our knowledge, it is the best performing open-set algorithm in the literature.

2.2 Out-of-Distribution Detection

Recently, some concerns have been raised regarding the safety of AI systems amodei2016concrete . These include adversarial attacks szegedy2013intriguing , yuan2017adversarial and distribution shifts hendrycks2016baseline , lee2017training . One of the concerns regarding distribution shift is the detection of out-of-distribution (OOD) examples. Hendrycks et al. first introduced the OOD problem in hendrycks2016baseline and defined it as the detection of samples that do not belong to the training distribution but can appear during testing. Several approaches have been proposed in the literature that address the OOD problem hendrycks2016baseline , liang2017enhancing , hendrycks2018deep , lee2017training , liu2018open . The experimental setup used for evaluating OOD detection usually involves two datasets: a clean set whose finite set of categories is available during both training and testing, and an OOD test set containing samples from a completely different distribution. For example, a classifier trained on CIFAR10 krizhevsky2014cifar may be given OOD examples from LSUN yu2015lsun . The OOD and open-set problems have been studied separately even though the OOD problem setting resembles that of open-set recognition. Furthermore, the evaluation protocols followed by the two problems are very similar. Hence, we will also explore the capability of the proposed open-set recognition method in detecting out-of-distribution samples.

2.3 Outlier-Novelty-Anomaly Detection

Problems such as discovering outliers xia2015learning , you2017provable , niu2018learning , identifying novel classes abati2018and , perera2018learning and detecting anomalies golan2018deep , sabokrou2018adversarially , chalapathy2017robust also have some overlap with open-set recognition. Though all of these problems involve identifying abnormality/novelty, their settings differ from the open-set problem. While anomaly/novelty detection problems generally do not have access to abnormal/novel classes during the training phase, several works assume the availability of abnormal classes hendrycks2018deep during training, and these problems are mainly limited to one-class recognition. Outlier detection allows access to outlier data during the training phase. Open-set recognition, on the other hand, has no access to unknown class data and must additionally solve a multi-class classification problem, making it more challenging than outlier, novelty, or anomaly detection.

2.4 Extreme Value Theory (EVT)

Extreme Value Theory has proven to be useful in many vision applications shi2008modeling , gibert2017deep , scheirer2017extreme , scheirer2011meta , fragoso2013evsac including open-set recognition scheirer2014probability , rudd2018extreme , yoshihashi2018classification , bendale2016towards , zhang2017sparse . This popularity is attributable to the fact that extreme value modeling of decision scores yields better performance than directly utilizing the raw score values scheirer2011meta , scheirer2017extreme . Extreme value modeling has become one of the most popular approaches for post-recognition score analysis to improve the performance of open-set recognition. Inspired by these methods, the proposed approach also utilizes EVT to obtain better recognition scores.

3 Proposed Approach

Figure 3: An overview of the proposed MLOSR algorithm. The feature extractor network (F) takes an input image (x) and produces a latent representation z. This latent representation is used by the classifier (C) and the decoder (D) to predict the class label and to reconstruct the input (as x̃), respectively. These networks are trained to perform both the classification and the reconstruction task in a multi-task framework. The tail of the reconstruction error distribution is modeled using EVT. The classification score and the EVT probability of the reconstruction error are used during testing to perform open-set recognition.

In this section, we describe the proposed approach in detail. The proposed system architecture is composed of four modules:
1. Feature Extractor (or Encoder) (F),
2. Decoder (D),
3. Classifier (C), and
4. Extreme Value Model (G).
Fig. 3 shows these modules of the proposed system. The feature extractor (or encoder) network (F) is modeled by a CNN architecture which maps an input image onto a latent space representation. The decoder (D), modeled by another CNN, and the classifier (C), modeled by a fully-connected neural network, take this latent representation as input and produce a reconstructed image and its label as outputs, respectively. Both the decoder network and the classifier network share the feature extractor module. After the models F, D and C are trained, the reconstruction errors are modeled using EVT. In the following sections, we present the training procedure to learn the parameters (θ_f, θ_d, θ_c) and discuss the recognition score analysis using EVT.

3.1 Training Procedure

The feature extractor network can be represented by a function F : X → Z. Similarly, let the decoder and the classifier networks be represented by functions D : Z → X and C : Z → Y, respectively. Here, Z is the space of latent representations, X is the space of all images, and Y is the space of all possible image labels. Let θ_f, θ_d and θ_c be the parameters of F, D and C, respectively. The classification loss, denoted by ℓ_c, penalizes the network for misclassifying known class samples. The reconstruction loss, denoted by ℓ_r, penalizes the network for generating images far from the known class samples. Let x and y be a sample from any of the known classes and its corresponding label. Let x̃ be the reconstructed input from the encoder-decoder pipeline, x̃ = D(F(x)). Also, p is the class probability vector predicted by the encoder-classifier pipeline, p = C(F(x)). The loss ℓ_c depends on the parameters θ_f and θ_c, which are associated with the networks F and C, respectively. Similarly, the loss ℓ_r depends on the parameters θ_f and θ_d, which are associated with the networks F and D, respectively. We can formulate the losses for the input images with batch size N as follows

L_c(θ_f, θ_c) = (1/N) Σ_{i=1}^{N} ℓ_c(y_i, p_i),   (1)
L_r(θ_f, θ_d) = (1/N) Σ_{i=1}^{N} ℓ_r(x_i, x̃_i),   (2)

where 𝟙_{y_i} is a one-hot vector for label y_i. Also, ℓ_c and ℓ_r can be implemented using any valid classification and reconstruction loss functions, respectively. For this method we consider cross-entropy for the classification loss (ℓ_c) and the ℓ1-norm of the vectorized images for the reconstruction loss (ℓ_r). ℓ_c and ℓ_r are defined as follows,

ℓ_c(y_i, p_i) = − Σ_{j=1}^{K} 𝟙_{y_i}(j) log p_i(j),   (3)
ℓ_r(x_i, x̃_i) = ‖x_i − x̃_i‖_1.   (4)

The final loss to train the overall network is as follows

L(θ_f, θ_d, θ_c) = λ_c L_c(θ_f, θ_c) + λ_r L_r(θ_f, θ_d)   (5)

Here, λ_c and λ_r are two constants and K is the total number of classes known during training. After training, the learned parameters θ_f, θ_d, θ_c will yield an open-set recognition model.
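The per-sample losses and the combined multi-task objective can be sketched in NumPy. The batch layout and helper names below are our own; the cross-entropy and ℓ1 reconstruction error follow the definitions in the text.

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())  # stabilized SoftMax
    return e / e.sum()

def cross_entropy(logits, y):
    """Per-sample classification loss: -log p_y for integer label y (Eq. 3)."""
    return -np.log(softmax(logits)[y])

def l1_reconstruction(x, x_tilde):
    """Per-sample reconstruction loss: l1-norm of the residual (Eq. 4)."""
    return np.abs(x - x_tilde).sum()

def total_loss(batch, lam_c=0.5, lam_r=0.5):
    """Combined objective (Eq. 5), averaged over a batch of
    (input, label, logits, reconstruction) tuples."""
    L_c = np.mean([cross_entropy(logits, y) for _, y, logits, _ in batch])
    L_r = np.mean([l1_reconstruction(x, xt) for x, _, _, xt in batch])
    return lam_c * L_c + lam_r * L_r
```

In the full method, gradients of this scalar flow into θ_f from both terms, while θ_c and θ_d receive gradients only from the classification and reconstruction terms, respectively.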

3.2 Extreme Value Model

Figure 4: Reconstruction error distributions for known and unknown classes, computed using the COIL-100 dataset.

As discussed in Section 2, EVT is useful in many vision applications. Some open-set recognition algorithms scheirer2014probability , rudd2018extreme , yoshihashi2018classification , bendale2016towards , zhang2017sparse also utilize this tool to model the tail of the distribution of recognition scores. Histograms of the reconstruction errors from both known (shown in green) and unknown (shown in red) class samples, computed using the COIL-100 dataset, are shown in Fig. 4. As can be seen from this figure, the reconstruction errors contain information that discriminates between unknown and known classes. Since we do not have access to samples from the unknown classes during training, the optimal decision threshold must lie somewhere within the set of extreme values of the known class reconstruction errors (the overlapping region). With this observation, we use EVT to model the tail of the reconstruction error distribution to achieve a better estimate of the tail data.

1: Require: Network models F, C, D
2: Require: Initial parameters θ_f, θ_c, θ_d
3: Require: Labeled known data {(x_i, y_i)}
4: Require: Hyper-parameters: λ_c, λ_r, learning rate η, batch size N
5: while not done do
6:  for each batch of size N do
7:   for i = 1 to N do
8:    z_i ← F(x_i)  (latent space representation)
9:    p_i ← C(z_i)  (prediction probabilities)
10:    x̃_i ← D(z_i)  (reconstructed input)
11:   end for
12:   Calculate L_c, L_r using Eq. 1 and 2
13:   L ← λ_c L_c + λ_r L_r
14:   Update θ_f : θ_f ← θ_f − η ∇_{θ_f} L
15:   Update θ_c : θ_c ← θ_c − η ∇_{θ_c} L_c
16:   Update θ_d : θ_d ← θ_d − η ∇_{θ_d} L_r
17:  end for
18: end while
19: Output: Learned parameters θ_f, θ_c, θ_d
Algorithm 1 Pseudocode for MLOSR Training

There are two widely used theorems for modeling statistical extremes, namely, the Fisher-Tippett-Gnedenko theorem (FTG) scheirer2017extreme and the Pickands-Balkema-de Haan formulation, also known as the Generalized Extreme Value theorem (GEV) pickands1975statistical , balkema1974residual . Some works follow FTG scheirer2014probability , bendale2016towards , yoshihashi2018classification while others follow GEV zhang2017sparse . In this method, we consider the extreme value formulation of Pickands-Balkema-de Haan (GEV). It states that, for a large class of distributions and a large enough threshold u, given IID samples X_1, ..., X_n, the conditional excess distribution

F_u(r) = P(X − u ≤ r | X > u)

is well approximated by the Generalized Pareto Distribution (GPD), denoted G,

G(r; ζ, μ) = 1 − (1 + ζ·r/μ)^(−1/ζ)  if ζ ≠ 0,
G(r; ζ, μ) = 1 − exp(−r/μ)           if ζ = 0,

such that r ≥ 0 when ζ ≥ 0, 0 ≤ r ≤ −μ/ζ when ζ < 0, and μ > 0. G is the CDF of the GPD and hence G(r) denotes the probability that the extreme of the distribution is less than the value r. This probability score is useful in deciding whether a given test sample is from the known classes or not. The parameters ζ and μ can be estimated from the given tail data using the maximum likelihood estimation procedure provided by Grimshaw grimshaw1993computing . Here, there are two user-defined parameters: the tail size and the threshold value used to make the decision on known/unknown classes (more information is provided in Section 4). The MLOSR training and testing procedures are summarized in Algorithms 1 and 2, respectively.
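The GPD-based rejection rule can be sketched in plain Python. The excess-over-threshold form, function names, and the default τ = 0.5 reflect our reading of the text; parameter estimation itself (Grimshaw's MLE, or in practice something like scipy.stats.genpareto.fit) is omitted.

```python
import math

def gpd_cdf(r, zeta, mu):
    """CDF of the Generalized Pareto Distribution G(r; zeta, mu), r >= 0, mu > 0."""
    if zeta == 0.0:
        return 1.0 - math.exp(-r / mu)
    return 1.0 - (1.0 + zeta * r / mu) ** (-1.0 / zeta)

def is_unknown(recon_error, u, zeta, mu, tau=0.5):
    """Reject a test sample as unknown when its reconstruction error exceeds the
    tail threshold u and the excess is extreme under the fitted GPD."""
    if recon_error <= u:
        return False              # below the tail: treat as known
    return gpd_cdf(recon_error - u, zeta, mu) >= tau
```

A small reconstruction error is always accepted as known; a large one is rejected once the GPD assigns its excess a CDF value at or above τ.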

1: Require: Trained network models F, C, D
2: Require: Learned parameters θ_f, θ_c, θ_d
3: Require: EVT model G(·; ζ, μ), threshold τ
4: Require: Test image x, either from known or unknown data
5: z ← F(x)  (latent space representation)
6: p ← C(z)  (prediction probabilities)
7: ŷ ← argmax_j p(j)  (predicted known label)
8: x̃ ← D(z)  (reconstructed image)
9: r ← ‖x − x̃‖_1  (reconstruction error)
10: if G(r; ζ, μ) < τ then
11:  predict as Known, with label ŷ
12: else
13:  predict as Unknown
14: end if
Algorithm 2 Pseudocode for MLOSR Testing

4 Experiments and Results

In this section, we demonstrate the effectiveness of the proposed MLOSR approach by conducting various experiments on the COIL-100 nene1996columbia , MNIST lecun2010mnist , SVHN netzer2011reading , CIFAR10 krizhevsky2009learning and TinyImageNet le2015tiny datasets. In particular, we first present an analysis of the proposed approach on the COIL-100 dataset. Then, we compare the performance of MLOSR with recent state-of-the-art open-set recognition algorithms on four image classification datasets (MNIST, SVHN, CIFAR10, TinyImageNet). In these experiments, unknown classes are sampled from within the dataset by dividing the total number of classes into known and unknown categories. In the last set of experiments, we test the ability of MLOSR to detect out-of-distribution unknown samples. In this experiment, the in-distribution samples are from CIFAR10 and the out-of-distribution samples are from ImageNet and LSUN, as provided by liang2017enhancing .

The networks are trained using the Adam optimizer kingma2015adam with a learning rate (η) of 0.0003 and a batch size (N) of 64. We stop the training when the loss becomes sufficiently small. The weights λ_r and λ_c for the reconstruction and classification losses are both set equal to 0.5. For EVT modeling, we use a tail size of 20 for all experiments. The decision threshold during testing is set equal to 0.5, i.e., any test sample whose reconstruction error has less than 0.5 probability of coming from the known classes is identified as unknown.

4.1 Experiment I: Analysis of MLOSR

In this experiment, we perform quantitative and qualitative analyses to give insights into the proposed MLOSR algorithm. For the quantitative analysis, we measure the performance gain contributed by each module of the overall algorithm. The qualitative study provides visual examples of reconstructed known and unknown test samples.

4.1.1 Quantitative Analysis

Figure 5: Quantitative Analysis: Ablation study of MLOSR on the COIL-100 dataset.

We perform ablation experiments showing the contribution of each module of MLOSR on the COIL-100 dataset. The COIL-100 dataset has 100 different object categories with varying pose. Out of the 100 classes, 15 randomly sampled classes are used as known and the remaining are used as unknown. The dataset contains color images of size 128×128. For this experiment, each image is converted to gray-scale, resized to 64×64 (so that the encoder's final 128-channel feature map is 8×8, matching the decoder's FC(8192) input), and its intensity values are normalized to [−1, 1]. The network architectures used for this experiment are as follows,
Encoder: Conv(32)-ReLU-Conv(64)-ReLU-Conv(128)-FC(512).
Decoder: FC(8192)-ConvTran(64)-ReLU-ConvTran(32)-ReLU-ConvTran(1)-Tanh.
Classifier: FC(512)-FC(15).
Here, Conv(L) and ConvTran(L) denote a convolution and a transposed-convolution layer, respectively, with L filters and stride 2; ReLU and Tanh are activation units; and FC(L) denotes a fully connected layer with L neurons.
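As a sanity check on these layers, assume each stride-2 convolution exactly halves the spatial dimensions (an assumption about the padding) and a 64×64 gray-scale input: the encoder's final 128 feature maps are then 8×8, which matches the decoder's FC(8192) = 128 × 8 × 8 input.

```python
def after_stride2_convs(side, n_layers):
    """Spatial side length after n stride-2 convolutions, assuming padding
    chosen so that each layer exactly halves the side."""
    for _ in range(n_layers):
        side //= 2
    return side

side = after_stride2_convs(64, 3)   # 64 -> 32 -> 16 -> 8
flat = 128 * side * side            # channels * height * width
```

The encoder's FC(512) then maps this 8192-dimensional flattened feature map to the 512-dimensional latent representation consumed by the classifier.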

For the ablation analysis, the performance is measured using the F-measure (F1 score) against varying openness scheirer2013toward of the problem. Openness is defined as follows

O = 1 − sqrt( 2 × C_R / (C_T + C_E) )   (6)

Here, C_T is the number of target classes, C_R is the number of training classes, and C_E is the number of testing classes scheirer2013toward . For this experiment, openness is varied by keeping 15 classes as known and changing the number of unknown classes from 15 to 85. This corresponds to a change in openness from 18% to 49%. The performance and the corresponding errors for each openness are calculated as the average and standard deviation over five randomized trials.
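A small helper reproduces these openness values from Eq. (6), under the assumption (made in these experiments) that the target classes equal the training classes, so C_T = C_R:

```python
import math

def openness(n_train, n_test):
    """Openness of Eq. (6) with C_T = C_R = n_train and C_E = n_test,
    where n_test counts all classes seen at test time (known + unknown)."""
    return 1.0 - math.sqrt(2.0 * n_train / (n_train + n_test))

# 15 known classes; unknown classes ranging over the COIL-100 splits:
low  = openness(15, 15 + 15)   # 15 unknown -> ~0.18, the 18% end of the range
high = openness(15, 15 + 85)   # 85 unknown -> ~0.49, the 49% end of the range
```

The same formula reproduces the splits used later, e.g., 6 known of 10 total classes gives an openness of about 13.39%.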

We consider the following methods as baselines for comparison:
1. DCN+SoftMax: The encoder and classifier networks are trained using the classification loss, and the SoftMax scores are used for closed-set classification. A test sample is identified as unknown if the leading SoftMax score (between 0 and 1) is less than 0.5. This baseline is a traditional closed-set model with a threshold over the SoftMax scores.
2. DCN+AE: The encoder and classifier networks are trained using the classification loss, and SoftMax scores are used for classification. However, to identify a test sample as unknown, an auto-encoder is used, with the encoder-decoder architectures described above. This encoder-decoder pipeline is trained with the reconstruction loss. A test sample is identified as unknown when its reconstruction error is more than 50% of the maximum reconstruction error observed on the training samples. Another difference to note is that the encoder network is not shared between the encoder-classifier and encoder-decoder pipelines; instead, these pipelines are trained separately with two different encoder networks having the same architecture. This baseline shows the value of the reconstruction error, compared to SoftMax scores, for identifying unknown classes. It also provides a baseline against which to compare multi-task training with a shared encoder network.
3. MLOSR (without EVT): The encoder-classifier and encoder-decoder pipelines are trained with a shared encoder network using the classification and reconstruction losses in a multi-task fashion. SoftMax scores and reconstruction errors are used for closed-set classification and for identifying unknown classes, respectively. The method for identifying a test sample as unknown is the same as in the previous baseline. This method provides a baseline for comparing performance with and without extreme value modeling, and shows the benefits of multi-task training.
4. MLOSR (Proposed): This is the method proposed in this paper: after multi-task training of the encoder, classifier, and decoder networks, EVT models the tail of the reconstruction errors from the known classes as described in Section 3.2. A test sample is identified as unknown when its reconstruction error has less than 0.5 probability of coming from a known class; the extreme value model (G) provides this probability score.

From Fig. 5 it is clear that DCN+SoftMax is not an adequate model for open-set recognition and has the worst performance among all the baselines. DCN+AE shows that utilizing reconstruction errors from an encoder-decoder trained on the known classes elevates the performance of the open-set recognition model; reconstruction errors are better than SoftMax scores for identifying unknown classes. Furthermore, MLOSR (without EVT) shows that the performance of DCN+AE can be improved further by a multi-task training strategy with a shared encoder. Finally, MLOSR utilizes the extreme value model on the known classes to better model the tail of the reconstruction error distribution, which in turn gives improvements over MLOSR (without EVT) under varying openness.

4.1.2 Qualitative Analysis

Fig. 6 shows a qualitative analysis of the MLOSR algorithm for test inputs from known and unknown classes. Models trained with MLOSR correctly reconstruct test inputs from the known classes, resulting in low reconstruction errors. For test samples from unknown classes, on the other hand, MLOSR produces distorted outputs, resulting in high reconstruction errors.

Figure 6: Qualitative Analysis: (a) Known and (b) Unknown class samples from the COIL-100 dataset with the corresponding images reconstructed by MLOSR.

4.2 Experiment II: Open-set Recognition

For the open-set recognition experiments, we use the testing protocol of neal2018open . The architectures of the encoder, decoder, and classifier networks are also the same as in neal2018open , and the same architecture is used for all other compared methods. All images are resized to a common size, and intensity values are normalized to a fixed range for each color channel. The following experimental protocols are followed:

(i) MNIST: MNIST lecun2010mnist contains images of handwritten digits from 10 classes. We randomly choose 6 known and 4 unknown classes, resulting in an openness of 13.39%.
(ii) SVHN: SVHN netzer2011reading is a digit dataset in which each digit image is cropped from house-number photographs collected from Google Street View. SVHN has a total of 10 digit classes; as with MNIST, we randomly choose 6 known and 4 unknown classes, giving an openness of 13.39%.
(iii) CIFAR10: CIFAR10 krizhevsky2014cifar consists of 10 object categories, from which we randomly choose 6 known and 4 unknown classes, resulting in an openness of 13.39%.
(iv) CIFAR+10: CIFAR+10 uses the 4 non-animal categories of CIFAR10 as known classes and 10 animal categories randomly sampled from CIFAR100 krizhevsky2014cifar as unknown classes, resulting in an openness of 33.33%.
(v) CIFAR+50: Similar to CIFAR+10, the 4 non-animal CIFAR10 categories are used as known classes and 50 animal categories from CIFAR100 are sampled as unknown, resulting in an openness of 62.86%.
(vi) TinyImageNet: TinyImageNet le2015tiny is derived from ImageNet imagenet15 by reducing the number of classes and image sizes. It has 200 categories with 500 training images and 50 test images per category. From these 200 categories, 20 known and 180 unknown classes are randomly sampled, resulting in an openness of 57.35%.
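The openness values above follow the standard measure of Scheirer et al., openness = 1 - sqrt(2 * N_train / (N_train + N_test)), where N_train is the number of known classes seen during training and N_test is the total number of classes (known plus unknown) seen during testing. A minimal sketch that reproduces the percentages quoted in the protocols:

```python
import math

def openness(n_train, n_test):
    """Openness (Scheirer et al.): 1 - sqrt(2 * N_train / (N_train + N_test)).
    n_train: number of known classes seen during training.
    n_test: total number of classes (known + unknown) seen during testing."""
    return 1.0 - math.sqrt(2.0 * n_train / (n_train + n_test))

# MNIST / SVHN / CIFAR10: 6 known, 6 + 4 = 10 total at test time -> ~13.39%
# CIFAR+10:  4 known, 4 + 10 = 14 total -> ~33.33%
# CIFAR+50:  4 known, 4 + 50 = 54 total -> ~62.86%
# TinyImageNet: 20 known, 200 total    -> ~57.35%
```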

Method MNIST SVHN CIFAR10 CIFAR+10 CIFAR+50 TinyImageNet
SoftMax 0.978 0.886 0.677 0.816 0.805 0.577
OpenMax bendale2016towards (CVPR’16) 0.981 0.894 0.695 0.817 0.796 0.576
G-OpenMax ge2017generative (BMVC’17) 0.984 0.896 0.675 0.827 0.819 0.580
OSRCI neal2018open (ECCV’18) 0.988 0.910 0.699 0.838 0.827 0.586
CROSR yoshihashi2018classification (CVPR’19) 0.998 0.955 —— —— —— 0.670
MLOSR 0.989 0.921 0.845 0.895 0.877 0.718
Table 1: Comparison of MLOSR with the most recent open-set recognition algorithms. Best and the second best performing methods are highlighted in bold and italics fonts, respectively.
Backbone Method ImageNet-crop ImageNet-resize LSUN-crop LSUN-resize
VGGNet SoftMax 0.639 0.653 0.642 0.647
OpenMax 0.660 0.684 0.657 0.668
LadderNet + SoftMax 0.640 0.646 0.644 0.647
LadderNet + OpenMax 0.653 0.670 0.652 0.659
DHRNet + SoftMax 0.645 0.649 0.650 0.649
DHRNet + OpenMax 0.655 0.675 0.656 0.664
DHRNet + CROSR 0.721 0.735 0.720 0.749
MLOSR 0.837 0.826 0.783 0.801
DenseNet SoftMax 0.693 0.685 0.697 0.722
OpenMax 0.696 0.688 0.700 0.726
DHRNet + SoftMax 0.691 0.726 0.688 0.700
DHRNet + OpenMax 0.729 0.760 0.712 0.728
DHRNet + CROSR 0.733 0.763 0.714 0.731
MLOSR 0.903 0.896 0.871 0.929
Table 2: Open-set recognition on CIFAR10 for Out-Of-Distribution detection. Best and the second best performing methods are highlighted in bold and italics fonts, respectively.

The performance of each method is measured by its ability to identify unknown classes. Following the protocol from neal2018open , the Area Under the ROC Curve (AUROC) is used to measure the performance of the different methods. The values reported in Table 1 are averaged over five randomized trials. The numbers for CROSR are taken from yoshihashi2018classification , and the results for all other methods except MLOSR (the proposed method) are taken from neal2018open . CROSR yoshihashi2018classification did not report performance on CIFAR10, CIFAR+10, and CIFAR+50, hence those numbers are not included here. The results on the digit datasets are largely saturated, and almost all methods perform similarly. CROSR achieves the best performance on the digit datasets, with MLOSR second best. However, on TinyImageNet, which is a much more challenging object classification dataset, MLOSR outperforms CROSR by roughly 5%.
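AUROC has a simple rank interpretation: it is the probability that a randomly drawn unknown sample receives a higher score (here, e.g., a higher reconstruction error) than a randomly drawn known sample. A minimal pairwise sketch, not tied to any particular library (the O(n*m) loop is for clarity; practical implementations sort once):

```python
def auroc(unknown_scores, known_scores):
    """AUROC = probability that a randomly drawn unknown sample scores
    higher than a randomly drawn known sample (ties count as half)."""
    wins = 0.0
    for u in unknown_scores:
        for k in known_scores:
            if u > k:
                wins += 1.0
            elif u == k:
                wins += 0.5
    return wins / (len(unknown_scores) * len(known_scores))

# Perfect separation gives 1.0; chance-level scoring gives 0.5.
```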

4.3 Experiment III : Out-Of-Distribution Detection

In this experiment, we test the ability of MLOSR to identify OOD samples. We follow the protocol defined by Yoshihashi et al. yoshihashi2018classification , which uses in-distribution samples from CIFAR10 and samples from four different datasets (ImageNet-crop, ImageNet-resize, LSUN-crop, and LSUN-resize) as OOD samples; these four OOD datasets were created specifically for CIFAR10 by liang2017enhancing . Following the setup of yoshihashi2018classification , the OOD experiments use two backbone architectures, VGGNet (referred to in the paper as Plain CNN) and DenseNet. The VGGNet, which consists of 13 layers, is a modified version of the VGG architecture defined in simonyan2014very . DenseNet follows the architecture defined by huang2017densely for CIFAR10, with a depth of 92 and a growth rate of 24. The decoder architecture for all OOD experiments is the same as in the open-set experiments (modified to accommodate the image size), and the classifier is a single fully-connected layer with 10 neurons corresponding to the 10 categories of the CIFAR10 dataset. We consider LadderNet valpola2015neural and DHRNet as baselines, where DHRNet is a novel open-set architecture proposed in yoshihashi2018classification . Performance is measured using the F-measure (F1 score). All images are of the same size. For training we use all 50,000 CIFAR10 training samples, and the trained model is evaluated on 10,000 OOD samples for each experiment, i.e., ImageNet-crop, ImageNet-resize, LSUN-crop, and LSUN-resize. All reported numbers except MLOSR are taken from yoshihashi2018classification .
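As a reminder of the metric, the F-measure is the harmonic mean of precision and recall over the detection decisions. A minimal sketch of the binary case (counts here are hypothetical, not from the experiments):

```python
def f_measure(true_pos, false_pos, false_neg):
    """F1 score: harmonic mean of precision and recall.
    true_pos:  samples correctly flagged (e.g., OOD detected as OOD).
    false_pos: samples incorrectly flagged.
    false_neg: samples missed by the detector."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return 2.0 * precision * recall / (precision + recall)
```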

Table 2 shows that MLOSR performs significantly better than the other open-set algorithms on all OOD experiments. The second best performing method is DHRNet with CROSR, both proposed in yoshihashi2018classification . With the VGGNet backbone, MLOSR improves performance by 11.6%, 9.1%, 6.3%, and 5.2% for OOD samples from ImageNet-crop, ImageNet-resize, LSUN-crop, and LSUN-resize, respectively; on average, MLOSR with the VGGNet backbone outperforms the next best method by 8.05%. Furthermore, MLOSR with the DenseNet backbone improves performance by 17.0%, 13.3%, 15.7%, and 19.8% on the same four OOD sets, outperforming the next best method by a significant 16.45% on average. Overall, MLOSR achieves a 12.25% average improvement over the next best method (in both cases DHRNet+CROSR). Although no comparison on the CIFAR10 dataset is provided in the open-set recognition experiments of Section 4.2, the OOD results show that on this object dataset MLOSR identifies unknown samples better than DHRNet+CROSR.

5 Conclusion

We presented Multi-task Learning for Open-Set Recognition (MLOSR), an open-set recognition algorithm for deep neural networks. MLOSR trains encoder, decoder, and classifier networks in a multi-task framework. We compared MLOSR with state-of-the-art open-set algorithms and showed better overall performance. Furthermore, we validated the ability of MLOSR to handle out-of-distribution unknown samples through experiments on the CIFAR10 dataset, where MLOSR emerged as the best performing algorithm with significant improvements over the baselines. Overall, the experiments show that MLOSR deals with unknown samples better than recent competitive methods and achieves state-of-the-art performance on various open-set recognition and OOD benchmarks.

Acknowledgements.
This work was supported by the NSF grant 1801435.

References