C2AE: Class Conditioned Auto-Encoder for Open-set Recognition

04/02/2019 ∙ by Poojan Oza, et al. ∙ Johns Hopkins University 12

Models trained for classification often assume that all testing classes are known at training time. As a result, when presented with an unknown class during testing, this closed-set assumption forces the model to classify the sample as one of the known classes. However, in a real-world scenario, classification models are likely to encounter such examples, so identifying them as unknown becomes critical to model performance. A potential solution to this problem lies in a class of learning problems known as open-set recognition: identifying the unknown classes during testing while maintaining performance on the known classes. In this paper, we propose an open-set recognition algorithm using class conditioned auto-encoders with a novel training and testing methodology. In contrast to previous methods, the training procedure is divided into two sub-tasks: 1. closed-set classification, and 2. open-set identification (i.e., identifying a sample as known or unknown). The encoder learns the first task following the closed-set classification training pipeline, whereas the decoder learns the second task by reconstructing inputs conditioned on class identity. Furthermore, we model the reconstruction errors using the Extreme Value Theory of statistical modeling to find a threshold for identifying known/unknown class samples. Experiments performed on multiple image classification datasets show that the proposed method performs significantly better than the state of the art.

1 Introduction

Figure 1: Open-set recognition problem: Data samples from Blue Jay, Seal, Dog and Penguin are from the known class set K. Many classes not known during training will also be present at testing, i.e., samples from the unknown class set U. The goal is to correctly classify any sample coming from K as either Blue Jay, Seal, Dog or Penguin, and to identify samples coming from U as unknown.

Recent advancements in computer vision have resulted in significant improvements for image classification systems [13], [19], [14], [39]. In particular, the rise of deep convolutional neural networks has resulted in classification error rates surpassing human-level performance [12]. These promising results enable their potential use in many real-world applications. However, when deployed in a real-world scenario, such systems are likely to observe samples from classes not seen during training (i.e., unknown classes, also referred to as "unknown unknowns" [35]). Since traditional training methods follow this closed-set assumption, classification systems observing any unknown class sample are forced to recognize it as one of the known classes. This affects the performance of these systems, as evidenced by Jain et al. with a digit recognition example. Hence, it becomes critical for a classification model to correctly identify test samples as either known or unknown. This problem setting of identifying test samples as known/unknown while simultaneously correctly classifying all known classes is referred to as open-set recognition [35]. Fig. 1 illustrates a typical example of classification in the open-set problem setting.

In an open-set problem setting, it becomes challenging to identify unknown samples due to the incomplete knowledge of the world during training (i.e., only the known classes are accessible). To overcome this problem, many open-set methods in the literature [4], [36], [40], [38] adopt recognition-score-based thresholding models. However, when using these models one needs to answer two key questions: 1) what is a good score for open-set identification (i.e., identifying a sample as known or unknown)? and, given a score, 2) what is a good operating threshold for the model? There have been many methods that explore these questions in the context of traditional methods such as Support Vector Machines [35], [36], Nearest Neighbors [16], [3] and Sparse Representation [40]. However, these questions are relatively unexplored in the context of deep neural networks [38], [4], [24], [8], [7].

Even though deep neural networks are powerful in learning highly discriminative representations, they still suffer from performance degradation in the open-set setting [4]. In a naive approach, one could apply a thresholding model on SoftMax scores. However, as shown by the experiments in [4], that model is sub-optimal for open-set identification. A few methods have been proposed to better adapt SoftMax scores to the open-set setting. Bendale et al. [4] proposed a calibration strategy to update SoftMax scores using extreme value modeling. Ge et al. [8] and Neal et al. [24] follow data augmentation techniques based on Generative Adversarial Networks (GANs) [10]: GANs are used to synthesize open-set samples, which are later used for fine-tuning to adapt SoftMax/OpenMax scores to the open-set setting. Shu et al. [38] introduced a novel sigmoid-based loss function for training the neural network to obtain better scores for open-set identification.

All of these methods modify the SoftMax scores so that a single score can both perform open-set identification and maintain classification accuracy. However, it is extremely challenging to find one such score measure that can do both. In contrast to these methods, in the proposed approach the training procedure for open-set recognition using class conditioned auto-encoders is divided into two sub-tasks: 1. closed-set classification, and 2. open-set identification. These sub-tasks are trained separately in a stage-wise manner. Experiments show that this approach provides good open-set identification scores, and that a good operating threshold can be found using the proposed training and testing strategy.

In summary, this paper makes the following contributions:


  • A novel method for open-set recognition is proposed, with novel training and testing algorithms based on class conditioned auto-encoders.

  • We show that dividing the open-set problem into sub-tasks can help learn better open-set identification scores.

  • Extensive experiments are conducted on various image classification datasets and comparisons are performed against several recent state-of-the-art approaches. Furthermore, we analyze the effectiveness of the proposed method through ablation experiments.

2 Related Work

Figure 2: Block diagram of the proposed method: 1) Closed-set training: the encoder (F) and classifier (C) are trained with the traditional classification loss. 2) Open-set training: to train an open-set identification model, an auto-encoder network consisting of the encoder (F), with frozen weights, and the decoder (G) is trained to perfectly or poorly reconstruct the images depending on the label condition vector. The reconstruction errors are then modeled using the extreme value distribution to find the operating threshold of the method. 3) Open-set testing: the open-set recognition model produces the classification prediction (p) and the reconstruction errors conditioned on each condition vector. If the minimum reconstruction error is below the threshold value obtained from the EVT model, the test sample is classified as one of the known classes; otherwise it is classified as unknown.

Open-set Recognition. Open-set recognition methods can be broadly classified into two categories: traditional methods and neural network-based methods. Traditional methods are based on classification models such as Support Vector Machines (SVMs), Nearest Neighbors, Sparse Representation, etc. Scheirer et al. [36] extended the SVM for open-set recognition by calibrating the decision scores using the extreme value distribution. Specifically, Scheirer et al. [36] utilized two SVM models, one for identifying a sample as unknown (referred to as CAP models) and the other for traditional closed-set classification. PRM Junior et al. [15] proposed a nearest neighbor-based open-set recognition model utilizing neighbor similarity as a score for open-set identification. PRM Junior et al. later also presented a specialized SVM obtained by constraining the bias term to be negative; in the case of the Radial Basis Function kernel, this strategy yields an open-set recognition model. Zhang et al. [40] proposed an extension of the Sparse Representation-based Classification (SRC) algorithm for open-set recognition. Specifically, they model the residuals from SRC using the Generalized Pareto extreme value distribution to get a score for open-set identification.

In neural network-based methods, one of the earliest works, by Bendale et al. [4], introduced an open-set recognition model based on "activation vectors" (i.e., the penultimate layer of the network). Bendale et al. utilized meta-recognition for multi-class classification by modeling the distance from the "mean activation vector" using the extreme value distribution. SoftMax scores are calibrated using these models for each class; the updated scores, termed OpenMax, are then used for open-set identification. Ge et al. [8] introduced a data augmentation approach called G-OpenMax: unknown samples are generated from the known-class training data using GANs and used to fine-tune the closed-set classification model, improving the performance of both SoftMax- and OpenMax-based deep networks. With a similar motivation, Neal et al. [24] proposed a data augmentation strategy called counterfactual image generation, which also utilizes GANs to generate images that resemble known-class images but belong to unknown classes. In another approach, Shu et al. [38] proposed a novel sigmoid activation-based loss function for training the neural network; additionally, they perform a score analysis on the final-layer activations to find an operating threshold, which is helpful for open-set identification. There are some variations of open-set recognition obtained by relaxing its formulation into anomaly detection [26], [27], [30] or novelty detection [31], [28], but in this paper we focus only on the general open-set recognition problem.

Extreme Value Theory. Extreme value modeling is a branch of statistics that deals with modeling statistical extremes. The use of extreme value theory in vision tasks largely concerns post-recognition score analysis [29], [36]. Often, for a given recognition model, the threshold to reject/accept lies in the overlap region of the extremes of the match and non-match score distributions [37]. In such cases, it makes sense to model the tails of the match and non-match recognition scores as one of the extreme value distributions. Hence, many visual recognition methods, including some described above, utilize extreme value models to further improve performance [40], [36]. In the proposed approach as well, the tails of the open-set identification scores are modeled using the extreme value distribution to find the optimal operating threshold.

3 Proposed Method

The proposed approach divides the open-set recognition problem into two sub-tasks, namely closed-set classification and open-set identification. The training procedures for these tasks are shown in Fig. 2 as stage-1 and stage-2. Stage-3 in Fig. 2 provides an overview of the proposed approach at inference. In what follows, we present the details of these stages.

3.1 Closed-set Training (Stage 1)

Consider a batch of n images X = {x_1, ..., x_n} with corresponding labels Y = {y_1, ..., y_n}, where n is the batch size and y_i ∈ {1, ..., k}. The encoder (F) and the classifier (C), with parameters Θ_f and Θ_c respectively, are trained using the following cross-entropy loss,

    L_c(Θ_f, Θ_c) = −(1/n) Σ_{i=1..n} Σ_{j=1..k} 1_{y_i}(j) · log(p_ij),   (1)

where 1_{y_i} is the indicator function for the label y_i (i.e., a one-hot encoded vector), p_i = C(F(x_i)) is the predicted probability score vector, and p_ij is the probability of the i-th sample being from the j-th class.
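The stage-1 objective in Eq. (1) is standard cross-entropy; a minimal NumPy sketch is given below (shapes and names are illustrative, with `logits` standing in for the classifier output C(F(x))):

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy over a batch.

    logits: (n, k) raw classifier scores, standing in for C(F(x))
    labels: (n,) integer class ids in [0, k)
    """
    # softmax with the usual max-shift for numerical stability
    shifted = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=1, keepdims=True)
    n = len(labels)
    # the indicator 1_{y_i}(j) selects exactly one term per row: p_{i, y_i}
    return -np.log(probs[np.arange(n), labels]).mean()

logits = np.array([[2.0, 0.5, -1.0], [0.1, 3.0, 0.2]])
labels = np.array([0, 1])
loss = cross_entropy(logits, labels)
```

With uniform (all-zero) logits the loss equals log k, which is a convenient sanity check when wiring up a training loop.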

3.2 Open-set Training (Stage 2)

There are two major parts in open-set training: conditional decoder training, followed by EVT modeling of the reconstruction errors. In this stage, the encoder and classifier weights are frozen and do not change during optimization.

3.2.1 Conditional Decoder Training

(a) Normalized histogram of match and non-match reconstruction errors.
(b) Normalized histogram of known and unknown reconstruction errors.
Figure 3: Histogram of the reconstruction errors corresponding to the SVHN dataset.

For any batch X described in Sec. 3.1, F is used to extract the latent vectors as Z = F(X). This latent vector batch is conditioned following the work by Perez et al. [32], called FiLM. FiLM influences the input feature map by applying a feature-wise linear modulation (hence the name FiLM) based on conditioning information. For an input feature z and a vector l containing the conditioning information, FiLM can be written as,

    z_l = γ(l) ⊙ z + β(l),   (2)
    γ(l) = H_γ(l),   β(l) = H_β(l),   (3)

where H_γ and H_β are neural networks with parameters Θ_γ and Θ_β. The tensors z, γ(l) and β(l) have the same shape, and ⊙ represents the Hadamard product. The one-hot label vector l is used for conditioning, and is referred to as the label condition vector in this paper. The notation z_l describes the latent vector z conditioned on the label condition vector l, i.e., z_l = γ(l) ⊙ z + β(l).
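The FiLM conditioning of Eqs. (2) and (3) amounts to a per-feature scale and shift derived from the label vector. A sketch in NumPy, with the single-layer networks H_γ and H_β reduced to plain weight matrices (an assumption for illustration; dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 4, 8  # number of classes and latent dimension (illustrative)

# H_gamma and H_beta as plain linear maps from one-hot label to modulation
W_gamma = rng.normal(size=(k, d))
W_beta = rng.normal(size=(k, d))

def film(z, l):
    """Feature-wise linear modulation: z_l = gamma(l) * z + beta(l)."""
    gamma = l @ W_gamma  # (d,)
    beta = l @ W_beta    # (d,)
    return gamma * z + beta  # Hadamard product plus shift

z = rng.normal(size=d)   # latent vector F(x)
l_match = np.eye(k)[2]   # one-hot label condition vector for class 2
z_cond = film(z, l_match)
```

Each class thus induces a different affine transform of the same latent vector, which is what lets the decoder behave differently per condition.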

The decoder (G, with parameters Θ_g) is expected to perfectly reconstruct the original input when conditioned on the label condition vector matching the class identity of the input, referred to here as the match condition vector (l^m); in this role it can be viewed as a traditional auto-encoder. However, G is additionally trained to poorly reconstruct the original input when conditioned on a label condition vector that does not match the class identity of the input, referred to here as the non-match condition vector (l^nm). The importance of this additional constraint on the decoder is discussed in Sec. 3.2.2, where the reconstruction errors are modeled using EVT. For the rest of this paper, we use the superscripts m and nm to indicate match and non-match, respectively.

Now, for a given input x_i from the batch X with label y_i, let l^m = l_{y_i} and l^nm = l_{y_j} (for a y_j ≠ y_i sampled at random from {1, ..., k}) be its match and non-match condition vectors. The feed-forward path for stage-2 can be summarized through the following equations,

    z_i = F(x_i),
    z_i^m = γ(l^m) ⊙ z_i + β(l^m),   z_i^nm = γ(l^nm) ⊙ z_i + β(l^nm),
    x̃_i^m = G(z_i^m),   x̃_i^nm = G(z_i^nm).

Following the above feed-forward path, the loss functions in the second stage of training, used to train the decoder (G, with parameters Θ_g) and the conditioning layers (with parameters Θ_γ and Θ_β), are given as follows,

    L_r^m = (1/n) Σ_{i=1..n} ‖x_i − x̃_i^m‖,   (4)
    L_r^nm = (1/n) Σ_{i=1..n} ‖x̄_i − x̃_i^nm‖,   (5)
    min_{Θ_g, Θ_γ, Θ_β}  α L_r^m + (1 − α) L_r^nm.   (6)

Here, the loss function L_r^m corresponds to the constraint that the output generated using the match condition vector l^m should be a perfect reconstruction of x_i. The loss function L_r^nm corresponds to the constraint that the output generated using the non-match condition vector l^nm should be a poor reconstruction of x_i. To enforce the latter condition, another batch {x̄_1, ..., x̄_n} is sampled from the training data such that its class identities are not consistent with the match condition vectors; L_r^nm pulls the non-match output toward an image of another class, which in effect achieves the goal of poor reconstruction of x_i when conditioned on l^nm. This conditioning strategy in a way emulates open-set behavior (as will be discussed further in Sec. 3.2.2): the network is specifically trained to produce poor reconstructions when the class identity of an input image does not match the condition vector. So, when presented with an unknown-class test sample, ideally none of the condition vectors will match the input image's class identity, resulting in poor reconstructions for all condition vectors. In contrast, for a known test sample, one of the condition vectors will match the input image's class identity and produce a near-perfect reconstruction for that particular condition vector. Hence, training with the non-match loss helps the network adapt better to the open-set setting. The losses L_r^m and L_r^nm are weighted with α as in Eq. (6).
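The combined stage-2 objective can be sketched as follows. The choice of the L1 norm and the helper names are assumptions for illustration; any pixel-wise reconstruction error fits the description in the text:

```python
import numpy as np

def recon_error(x, x_hat):
    """Per-sample reconstruction error; L1 is an assumption for this sketch."""
    return np.abs(x - x_hat).mean()

def stage2_loss(x, x_hat_m, x_bar, x_hat_nm, alpha=0.9):
    """Weighted sum of the match and non-match reconstruction losses.

    x        : input image
    x_hat_m  : reconstruction under the match condition vector
    x_bar    : image from the extra batch with a different class identity
    x_hat_nm : reconstruction under the non-match condition vector
    """
    l_match = recon_error(x, x_hat_m)          # driven toward 0
    l_nonmatch = recon_error(x_bar, x_hat_nm)  # pulls output toward x_bar
    return alpha * l_match + (1 - alpha) * l_nonmatch
```

Note that the non-match term compares the output against an image of a *different* class, which is exactly what makes the reconstruction of the actual input poor under a wrong condition vector.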

3.2.2 EVT Modeling

Extreme Value Theory. Extreme value theory is often used in visual recognition systems and is an effective tool for modeling post-training scores [36], [37]. It has been used in many applications such as finance and railway track inspection [23], [1], [9], as well as open-set recognition [4], [36], [40]. In this paper we follow the Pickands-Balkema-de Haan formulation [33], [2] of the extreme value theorem, which models probabilities conditioned on a random variable exceeding a high threshold. For a given random variable W with cumulative distribution function (CDF) F_W(w), the conditional CDF of the excess over a threshold u is defined for any w > 0 as,

    F_U(w) = P(W − u ≤ w | W > u) = (F_W(u + w) − F_W(u)) / (1 − F_W(u)),

where P(·) denotes the probability measure. Now, given i.i.d. samples {W_1, ..., W_n}, the extreme value theorem [33] states that, for a large class of underlying distributions and a large enough u, F_U can be well approximated by the Generalized Pareto Distribution (GPD),

    G(w; ζ, μ, σ) = 1 − (1 + ζ(w − μ)/σ)^(−1/ζ)   for ζ ≠ 0,
    G(w; ζ, μ, σ) = 1 − exp(−(w − μ)/σ)           for ζ = 0,   (7)

such that −∞ < ζ < ∞, −∞ < μ < ∞, σ > 0, and 1 + ζ(w − μ)/σ > 0. G is the CDF of the GPD; for ζ = 0 it reduces to the exponential distribution with parameter σ, and for ζ > 0 it takes the form of the Pareto distribution [6].

Parameter Estimation. When modeling the tail of any distribution as a GPD, the main challenge is in finding the tail parameter ζ of the conditional CDF. However, it is possible to find an estimate ζ̂ using the mean excess function (MEF), i.e., E[W − u | W > u] [37]. It has been shown that for the GPD, the MEF holds a linear relationship with the threshold u, and many researchers use this property to estimate the value of ζ [37], [29]. Here, the algorithm for finding ζ̂ introduced in [29] for the GPD is adopted with minor modifications; see [29], [37] for more details regarding the MEF and tail parameter estimation. After obtaining the estimate ζ̂, since from the extreme value theorem [33] we know that the set of excesses {W_i − u : W_i > u} follows the GPD, the remaining parameters μ and σ can be estimated using standard maximum likelihood techniques [11], except for some rarely observed cases [5].
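The two branches of the GPD CDF in Eq. (7) can be written down directly; the sketch below also checks that the ζ ≠ 0 branch converges to the exponential branch as ζ → 0 (a useful test when implementing the fit):

```python
import numpy as np

def gpd_cdf(w, zeta, mu, sigma):
    """CDF of the Generalized Pareto Distribution, Eq. (7).

    For zeta == 0 this is the exponential CDF; otherwise the Pareto-type
    branch, valid on the support where 1 + zeta*(w - mu)/sigma > 0.
    """
    w = np.asarray(w, dtype=float)
    if zeta == 0.0:
        return 1.0 - np.exp(-(w - mu) / sigma)
    return 1.0 - (1.0 + zeta * (w - mu) / sigma) ** (-1.0 / zeta)

w = np.linspace(0.1, 5.0, 50)
exp_branch = gpd_cdf(w, 0.0, 0.0, 1.0)  # zeta = 0: exponential case
```

In practice one would fit (ζ, μ, σ) by maximum likelihood over the tail samples, as described above; `scipy.stats.genpareto` provides such a fit if SciPy is available.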

3.2.3 Threshold Calculation

After the training procedure described in Sec. 3.1 and Sec. 3.2, sets of match and non-match reconstruction errors are created from the training images and their corresponding match and non-match condition vectors. Let r_i^m = ‖x_i − x̃_i^m‖ be the match reconstruction error and r_i^nm = ‖x_i − x̃_i^nm‖ the non-match reconstruction error for input x_i; the sets of match and non-match errors are then

    S_m = {r_1^m, ..., r_N^m},   S_nm = {r_1^nm, ..., r_N^nm}.

Typical normalized histograms of S_m (match reconstruction errors) and S_nm (non-match reconstruction errors) are shown in Fig. 3(a). Note that the elements of these sets are calculated solely from what is observed during training (i.e., without utilizing any unknown samples). Fig. 3(b) shows the normalized histogram of reconstruction errors observed during inference for test samples of the known class set and the unknown class set. Comparing the two plots in Fig. 3, it can be observed that the distributions of S_m and S_nm computed during training provide a good approximation of the error distributions observed during inference for test samples from the known and unknown sets. This observation also validates that non-match training emulates an open-set test scenario (also discussed in Sec. 3.2), where the input does not match any of the class labels. This motivates us to use S_m and S_nm to find an operating threshold for open-set recognition, i.e., for deciding whether a test sample is known or unknown.

Now, it is safe to assume that the optimal operating threshold τ* lies in the overlap region [min(S_nm), max(S_m)]. The underlying distributions of S_m and S_nm are not known. However, as explained in Sec. 3.2.2, it is possible to model the right tail of S_m and the left tail of S_nm with GPDs, giving tail CDFs G_m and G_nm. Though the GPD is only defined for modeling maxima, before fitting the left tail of S_nm we apply the inverse transform S_nm → −S_nm, so that its left tail becomes a right tail. Assuming the prior probability of observing unknown samples is p_u, the probability of error can be formulated as a function of the threshold τ,

    P_error(τ) = (1 − p_u) · P(r^m > τ) + p_u · P(r^nm < τ)
               = (1 − p_u) · (1 − G_m(τ)) + p_u · G_nm(τ).

Minimizing the above expression gives an operating threshold τ* that minimizes the probability of error for a given model, and can be solved by a simple line search over τ in the range [min(S_nm), max(S_m)]. Here, the accurate estimation of τ* depends on how well G_m and G_nm represent the known and unknown error distributions. It also depends on the prior probability p_u; the effect of this prior is discussed further in Sec. 4.3.
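The line search over τ can be sketched in a few lines. The toy CDFs below stand in for the fitted tail models G_m and G_nm (an illustrative assumption; in the method they would be GPD fits):

```python
import numpy as np

def find_threshold(G_m, G_nm, p_u, grid):
    """Line search for the operating threshold tau minimizing

        P_error(tau) = (1 - p_u) * (1 - G_m(tau)) + p_u * G_nm(tau)

    where G_m / G_nm are CDFs of the match / non-match error models.
    """
    p_err = [(1 - p_u) * (1.0 - G_m(t)) + p_u * G_nm(t) for t in grid]
    return grid[int(np.argmin(p_err))]

# Toy example: match errors uniform on [0, 1], non-match errors on [2, 3],
# so any threshold in [1, 2] separates them perfectly.
G_m = lambda t: float(np.clip(t, 0.0, 1.0))
G_nm = lambda t: float(np.clip(t - 2.0, 0.0, 1.0))
tau = find_threshold(G_m, G_nm, p_u=0.5, grid=np.linspace(0.0, 3.0, 301))
```

When the two error distributions overlap, the chosen τ trades false rejections of knowns against false acceptances of unknowns according to the prior p_u.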

3.3 Open-set Testing by k-inference (Stage 3)

Here, we introduce the open-set testing algorithm for the proposed method. The testing procedure is described in Algo. 1, and an overview is also shown in Fig. 2. This testing strategy involves conditioning the decoder k times, once with each possible condition vector, to obtain k reconstruction errors. Hence, it is referred to as the k-inference algorithm.

4 Experiments and Results

In this section, we evaluate the performance of the proposed approach and compare it with state-of-the-art open-set recognition methods. In the experiments in Sec. 4.2, we measure the ability of the algorithm to identify test samples as known or unknown without considering the operating threshold. In the second set of experiments, in Sec. 4.3, we measure the overall performance of the open-set recognition algorithm, evaluated using the F-measure. Additionally, through ablation experiments, we analyze the contribution of each component of the proposed method.

4.1 Implementation Details

Require: Trained network models F, C, G, H_γ, H_β
Require: Threshold τ obtained from the EVT model
Input: Test image X, condition vectors {l_1, ..., l_k}
1: Latent space representation: z = F(X)
2: Prediction probabilities: p = C(z)
3: Predicted known label: y_pred = argmax_j p_j
4: for j = 1, ..., k do
5:     z_{l_j} = γ(l_j) ⊙ z + β(l_j)
6:     X̃_j = G(z_{l_j})
7:     r_j = ‖X − X̃_j‖
8: end for
9: r_min = min_j r_j
10: if r_min < τ then
11:     predict X as known, with label y_pred
12: else
13:     predict X as unknown
14: end if
Algorithm 1 k-Inference Algorithm
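Algorithm 1 can be sketched compactly with the networks as plain callables (all "networks" below are hypothetical stand-ins, not the trained models; the L1 error and toy decoder are assumptions for illustration):

```python
import numpy as np

def k_inference(x, encoder, classifier, condition, decoder, k, tau):
    """Sketch of the k-inference rule.

    Returns the predicted known label, or -1 for "unknown".
    """
    z = encoder(x)
    label = int(np.argmax(classifier(z)))
    # condition the latent on every one-hot label vector and reconstruct
    errors = [np.abs(x - decoder(condition(z, np.eye(k)[j]))).mean()
              for j in range(k)]
    return label if min(errors) < tau else -1

# Toy instantiation: the decoder reproduces the input only when
# conditioned on class 0, mimicking a match condition.
enc = lambda x: x
cls = lambda z: np.array([0.9, 0.1])
cond = lambda z, l: np.concatenate([z, l])
dec = lambda zl: zl[:3] if zl[3] == 1.0 else zl[:3] + 10.0

x_known = np.zeros(3)
pred = k_inference(x_known, enc, cls, cond, dec, k=2, tau=0.5)
```

A sample is accepted as known only if at least one condition vector yields a reconstruction error below τ; otherwise all k reconstructions are poor and the sample is rejected as unknown.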

We use the Adam optimizer [17] for training. The parameter α, described in Sec. 3.2, is set to 0.9. For all experiments, the conditioning layer networks H_γ and H_β are single-layer fully connected neural networks. Another important factor affecting open-set performance is the openness of the problem. Defined by Scheirer et al. [35], it quantifies how open the problem setting is,

    O = 1 − sqrt( 2 × C_T / (C_R + C_E) ),   (8)

where C_T is the number of classes seen during training, C_E is the number of classes that will be observed during testing, and C_R is the number of target classes that need to be correctly recognized during testing. We evaluate performance over multiple openness values depending on the experiment and dataset.
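Eq. (8) is a one-liner; under the common convention that the targeted classes equal the training classes and the test set adds the unknowns, the standard 6-known/4-unknown protocol gives an openness of about 13.4%:

```python
import math

def openness(c_train, c_target, c_test):
    """Openness from Eq. (8): O = 1 - sqrt(2 * C_T / (C_R + C_E))."""
    return 1.0 - math.sqrt(2.0 * c_train / (c_target + c_test))

# MNIST/SVHN/CIFAR10-style protocol: 6 known classes trained and targeted,
# 6 known + 4 unknown = 10 classes observed at test time.
o = openness(c_train=6, c_target=6, c_test=10)
```

Increasing the number of unknown test classes drives C_E up and O toward 1, i.e., a "more open" problem.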

4.2 Experiment I : Open-set Identification

The evaluation protocol defined in [24] is considered, and the area under the ROC curve (AUROC) is used as the evaluation metric. AUROC provides a calibration-free measure and characterizes the performance of a given score as the threshold is varied. The encoder, decoder and classifier architectures for this experiment are similar to those used in [24]. Following the protocol in [24], we report the AUROC averaged over five randomized trials.

4.2.1 Datasets

Method MNIST SVHN CIFAR10 CIFAR+10 CIFAR+50 TinyImageNet
SoftMax 0.978 0.886 0.677 0.816 0.805 0.577
OpenMax [4] (CVPR’16) 0.981 0.894 0.695 0.817 0.796 0.576
G-OpenMax [8] (BMVC’17) 0.984 0.896 0.675 0.827 0.819 0.580
OSRCI [24] (ECCV’18) 0.988 0.910 0.699 0.838 0.827 0.586
Proposed Method 0.989 0.922 0.895 0.955 0.937 0.748
Table 1: AUROC for open-set identification, values other than the proposed method are taken from [24].

Here, we provide a summary of these protocols for each dataset:

MNIST, SVHN, CIFAR10. For MNIST [21], SVHN [25] and CIFAR10 [18], the openness of the problem is set to 13.39% by randomly sampling 6 known classes and 4 unknown classes.
CIFAR+10, CIFAR+50. For the CIFAR+M experiments, 4 classes are sampled from CIFAR10 for training, and M non-overlapping classes sampled from the CIFAR100 dataset [18] are used as the unknowns. The openness of the problem for CIFAR+10 and CIFAR+50 is 33.33% and 62.86%, respectively.
TinyImageNet. For experiments with TinyImageNet [20], 20 known classes and 180 unknown classes (openness 57.36%) are randomly sampled for evaluation.

4.2.2 Comparison with state-of-the-art

For comparing the open-set identification performance, we consider the following methods:

I. SoftMax: The SoftMax score of the predicted class is used for open-set identification.
II. OpenMax [4]: The calibrated score of the unknown class and the score of the predicted class are used for open-set identification.
III. G-OpenMax [8]: A data augmentation technique that utilizes OpenMax scores after training the network with generated data.
IV. OSRCI [24]: Another data augmentation technique, called counterfactual image generation, is used to train the network for (k+1)-class classification. We refer to this method as Open-set Recognition using Counterfactual Images (OSRCI). The predicted score of the (k+1)-th class is used for open-set identification.

Results corresponding to this experiment are shown in Table 1. As seen from this table, the proposed method outperforms the other methods, showing that the open-set identification training in the proposed approach learns better scores for identifying unknown classes. From the results, we see that our method produces only a minor improvement on the digits datasets compared to the other recent methods. This is mainly because results on the digits datasets are almost saturated. On the other hand, our method performs significantly better than the other recent methods on object datasets such as CIFAR and TinyImageNet.

4.3 Experiment II : Open-set Recognition

(a) F-measure comparisons for the open-set recognition experiment.
(b) F-measure comparisons for the ablation study.
Figure 4: Performance evaluation on the LFW dataset.

This experiment evaluates the overall open-set recognition performance using the F-measure. For this experiment we consider the LFW face dataset [22] and extend the protocol introduced in [35] for open-set face recognition on LFW. In total, 12 classes containing more than 50 images each are considered as known classes and divided into training and testing splits with an 80/20 ratio. The image size is kept at 64×64. Since LFW has 5717 classes, we vary the openness from 24.4% to 93.5% by taking 18 to 5705 unknown classes during testing. Since many classes contain only a single image, instead of random sampling we sort the classes by the number of images per class and add them sequentially to increase the openness. It is obvious that as the openness increases, the probability of observing an unknown sample also increases. Hence, it is reasonable to assume that the prior probability p_u is a function of openness, and we set it accordingly for this experiment.

4.3.1 Comparison with state-of-the-art

For comparing the open-set recognition performance, we consider the following methods:

I. W-SVM (PAMI'14): W-SVM as formulated in [35], which trains a Weibull-calibrated SVM classifier for open-set recognition.
II. SROR (PAMI'16): SROR as formulated in [40], which uses a sparse representation-based framework for open-set recognition.
III. DOC (EMNLP'16): DOC utilizes a novel sigmoid-based loss function for training a deep neural network [38].

For a fair comparison with these methods, we use features extracted from the encoder (F) to train W-SVM and SROR. For DOC, the encoder (F) is trained with the loss function proposed in [38]. Experiments on LFW are performed using a U-Net [34] inspired encoder-decoder architecture; more details regarding the network architecture are included in the supplementary material.

Results corresponding to this experiment are shown in Fig. 4(a). From this figure, we can see that the proposed approach remains relatively stable as the openness increases, outperforming all other methods. One interesting trend is that DOC initially performs better than the statistical methods W-SVM and SROR; however, for openness above 50%, its performance drops significantly. The statistical methods, though initially poorer than DOC, remain relatively stable and outperform DOC as the openness increases (especially beyond 50%).

4.3.2 Ablation Study

In this section, we present an ablation analysis of the proposed approach on the LFW face dataset. The contribution of each individual component to the overall performance is measured by creating multiple baselines of the proposed approach. Starting with the simplest baseline, i.e., thresholding the SoftMax probabilities of a closed-set model, components are added one at a time, building up to the proposed approach. Detailed descriptions of these baselines are given as follows,
I. CLS: The encoder F and the classifier C are trained for k-class classification. Samples with a maximum probability score less than 0.5 are classified as unknown.
II. CLS+DEC: The networks F, C and the decoder G are trained as described in Sec. 3, except that G is trained with only the match loss function L_r^m. Samples with reconstruction error above 95% of the maximum reconstruction error observed during training are classified as unknown.
III. Naive: The networks F, C, G and the conditioning layer networks (H_γ and H_β) are trained as described in Sec. 3, but instead of modeling the scores using EVT as described in Sec. 3.2.2, the threshold is estimated directly from the raw reconstruction errors.
IV. Proposed method (p_u = 0.5): F, C, G and the conditioning layer networks (H_γ and H_β) are trained as described in Sec. 3, and to find the threshold, the prior probability of observing unknowns is set to p_u = 0.5.
V. Proposed method: The method proposed in this paper, with p_u set as described in Sec. 4.3.

Results corresponding to the ablation study are shown in Fig. 4(b). Being a simple SoftMax thresholding baseline, CLS has the weakest performance. When the match loss function (L_r^m) is added, as in CLS+DEC, open-set identification is performed using reconstruction scores; however, since this baseline follows a heuristic thresholding rule, its performance degrades rapidly as the openness increases. Addition of the non-match loss function (L_r^nm), as in the Naive baseline, helps find a threshold value without relying on heuristics. As seen in Fig. 4(b), the performance of the Naive baseline remains relatively stable with increasing openness, showing the importance of the loss function L_r^nm. The proposed method with p_u fixed at 0.5 introduces EVT modeling of the reconstruction errors to calculate a better operating threshold; this strategy improves over finding the threshold from raw score values, showing the importance of applying EVT models to the reconstruction errors. If p_u is instead set as a function of openness, as in the proposed method, there is a marginal further improvement over the fixed-p_u baseline, showing the benefit of setting p_u as a function of openness. Interestingly, for large openness values, both the fixed-p_u baseline and the proposed method achieve similar performance.

5 Conclusion

We presented an open-set recognition algorithm based on class conditioned auto-encoders and introduced novel training and testing strategies for these networks. It was shown that dividing open-set recognition into sub-tasks helps learn better scores for open-set identification. During training, conditional reconstruction constraints are enforced, which help approximate the known and unknown score distributions of the reconstruction errors; these distributions are then used to calculate an operating threshold for the model. Since inference for a single sample requires k feed-forward passes through the decoder, the method suffers from increased test time. Nevertheless, the proposed approach performs well across multiple image classification datasets, providing significant improvements over many state-of-the-art open-set algorithms. In future research, generative models such as GANs, VAEs, or flow-based models could be explored to extend this method.

Acknowledgements

This research is based upon work supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via IARPA R&D Contract No. 2014-14071600012. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the ODNI, IARPA, or the U.S. Government.

References

6 Supplementary Material for C2AE: Class Conditioned Auto-Encoder for Open-set Recognition

This contains the supplementary material for the paper C2AE: Class Conditioned Auto-Encoder for Open-set Recognition. Due to the space limitations in the submitted paper, we provide some additional details regarding the proposed method.

6.1 Toy Examples

To visualize the decision boundaries learned by the proposed approach, we perform a few experiments with 2-dimensional toy data. For these experiments the encoder, decoder and classifier architectures are FC(2)-Sig-FC(5)-Sig, FC(5)-Sig-FC(2) and FC(5)-Sig-FC(2), respectively. Here, FC(T) indicates a fully connected layer with T hidden units and Sig is the sigmoid activation. We train these networks using the proposed approach on three different 2-dimensional datasets, namely Two-Gauss, Four-Gauss and Uni-Gauss. Two-Gauss and Four-Gauss have two and four 2D Gaussians with different means and the same variance, respectively, whereas Uni-Gauss has one class drawn from a 2D Gaussian and another class drawn from a uniform distribution. As can be seen from Fig. 5, the proposed approach learns tight boundaries surrounding the data points and identifies all of the remaining space as unknown.

(a) Two-Gauss
(b) Uni-Gauss
(c) Four-Gauss
Figure 5: Toy Examples.
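The toy architectures above reduce to a few matrix operations. The sketch below is a minimal NumPy forward pass; the random weight initialization is a stand-in for the trained parameters, used only to illustrate the shapes and layer structure.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

def init(n_in, n_out):
    """Random (untrained) parameters for one FC layer, for illustration only."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

# Encoder FC(2)-Sig-FC(5)-Sig: 2-D input -> 5-D latent
We1, be1 = init(2, 2)
We2, be2 = init(2, 5)
# Decoder FC(5)-Sig-FC(2): 5-D latent -> 2-D reconstruction
Wd1, bd1 = init(5, 5)
Wd2, bd2 = init(5, 2)
# Classifier FC(5)-Sig-FC(2): 5-D latent -> 2 class scores
Wc1, bc1 = init(5, 5)
Wc2, bc2 = init(5, 2)

x = rng.standard_normal((4, 2))                    # a batch of four 2-D points
z = sigmoid(sigmoid(x @ We1 + be1) @ We2 + be2)    # encoder output
x_hat = sigmoid(z @ Wd1 + bd1) @ Wd2 + bd2         # decoder reconstruction
logits = sigmoid(z @ Wc1 + bc1) @ Wc2 + bc2        # classifier scores
recon_error = np.abs(x - x_hat).sum(axis=1)        # per-sample reconstruction error
```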

6.2 Results

Here we present the AUROC table for open-set identification with standard deviation values. Standard deviation values were not available for CIFAR+10, CIFAR+50 and TinyImageNet, as those values are taken from [24].

Method               MNIST          SVHN           CIFAR10        CIFAR+10       CIFAR+50       TinyImageNet
SoftMax              0.978 ± 0.002  0.886 ± 0.006  0.677 ± 0.032  0.816          0.805          0.577
OpenMax (CVPR’16)    0.981 ± 0.002  0.894 ± 0.008  0.695 ± 0.032  0.817          0.796          0.576
G-OpenMax (BMVC’17)  0.984 ± 0.001  0.896 ± 0.006  0.675 ± 0.035  0.827          0.819          0.580
OSRCI (ECCV’18)      0.988 ± 0.001  0.910 ± 0.006  0.699 ± 0.029  0.838          0.827          0.586
Proposed Method      0.989 ± 0.002  0.922 ± 0.009  0.895 ± 0.008  0.955 ± 0.006  0.937 ± 0.004  0.748 ± 0.005
Table 2: AUROC for open-set identification; values other than the proposed method are taken from [24]. Standard deviation values for the state-of-the-art methods are not available for CIFAR+10, CIFAR+50 and TinyImageNet.
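For reference, the AUROC reported above can be computed from open-set scores via the rank-sum (Mann-Whitney) statistic. Below is a minimal sketch on synthetic scores, treating "unknown" as the positive class with higher scores; tie handling is omitted for brevity.

```python
import numpy as np

def auroc(scores_known, scores_unknown):
    """AUROC where unknown samples are the positive class (higher score)."""
    s = np.concatenate([scores_known, scores_unknown])
    ranks = s.argsort().argsort() + 1                 # 1-based ranks (no ties)
    n_pos, n_neg = len(scores_unknown), len(scores_known)
    rank_sum = ranks[len(scores_known):].sum()        # rank sum of positives
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

With perfectly separated scores this returns 1.0; with the classes swapped it returns 0.0.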

6.3 Histogram Progression

Fig. 6 and Fig. 7 show the evolution of the reconstruction errors during training. The reconstruction errors for match, non-match, known and unknown samples are shown at iterations 1, 5k and 500k. As can be seen from Fig. 6(a), since the network is initialized with random weights, the reconstruction errors for match and non-match conditioning are not discriminative. However, since the network is trained to produce discriminative reconstructions for match and non-match conditioning, the reconstruction errors become more discriminative as training progresses, as seen in Fig. 6(b) and Fig. 6(c). As a result, the known and unknown reconstruction errors follow the same trend as the match and non-match errors, as evident from Fig. 7. The SVHN dataset is used for generating the normalized histograms of the match, non-match, known and unknown reconstruction errors.

(a)
(b)
(c)
Figure 6: Progression of Match and Non match data reconstruction error distribution with training iterations for SVHN.
(a)
(b)
(c)
Figure 7: Progression of Known and Unknown data reconstruction error distribution with training iterations for SVHN.

6.4 Network Architecture

The network architecture for the LFW experiments is shown in Fig. 8. It is a U-Net-inspired network architecture with a FiLM conditioning layer in the middle. The network architecture is as follows:
C(64)-C(128)-C(256)-C(512)-C(1024)-FiLM-DC(2048)-DC(1024)-DC(512)-DC(256)-DC(124)-DC(3)-Tanh.

Here, C(T) represents a T-channel convolution layer followed by instance normalization and leaky ReLU activation, and DC(T) represents a T-channel transposed convolution layer followed by instance normalization and upsampling. The FiLM layer is a conditioning layer which modulates the feature maps from C(1024) with linear modulation parameters γ and β of size 1024 × 2 × 2, computed from the label conditioning vector. Here, the convolution blocks are used as the encoder and the transposed convolution blocks as the decoder. As explained in the proposed approach, the encoder weights are frozen during stage-2 training. The classifier network for the experiments with the LFW dataset is a single-layer fully connected network with 12 hidden units (equal to the number of known classes).
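FiLM conditioning applies a per-class affine modulation, gamma * f + beta, to the feature map. The sketch below illustrates this in NumPy; the random per-class lookup tables are a stand-in for the learned parameter generator, and the 12-class and 1024-channel, 2×2 spatial sizes follow the architecture described in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

n_classes, C, H, W = 12, 1024, 2, 2   # sizes taken from the LFW architecture

# Hypothetical FiLM parameters: one (gamma, beta) pair per class, each of
# size C x H x W, generated randomly here purely for illustration.
gamma_table = rng.standard_normal((n_classes, C, H, W))
beta_table = rng.standard_normal((n_classes, C, H, W))

def film(feature_map, class_id):
    """Feature-wise linear modulation conditioned on the class identity."""
    return gamma_table[class_id] * feature_map + beta_table[class_id]

f = rng.standard_normal((C, H, W))    # encoder output for one sample
out = film(f, class_id=3)             # feature map conditioned on class 3
```

Conditioning on a different class id yields a different modulation of the same features, which is what lets the decoder produce match versus non-match reconstructions.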

Figure 8: U-Net based Architecture used for LFW experiments.