One-Class Meta-Learning: Towards Generalizable Few-Shot Open-Set Classification

09/14/2021 ∙ by Jedrzej Kozerawski, et al. ∙ The Regents of the University of California ∙ Toyota Technological Institute at Chicago

Real-world classification tasks are frequently required to work in an open-set setting. This is especially challenging for few-shot learning problems due to the small sample size for each known category, which prevents existing open-set methods from working effectively; however, most multiclass few-shot methods are limited to closed-set scenarios. In this work, we address the problem of few-shot open-set classification by first proposing methods for few-shot one-class classification and then extending them to few-shot multiclass open-set classification. We introduce two independent few-shot one-class classification methods: Meta Binary Cross-Entropy (Meta-BCE), which learns a separate feature representation for one-class classification, and One-Class Meta-Learning (OCML), which learns to generate one-class classifiers given standard multiclass feature representation. Both methods can augment any existing few-shot learning method without requiring retraining to work in a few-shot multiclass open-set setting without degrading its closed-set performance. We demonstrate the benefits and drawbacks of both methods in different problem settings and evaluate them on three standard benchmark datasets, miniImageNet, tieredImageNet, and Caltech-UCSD-Birds-200-2011, where they surpass the state-of-the-art methods in the few-shot multiclass open-set and few-shot one-class tasks.


1 Introduction

(a) Few-Shot One-Class (b) Few-Shot Multiclass Open-Set
Figure 1: (a) Few-Shot One-Class classification and (b) Few-Shot Multiclass Open-Set problems. The training set (top) contains few (one in this case) labeled training examples from a single category (a) or five categories (b). The test set (bottom) contains unseen examples from categories present in the training set (Known - bottom left) and examples from categories not present in the training set (Unknown - bottom right). The goal is to train a model on the training set such that it can differentiate between examples coming from the known categories and examples coming from other, unknown categories, while simultaneously correctly assigning examples predicted as Known to one of the known categories.

Deep learning methods are able to achieve high performance on large-scale visual recognition tasks [huang2017densely, russakovsky2015imagenet, szegedy2016rethinking, he2016deep], but the quality of the learned representations greatly depends on the amount of available training data. In many classification tasks this quantity is not sufficient to properly train a neural network that would generalize well to unseen examples. Obtaining more training data is often not feasible for numerous reasons, such as the natural rarity of specific categories (e.g., rare diseases or events), the necessity of fast adaptation to novel tasks (e.g., early detection of a new disease such as COVID-19 or recognition of a new type of car by an autonomous driving system), financial constraints, and researchers or scientists needing to train a specialized model on their own small-scale dataset.

Few-shot learning [ravi2016optimization, snell2017prototypical] is a problem in image classification dealing with small sets of training data per class, where the task is to train a classifier on data from a small, labeled support set (consisting of data from N categories, each represented with K examples – K-shot N-way classification) and to use a query set to assess the quality of this classifier. Recent meta-learning approaches [snell2017prototypical, ye2020fewshot, gidaris2018dynamic], designed for better generalization of trained models to unseen data, help increase the performance while simultaneously reducing the training time necessary to adjust to novel data. However, existing approaches unfortunately assume that the query set contains examples only from the same categories as the support set (a closed-set setting), which is very restrictive. Recently, Liu et al. [liu2020few] introduced the few-shot open-set setting, where at inference time the query set might contain additional unknown open-set categories (not present in the support set) that need to be differentiated from the known classes before performing the multiclass classification step. Standard open-set approaches [bendale2016towards, scheirer2012toward, rudd2017extreme] assume access to a high number of per-category examples in order to model their distribution and detect out-of-distribution samples (unknown examples); however, the small sample size in few-shot learning prevents those methods from working well (or frequently from working at all when the number of per-category examples drops to one). This indicates the need for new meta-learning approaches capable of learning to detect unknown examples. The initial approach introduced by Liu et al. [liu2020few] unfortunately provides only a relative ranking score for all examples without any method to clearly differentiate between known and unknown examples, yields low closed-set accuracy, and cannot address the problem when N = 1 since it operates on softmax scores. In this work we propose to address these issues by studying meta-learning approaches capable of out-of-distribution detection given only a few training examples (few-shot one-class classification), and by extending them to the few-shot multiclass open-set problem setting (as seen in Fig. 1). The proposed approaches have the following benefits:

  • They can be used to augment any existing few-shot multiclass classification approach (such as FEAT [ye2020fewshot] or PEELER [liu2020few]) to operate in an open-set setting without requiring retraining and without a performance drop in the closed-set setting.

  • After training, our approach can work in both the few-shot open-set and closed-set problem settings with any number of categories (N-way) and per-category examples (K-shot), even when both N = 1 and K = 1 (one-shot one-class).

  • They do not require separate background (unknown) categories present during training (contrary to existing open-set methods).

Our contributions in this work can be summarized as:

  • We present two novel meta-learning methods for few-shot one-class image classification that are capable of augmenting any existing few-shot multiclass classification approach to work in few-shot multiclass open-set classification setting.

  • We verify the value of the proposed approaches using few-shot one-class and few-shot multiclass open-set experiments, reporting performance on three benchmark few-shot datasets.

2 Related work

2.1 Few-shot classification

Few-shot classification refers to a problem where a model is trained to generalize to novel, unseen samples in the query set given only a small number of examples in the support set. There are two main types of approaches to this problem. One type of methods is concentrated around metric learning for better similarity and relation embeddings. Siamese networks [kochsiamese] compute the similarity score between two images, while Matching Networks [vinyals2016matching] learn classifiers for novel categories based on a mapping from a small support set of examples (input-label pairs) to a classifier for the given example. Snell et al. [snell2017prototypical] used Prototypical Networks for few-shot learning by representing each class by the mean of its examples in an embedding space learned by the neural network. Sung et al. [sung2018learning] presented Relation Networks that consist of two modules, an embedding module and a relation module, learning the appropriate relation between a query image and each of the classes in N-way classification. Liu et al. [liu2020few] used a modified prototypical network approach to tackle the few-shot open-set problem. Qi et al. [qi2018low] proposed to imprint the weights of a new classifier with an embedding vector extracted from a base classifier pre-trained on known categories. This idea is very similar to the approach of Gidaris and Komodakis [gidaris2018dynamic]. Ye et al. [ye2020fewshot] proposed to use a transformer on top of the prototypical network embeddings to learn a better mapping for class representations. Another set of ideas focuses on optimization approaches to solve this problem. Ravi and Larochelle [ravi2016optimization] used an LSTM to optimize updates while training a network on a meta-set consisting of multiple datasets. Li et al. [li2017meta] presented a Meta-SGD approach based on Stochastic Gradient Descent. Finn et al. [finn2017model] introduced Model-Agnostic Meta-Learning (MAML).

2.2 One-class classification

In one-class classification only data from the positive category is available during training. Schölkopf et al. [scholkopf2000support] addressed this with a One-Class Support Vector Machine (OC-SVM), which maximizes the margin between the origin and the one-class samples. Chen et al. [chen2001one] also used the One-Class SVM in the image retrieval problem. Tax and Duin [tax1999data, tax1999support] introduced Support Vector Data Description (SVDD) and later augmented the method by generating artificial outliers [tax2001uniform]. Ruff et al. [pmlr-v80-ruff18a] proposed Deep Support Vector Data Description (Deep-SVDD) to train a deep feature extractor jointly with the one-class classification objective. Perera and Patel [perera2019learning] introduced two loss functions (compactness and descriptiveness loss) together with a template matching framework for deep one-class classification. Sabokrou et al. [sabokrou2018adversarially] utilized a two-network architecture trained in a GAN-style adversarial learning framework for adversarially learned one-class classification. Kemmler et al. [kemmler2013one] utilized Gaussian Process (GP) priors for one-class classification. Kozerawski and Turk [kozerawski2018clear] proposed CLEAR to predict the hyperparameters of a one-class SVM classifier given a single positive example (one-shot one-class classification).

2.3 Open-set classification

Open-set classification is a machine learning problem where, during the inference stage, the set of observable examples (the query set) can include unknown examples coming from unknown categories in addition to known examples coming from known categories (present in the support set). Scheirer et al. [scheirer2012toward] introduced a new variant of SVM (the 1-vs-set machine) based on risk minimization. Bendale et al. [bendale2016towards] proposed OpenMax, a method using extreme value theory to re-evaluate logit values. Ge et al. [ge2017generative] augmented OpenMax into a generative variant called G-OpenMax. Neal et al. [neal2018open] also proposed a generative approach synthesizing open-set images during training, which helps detect unknown examples at inference time. Liu et al. [liu2019large] introduced a method for long-tail open-set recognition with distance-based methods. Dhamija et al. [dhamija2018reducing] proposed two losses (Entropic Open-Set and Objectosphere) to maximize the difference in the softmax output between known and unknown samples. Yoshihashi et al. [yoshihashi2019classification] proposed CROSR (Classification-Reconstruction learning for Open-Set Recognition), where they jointly perform classification and reconstruction of the input data. Rudd et al. [rudd2017extreme] introduced extreme value machines, which utilize extreme value theory to model the probability of an example coming from an unknown category. Liu et al. [liu2020few] proposed the new problem setting of few-shot open-set recognition and utilized a method based on Prototypical Networks [snell2017prototypical] combined with an entropy-based loss function.

3 Meta Learning for Few-Shot Open-Set Classification

Many traditional open-set classification approaches [rudd2017extreme, liu2020energy, sun2020conditional, bendale2016towards] differentiate known from unknown examples by modeling the distribution of known classes and frequently focus on modeling the tail of this distribution using Extreme Value Theory. In few-shot learning, modeling the distribution of a known category and analyzing the tail of this distribution is not feasible when the number of examples is close to zero (and not possible at all when it drops to one). Additionally, in few-shot learning the set of known categories is different in the meta-training and meta-testing phases, which prevents standard open-set methods from working efficiently. For these reasons there is a need for an open-set meta-learning approach that works well in the few-shot setting. The lack of sufficient training examples is also problematic for existing one-class classification approaches [scholkopf2000support, tax2004support, pmlr-v80-ruff18a], which need an abundance of positive training examples to model the distribution of the positive class well enough to detect anomalous examples (out-of-distribution detection). We propose to solve these problems by introducing two separate few-shot one-class meta-learning approaches capable of detecting unknown examples in both one-class and multiclass open-set settings. The proposed methods work as separate modules that can augment any existing closed-set few-shot multiclass classification method to work in the few-shot multiclass open-set setting without the need to retrain it.

Figure 2: An overview of the OCML and Meta-BCE classification methods. Both approaches are independent, standalone modules (indicated with dashed lines) trained and utilized separately from each other. Images belonging to the known support class have their features extracted using a CNN, and a prototype representation is calculated, which is used for closed-set classification and as an input to OCML. In the example above, the two upper images (blue and green) are the known examples from the support class, and the red example is a query example. One-class classification is performed using either Eq. 2 (Meta-BCE) or Eq. 4 (OCML).

3.1 Meta Binary Cross-Entropy (Meta-BCE)

Let us divide the dataset, following the standard few-shot meta-learning setting [vinyals2016matching, ravi2016optimization], into three separate meta-sets: a meta-training meta-set, a meta-validation meta-set, and a meta-testing meta-set. Each of the meta-sets has a separate, non-overlapping set of categories. We utilize episodic training, as is standard practice in few-shot learning [vinyals2016matching, snell2017prototypical, ye2020fewshot].
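To make the episodic protocol concrete, the following is a minimal sketch of how a single N-way K-shot episode could be sampled from a meta-set. The default values of n_way, k_shot, and n_query are illustrative placeholders, not the paper's settings.

```python
import random

def sample_episode(meta_set, n_way=5, k_shot=1, n_query=15):
    """meta_set: dict mapping each category name to a list of image identifiers."""
    classes = random.sample(sorted(meta_set), n_way)          # pick N categories for this episode
    support, query = [], []
    for label, cls in enumerate(classes):
        imgs = random.sample(meta_set[cls], k_shot + n_query)
        support += [(img, label) for img in imgs[:k_shot]]    # K labeled examples per category
        query += [(img, label) for img in imgs[k_shot:]]      # held-out queries from the same categories
    return support, query
```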

Following the work of Liu et al. [liu2020few], let us use a separate branch of the feature extractor, but instead of using it for multiclass features (as done by Liu et al.) let us use it to learn a feature representation dedicated to one-class classification. We hypothesize that a high-quality feature representation for one-class classification differs from that for multiclass classification (which is backed by our results). Standard multiclass classifiers answer the question “Which of the classes does this new, unknown example resemble the most?”. Answering this question depends highly on the composition of the few-shot task (i.e., how similar the categories are and how many of them there are). In order to eliminate that dependency, we need to learn a different feature space representation in which the probability of a new example belonging or not belonging to the known category depends only on the known positive examples from that category, irrespective of other categories. We propose to use a binary cross-entropy loss in a meta-learning setting to simultaneously learn a one-class feature representation on the separate branch and a one-class classification decision boundary:

\mathcal{L}_{BCE} = -\sum_{x \in Q} \Big[\, y_c(x) \log p_c(x) + \big(1 - y_c(x)\big) \log\big(1 - p_c(x)\big) \Big] \qquad (1)

where y_c(x) is a binary indicator (0 or 1) of whether class label c is the correct classification for observation x, and p_c(x) is the probability of the example x belonging to the category c:

(2)

Eq. 2 contains a learnable parameter. The feature extractor, the one-class branch, and the Meta-BCE approach are illustrated in Figure 2.
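To illustrate the mechanism, below is a minimal PyTorch sketch of a Meta-BCE-style episode loss. The separate one-class branch is stood in for by a single linear layer, and the distance-to-probability mapping (a sigmoid of a negatively scaled Euclidean distance to the class prototype, with a learnable scale gamma) is an assumed form of Eq. 2 rather than the exact equation; only the overall structure, an episodic binary cross-entropy against a single known class, follows the text.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MetaBCEHead(nn.Module):
    """One-class branch plus a learnable scale mapping distances to probabilities."""
    def __init__(self, feat_dim=1600):
        super().__init__()
        self.branch = nn.Linear(feat_dim, feat_dim)    # stand-in for the separate one-class branch
        self.gamma = nn.Parameter(torch.tensor(1.0))   # learnable parameter (assumed role)

    def prob(self, query_feats, prototype):
        # Probability that each query belongs to the prototype's category (assumed form of Eq. 2).
        z = self.branch(query_feats)                   # queries in the one-class feature space
        p = self.branch(prototype).unsqueeze(0)        # prototype mapped into the same space
        dist = torch.cdist(z, p).squeeze(1)            # Euclidean distance to the prototype
        return torch.sigmoid(-self.gamma * dist)

def meta_bce_episode_loss(head, support_feats, query_feats, query_is_positive):
    """Binary cross-entropy over one episode; the support set defines the single known class."""
    prototype = support_feats.mean(dim=0)              # class prototype from the support examples
    p = head.prob(query_feats, prototype)
    return F.binary_cross_entropy(p, query_is_positive.float())
```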

3.2 One-Class Meta-Learning (OCML)

One-class classifiers should ideally be adapted to the class of interest, which in a few-shot scenario is difficult to achieve given the small sample size. We propose to use meta-learning to learn how to dynamically generate the parameters of a one-class classifier for a few-shot category. Kozerawski and Turk addressed this issue by dynamically predicting the parameters of a one-class SVM classifier [kozerawski2018clear]; however, the method was limited to the one-shot setting, as the transfer learning module used the feature vector of a single image as input, and it required multiple, separate stages of training with the final classification limited to SVMs or logistic regression. To overcome these limitations we propose One-Class Meta-Learning (OCML), a method that dynamically creates a one-class neural network classifier for a novel category with more than a single image in the support set and is trained jointly with the feature extractor in a single stage. OCML has a transfer learning module f that learns how to transform the feature representation p_c of a category c into a weight vector w_c of the one-class classifier for that category:

w_c = f\big(p_c\big) \qquad (3)

where f denotes the transfer learning module. After obtaining the weight vector w_c of the one-class classifier, we can calculate the probability of a novel example belonging to the category c:

(4)

We use the binary cross-entropy loss on the probabilities calculated with Eq. 4 to jointly train both the feature extractor and the transfer learning module. An overview of OCML can be seen in Figure 2.
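The sketch below illustrates this mechanism in PyTorch: a transfer learning module maps a class prototype to the weights of a one-class classifier (Eq. 3), and the classifier scores queries with a sigmoid. The inner-product-plus-sigmoid form of the scoring step is an assumption standing in for Eq. 4, and the single fully connected layer follows the ablation in Appendix E.1.

```python
import torch
import torch.nn as nn

class OCMLHead(nn.Module):
    """Transfer learning module that turns a class prototype into one-class classifier weights."""
    def __init__(self, feat_dim=1600):
        super().__init__()
        self.transfer = nn.Linear(feat_dim, feat_dim)   # single FC layer, as in the best ablation variant

    def classifier_weights(self, prototype):
        # Eq. 3 (sketch): w_c = f(p_c)
        return self.transfer(prototype)

    def prob_known(self, query_feats, prototype):
        # Assumed form of Eq. 4: sigmoid of the inner product between w_c and each query feature.
        w = self.classifier_weights(prototype)
        return torch.sigmoid(query_feats @ w)

# Usage: with support_feats of shape (K, D) from one class and query_feats of shape (Q, D),
# prototype = support_feats.mean(dim=0); p_known = OCMLHead(D).prob_known(query_feats, prototype)
```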

3.3 Few-Shot Open-Set

In the standard open-set setting we have access to an abundance of examples from known classes; however, in the few-shot open-set setup the known classes are only revealed in the meta-testing phase (as the model is trained on a separate, non-overlapping set of classes in meta-training). This means that the algorithm should be able to work on any given set of known categories. To address this issue, we utilize a divide-and-conquer strategy and divide the problem of detecting samples not belonging to any of the known categories into smaller problems of detecting samples not belonging to a single category (one-class problems). This allows meta-learning to be used more efficiently, as one-class learning depends only on positive examples from a single class, whereas multiclass open-set classification also depends on the composition of the task (which classes are known and how many of them there are). Additionally, few-shot one-class classification can be thought of as a special case of few-shot multiclass open-set classification, where in the N-way K-shot scenario N = 1. In order to adapt both OCML and Meta-BCE to the multiclass scenario (N > 1), we treat the few-shot open-set problem as an ensemble of few-shot one-class problems and modify the prediction equation for Meta-BCE:

(5)

and for OCML:

(6)
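As an illustration of this ensemble step, the sketch below combines per-class one-class probabilities (from Meta-BCE or OCML) with an underlying closed-set classifier. The use of a per-query maximum over classes and a fixed 0.5 threshold are assumptions standing in for the exact forms of Eqs. 5 and 6.

```python
import torch

def open_set_predict(one_class_probs, closed_set_logits, threshold=0.5):
    """one_class_probs: (Q, N) per-class one-class probabilities from Meta-BCE or OCML.
    closed_set_logits: (Q, N) scores from the underlying closed-set few-shot classifier."""
    max_prob, _ = one_class_probs.max(dim=1)          # best one-class score per query
    is_known = max_prob >= threshold                  # ensemble known/unknown decision
    closed_pred = closed_set_logits.argmax(dim=1)     # standard closed-set class prediction
    return torch.where(is_known, closed_pred, torch.full_like(closed_pred, -1))  # -1 marks "unknown"
```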

4 Experimental Results

4.1 Datasets and performance metrics

We conducted experiments on three benchmark datasets for few-shot learning: miniImageNet [vinyals2016matching], CUB-200-2011 [WahCUB_200_2011], and tieredImageNet [ren2018meta]. The miniImageNet dataset is a set of 100 categories (a subset of the ImageNet categories [russakovsky2015imagenet]) with 600 images per category. We follow the meta-training/meta-validation/meta-testing split of the 100 categories into 64/16/20 [ravi2016optimization]. The CUB-200-2011 dataset has 200 visual categories and we follow the meta-training/meta-validation/meta-testing split of Ye et al. [ye2020fewshot] into 100/50/50 categories, respectively. The tieredImageNet dataset has 608 visual categories and we follow the meta-training/meta-validation/meta-testing split of Ren et al. [ren2018meta] into 351/97/160 categories, respectively.

To evaluate all methods we have used three metrics in the few-shot one-class settings: accuracy, F1-score, and AUROC (Area Under ROC curve) score. To evaluate the methods in the few-shot open-set setting we have used four metrics: closed-set accuracy (dubbed here accuracy), normalized accuracy (NA), F1-open score, and AUROC score. For details on the performance metrics used here please see Appendix A.

4.2 Few-Shot One-Class

In Table 1 we present the results for our two proposed approaches (Meta-BCE and OCML) on the newly-introduced few-shot one-class task on the miniImageNet dataset [vinyals2016matching]. We compare our methods with benchmark one-class approaches such as DeepSVDD [pmlr-v80-ruff18a], One-Class SVM [scholkopf2000support], SVDD [tax2004support], DeepAnomaly [golan2018deep], and with the one-shot one-class approach introduced by Kozerawski and Turk [kozerawski2018clear] (CLEAR). As hypothesised, standard many-shot one-class approaches do not translate well to the few-shot setting. DeepSVDD [pmlr-v80-ruff18a] and DeepAnomaly provide only a relative ranking score for all examples, thus allowing only the AUROC score to be calculated, without accuracy or F1-score. Their AUROC scores are lower than those of SVDD [tax2004support] and OCSVM [scholkopf2000support], which would indicate that a deep network approach might require more training examples to work correctly. In the 1-shot setting, both OCSVM and SVDD completely overfit to the single examples, resulting in random classification performance, while CLEAR [kozerawski2018clear] has slightly better classification scores but a lower AUROC score. OCML has the best performance in the 1-shot setting, with the highest accuracy and F1-score and the second highest AUROC score. Meta-BCE has the second highest classification performance in the 1-shot setting and the highest AUROC score. In the 5-shot setting, Meta-BCE becomes the best method in both accuracy and F1-score, surpassing OCML. Both introduced methods (Meta-BCE and OCML) perform much better than all other approaches, with Meta-BCE performing best in the 5-shot setting and OCML in the 1-shot setting. We also provide an upper-bound accuracy obtained using FEAT [ye2020fewshot] in a supervised two-way classification setting, where the unknown class is treated as a known one with provided training examples. The calculated upper-bound accuracy shows there is still room for progress in both the 1-shot and 5-shot settings. The difference in performance between the Threshold baseline and Meta-BCE confirms that multiclass and one-class classification require different feature representations. The results on the CUB-200-2011 dataset [WahCUB_200_2011] and on tieredImageNet [ren2018meta] follow the above conclusions, with OCML achieving the best F1-score in the 1-shot setting on both CUB and tieredImageNet, Meta-BCE performing best in the 5-shot setting on CUB, and OCML performing best in the 5-shot setting on tieredImageNet. For details on the CUB-200-2011 and tieredImageNet results please see Appendix C.1.

Accuracy (%) F1-score AUROC
Method Arch. 1-shot
Proto Net [snell2017prototypical] + DeepSVDD [pmlr-v80-ruff18a] Conv64 - -
Proto Net [snell2017prototypical] + DeepAnomaly [golan2018deep] Conv64 - -
Proto Net [snell2017prototypical] + SVDD [tax2004support] Conv64
Proto Net [snell2017prototypical] + OCSVM [scholkopf2000support] Conv64
Proto Net [snell2017prototypical] + Threshold Conv64
CLEAR [kozerawski2018clear] Conv64
Proto Net [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Net [snell2017prototypical] + OCML [ours] Conv64
Upper-bound (supervised FEAT [ye2020fewshot]) Conv64 - -
5-shot
Proto Net [snell2017prototypical] + DeepSVDD [pmlr-v80-ruff18a] Conv64 - -
Proto Net [snell2017prototypical] + DeepAnomaly [golan2018deep] Conv64 - -
Proto Net [snell2017prototypical] + SVDD [tax2004support] Conv64
Proto Net [snell2017prototypical] + OCSVM [scholkopf2000support] Conv64
Proto Net [snell2017prototypical] + Threshold Conv64
Proto Net [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Net [snell2017prototypical] + OCML [ours] Conv64
Upper-bound (supervised FEAT [ye2020fewshot]) Conv64 - -
Table 1: Experimental results on miniImageNet dataset for few-shot one-class classification. The best results are shown in bold. Supervised two-class classification.

4.3 Few-Shot Open-Set

In Table 2 we provide the experimental results for the few-shot open-set task on the miniImageNet dataset [vinyals2016matching]. We compare our approaches with Gaussian Embedding (GaussE) [liu2020few] and PEELER [liu2020few] by Liu et al., who introduced the few-shot open-set problem. We also compare our method with existing (non-few-shot) open-set state-of-the-art methods such as OpenMax [bendale2016towards], Counterfactual [neal2018open], Entropic Open-Set Loss [dhamija2018reducing], and Objectosphere Loss [dhamija2018reducing]. Following Liu et al. [liu2020few] we perform experiments with a ResNet-18 network and, in accordance with standard few-shot learning practices, we add experiments with a Conv64 backbone as well.

Accuracy (%) NA (%) F1-open AUROC
Method Arch. 1-shot
GaussE [liu2020few] + OpenMax [bendale2016towards] Res18 - -
GaussE [liu2020few] + Counterfactual [neal2018open] Res18 - -
GaussE [liu2020few] Res18 - -
PEELER [liu2020few] Res18 - -
PEELER [liu2020few] + threshold Res18
PEELER [liu2020few] + Entropic Loss [dhamija2018reducing] Res18
PEELER [liu2020few] + Objectosphere [dhamija2018reducing] Res18
PEELER [liu2020few] + Meta-BCE [ours] Res18
PEELER [liu2020few] + OCML [ours] Res18
FEAT [ye2020fewshot] + threshold Res18
FEAT [ye2020fewshot] + Meta-BCE [ours] Res18
FEAT [ye2020fewshot] + OCML [ours] Res18

PEELER [liu2020few] + threshold Conv64
PEELER [liu2020few] + Entropic Loss [dhamija2018reducing] Conv64
PEELER [liu2020few] + Objectosphere [dhamija2018reducing] Conv64
PEELER [liu2020few] + Meta-BCE [ours] Conv64
PEELER [liu2020few] + OCML [ours] Conv64
FEAT [ye2020fewshot] + threshold Conv64
FEAT [ye2020fewshot] + Meta-BCE [ours] Conv64
FEAT [ye2020fewshot] + OCML [ours] Conv64


5-shot
GaussE [liu2020few] + OpenMax [bendale2016towards] Res18 - -
GaussE [liu2020few] + Counterfactual [neal2018open] Res18 - -
GaussE [liu2020few] Res18 - -
PEELER [liu2020few] Res18 - -
PEELER [liu2020few] + threshold Res18
PEELER [liu2020few] + Entropic Loss [dhamija2018reducing] Res18
PEELER [liu2020few] + Objectosphere [dhamija2018reducing] Res18
PEELER [liu2020few] + Meta-BCE [ours] Res18
PEELER [liu2020few] + OCML [ours] Res18
FEAT [ye2020fewshot] + threshold Res18
FEAT [ye2020fewshot] + Meta-BCE [ours] Res18
FEAT [ye2020fewshot] + OCML [ours] Res18

PEELER [liu2020few] + threshold Conv64
PEELER [liu2020few] + Entropic Loss [dhamija2018reducing] Conv64
PEELER [liu2020few] + Objectosphere [dhamija2018reducing] Conv64
PEELER [liu2020few] + Meta-BCE [ours] Conv64
PEELER [liu2020few] + OCML [ours] Conv64
FEAT [ye2020fewshot] + threshold Conv64
FEAT [ye2020fewshot] + Meta-BCE [ours] Conv64
FEAT [ye2020fewshot] + OCML [ours] Conv64

Table 2: Experimental results on miniImageNet dataset for few-shot -way open-set classification with open-set categories. The best results are shown in bold. Results from Liu et al. [liu2020few]

First of all, it is worth noticing that PEELER and Gaussian Embedding (GaussE), introduced by Liu et al. [liu2020few], do not provide a method for clearly differentiating between known and unknown examples at inference time; they only provide a relative AUROC score and the accuracy on closed-set (known) examples. Additionally, their method operates on softmax scores, making it unsuitable for the scenario when N = 1 (few-shot one-class). We have reproduced the PEELER approach using the original authors' implementation and augmented it with different existing methods of detecting unknown examples in the query set: a threshold method, Entropic Open-Set Loss [dhamija2018reducing], and Objectosphere Loss [dhamija2018reducing]. Out of all methods, Objectosphere Loss [dhamija2018reducing] performs the worst, obtaining the lowest normalized accuracy, F1-open score, and AUROC score in both the 1-shot and 5-shot settings with the ResNet-18 and Conv64 networks, and it results in a lower closed-set accuracy compared to the other methods. This confirms that methods performing well in standard open-set tasks do not necessarily transfer well to few-shot open-set tasks. In this specific scenario, Objectosphere [dhamija2018reducing] learns to produce features of lower magnitude for unseen classes; however, in the few-shot setting all classes in the meta-testing meta-set are considered unseen, since there is no overlap between the categories present in the meta-training and meta-testing meta-sets. Entropic Open-Set Loss [dhamija2018reducing] aims to increase the entropy for unknown examples, which seems to translate better to the few-shot setting than a simple threshold or Objectosphere [dhamija2018reducing] for both settings (1-shot and 5-shot) and both networks (ResNet-18 and Conv64); however, it results in a lower closed-set accuracy.

When we augment PEELER [liu2020few] with an ensemble of our proposed few-shot one-class methods (Meta-BCE or OCML), we can significantly increase the open-set performance without influencing the closed-set accuracy. The results confirm the conclusion from the few-shot one-class experiments: Meta-BCE results in better performance for larger values of K (per-category examples), and OCML results in better performance for smaller values of K. The great benefit of our approaches is that they do not degrade the closed-set accuracy (compared to Entropic Open-Set Loss [dhamija2018reducing] and Objectosphere [dhamija2018reducing]), as they work as separate modules trained on top of existing few-shot learning methods. Additionally, since OCML and Meta-BCE are trained as few-shot one-class methods, they do not require separate background (or unknown) categories present in the training set (contrary to PEELER [liu2020few], Entropic Open-Set Loss [dhamija2018reducing], Objectosphere [dhamija2018reducing], or OpenMax [bendale2016towards]).

Standard multiclass open-set methods such as PEELER, OpenMax, or Entropic Open-Set Loss cannot be adapted to the special case when the number of known categories (N) drops to one (the few-shot one-class setting), since all of them are softmax-based, which prevents them from working in the single-category scenario. Our proposed approaches tackle both problems with superior performance. Additionally, in the few-shot open-set setting, existing meta-learning models are tailored by their training process to the closed-set setting and thus achieve worse performance in the open-set setting, as seen in Table 2 through the comparison of Prototypical Networks [snell2017prototypical] or FEAT [ye2020fewshot] when using the multiclass feature space (“+ threshold”) versus a one-class feature space (Meta-BCE).

Since both Meta-BCE and OCML can be added to any existing few-shot closed-set approach to allow it to work in open-set settings, we combine them with the state-of-the-art few-shot closed-set method FEAT, proposed by Ye et al. [ye2020fewshot]. FEAT combined with Meta-BCE or OCML significantly outperforms all other existing methods in all performance metrics, with OCML performing best in the 1-shot setting and Meta-BCE in the 5-shot setting (with ResNet-18). The results on the CUB-200-2011 dataset [WahCUB_200_2011] and on tieredImageNet [ren2018meta] uphold these conclusions, as FEAT + OCML achieves the best performance in the 1-shot setting on both datasets, FEAT + Meta-BCE performs best in the 5-shot setting on the CUB dataset, and FEAT + OCML achieves the best 5-shot performance on the tieredImageNet dataset. For details on the CUB-200-2011 and tieredImageNet results, please see Appendix C.2.

5 Conclusions

We proposed two novel methods for few-shot one-class classification (Meta-BCE and OCML) that can augment any existing few-shot learning method (such as PEELER [liu2020few] or FEAT [ye2020fewshot]) to work in the few-shot open-set setting. These methods do not require retraining of the existing few-shot method, do not degrade its performance in the closed-set setting, and (contrary to existing open-set methods) do not require separate background categories during the training phase. Our approaches surpass the state-of-the-art methods in few-shot one-class and few-shot multiclass open-set classification, with Meta-BCE performing better when the number of per-category examples is higher and OCML performing best for smaller numbers of per-category examples. Training high-quality models quickly and efficiently with smaller amounts of data, and with the ability to work in an open-set setting, is an important future direction for machine learning.

References

Appendix A Performance metrics

Few-Shot One-Class metrics. For the few-shot one-class experiments we have used multiple metrics to capture the performance of tested models on the meta-testing meta-set: classification accuracy, AUROC (Area Under ROC curve) score, and F1-score.

Few-Shot Open-Set metrics. To obtain a fair comparison with Liu et al. [liu2020few] we use the same metrics in the few-shot open-set experiments: closed-set accuracy (dubbed here accuracy) and AUROC (Area Under ROC curve) score. However, these measures alone are not sufficient to capture the quality of open-set methods (the impact of unknown examples on the classification performance on known, closed-set samples), and for this reason we also utilize two metrics commonly used in open-set classification, introduced by Júnior et al. [junior2017nearest]: the F1-open score and Normalized Accuracy. The F1-open score can be calculated using the following formula:

(7)
(8)

And Normalized Accuracy is:

NA = \lambda \cdot AKS + (1 - \lambda) \cdot AUS \qquad (9)

where λ is a weight hyperparameter balancing the AKS and AUS (set to a fixed value in all experiments). The AKS is the Accuracy on Known Samples:

AKS = \frac{TP + TN}{TP + TN + FP + FN} \qquad (10)

and AUS is the Accuracy on Unknown Samples:

AUS = \frac{TU}{TU + FU} \qquad (11)

where TP is the number of True Positives, TN is the number of True Negatives, FP is the number of False Positives, FN is the number of False Negatives, TU is the number of True Unknowns, and FU is the number of False Unknowns.

The proposed way of measuring AKS (Eq. 10) in a multiclass scenario weighs the number of True Negatives (TN) heavily, thus skewing the quality of the metric. We propose to utilize the following way of measuring the AKS, which is more frequently used in multiclass classification:

AKS = \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\big[y_i = \hat{y}_i\big] \qquad (12)

where N is the number of known examples, y_i is the true label of the i-th example, ŷ_i is the predicted label for the i-th example, and the indicator function returns 1 if the prediction matches the ground truth and 0 otherwise.
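For reference, the sketch below computes normalized accuracy from per-query predictions using the multiclass AKS of Eq. 12 and the AUS of Eq. 11. The balancing weight value (lam=0.5) and the use of -1 as the label for unknown queries are assumptions made for this example.

```python
import numpy as np

def normalized_accuracy(y_true, y_pred, unknown_label=-1, lam=0.5):
    """NA (Eq. 9) from true labels and predictions; unknown queries carry unknown_label."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    known = y_true != unknown_label
    aks = np.mean(y_pred[known] == y_true[known])      # Eq. 12: multiclass accuracy on known queries
    aus = np.mean(y_pred[~known] == unknown_label)     # Eq. 11: TU / (TU + FU)
    return lam * aks + (1.0 - lam) * aus               # Eq. 9 with balancing weight lambda
```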

The Open-Set Classification Rate Curve (OSCRC) metric introduced by Dhamija et al. [dhamija2018reducing] assumes that the open-set method does not provide a clear classification decision (whether a novel example is known or unknown) and scans through different softmax threshold values in order to produce the curve. Our approaches provide a clear classification decision (a single operating point on the OSCRC curve); thus, calculating the OSCRC metric is not possible.

Appendix B Testing procedure

B.1 Few-Shot One-Class

Every method is tested using the same testing procedure. The procedure consists of testing episodes, each with a support set in the form of an N-way K-shot setting and a query set containing unseen examples from the category present in the support set as well as examples from another, unknown category. All episodes contain data from the meta-testing meta-set. We report the average performance with a confidence interval. In all experiments the episode parameters were kept fixed.

B.2 Few-Shot Open-Set

Every method is tested using the same testing procedure. The procedure consists of testing episodes, each with a support set in the form of an N-way K-shot setting and a query set containing per-category unseen examples from the categories present in the support set as well as per-category examples from unknown categories. All episodes contain data from the meta-testing meta-set. We report the average performance with a confidence interval. In all experiments the episode parameters were kept fixed.

Appendix C Additional experiments: CUB-200-2011 and tieredImageNet

C.1 Few-Shot One-Class

In Tables 3 and 4 we present the results for the few-shot one-class experiments on the CUB-200-2011 dataset [WahCUB_200_2011] and the tieredImageNet dataset [ren2018meta], respectively. The performance of all methods confirms the conclusions from the miniImageNet experiments. Standard one-class approaches (DeepSVDD [pmlr-v80-ruff18a], DeepAnomaly [golan2018deep], OCSVM [scholkopf2000support], SVDD [tax2004support]) do not work well in the few-shot setting (especially when considering the accuracy and F1-score metrics in the 1-shot setting). Prototypical Networks [snell2017prototypical] with OCML perform the best in the 1-shot setting in accuracy, F1-score, and AUROC score on both the CUB and tieredImageNet datasets. The supervised upper-bound calculated using FEAT [ye2020fewshot] still leaves a lot of room for improvement on both datasets. In the 5-shot setting, on the CUB dataset OCML has the best accuracy and AUROC score, while Meta-BCE has the best F1-score. On tieredImageNet OCML has the best accuracy, F1-score, and AUROC score. The supervised two-class upper-bound of FEAT [ye2020fewshot] rises higher as well on both datasets.

Accuracy (%) F1-score AUROC
Method Arch. 1-shot
Proto Net [snell2017prototypical] + DeepSVDD [pmlr-v80-ruff18a] Conv64 - -
Proto Net [snell2017prototypical] + DeepAnomaly [golan2018deep] Conv64 - -
Proto Net [snell2017prototypical] + SVDD [tax2004support] Conv64
Proto Net [snell2017prototypical] + OCSVM [scholkopf2000support] Conv64
Proto Net [snell2017prototypical] + Threshold Conv64
CLEAR [kozerawski2018clear] Conv64
Proto Net [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Net [snell2017prototypical] + OCML [ours] Conv64
Upper-bound (supervised FEAT [ye2020fewshot]) Conv64 - -
5-shot
Proto Net [snell2017prototypical] + DeepSVDD [pmlr-v80-ruff18a] Conv64 - -
Proto Net [snell2017prototypical] + DeepAnomaly [golan2018deep] Conv64 - -
Proto Net [snell2017prototypical] + SVDD [tax2004support] Conv64
Proto Net [snell2017prototypical] + OCSVM [scholkopf2000support] Conv64
Proto Net [snell2017prototypical] + Threshold Conv64
Proto Net [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Net [snell2017prototypical] + OCML [ours] Conv64
Upper-bound (supervised FEAT [ye2020fewshot]) Conv64 - -
Table 3: Experimental results on CUB-200-2011 dataset for few-shot one-class classification. The best results are shown in bold. Supervised two-class classification.
Accuracy (%) F1-score AUROC
Method Arch. 1-shot
Proto Net [snell2017prototypical] + DeepSVDD [pmlr-v80-ruff18a] Conv64 - -
Proto Net [snell2017prototypical] + DeepAnomaly [golan2018deep] Conv64 - -
Proto Net [snell2017prototypical] + SVDD [tax2004support] Conv64
Proto Net [snell2017prototypical] + OCSVM [scholkopf2000support] Conv64
Proto Net [snell2017prototypical] + Threshold Conv64
CLEAR [kozerawski2018clear] Conv64
Proto Net [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Net [snell2017prototypical] + OCML [ours] Conv64
Upper-bound (supervised FEAT [ye2020fewshot]) ResNet12 - -
5-shot
Proto Net [snell2017prototypical] + DeepSVDD [pmlr-v80-ruff18a] Conv64 - -
Proto Net [snell2017prototypical] + DeepAnomaly [golan2018deep] Conv64 - -
Proto Net [snell2017prototypical] + SVDD [tax2004support] Conv64
Proto Net [snell2017prototypical] + OCSVM [scholkopf2000support] Conv64
Proto Net [snell2017prototypical] + Threshold Conv64
Proto Net [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Net [snell2017prototypical] + OCML [ours] Conv64
Upper-bound (supervised FEAT [ye2020fewshot]) ResNet12 - -
Table 4: Experimental results on tieredImageNet dataset for few-shot one-class classification. The best results are shown in bold. Supervised two-class classification.

C.2 Few-Shot Open-Set

In Tables 5 and 6 we present the performance comparison of few-shot open-set methods on the CUB-200-2011 dataset [WahCUB_200_2011] and tieredImageNet [ren2018meta]. We can see that both Meta-BCE and OCML do not degrade the closed-set accuracy (dubbed accuracy here); hence methods initially performing well in this setting (e.g., FEAT [ye2020fewshot]) keep their superior performance. On the CUB dataset, OCML combined with FEAT [ye2020fewshot] performs the best in the 1-shot setting in terms of normalized accuracy and F1-open score, while Prototypical Networks [snell2017prototypical] with OCML achieve the best AUROC score. In the 5-shot setting the best performing method is Meta-BCE in terms of normalized accuracy and F1-open score; similarly to the 1-shot setting, Prototypical Networks [snell2017prototypical] with OCML achieve the best AUROC score. On the tieredImageNet dataset, OCML combined with FEAT [ye2020fewshot] performs the best in both the 1-shot and 5-shot settings in terms of normalized accuracy and F1-open score.

Accuracy (%) NA (%) F1-open AUROC
Method Arch. 1-shot
PEELER [liu2020few] Conv64 - -
PEELER [liu2020few] + threshold Conv64
Proto Nets [snell2017prototypical] + threshold Conv64
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Nets [snell2017prototypical] + OCML [ours] Conv64
FEAT [ye2020fewshot] + threshold Conv64
FEAT [ye2020fewshot] + Meta-BCE [ours] Conv64
FEAT [ye2020fewshot] + OCML [ours] Conv64

5-shot
Proto Nets [snell2017prototypical] + OpenMax [bendale2016towards] Conv64
PEELER [liu2020few] Conv64 - -
PEELER [liu2020few] + threshold Conv64
Proto Nets [snell2017prototypical] + threshold Conv64
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Nets [snell2017prototypical] + OCML [ours] Conv64
FEAT [ye2020fewshot] + threshold Conv64
FEAT [ye2020fewshot] + Meta-BCE [ours] Conv64
FEAT [ye2020fewshot] + OCML [ours] Conv64
Table 5: Experimental results on CUB-200-2011 dataset for few-shot -way open-set classification with open-set categories. The best results are shown in bold.
Accuracy (%) NA (%) F1-open AUROC
Method Arch. 1-shot
PEELER [liu2020few] Conv64 - -
Proto Nets [snell2017prototypical] + threshold Conv64
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Nets [snell2017prototypical] + OCML [ours] Conv64
FEAT [ye2020fewshot] + threshold ResNet12
FEAT [ye2020fewshot] + Meta-BCE [ours] ResNet12
FEAT [ye2020fewshot] + OCML [ours] ResNet12

5-shot
Proto Nets [snell2017prototypical] + OpenMax [bendale2016towards] Conv64
PEELER [liu2020few] Conv64 - -
Proto Nets [snell2017prototypical] + threshold Conv64
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Nets [snell2017prototypical] + OCML [ours] Conv64
FEAT [ye2020fewshot] + threshold ResNet12
FEAT [ye2020fewshot] + Meta-BCE [ours] ResNet12
FEAT [ye2020fewshot] + OCML [ours] ResNet12
Table 6: Experimental results on tieredImageNet dataset for few-shot -way open-set classification with open-set categories. The best results are shown in bold.

Appendix D Implementation details

We have utilized PyTorch [NEURIPS2019_9015] to train all models. In order to reproduce results with PEELER [liu2020few] and FEAT [ye2020fewshot] we used the original implementations supplied by the authors of PEELER (https://github.com/BoLiu-SVCL/meta-open) and FEAT (https://github.com/Sha-Lab/FEAT). We also used the Prototypical Networks implementation provided by the authors of FEAT. Meta-BCE uses a separate branch of the main feature extractor, which in the case of the Conv64 network is the last convolutional block (the last 2 convolutional layers). OCML uses a transfer learning module which in the case of the Conv64 architecture is a single fully connected linear layer with input and output dimensions of 1600. As both our methods augment existing few-shot closed-set approaches, all training hyperparameters are consistent with the original hyperparameters introduced by the authors of the closed-set approaches (PEELER, FEAT, or Prototypical Networks). We will release the code necessary to reproduce the results of this paper, along with all pre-trained models, upon publication.
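A minimal sketch of the two modules described above for the Conv64 backbone is given below. The 1600-dimensional feature size follows Table 7, while the internal composition of the convolutional block standing in for the Meta-BCE branch is an assumption.

```python
import torch.nn as nn

# OCML transfer learning module for the Conv64 backbone: a single 1600 -> 1600 fully connected layer.
ocml_transfer_module = nn.Linear(1600, 1600)

# Stand-in for the Meta-BCE separate branch (the last Conv64 block, i.e. the last two convolutional
# layers); the exact layer composition of that block is assumed here, not taken from the paper.
meta_bce_branch = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 64, kernel_size=3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
)
```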

Appendix E Ablation studies

E.1 OCML transfer learning module architecture

architecture name Layer 1 Layer 2
1 layer FC layer (1600, 1600) -
2 layers, middle dim 100 FC layer (1600, 100) FC layer (100, 1600)
2 layers, middle dim 500 FC layer (1600, 500) FC layer (500, 1600)
2 layers, middle dim 1000 FC layer (1600, 1000) FC layer (1000, 1600)
Table 7: Tested architectures for the transfer learning module for OCML approach.
(a) One-Class Accuracy (b) AUROC
Figure 3: Ablation study results for different architectures of the transfer learning module of the OCML approach. Results for four different architectures are presented for different K-shot scenarios (different sizes K of the support set in the meta-testing meta-set).

The OCML method utilizes a transfer learning module transforming a class-level representation into the weights of a classifier for that category. We performed an ablation study comparing the impact of the architecture of the transfer learning module on the performance of the network. Table 7 lists the four tested architectures of the transfer learning module. All four architectures consist only of fully connected (FC) layers, with input and output dimensions given in parentheses. The ablation study is performed on the miniImageNet dataset with the Conv64 network, which has a feature space dimensionality of 1600; hence the basic architecture has only a single FC layer keeping the dimensionality the same. With the three additional architectures we tested whether adding more layers to the transfer learning module is beneficial and what the impact of the latent space dimensionality is (middle dim in Table 7). All experiments were repeated five times and the average performance along with the confidence interval is reported.

The results of the experiments for all four settings from Table 7 are shown in Figure 3. On the left we can see the impact of the architecture on the accuracy of the method for various sizes of the support set category in the meta-testing meta-set (various K in the K-shot scenario). On the right of Figure 3 we can see the impact of the architecture on the AUROC score. The single-layer architecture achieves the best performance (both accuracy and AUROC score) across all K. Among the two-layer architectures, the one with a latent dimensionality of 100 achieves the lowest performance across all K, which might indicate that such a low-dimensional space is not enough to reliably compress the necessary information. Two-layer architectures with a higher latent dimensionality (500 and 1000) have similar performance, although lower than the single-layer architecture. The performance of all methods increases with the number of examples (K) in the support set of the meta-testing meta-set.

E.2 Meta-BCE separate branch

Accuracy (%) NA (%) F1-open AUROC
Method Arch. 1-shot
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64

5-shot
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64
Proto Nets [snell2017prototypical] + Meta-BCE [ours] Conv64
Table 8: Experimental results on CUB-200-2011 dataset for few-shot -way open-set classification with open-set categories.

The main version of Meta-BCE uses an auxiliary branch to produce the features for the one-class classifier. We can also modify Meta-BCE to use the main branch of the feature extractor to calculate the one-class feature vectors. In order to do this, we use a separate module to transform the multiclass classification feature vector into a Meta-BCE one-class classification feature vector and use it for one-class predictions:

(13)

We refer to this version as the main-branch variant of Meta-BCE. Table 8 compares the two approaches on the CUB-200-2011 dataset [WahCUB_200_2011]. Neither method degrades the closed-set accuracy; however, the results for Meta-BCE with a separate branch for calculating the embeddings are significantly better than those for the main-branch variant. This pertains to the normalized accuracy and F1-open score in both the 1-shot and 5-shot settings. However, the AUROC scores are slightly better when using the main branch. This might indicate that the multiclass embeddings obtained in the main branch of the feature extractor help slightly with the overall ranking of known vs. unknown examples (as indicated by the AUROC score), but a separate feature extraction branch leading to a dedicated one-class embedding yields better separability of the feature space and therefore better classification metrics (normalized accuracy and F1-open score).
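As a small illustration, the sketch below shows how such a main-branch variant could be wired: a separate module (assumed here to be a single linear layer, cf. Eq. 13) maps the main multiclass embedding into a one-class embedding that is then used for the Meta-BCE prediction. The class name and the layer choice are illustrative assumptions.

```python
import torch.nn as nn

class MainBranchMetaBCE(nn.Module):
    """Maps the multiclass embedding of the main branch to a one-class embedding (cf. Eq. 13)."""
    def __init__(self, feat_dim=1600):
        super().__init__()
        self.to_one_class = nn.Linear(feat_dim, feat_dim)  # assumed single-layer transform

    def forward(self, multiclass_feats):
        return self.to_one_class(multiclass_feats)         # one-class features for the Meta-BCE prediction
```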

E.3 Impact of the K-shot test setting

The number of per-category examples (K) has a big effect on the performance of every few-shot approach. In Figure 4 we compare the studied approaches, showcasing the impact of the number of per-category examples K on four performance metrics (closed-set accuracy, AUROC score, normalized accuracy, and F1-open score) for N-way K-shot classification with additional open-set (unknown) categories. The results come from experiments on the miniImageNet dataset [vinyals2016matching].

(a) Closed-Set Accuracy (b) AUROC
(c) Normalized Accuracy (d) F1-open score
Figure 4: Impact of K (the number of per-category examples) on the performance of N-way K-shot classification with additional open-set categories.

In Figure 4(a) we can see the comparison between methods in terms of closed-set accuracy. Methods based on PEELER [liu2020few] have lower accuracy for all values of K than those based on FEAT [ye2020fewshot]. Additionally, it is clear that both our methods (Meta-BCE and OCML) do not degrade the closed-set accuracy, in contrast to Entropic Open-Set Loss [dhamija2018reducing]. In Figure 4(b) we can see the impact on the AUROC score. FEAT [ye2020fewshot] + OCML has the best performance for all values of K, closely followed by PEELER [liu2020few] + OCML and FEAT [ye2020fewshot] + Meta-BCE. Threshold-based methods and Entropic Open-Set Loss [dhamija2018reducing] have the lowest AUROC scores across all values of K. In Figure 4(c) we can see the normalized accuracy values for the tested methods. Notable are the high values for the OCML methods (with PEELER [liu2020few] and FEAT [ye2020fewshot]) for small K and the very large gain of both Meta-BCE methods (with PEELER [liu2020few] and FEAT [ye2020fewshot]) as K increases, eventually surpassing the OCML performance. Very similar behavior can be observed in Figure 4(d) for the F1-open score.

E.4 Impact of the N-way test setting

We have also performed a more thorough analysis of the studied methods, varying both the number of categories N and the number of per-category examples K in the open-set setting. The results of this analysis are provided below in Figure 5 (closed-set accuracy), Figure 6 (AUROC score), Figure 7 (normalized accuracy), and Figure 8 (F1-open score). In all experiments the number of unknown categories was equal to the number of known categories (N).

When analyzing the closed-set accuracy (Figure 5) we can see that for all methods, as expected, a higher number of per-category examples (K) increases the performance and a higher number of categories (N) decreases it. Threshold, Meta-BCE, and OCML do not impact the closed-set performance (thus their plots are comparable), while Entropic Open-Set Loss [dhamija2018reducing] reduces the closed-set accuracy of PEELER [liu2020few]. The AUROC score comparison (Figure 6) indicates that Meta-BCE and OCML have much higher scores for all values of N and K than the original PEELER [liu2020few] or PEELER with Entropic Open-Set Loss [dhamija2018reducing]. The performance can be increased further by substituting PEELER [liu2020few] with FEAT [ye2020fewshot] as the closed-set training method.

The figures showcasing the classification metrics (normalized accuracy in Figure 7 and F1-open score in Figure 8) indicate a few interesting properties. Both PEELER [liu2020few] and PEELER with Entropic Open-Set Loss [dhamija2018reducing] have low scores, and for some values of N and K they classify all query examples as unknown. The reason for this behavior is the discrepancy between the N-way K-shot training setting and the test settings showcased in these figures, combined with the fact that both methods base their decision on whether a sample is known or unknown on a threshold learned during the training phase. We can see that such a problem does not occur with any other method. Another important property is the difference in behavior between OCML and Meta-BCE. OCML starts with high performance for small K and slowly increases it as K grows. Meta-BCE, on the other hand, has low initial performance for small K, but gains very rapidly in both normalized accuracy and F1-open score as K increases, eventually achieving normalized accuracy higher than OCML, higher than possible with simple thresholding, and higher than with Entropic Open-Set Loss [dhamija2018reducing].

(a) PEELER + threshold (b) PEELER + Entropic Loss (c) PEELER + Meta-BCE
(d) PEELER + OCML (e) FEAT + threshold (f) FEAT + Meta-BCE
(g) FEAT + OCML
Figure 5: Closed-set accuracy
(a) PEELER + threshold (b) PEELER + Entropic Loss (c) PEELER + Meta-BCE
(d) PEELER + OCML (e) FEAT + threshold (f) FEAT + Meta-BCE
(g) FEAT + OCML
Figure 6: AUROC score
(a) PEELER + threshold (b) PEELER + Entropic Loss (c) PEELER + Meta-BCE
(d) PEELER + OCML (e) FEAT + threshold (f) FEAT + Meta-BCE
(g) FEAT + OCML
Figure 7: Normalized accuracy
(a) PEELER + threshold (b) PEELER + Entropic Loss (c) PEELER + Meta-BCE
(d) PEELER + OCML (e) FEAT + threshold (f) FEAT + Meta-BCE
(g) FEAT + OCML
Figure 8: F1-open score