Learning Bounds for Open-Set Learning

by Zhen Fang, et al.

Traditional supervised learning aims to train a classifier in the closed-set world, where training and test samples share the same label space. In this paper, we target a more challenging and realistic setting: open-set learning (OSL), where some test samples come from classes that are unseen during training. Although researchers have designed many methods from the algorithmic perspective, few provide generalization guarantees on their ability to achieve consistent performance across different training samples drawn from the same distribution. Motivated by transfer learning and probably approximately correct (PAC) theory, we make a bold attempt to study OSL by proving a generalization error bound: given training samples of size n, the estimation error converges at rate O_p(1/√n). This is the first study to provide a generalization bound for OSL, which we obtain by theoretically investigating the risk of the target classifier on unknown classes. Guided by our theory, we propose a novel algorithm, called auxiliary open-set risk (AOSR), to address the OSL problem. Experiments verify the efficacy of AOSR. The code is available at github.com/Anjin-Liu/Openset_Learning_AOSR.
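For readers unfamiliar with the O_p(1/√n) rate, a generic PAC-style bound of this shape looks as follows. This is an illustrative sketch only, not the paper's exact theorem; the symbols R (expected risk), R̂_n (empirical risk on n samples), h (hypothesis), and δ (confidence parameter) are generic placeholders.

```latex
% Illustrative PAC-style generalization bound (sketch, not the paper's theorem).
% With probability at least 1 - \delta over the draw of n training samples:
R(h) \;\le\; \widehat{R}_n(h) \;+\; O\!\left(\sqrt{\frac{\log(1/\delta)}{n}}\right)
```

The O_p(1/√n) estimation error in the abstract expresses the same qualitative behavior: the gap between empirical and true risk shrinks at rate 1/√n as the training set grows.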

