Complementary-Label Learning for Arbitrary Losses and Models

10/10/2018
by   Takashi Ishida, et al.

In contrast to the standard classification paradigm, where the true (or possibly noisy) class label is given for each training pattern, complementary-label learning uses training patterns equipped only with a complementary label, which specifies one of the classes that the pattern does not belong to. The seminal paper on complementary-label learning proposed an unbiased estimator of the classification risk that can be computed only from complementarily labeled data. However, it required a restrictive condition on the loss function, making it impossible to use popular losses such as the softmax cross-entropy loss. Recently, another formulation with the softmax cross-entropy loss was proposed with a consistency guarantee. However, this formulation does not explicitly involve a risk estimator, so model/hyper-parameter selection by cross-validation is not possible; we would need additional ordinarily labeled data for validation, which is not available in the current setup. In this paper, we give a novel general framework of complementary-label learning and derive an unbiased risk estimator for arbitrary losses and models. We further improve the risk estimator by a non-negative correction and demonstrate its superiority through experiments.
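To make the abstract's "unbiased risk estimator computed only from complementarily labeled data" concrete, the estimator in this line of work (to the best of my reading of Ishida et al.) takes the form R̂ = (1/n) Σᵢ [ −(K−1) ℓ(f(xᵢ), ȳᵢ) + Σⱼ ℓ(f(xᵢ), j) ], where ȳᵢ is the complementary label and ℓ is an ordinary multi-class loss. The NumPy sketch below is an illustrative implementation of that form with the softmax cross-entropy loss; the function names and the NumPy setting are my own, not the authors' code, and the paper's additional non-negative correction (clamping class-wise partial risks at zero) is not shown.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def unbiased_comp_risk(logits, comp_labels, num_classes):
    """Unbiased classification-risk estimate from complementary labels.

    Implements (1/n) * sum_i [ -(K-1) * ce(f(x_i), ybar_i)
                               + sum_j ce(f(x_i), j) ]
    with ce = softmax cross-entropy. Illustrative sketch only.
    """
    K = num_classes
    loss_all = -log_softmax(logits)                 # (n, K): CE against every class
    loss_bar = loss_all[np.arange(len(comp_labels)), comp_labels]
    return np.mean(-(K - 1) * loss_bar + loss_all.sum(axis=1))

# Demo: with uniform logits every per-class loss equals log(K), so the
# estimator collapses to -(K-1)*log(K) + K*log(K) = log(K), matching the
# ordinary cross-entropy risk of a uniform predictor.
uniform_risk = unbiased_comp_risk(np.zeros((4, 3)), np.array([0, 1, 2, 0]), 3)
```

A useful sanity check of unbiasedness: for a fixed example with true label y, averaging the estimator over all K−1 possible complementary labels recovers the ordinary loss ℓ(f(x), y) exactly, which is why the expectation over complementarily labeled data equals the ordinary classification risk.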

Related research:
- Multi-Complementary and Unlabeled Learning for Arbitrary Losses and Models (01/13/2020)
- Learning from Multiple Complementary Labels (12/30/2019)
- Learning from Complementary Labels (05/22/2017)
- A Universal Unbiased Method for Classification from Aggregate Observations (06/20/2023)
- On Codomain Separability and Label Inference from (Noisy) Loss Functions (07/07/2021)
- Implementing the ICE Estimator in Multilayer Perceptron Classifiers (07/13/2020)
- von Mises-Fisher Loss: An Exploration of Embedding Geometries for Supervised Learning (03/29/2021)
