Estimating Multi-label Accuracy using Labelset Distributions

09/09/2022
by   Laurence A. F. Park, et al.

A multi-label classifier estimates the binary label state (relevant vs. irrelevant) of each of a set of concept labels for any given instance. Probabilistic multi-label classifiers provide a predictive posterior distribution over all possible labelset combinations of such label states (the powerset of labels), from which we can obtain the best estimate simply by selecting the labelset with the largest expected accuracy over that distribution. For example, to maximize exact match accuracy, we select the mode of the distribution. But how does this relate to the confidence we may have in such an estimate? Confidence is an important element of real-world applications of multi-label classifiers (as in machine learning in general) and is an important ingredient in explainability and interpretability. However, it is not obvious how to provide confidence in the multi-label context with respect to a particular accuracy metric, nor is it clear how to provide a confidence measure that correlates well with the expected accuracy, which would be most valuable in real-world decision making. In this article we estimate the expected accuracy as a surrogate for confidence, for a given accuracy metric. We hypothesise that the expected accuracy can be estimated from the multi-label predictive distribution, and we examine seven candidate functions for their ability to do so. We found that three of these correlate with expected accuracy and are robust. Further, we determined that each candidate function can be used separately to estimate expected Hamming similarity, but that a combination of the candidates was best for expected Jaccard index and exact match.
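To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of picking the labelset that maximizes expected accuracy over a predictive labelset distribution. The toy distribution, function names, and the choice of brute-force enumeration over the powerset are all illustrative assumptions; it shows how the optimal estimate depends on the metric, e.g. the mode is optimal for exact match but not necessarily for Hamming similarity:

```python
from itertools import product

def exact_match(y, z):
    """1 if the two labelsets are identical, else 0."""
    return float(y == z)

def hamming_similarity(y, z):
    """Fraction of label positions on which the two labelsets agree."""
    return sum(a == b for a, b in zip(y, z)) / len(y)

def expected_accuracy(candidate, dist, metric):
    """Expected value of metric(candidate, y) under the labelset distribution."""
    return sum(p * metric(candidate, y) for y, p in dist.items())

def best_labelset(dist, metric, n_labels):
    """Enumerate the label powerset and return the labelset with
    the largest expected accuracy under the given metric."""
    candidates = product((0, 1), repeat=n_labels)
    return max(candidates, key=lambda c: expected_accuracy(c, dist, metric))

# Illustrative predictive distribution over labelsets for 2 labels.
dist = {(1, 0): 0.40, (0, 1): 0.35, (1, 1): 0.25}

# Exact match is maximized by the mode of the distribution: (1, 0).
print(best_labelset(dist, exact_match, 2))

# Hamming similarity is maximized by thresholding the per-label
# marginals (P(l1=1)=0.65, P(l2=1)=0.60), giving (1, 1) instead.
print(best_labelset(dist, hamming_similarity, 2))
```

Note that the two metrics select different labelsets from the same distribution, which is why a confidence surrogate must be tied to a particular accuracy metric, as the article argues.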


