ZLPR: A Novel Loss for Multi-label Classification

08/05/2022
by   Jianlin Su, et al.
In the era of deep learning, loss functions determine the range of tasks available to models and algorithms. To support the application of deep learning to multi-label classification (MLC) tasks, we propose the ZLPR (zero-bounded log-sum-exp & pairwise rank-based) loss in this paper. Compared to other rank-based losses for MLC, ZLPR can handle problems in which the number of target labels is uncertain, which in this respect makes it as capable as the two other strategies often used in MLC, namely binary relevance (BR) and label powerset (LP). Additionally, ZLPR takes the correlation between labels into consideration, which makes it more comprehensive than BR methods. In terms of computational complexity, ZLPR is competitive with BR methods because its prediction is also label-independent, so it takes less time and memory than LP methods. Our experiments demonstrate the effectiveness of ZLPR on multiple benchmark datasets under multiple evaluation metrics. Moreover, we propose a soft version of ZLPR and the corresponding KL-divergence computation, which makes it possible to apply regularization tricks such as label smoothing to enhance the generalization of models.
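To make the idea concrete, here is a minimal NumPy sketch of a zero-bounded log-sum-exp pairwise rank loss in the spirit the abstract describes. The exact formulation below (and the function name `zlpr_loss`) is an assumption based on the loss's name and stated properties, not a verbatim reproduction of the authors' code.

```python
import numpy as np

def zlpr_loss(scores: np.ndarray, labels: np.ndarray) -> float:
    """Sketch of a zero-bounded log-sum-exp pairwise rank loss (assumed form).

    scores: shape (n_labels,), raw logits for each label.
    labels: shape (n_labels,), entries in {0, 1}.
    """
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    # The appended 0.0 is the "zero bound": it anchors a threshold at zero,
    # pushing positive-label scores above 0 and negative-label scores below 0.
    pos_term = np.logaddexp.reduce(np.concatenate([[0.0], -pos]))
    neg_term = np.logaddexp.reduce(np.concatenate([[0.0], neg]))
    return float(pos_term + neg_term)
```

At inference time, every label whose score exceeds 0 would be predicted positive, so the number of predicted labels is decided per instance rather than fixed in advance; prediction remains label-independent, matching the computational-complexity claim in the abstract.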


Related research

06/23/2020
Learning Gradient Boosted Multi-label Classification Rules
In multi-label classification, where the evaluation of predictions is le...

07/22/2020
Evolving Multi-label Classification Rules by Exploiting High-order Label Correlation
In multi-label classification tasks, each problem instance is associated...

12/13/2021
Simple and Robust Loss Design for Multi-Label Learning with Missing Labels
Multi-label learning in the presence of missing labels (MLML) is a chall...

10/24/2017
A Correction Method of a Binary Classifier Applied to Multi-label Pairwise Models
In this work, we addressed the issue of applying a stochastic classifier...

01/02/2019
Multi-Label Adversarial Perturbations
Adversarial examples are delicately perturbed inputs, which aim to misle...

02/02/2021
Rank-Consistency Deep Hashing for Scalable Multi-Label Image Search
As hashing becomes an increasingly appealing technique for large-scale i...

09/23/2021
Unbiased Loss Functions for Multilabel Classification with Missing Labels
This paper considers binary and multilabel classification problems in a ...
