Asymmetric Loss For Multi-Label Classification

09/29/2020
by Emanuel Ben-Baruch, et al.

Pictures of everyday life are inherently multi-label in nature. Hence, multi-label classification is commonly used to analyze their content. In typical multi-label datasets, each picture contains only a few positive labels and many negative ones. This positive-negative imbalance can lead to under-emphasizing gradients from positive labels during training, resulting in poor accuracy. In this paper, we introduce a novel asymmetric loss ("ASL") that operates differently on positive and negative samples. The loss dynamically down-weights the importance of easy negative samples, causing the optimization process to focus more on the positive samples, and also makes it possible to discard mislabeled negative samples. We demonstrate how ASL leads to a more "balanced" network, with increased average probabilities for positive samples, and show how this balance translates into better mAP scores compared to commonly used losses. Furthermore, we offer a method that can dynamically adjust the level of asymmetry throughout the training. With ASL, we reach new state-of-the-art results on three common multi-label datasets, including 86.6% on MS-COCO. We also demonstrate ASL's applicability to other tasks, such as fine-grain single-label classification and object detection. ASL is effective, easy to implement, and does not increase the training time or complexity. Implementation is available at: https://github.com/Alibaba-MIIL/ASL.
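To make the mechanism concrete, here is a minimal, illustrative sketch of an asymmetric loss in the spirit described above: a focal-style exponent gamma_neg > gamma_pos down-weights easy negatives, and a probability margin (`clip`) shifts negative probabilities so that very-low-confidence negatives, including possibly mislabeled ones, contribute zero loss. The function name and default parameter values are illustrative choices, not the authors' official implementation (see the linked repository for that).

```python
import math

def asymmetric_loss(p, y, gamma_pos=0.0, gamma_neg=4.0, clip=0.05, eps=1e-8):
    """Per-label asymmetric loss sketch.

    p: predicted probability (after a sigmoid), y: ground-truth label (0 or 1).
    gamma_neg > gamma_pos down-weights easy negatives relative to positives;
    `clip` is a probability margin that shifts (and can fully zero out)
    the loss of easy negatives.
    """
    if y == 1:
        # Positive term: focal-style weighting with the (smaller) gamma_pos.
        return -((1.0 - p) ** gamma_pos) * math.log(p + eps)
    # Probability shifting for negatives: p_m = max(p - clip, 0).
    p_m = max(p - clip, 0.0)
    return -(p_m ** gamma_neg) * math.log(1.0 - p_m + eps)

# An easy negative below the margin (p = 0.03 < clip = 0.05) is clipped
# and contributes no loss, while a positive keeps a strong gradient signal.
easy_negative = asymmetric_loss(0.03, 0)
hard_negative = asymmetric_loss(0.9, 0)
positive = asymmetric_loss(0.9, 1)
```

In a training loop this would be summed over all labels of a sample; the asymmetry means the many easy negatives no longer drown out the gradients of the few positives.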


