Delving into Semantic Scale Imbalance

12/30/2022
by Yanbiao Ma, et al.

Model bias triggered by long-tailed data has been widely studied. However, measures based on the number of samples cannot explain three phenomena simultaneously: (1) given sufficient data, the classification performance gain from additional samples is marginal; (2) when data are insufficient, classification performance decays precipitously as the number of training samples decreases; (3) models trained on sample-balanced datasets still exhibit different biases for different classes. In this work, we define and quantify the semantic scale of a class, which measures the feature diversity of that class. Experimentally, we find a marginal effect of semantic scale that describes the first two phenomena well. Further, we propose a quantitative measurement of semantic scale imbalance that accurately reflects model bias on multiple datasets, even on sample-balanced data, revealing a novel perspective for the study of class imbalance. Because semantic scale imbalance is prevalent, we propose semantic-scale-balanced learning, comprising a general loss-improvement scheme and a dynamic re-weighting training framework that overcomes the challenge of computing semantic scales in real time during training iterations. Comprehensive experiments show that dynamic semantic-scale-balanced learning consistently enables models to perform superiorly on large-scale long-tailed and non-long-tailed natural and medical datasets, making it a good starting point for mitigating this prevalent but previously unnoticed model bias.
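To make the idea concrete, here is a minimal sketch of how a semantic scale might be computed from extracted features and turned into class weights. This is an illustrative proxy, not the paper's exact formulation: it assumes the semantic scale of a class is measured as the log-volume of that class's feature distribution (via the log-determinant of a scatter matrix), and the function names `semantic_scale` and `class_weights`, the `(m, d)` feature-matrix layout, and the inverse-scale weighting rule are all assumptions introduced here for illustration.

```python
import numpy as np

def semantic_scale(features):
    """Illustrative semantic-scale proxy for one class (an assumption,
    not the paper's exact measure).

    features: (m, d) array of m feature vectors of dimension d for the class.
    Returns a log-volume of the feature distribution, so classes with more
    diverse features get a larger scale.
    """
    m, d = features.shape
    centered = features - features.mean(axis=0)        # center the features
    scatter = (d / m) * (centered.T @ centered)        # scaled scatter matrix (d x d)
    # log det of (I + scatter); I + scatter is positive definite, so sign == 1
    sign, logdet = np.linalg.slogdet(np.eye(d) + scatter)
    return 0.5 * logdet

def class_weights(scales):
    """Inverse-scale re-weighting: classes with smaller semantic scale
    (less feature diversity) receive larger loss weights."""
    inv = 1.0 / np.asarray(scales, dtype=float)
    return inv * len(inv) / inv.sum()                  # normalize to mean 1
```

With this proxy, a class whose features are spread widely in feature space yields a larger scale than one whose features are tightly clustered, even if both classes have the same number of samples, which matches the observation that sample-balanced datasets can still be semantically imbalanced.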


Related research

- 01/16/2019 — Class-Balanced Loss Based on Effective Number of Samples
  With the rapid increase of large-scale, real-world datasets, it becomes ...

- 03/22/2023 — Curvature-Balanced Feature Manifold Learning for Long-Tailed Classification
  To address the challenges of long-tailed classification, researchers hav...

- 03/23/2021 — MetaSAug: Meta Semantic Augmentation for Long-Tailed Visual Recognition
  Real-world training data usually exhibits long-tailed distribution, wher...

- 06/18/2023 — Balanced Energy Regularization Loss for Out-of-distribution Detection
  In the field of out-of-distribution (OOD) detection, a previous method t...

- 08/09/2020 — Feature Space Augmentation for Long-Tailed Data
  Real-world data often follow a long-tailed distribution as the frequency...

- 07/05/2022 — DBN-Mix: Training Dual Branch Network Using Bilateral Mixup Augmentation for Long-Tailed Visual Recognition
  There is a growing interest in the challenging visual perception task of...

- 03/11/2021 — Towards Interpreting and Mitigating Shortcut Learning Behavior of NLU models
  Recent studies indicate that NLU models are prone to rely on shortcut fe...
