Escaping Saddle Points for Effective Generalization on Class-Imbalanced Data

12/28/2022
by Harsh Rangwani, et al.

Real-world datasets exhibit imbalances of varying types and degrees. Several techniques based on re-weighting and margin adjustment of the loss are often used to enhance the performance of neural networks, particularly on minority classes. In this work, we analyze the class-imbalanced learning problem by examining the loss landscape of neural networks trained with re-weighting and margin-based techniques. Specifically, we examine the spectral density of the Hessian of the class-wise loss, through which we observe that the network weights converge to a saddle point in the loss landscapes of minority classes. Following this observation, we also find that optimization methods designed to escape saddle points can be effectively used to improve generalization on minority classes. We further demonstrate, both theoretically and empirically, that Sharpness-Aware Minimization (SAM), a recent technique that encourages convergence to flat minima, can be effectively used to escape saddle points for minority classes. Using SAM results in a 6.2% increase in accuracy on the minority classes over the state-of-the-art Vector Scaling loss, leading to an overall average increase of 4% across imbalanced datasets. The code is available at: https://github.com/val-iisc/Saddle-LongTail.
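For readers who want a concrete picture of how SAM plugs into an ordinary training loop, the sketch below shows a single SAM update step in PyTorch. This is a minimal illustration, not the authors' implementation (see the linked repository for that): the model, the loss function (for example a re-weighted or margin-adjusted cross-entropy such as the Vector Scaling loss mentioned above), the base optimizer, and the neighborhood radius rho=0.05 are all placeholder assumptions.

```python
# Minimal sketch of one Sharpness-Aware Minimization (SAM) update step.
# Illustrative only: `model`, `loss_fn` (e.g. a re-weighted or margin-adjusted
# cross-entropy), `base_optimizer`, and `rho` are placeholder assumptions.
import torch

def sam_step(model, loss_fn, x, y, base_optimizer, rho=0.05):
    # 1) Gradients at the current weights w.
    loss = loss_fn(model(x), y)
    loss.backward()

    # 2) Ascent step: move to w + eps, the highest-loss point in an L2 ball of radius rho.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]))
        scale = rho / (grad_norm + 1e-12)
        eps = []
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = p.grad * scale
            p.add_(e)            # perturb weights toward higher loss
            eps.append(e)
    model.zero_grad()

    # 3) Gradients at the perturbed point w + eps.
    loss_fn(model(x), y).backward()

    # 4) Undo the perturbation and update w using the sharpness-aware gradients.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```

In the setting studied here, loss_fn would be the re-weighted or margin-adjusted objective applied to the imbalanced data; the gradient component injected by the ascent step is what helps the minority-class weights move off saddle points rather than stalling there.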


Related research

06/18/2019
Learning Imbalanced Datasets with Label-Distribution-Aware Margin Loss
Deep learning algorithms can fare poorly when the training dataset suffe...

06/18/2021
RSG: A Simple but Effective Module for Learning Imbalanced Datasets
Imbalanced datasets widely exist in practice and are a great challenge fo...

06/11/2022
Learning Imbalanced Datasets with Maximum Margin Loss
A learning algorithm referred to as Maximum Margin (MM) is proposed for ...

08/15/2023
ImbSAM: A Closer Look at Sharpness-Aware Minimization in Class-Imbalanced Recognition
Class imbalance is a common challenge in real-world recognition tasks, w...

03/26/2020
Negative Margin Matters: Understanding Margin in Few-shot Classification
This paper introduces a negative margin loss to metric learning based fe...

08/29/2023
From SMOTE to Mixup for Deep Imbalanced Classification
Given imbalanced data, it is hard to train a good classifier using deep ...

10/29/2021
Generalized Data Weighting via Class-level Gradient Manipulation
Label noise and class imbalance are two major issues coexisting in real-...
