Benchmarking Deep AUROC Optimization: Loss Functions and Algorithmic Choices

03/27/2022
by   Dixian Zhu, et al.

The area under the ROC curve (AUROC) is widely used for imbalanced classification and has increasingly been combined with deep learning techniques. However, no existing work provides sound guidance for choosing among deep AUROC maximization techniques. In this work, we fill this gap from three aspects. (i) We benchmark a variety of loss functions with different algorithmic choices for the deep AUROC optimization problem. We study loss functions in two categories, pairwise losses and composite losses, covering a total of 10 loss functions. Interestingly, we find that composite losses, a newer class of loss functions, are more competitive than pairwise losses in terms of both training convergence and testing generalization. Nevertheless, data with more corrupted labels favors a pairwise symmetric loss. (ii) We benchmark and highlight essential algorithmic choices such as the positive sampling rate, regularization, normalization/activation, and optimizers. Key findings include: a higher positive sampling rate is likely to benefit deep AUROC maximization; different datasets favor different regularization weights; and appropriate normalization techniques, such as sigmoid and ℓ_2 score normalization, can improve model performance. (iii) On the optimization side, we benchmark SGD-type, Momentum-type, and Adam-type optimizers for both pairwise and composite losses. Our findings show that although Adam-type methods are more competitive from the training perspective, they do not outperform the others from the testing perspective.


