GSC Loss: A Gaussian Score Calibrating Loss for Deep Learning

03/02/2022
by Qingsong Zhao, et al.

Cross-entropy (CE) loss combined with softmax is the standard component in most classification-based frameworks, but it fails to produce an accurate probability distribution over predicted scores, which is critical for further decision-making on poorly classified samples. Prediction score calibration offers a way to learn the distribution of predicted scores, explicitly driving the model toward a discriminative representation. The entropy function can be used to measure the uncertainty of predicted scores, but its gradient behavior does not match the expectations of model optimization. To this end, we propose a general Gaussian Score Calibrating (GSC) loss to calibrate the predicted scores produced by deep neural networks (DNNs). Extensive experiments on more than 10 benchmark datasets demonstrate that the proposed GSC loss yields consistent and significant performance gains across a variety of visual tasks. Notably, our label-independent GSC loss can easily be embedded into common CE-based improved methods.
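The abstract does not give the exact GSC formulation, but the idea it describes — modulating the CE objective with a Gaussian-shaped term over predicted scores, so that gradients behave better than entropy-based weighting for ambiguous samples — can be sketched. The following is a purely illustrative NumPy sketch under that assumption; `gsc_like_loss`, `mu`, and `sigma` are hypothetical names and hyperparameters, not the paper's definitions.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def gsc_like_loss(logits, targets, mu=0.5, sigma=0.2):
    """Illustrative Gaussian-modulated cross-entropy (NOT the paper's
    exact GSC loss): per-sample CE is re-weighted by a Gaussian of the
    predicted true-class score, so the penalty concentrates on
    ambiguous samples (score near mu) rather than growing without
    bound as entropy-style weights can."""
    p = softmax(logits)
    n = logits.shape[0]
    pt = p[np.arange(n), targets]                      # score of true class
    ce = -np.log(pt + 1e-12)                           # standard CE term
    w = np.exp(-((pt - mu) ** 2) / (2.0 * sigma ** 2)) # Gaussian weight
    return float(np.mean(w * ce))
```

Under this sketch, a confidently correct sample (true-class score near 1) receives a small Gaussian weight and contributes little loss, while an ambiguous sample with a score near `mu` dominates the objective; the weighting term itself uses no label beyond the CE term, in the spirit of the label-independent design described above.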
