
GSC Loss: A Gaussian Score Calibrating Loss for Deep Learning

03/02/2022
by   Qingsong Zhao, et al.
Tongji University
Zhejiang University
Microsoft

Cross-entropy (CE) loss combined with softmax is the orthodox component of most classification frameworks, but it fails to produce an accurate probability distribution over predicted scores, which is critical for further decision-making on poorly classified samples. Prediction-score calibration offers a way to learn the distribution of predicted scores and thereby drive the model toward a more discriminative representation. Although the entropy function can measure the uncertainty of predicted scores, its gradient behavior does not align with the expectations of model optimization. To this end, we propose a general Gaussian Score Calibrating (GSC) loss to calibrate the predicted scores produced by deep neural networks (DNNs). Extensive experiments on over 10 benchmark datasets demonstrate that the proposed GSC loss yields consistent and significant performance gains across a variety of visual tasks. Notably, our label-independent GSC loss can easily be embedded into common improved methods based on the CE loss.
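The abstract does not give the exact GSC formulation, but the general recipe it describes, cross-entropy augmented with a label-independent, Gaussian-shaped term that calibrates the predicted score distribution, can be sketched as follows. This is a minimal illustration under assumed choices (a Gaussian penalty on the top predicted probability with hypothetical parameters `mu`, `sigma`, and weight `lam`), not the paper's actual loss.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(probs, labels):
    # Standard CE loss, averaged over the batch.
    n = probs.shape[0]
    return -np.mean(np.log(probs[np.arange(n), labels] + 1e-12))

def gaussian_calibration_term(probs, mu=1.0, sigma=0.5):
    # Hypothetical calibration term: pull the top predicted score
    # toward mu via a Gaussian-shaped penalty. Label-independent,
    # as the abstract describes; the exact form is an assumption.
    top = probs.max(axis=-1)
    return np.mean(1.0 - np.exp(-((top - mu) ** 2) / (2 * sigma ** 2)))

def gsc_style_loss(logits, labels, lam=0.1):
    # CE plus the weighted Gaussian calibration term.
    p = softmax(logits)
    return cross_entropy(p, labels) + lam * gaussian_calibration_term(p)
```

Because the added term depends only on the predicted probabilities and not on the labels, it can be bolted onto any CE-based objective, which mirrors the "easily embedded" claim in the abstract.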
