Robust Bi-Tempered Logistic Loss Based on Bregman Divergences

06/08/2019
by Ehsan Amid, et al.

We introduce a temperature into the exponential function and replace the softmax output layer of neural nets with a high-temperature generalization. Similarly, the logarithm in the log loss we use for training is replaced by a low-temperature logarithm. By tuning the two temperatures we create loss functions that are already non-convex in the single-layer case. When the last layer of a neural net is replaced by our two-temperature generalization of logistic regression, training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large data sets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method based on the Tsallis divergence.
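
The abstract leaves the two tempered functions implicit. Below is a minimal NumPy sketch of the ingredients it describes: a tempered logarithm and exponential, a heavy-tailed (t > 1) softmax whose normalization constant is approximated by a simple fixed-point iteration, and the resulting two-temperature loss written as a Bregman divergence. The function names, the iteration count, and the epsilon guard are assumptions of this sketch, not the authors' released implementation.

```python
import numpy as np

def log_t(x, t):
    """Tempered logarithm: reduces to the natural log as t -> 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential: reduces to exp as t -> 1.
    For t > 1 it decays polynomially, i.e. has a heavier tail than exp."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def tempered_softmax(activations, t, num_iters=20):
    """Heavy-tailed softmax built from exp_t (intended for t > 1).

    The normalization constant lambda has no closed form for t != 1,
    so it is approximated here by a fixed-point iteration (an assumption
    of this sketch)."""
    mu = activations.max(axis=-1, keepdims=True)
    a0 = activations - mu
    a = a0
    for _ in range(num_iters):
        z = exp_t(a, t).sum(axis=-1, keepdims=True)
        a = a0 * z ** (1.0 - t)
    z = exp_t(a, t).sum(axis=-1, keepdims=True)
    lam = -log_t(1.0 / z, t) + mu
    return exp_t(activations - lam, t)

def bi_tempered_logistic_loss(activations, labels, t1, t2):
    """Two-temperature loss: log_t1 replaces the log of the log loss,
    and the tempered softmax with t2 replaces the softmax output."""
    probs = tempered_softmax(activations, t2)
    eps = 1e-12  # guards log_t against exactly-zero label entries
    loss = (labels * (log_t(labels + eps, t1) - log_t(probs, t1))
            - (labels ** (2.0 - t1) - probs ** (2.0 - t1)) / (2.0 - t1))
    return loss.sum(axis=-1)
```

With t1 = t2 = 1 both tempered functions reduce to the ordinary log and exp, and the loss collapses to standard softmax cross-entropy; the robustness the abstract refers to comes from choosing t1 < 1 < t2 (e.g. t1 = 0.7, t2 = 1.3 as an illustrative setting).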

Related research

05/19/2017 · Two-temperature logistic regression based on the Tsallis divergence
We develop a variant of multiclass logistic regression that achieves thr...

07/09/2023 · On the sample complexity of estimation in logistic regression
The logistic regression model is one of the most popular data generation...

01/18/2023 · An Analysis of Loss Functions for Binary Classification and Regression
This paper explores connections between margin-based loss functions and ...

04/10/2012 · Coherence Functions with Applications in Large-Margin Classification Methods
Support vector machines (SVMs) naturally embody sparseness due to their ...

05/03/2021 · Effective temperatures for single particle system under dichotomous noise
Three different definitions of effective temperature, 𝒯_k, 𝒯_i and 𝒯_...

02/17/2023 · Smoothly Giving up: Robustness for Simple Models
There is a growing need for models that are interpretable and have reduc...
