
Robust Bi-Tempered Logistic Loss Based on Bregman Divergences

by Ehsan Amid et al.

We introduce a temperature into the exponential function and replace the softmax output layer of neural nets by a high-temperature generalization. Similarly, the logarithm in the log loss we use for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are non-convex already in the single-layer case. When the last layer of the neural net is replaced by our two-temperature generalization of logistic regression, training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and show the efficacy of our method on large data sets. Our methodology is based on Bregman divergences and is superior to a related two-temperature method based on the Tsallis divergence.
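The abstract's two temperatures correspond to a tempered logarithm log_t (used in the loss, with t1 < 1 to bound it) and a tempered exponential exp_t (used in the output layer, with t2 > 1 for heavy tails). A minimal NumPy sketch, following the standard definitions of these functions; the fixed-point normalization scheme and the Bregman-divergence form of the loss mirror the authors' published description, but the function names and the small label-smoothing epsilon here are our own illustrative choices, not the paper's reference implementation:

```python
import numpy as np

def log_t(x, t):
    # Tempered logarithm: (x^(1-t) - 1) / (1 - t); reduces to log(x) as t -> 1.
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    # Tempered exponential: [1 + (1-t) x]_+^(1/(1-t)); reduces to exp(x) as t -> 1.
    # For t > 1 this has heavier-than-exponential tails.
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))

def tempered_softmax(a, t, n_iter=50):
    # Tempered softmax: probs_i = exp_t(a_i - lam), with lam chosen so the
    # probabilities sum to 1. The normalizer has no closed form for t != 1,
    # so we find it by fixed-point iteration (known to converge for 1 < t < 2).
    mu = np.max(a)
    normalized_a = a - mu
    for _ in range(n_iter):
        partition = np.sum(exp_t(normalized_a, t))
        normalized_a = (partition ** (1.0 - t)) * (a - mu)
    partition = np.sum(exp_t(normalized_a, t))
    lam = -log_t(1.0 / partition, t) + mu
    return exp_t(a - lam, t)

def bi_tempered_loss(a, y, t1, t2, eps=1e-10):
    # Bi-tempered logistic loss: the Bregman divergence between label
    # distribution y and tempered-softmax probabilities, using log_{t1}.
    # With t1 = t2 = 1 it reduces to ordinary softmax cross-entropy.
    probs = tempered_softmax(a, t2)
    return np.sum(
        y * (log_t(y + eps, t1) - log_t(probs, t1))
        - (y ** (2.0 - t1) - probs ** (2.0 - t1)) / (2.0 - t1)
    )
```

For example, `bi_tempered_loss(logits, one_hot, t1=0.8, t2=1.2)` gives a bounded loss (via t1 < 1) with a heavy-tailed output distribution (via t2 > 1), which is the combination the paper argues is robust to both outliers and mislabeled examples.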
