Nonconvex Extension of Generalized Huber Loss for Robust Learning and Pseudo-Mode Statistics

02/22/2022
by   Kaan Gokcesu, et al.

We propose an extended generalization of the pseudo-Huber loss formulation. We show that, by combining the log-exp transform with the logistic function, we can create a loss that unites the desirable properties of strictly convex losses with those of robust loss functions. With this formulation, we show that a linearly convergent algorithm can be used to find a minimizer. We further discuss the construction of a quasi-convex composite loss and provide a derivative-free algorithm with an exponential convergence rate.
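As background for the extension the abstract describes, the standard pseudo-Huber loss can be sketched in a few lines of Python. This is only the classical formulation, not the paper's extended nonconvex loss; it illustrates the two properties the abstract refers to: strict convexity (quadratic behavior near zero) and robustness (a gradient bounded by the scale parameter, here called `delta`, so outliers have limited influence).

```python
import math

def pseudo_huber(a, delta=1.0):
    """Classical pseudo-Huber loss: delta^2 * (sqrt(1 + (a/delta)^2) - 1).

    Behaves like a^2 / 2 for small residuals (strictly convex)
    and like delta * |a| for large residuals (robust).
    """
    return delta**2 * (math.sqrt(1.0 + (a / delta)**2) - 1.0)

def pseudo_huber_grad(a, delta=1.0):
    """Derivative of the pseudo-Huber loss; its magnitude never exceeds delta."""
    return a / math.sqrt(1.0 + (a / delta)**2)
```

The bounded gradient is what makes gradient-based minimization well behaved even under heavy-tailed noise, which is the setting the paper's extended formulation targets.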


Related research

- 08/28/2021: Generalized Huber Loss for Robust Learning and its Efficient Minimization for a Robust Statistics
  We propose a generalized formulation of the Huber loss. We show that wit...
- 06/22/2020: On the alpha-loss Landscape in the Logistic Model
  We analyze the optimization landscape of a recently introduced tunable c...
- 10/31/2021: Efficient, Anytime Algorithms for Calibration with Isotonic Regression under Strictly Convex Losses
  We investigate the calibration of estimations to increase performance wi...
- 07/16/2019: The Bregman-Tweedie Classification Model
  This work proposes the Bregman-Tweedie classification model and analyzes...
- 04/26/2017: Stochastic Orthant-Wise Limited-Memory Quasi-Newton Methods
  The ℓ_1-regularized sparse model has been popular in machine learning so...
- 05/19/2017: Two-temperature logistic regression based on the Tsallis divergence
  We develop a variant of multiclass logistic regression that achieves thr...
- 06/08/2020: All your loss are belong to Bayes
  Loss functions are a cornerstone of machine learning and the starting po...
