Smoothly Giving up: Robustness for Simple Models

02/17/2023
by Tyler Sypherd et al.

There is a growing need for models that are interpretable and that have reduced energy and computational cost (e.g., in health care analytics and federated learning). Examples of algorithms used to train such models include logistic regression and boosting. However, one challenge facing these algorithms is that they provably suffer under label noise; this has been attributed to the interaction between oft-used convex loss functions and simpler hypothesis classes, which places too much emphasis on outliers. In this work, we use the margin-based α-loss, which continuously tunes between canonical convex and quasi-convex losses, to robustly train simple models. We show that the α hyperparameter smoothly introduces non-convexity and offers the benefit of "giving up" on noisy training examples. We also provide results on the Long-Servedio dataset for boosting and on a COVID-19 survey dataset for logistic regression, highlighting the efficacy of our approach across multiple relevant domains.
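To make the tuning concrete, below is a minimal sketch of the margin-based α-loss family as defined in the authors' earlier α-loss work; the exact parameterization used in this paper may differ. For labels y in {-1, +1} and margin z = y·f(x), the loss is (α/(α-1))·(1 - σ(z)^(1-1/α)) with σ the sigmoid, so that α = 1/2 recovers the exponential loss, α → 1 the logistic loss, and α → ∞ the quasi-convex sigmoid loss. The helper name alpha_loss and the example margins are illustrative only.

import numpy as np

def alpha_loss(margin, alpha):
    """Margin-based alpha-loss (sketch): interpolates exponential
    (alpha = 1/2), logistic (alpha -> 1), and sigmoid (alpha -> inf)
    losses. `margin` is y * f(x) for labels y in {-1, +1}."""
    sigma = 1.0 / (1.0 + np.exp(-margin))    # sigmoid of the margin
    if np.isclose(alpha, 1.0):
        return np.log1p(np.exp(-margin))     # logistic loss (alpha -> 1 limit)
    return (alpha / (alpha - 1.0)) * (1.0 - sigma ** (1.0 - 1.0 / alpha))

# For alpha > 1 the loss saturates as the margin becomes very negative,
# so badly misclassified (likely mislabeled) points contribute vanishing
# gradient: the model "gives up" on them rather than chasing outliers.
margins = np.array([-10.0, -2.0, 0.0, 2.0, 10.0])
for a in (0.5, 1.0, 2.0, 1e6):               # 1e6 approximates alpha -> inf
    print(f"alpha={a:g}:", np.round(alpha_loss(margins, a), 4))

Running this shows the unbounded convex losses (α ≤ 1) penalizing the margin -10 point ever more heavily, while α = 2 caps its loss near 2 and the near-sigmoid loss near 1; this saturation is exactly the non-convexity that the α hyperparameter smoothly introduces.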

