Boosting in the presence of outliers: adaptive classification with non-convex loss functions

10/05/2015
by Alexander Hanbo Li, et al.

This paper examines the role and efficiency of non-convex loss functions in binary classification problems. In particular, we investigate how to design a simple and effective boosting algorithm that is robust to outliers in the data. Our analysis shows that the prediction accuracy attainable with a particular non-convex loss depends on the diminishing tail properties of the gradient of the loss (that is, the ability of the loss to adapt efficiently to outlying data), on the local convexity properties of the loss, and on the proportion of contaminated data. To exploit these properties, we propose a new family of non-convex losses, named γ-robust losses. Moreover, we present a new boosting framework, Arch Boost, designed to augment existing work so that the corresponding classification algorithms are significantly more adaptable to unknown data contamination. Combined with the Arch Boost framework, the non-convex losses lead to a new class of boosting algorithms, named adaptive robust boosting (ARB). Furthermore, we present theoretical examples that demonstrate the robustness properties of the proposed algorithms; in particular, we develop a new breakdown point analysis and a new influence function analysis that demonstrate gains in robustness. Moreover, we present new theoretical results, based only on local curvatures, which may be used to establish the statistical and optimization properties of the proposed Arch Boost algorithms with highly non-convex loss functions. Extensive numerical experiments illustrate these theoretical properties and reveal advantages over existing boosting methods when the data contain a number of outliers.
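To make the mechanism described in the abstract concrete, below is a minimal sketch of functional gradient boosting with a bounded non-convex margin loss whose gradient vanishes for badly misclassified points, illustrating the "diminishing gradient tail" behavior that confers robustness to outliers. The specific loss L(v) = 1 - tanh(γv), the decision-stump learner, and all hyperparameters are illustrative assumptions for this sketch, not the paper's exact γ-robust family or the Arch Boost algorithm itself.

```python
# Sketch: boosting with a bounded non-convex margin loss (illustrative only;
# the loss below is a stand-in, NOT the paper's gamma-robust family).
import numpy as np

def loss_grad(margin, gamma=1.0):
    # dL/dv for L(v) = 1 - tanh(gamma * v); the gradient decays to zero for
    # large |v|, so grossly misclassified (outlying) points stop driving the fit.
    return -gamma * (1.0 - np.tanh(gamma * margin) ** 2)

def fit_stump(X, r):
    # Least-squares decision stump fitted to pseudo-residuals r.
    best = (np.inf, 0, 0.0, r.mean(), r.mean())
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            left, right = X[:, j] <= t, X[:, j] > t
            if not left.any() or not right.any():
                continue
            cl, cr = r[left].mean(), r[right].mean()
            err = ((r[left] - cl) ** 2).sum() + ((r[right] - cr) ** 2).sum()
            if err < best[0]:
                best = (err, j, t, cl, cr)
    return best[1:]

def predict_stump(stump, X):
    j, t, cl, cr = stump
    return np.where(X[:, j] <= t, cl, cr)

def boost(X, y, n_rounds=50, lr=0.1, gamma=1.0):
    # Functional gradient descent: each stump is fitted to the negative
    # gradient of the robust loss evaluated at the current margins y * F(x).
    F = np.zeros(len(y))
    stumps = []
    for _ in range(n_rounds):
        r = -y * loss_grad(y * F, gamma)  # pseudo-residuals
        stump = fit_stump(X, r)
        F += lr * predict_stump(stump, X)
        stumps.append(stump)
    return stumps

# Usage: labels y in {-1, +1}; points with flipped labels are down-weighted
# automatically because their gradient magnitude decays with the margin.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)
y[rng.choice(200, 20, replace=False)] *= -1  # inject label-noise outliers
model = boost(X, y)
```

Note the contrast with a convex loss such as the exponential loss of AdaBoost, whose gradient grows without bound as the margin becomes more negative, so a single mislabeled point can dominate later boosting rounds; the bounded loss above caps that influence by construction.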
