Robust Linear Classification from Limited Training Data

10/04/2021
by Deepayan Chakrabarti et al.

We consider the problem of linear classification under general loss functions in the limited-data setting, where overfitting is a common problem. The standard approaches to prevent overfitting are dimensionality reduction and regularization. But dimensionality reduction loses information, while regularization requires the user to choose a norm, prior, or distance metric. We propose an algorithm called RoLin that needs no such user choice and applies to a large class of loss functions. RoLin combines reliable information from the top principal components with robust optimization to extract any useful information from the unreliable subspaces. It also includes a new robust cross-validation procedure that outperforms existing cross-validation methods in the limited-data setting. Experiments on 25 real-world datasets and three standard loss functions show that RoLin broadly outperforms both dimensionality reduction and regularization. Dimensionality reduction has 14%-40% worse average test loss than RoLin. Against L_1 and L_2 regularization, RoLin can be up to 3x better for logistic loss and 12x better for squared hinge loss. The differences are greatest for small sample sizes, where RoLin achieves the best loss on 2x to 3x more datasets than any competing method. For some datasets, RoLin with 15 training samples is better than the best norm-based regularization with 1500 samples.
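To make the comparison concrete, the sketch below illustrates the standard dimensionality-reduction baseline the abstract refers to: project the data onto the top-k principal components, then fit a linear classifier on the projection. This is a minimal illustration on hypothetical synthetic data, not an implementation of RoLin itself; the sample sizes and dimensions are made up to mimic the limited-data regime.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: 15 training samples in 50 dimensions,
# with labels driven by a single latent direction (the kind of
# limited-data regime the abstract discusses).
n, d, k = 15, 50, 3
w_true = rng.normal(size=d)
X = rng.normal(size=(n, d))
y = (X @ w_true > 0).astype(float)

# Standard dimensionality-reduction baseline: center the data,
# keep only the top-k principal components (via SVD), and fit
# the classifier in that k-dimensional subspace. Information in
# the discarded components is lost, which is the drawback the
# abstract points out.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:k].T  # n x k projected features

# Plain logistic regression by gradient descent on the projection.
w = np.zeros(k)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w)))  # predicted probabilities
    w -= 0.5 * Z.T @ (p - y) / n        # gradient step on log loss

train_acc = ((Z @ w > 0) == (y > 0.5)).mean()
print(f"train accuracy on top-{k} PCs: {train_acc:.2f}")
```

RoLin, by contrast, is described as keeping the reliable top-component information while also extracting useful signal from the remaining, unreliable subspaces via robust optimization, rather than discarding them outright.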


