Robust Neural Network Classification via Double Regularization

12/15/2021
by Olof Zetterqvist, et al.

The presence of mislabeled observations in data is a notoriously challenging problem in statistics and machine learning, associated with poor generalization for traditional classifiers and, perhaps even more so, for flexible classifiers like neural networks. Here we propose a novel double regularization of the neural network training loss that combines a penalty on the complexity of the classification model with an optimal reweighting of training observations. The combined penalties improve generalization and provide strong robustness against overfitting under different settings of mislabeled training data, as well as against variation in initial parameter values during training. We provide a theoretical justification for the proposed method, derived for a simple case of logistic regression. We demonstrate the double regularization model, here denoted DRFit, for neural network classification of (i) MNIST and (ii) CIFAR-10, in both cases with simulated mislabeling. We also show that DRFit identifies mislabeled data points with very good precision. This supports DRFit as a practical off-the-shelf classifier: without any sacrifice in performance, we obtain a classifier that simultaneously reduces overfitting against mislabeling and gives an accurate measure of the trustworthiness of the labels.
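To make the idea concrete, the loss described above can be sketched for the logistic regression case the abstract mentions: a weighted cross-entropy term, an L2 complexity penalty on the parameters, and a penalty tying the per-observation weights to a neutral value. This is a minimal illustrative sketch, not the paper's exact formulation; the choice of penalties (`lam`, `gamma`, and the squared deviation of the weights from 1) are assumptions made here for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def double_reg_loss(theta, w, X, y, lam=0.1, gamma=0.1):
    """Illustrative doubly regularized logistic loss:
    per-sample-weighted cross-entropy
    + L2 penalty on the model parameters (complexity)
    + penalty keeping the sample weights w near 1 (reweighting).
    The specific penalty forms here are assumptions, not the
    paper's exact objective."""
    p = sigmoid(X @ theta)
    eps = 1e-12  # guard against log(0)
    ce = -(y * np.log(p + eps) + (1.0 - y) * np.log(1.0 - p + eps))
    return (np.dot(w, ce)
            + lam * np.dot(theta, theta)
            + gamma * np.sum((w - 1.0) ** 2))
```

Minimizing jointly over `theta` and `w` lets the optimizer downweight observations whose labels the model cannot explain: for a clearly mislabeled point, the drop in its cross-entropy contribution outweighs the cost of moving its weight away from 1, which is the mechanism behind flagging untrustworthy labels.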
