Stability for the Training of Deep Neural Networks and Other Classifiers

02/10/2020
by   Leonid Berlyand, et al.
We examine the stability of loss-minimizing training processes that are used for deep neural network (DNN) and other classifiers. While a classifier is optimized during training through a so-called loss function, its performance is usually evaluated by some measure of accuracy, such as overall accuracy, which quantifies the proportion of objects that are correctly classified. This leads to the guiding question of stability: does decreasing loss through training always result in increased accuracy? We formalize the notion of stability and provide examples of instability. Our main result is two novel conditions on the classifier which, if either is satisfied, ensure stability of training; that is, we derive tight bounds on accuracy as loss decreases. These conditions are explicitly verifiable in practice on a given dataset. Our results do not depend on the algorithm used for training, as long as loss decreases with training.
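The kind of instability the abstract asks about can be seen in a toy sketch of our own (not a construction from the paper): on a three-point binary dataset, mean cross-entropy loss decreases between two sets of predictions while 0-1 accuracy drops.

```python
import math

# Toy illustration: loss can decrease while accuracy drops.
# The dataset, probabilities, and function names here are illustrative
# assumptions, not taken from the paper.

def cross_entropy(probs, labels):
    """Mean binary cross-entropy of predicted probabilities for class 1."""
    return -sum(math.log(p) if y == 1 else math.log(1 - p)
                for p, y in zip(probs, labels)) / len(labels)

def accuracy(probs, labels):
    """Fraction of points whose thresholded prediction (p > 0.5) matches the label."""
    return sum((p > 0.5) == (y == 1) for p, y in zip(probs, labels)) / len(labels)

labels   = [1, 1, 1]
p_before = [0.51, 0.51, 0.51]  # barely correct on all three points
p_after  = [0.49, 0.99, 0.99]  # very confident on two points, now wrong on one

print(cross_entropy(p_before, labels), accuracy(p_before, labels))  # ~0.673, 1.0
print(cross_entropy(p_after, labels),  accuracy(p_after, labels))   # ~0.244, 0.667
```

Training a step that trades a marginal correct prediction for extra confidence elsewhere lowers the loss yet misclassifies a point, which is exactly the loss-accuracy divergence the paper's stability conditions rule out.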


Related research

- Stability of Accuracy for the Training of DNNs Via the Uniform Doubling Condition (10/16/2022): We study the stability of accuracy for the training of deep neural netwo...
- Error estimates of deep learning methods for the nonstationary Magneto-hydrodynamics equations (03/14/2023): In this study, we prove rigorous bounds on the error and stability anal...
- The perils of being unhinged: On the accuracy of classifiers minimizing a noise-robust convex loss (12/08/2021): van Rooyen et al. introduced a notion of convex loss functions being rob...
- αQBoost: An Iteratively Weighted Adiabatic Trained Classifier (10/14/2022): A new implementation of an adiabatically-trained ensemble model is deriv...
- IPGuard: Protecting the Intellectual Property of Deep Neural Networks via Fingerprinting the Classification Boundary (10/28/2019): A deep neural network (DNN) classifier represents a model owner's intell...
- Coresets for Classification – Simplified and Strengthened (06/08/2021): We give relative error coresets for training linear classifiers with a b...
- Pre-interpolation loss behaviour in neural networks (03/14/2021): When training neural networks as classifiers, it is common to observe an...
