A Surprising Linear Relationship Predicts Test Performance in Deep Networks

07/25/2018
by Qianli Liao, et al.

Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors? A better understanding of this question of generalization may improve practical applications of deep networks. In this paper we show that with the cross-entropy loss it is surprisingly simple to induce significantly different generalization performance in two networks that have the same architecture, the same meta-parameters and the same training error: one can either pretrain the networks on different levels of "corrupted" data or simply initialize the networks with weights drawn from Gaussians with different standard deviations. A corollary of recent theoretical results on overfitting shows that these effects are due to an intrinsic problem with measuring test performance via a cross-entropy/exponential-type loss, which can be decomposed into two components that are both minimized by SGD, one of which is unrelated to expected classification performance. If we factor out this component of the loss, however, a linear relationship emerges between training and test losses. Under this transformation, classical generalization bounds are surprisingly tight: the empirical/training loss is very close to the expected/test loss. Furthermore, the empirical relation between classification error and the normalized cross-entropy loss appears to be approximately monotonic.
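To make the factoring-out step concrete, here is a minimal PyTorch sketch, not the paper's reference implementation. It assumes a bias-free ReLU network, whose output is positively homogeneous in the weights, so dividing the logits by the product rho of the per-layer weight norms is equivalent to evaluating a copy of the network whose layers have unit-norm weights; the helper names (product_of_layer_norms, normalized_cross_entropy, make_net) are illustrative choices of this sketch, not names from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def product_of_layer_norms(model: nn.Module) -> torch.Tensor:
    """Product of the Frobenius norms of all weight matrices (illustrative helper)."""
    rho = torch.tensor(1.0)
    for name, p in model.named_parameters():
        if name.endswith("weight"):
            rho = rho * p.norm()
    return rho


def normalized_cross_entropy(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Cross-entropy evaluated on norm-rescaled logits f(x) / rho.

    For a bias-free ReLU net, f(x) / rho equals the output of the same net
    with every layer's weights normalized to unit norm; rho is treated as a
    constant here since we only evaluate the loss, we do not train through it.
    """
    logits = model(x)
    rho = product_of_layer_norms(model).detach()
    return F.cross_entropy(logits / rho, y)


def make_net(std: float) -> nn.Sequential:
    """Bias-free two-layer ReLU net with Gaussian init of a given standard deviation."""
    net = nn.Sequential(
        nn.Linear(784, 256, bias=False), nn.ReLU(),
        nn.Linear(256, 10, bias=False),
    )
    for p in net.parameters():
        nn.init.normal_(p, mean=0.0, std=std)
    return net


if __name__ == "__main__":
    x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))
    for std in (0.01, 0.1):
        net = make_net(std)
        raw = F.cross_entropy(net(x), y)
        norm = normalized_cross_entropy(net, x, y)
        print(f"init std {std}: raw CE {raw.item():.3f}, normalized CE {norm.item():.3f}")
```

Mirroring the experiments described above, the usage example builds two identical architectures with different Gaussian initialization scales and prints both the raw and the norm-rescaled cross-entropy on random data; the raw loss depends strongly on the weight scale while the rescaled loss is intended to remove that dependence.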

