Whitening and second order optimization both destroy information about the dataset, and can make generalization impossible

08/17/2020
by Neha S. Wadia, et al.

Machine learning is predicated on the concept of generalization: a model achieving low error on a sufficiently large training set should also perform well on novel samples from the same distribution. We show that both data whitening and second order optimization can harm or entirely prevent generalization. In general, model training harnesses information contained in the sample-sample second moment matrix of a dataset. For a general class of models, namely models with a fully connected first layer, we prove that the information contained in this matrix is the only information which can be used to generalize. Models trained using whitened data, or with certain second order optimization schemes, have less access to this information; in the high dimensional regime they have no access at all, producing models that generalize poorly or not at all. We experimentally verify these predictions for several architectures, and further demonstrate that generalization continues to be harmed even when theoretical requirements are relaxed. However, we also show experimentally that regularized second order optimization can provide a practical tradeoff, where training is still accelerated but less information is lost, and generalization can in some circumstances even improve.
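To make the high dimensional claim concrete, here is a minimal NumPy sketch (our own construction; the array sizes, variable names, and the ZCA variant of whitening are assumptions, not the paper's experimental setup). It whitens a dataset with more features than samples and verifies that the whitened sample-sample second moment matrix collapses to a multiple of the identity, so the pairwise structure between training samples, which the paper identifies as the only usable information for models with a fully connected first layer, is erased.

```python
import numpy as np

# Hypothetical sketch (not code from the paper): ZCA whitening in the
# high-dimensional regime, where the number of features d exceeds the
# number of samples n. After whitening, the sample-sample second moment
# (Gram) matrix becomes a multiple of the identity, so it no longer
# distinguishes one training sample from another.

rng = np.random.default_rng(0)
n, d = 50, 200                      # d > n: high-dimensional regime
X = rng.normal(size=(n, d))         # rows are samples

# Feature-feature second moment matrix; rank-deficient since d > n,
# so we use a pseudo-inverse square root over the nonzero spectrum.
C = X.T @ X / n                     # (d, d)
U, s, _ = np.linalg.svd(C, hermitian=True)
W = (U[:, :n] / np.sqrt(s[:n])) @ U[:, :n].T   # pseudo C^{-1/2}

X_white = X @ W                     # ZCA-whitened data

# Sample-sample second moment matrices before and after whitening.
K_raw = X @ X.T                     # generically rich off-diagonal structure
K_white = X_white @ X_white.T       # equals n * I_n up to floating point error

print(np.allclose(K_white, n * np.eye(n)))            # True
print(np.abs(K_raw - np.diag(np.diag(K_raw))).max())  # nonzero off-diagonals
```

The same intuition applies to the optimization side: a damped second order update of the form (H + lambda*I)^{-1} g interpolates between plain gradient descent (large lambda), which leaves the Gram-matrix information intact, and a fully preconditioned step (lambda near zero), which whitens the update. This is consistent with the abstract's observation that regularized second order optimization trades some acceleration for less information loss.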

Related research

When Does Preconditioning Help or Hurt Generalization? (06/18/2020)
While second order optimizers such as natural gradient descent (NGD) oft...

Eva: A General Vectorized Approximation Framework for Second-order Optimization (08/04/2023)
Second-order optimization algorithms exhibit excellent convergence prope...

Second Order Optimization Made Practical (02/20/2020)
Optimization in machine learning, both theoretical and applied, is prese...

Second-Order Unsupervised Neural Dependency Parsing (10/28/2020)
Most of the unsupervised dependency parsers are based on first-order pro...

On the weight distribution of second order Reed-Muller codes and their relatives (03/19/2019)
The weight distribution of second order q-ary Reed-Muller codes has bee...

RePAST: A ReRAM-based PIM Accelerator for Second-order Training of DNN (10/27/2022)
The second-order training methods can converge much faster than first-or...

Second-Order Occlusion-Aware Volumetric Radiance Caching (05/23/2018)
We present a second-order gradient analysis of light transport in partic...
