Feature Whitening via Gradient Transformation for Improved Convergence

10/04/2020
by Shmulik Markovich-Golan, et al.

Feature whitening is a known technique for speeding up the training of deep neural networks (DNNs). Under certain assumptions, whitening the activations reduces the Fisher information matrix to the identity matrix, in which case stochastic gradient descent is equivalent to the faster natural gradient descent. Due to the additional complexity of transforming the layer inputs and their corresponding gradients during forward and backward propagation, and of repeatedly computing the eigenvalue decomposition (EVD), this method is not in common use to date. In this work, we address the complexity drawbacks of feature whitening. Our contribution is twofold. First, we derive an equivalent method which replaces the sample transformations by a transformation of the weight gradients, applied once per batch of B samples. The complexity is reduced by a factor of S/(2B), where S denotes the feature dimension of the layer output. As the batch size increases with distributed training, the benefit of the proposed method becomes more compelling. Second, motivated by the theoretical relation between the condition number of the sample covariance matrix and the convergence speed, we derive an alternative sub-optimal algorithm which recursively reduces the condition number of that matrix. Compared to EVD, the complexity is reduced by a factor of the input feature dimension M. We demonstrate the proposed algorithms with ResNet-based networks for image classification on the CIFAR and ImageNet datasets. Parallelizing the proposed algorithms is straightforward, and we implement a distributed version thereof. Our experiments show improved convergence in terms of both speed and attained accuracy.
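As a minimal illustration of the equivalence the abstract refers to, the NumPy sketch below whitens a linear layer's inputs via an EVD of their sample covariance and applies the corresponding transformation to the weight gradient (preconditioning by the inverse input covariance), so that an SGD step on the original weights matches a step taken in the whitened-input parameterization. The function names and the eps regularizer are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def whitening_transform(X, eps=1e-5):
    """Whitening transform T (M x M) from a batch X (B x M) via EVD of the
    sample covariance C, so that X @ T.T has ~identity covariance."""
    C = X.T @ X / X.shape[0]               # M x M sample covariance
    lam, U = np.linalg.eigh(C)             # EVD: C = U diag(lam) U^T
    T = (U / np.sqrt(lam + eps)) @ U.T     # symmetric (ZCA-style) C^{-1/2}
    return T, C

def transformed_weight_gradient(G, C, eps=1e-5):
    """Precondition the weight gradient G (S x M) of a linear layer by the
    inverse input covariance; under the standard equivalence this matches an
    SGD step computed on whitened inputs."""
    M = C.shape[0]
    return G @ np.linalg.inv(C + eps * np.eye(M))

# Illustration on random data: the whitened batch has ~identity covariance.
rng = np.random.default_rng(0)
X = rng.standard_normal((256, 8)) @ rng.standard_normal((8, 8))  # B=256, M=8
T, C = whitening_transform(X)
Xw = X @ T.T
print(np.round(Xw.T @ Xw / Xw.shape[0], 2))   # close to the 8x8 identity
```

In the paper's formulation the gradient transformation is applied once per batch of B samples, which is the source of the S/(2B) complexity saving; the sketch above only illustrates the underlying equivalence for a single linear layer and does not include the recursive condition-number-reduction alternative to the EVD.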

