Sketchy Empirical Natural Gradient Methods for Deep Learning

06/10/2020
by Minghan Yang, et al.

In this paper, we develop an efficient sketchy empirical natural gradient method for large-scale finite-sum optimization problems from deep learning. The empirical Fisher information matrix is usually low-rank, since sampling is only practical on a small batch of data at each iteration. Although the corresponding natural gradient direction lies in a small subspace, both the computational cost and the memory requirement remain intractable due to the curse of dimensionality. We design randomized techniques for different neural network structures to resolve these challenges. For layers of moderate dimension, sketching can be performed directly on a regularized least squares subproblem. Otherwise, since the gradient is a vectorization of the product of two matrices, we apply sketching to low-rank approximations of these matrices to compute the most expensive parts. Global convergence to a stationary point is established under mild assumptions. Numerical results on deep convolutional networks illustrate that our method is competitive with state-of-the-art methods such as SGD and KFAC.


Related research

06/14/2021
NG+ : A Multi-Step Matrix-Product Natural Gradient Method for Deep Learning
In this paper, a novel second-order method called NG+ is proposed. By fo...

06/17/2020
Structured Stochastic Quasi-Newton Methods for Large-Scale Optimization Problems
In this paper, we consider large-scale finite-sum nonconvex problems ari...

01/31/2020
On the Convergence of Stochastic Gradient Descent with Low-Rank Projections for Convex Low-Rank Matrix Problems
We revisit the use of Stochastic Gradient Descent (SGD) for solving conv...

03/16/2023
Decentralized Riemannian natural gradient methods with Kronecker-product approximations
With a computationally efficient approximation of the second-order infor...

02/13/2022
Efficient Natural Gradient Descent Methods for Large-Scale Optimization Problems
We propose an efficient numerical method for computing natural gradient ...

07/06/2022
Scaling Private Deep Learning with Low-Rank and Sparse Gradients
Applying Differentially Private Stochastic Gradient Descent (DPSGD) to t...

03/10/2016
Low-rank passthrough neural networks
Deep learning consists in training neural networks to perform computatio...
