Sketchy Empirical Natural Gradient Methods for Deep Learning

06/10/2020
by Minghan Yang, et al.

In this paper, we develop an efficient sketchy empirical natural gradient method for large-scale finite-sum optimization problems arising from deep learning. The empirical Fisher information matrix is usually low-rank since sampling is only practical on a small amount of data at each iteration. Although the corresponding natural gradient direction lies in a small subspace, both the computational cost and the memory requirement remain intractable due to the curse of dimensionality. We design randomized techniques for different neural network structures to resolve these challenges. For layers with a reasonable dimension, sketching can be performed on a regularized least-squares subproblem. Otherwise, since the gradient is a vectorization of the product of two matrices, we apply sketching to low-rank approximations of these matrices to compute the most expensive parts. Global convergence to a stationary point is established under some mild assumptions. Numerical results on deep convolutional networks illustrate that our method is competitive with state-of-the-art methods such as SGD and KFAC.
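The central computation the abstract describes is applying the damped inverse of a low-rank empirical Fisher to the gradient, with an optional sketch to shrink the subproblem further. The NumPy sketch below illustrates that step under stated assumptions: per-sample gradients are stacked as columns of a matrix U, the empirical Fisher is taken as F = U U^T / m, a plain Gaussian sketching matrix reduces the number of columns, and the direction is obtained from a small regularized solve via the Woodbury identity. The function name, shapes, and the Gaussian sketch are illustrative assumptions, not the authors' SENG implementation.

import numpy as np

def seng_direction(U, g, lam, sketch_dim=None, rng=None):
    # U   : (n, m) per-sample gradients, m << n (columns span the Fisher range)
    # g   : (n,) mini-batch gradient
    # lam : damping added to the empirical Fisher
    n, m = U.shape
    V = U / np.sqrt(m)                        # empirical Fisher F = V @ V.T
    if sketch_dim is not None and sketch_dim < V.shape[1]:
        rng = np.random.default_rng() if rng is None else rng
        S = rng.standard_normal((V.shape[1], sketch_dim)) / np.sqrt(sketch_dim)
        V = V @ S                             # Gaussian sketch: V @ V.T is preserved in expectation
    k = V.shape[1]
    # Woodbury identity: (lam*I + V V^T)^{-1} g = (g - V (lam*I_k + V^T V)^{-1} V^T g) / lam
    gram = lam * np.eye(k) + V.T @ V          # small k x k regularized system
    coeff = np.linalg.solve(gram, V.T @ g)    # regularized least-squares subproblem
    return (g - V @ coeff) / lam

# Example usage (hypothetical sizes): 10^5 parameters, 32 samples, sketch to 16 columns.
rng = np.random.default_rng(0)
U = rng.standard_normal((100_000, 32))
g = U.mean(axis=1)
d = seng_direction(U, g, lam=0.1, sketch_dim=16, rng=rng)

The point of the Woodbury rearrangement is that only a k x k system is ever factorized, so the cost stays linear in the number of parameters even though the Fisher itself is n x n.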



Code Repositories

SENG: Sketchy Empirical Natural Gradient Methods for Deep Learning
