TENGraD: Time-Efficient Natural Gradient Descent with Exact Fisher-Block Inversion

06/07/2021
by Saeed Soori, et al.

This work proposes TENGraD, a time-efficient Natural Gradient Descent (NGD) method with linear convergence guarantees. Computing the inverse of a neural network's Fisher information matrix is expensive in NGD because the Fisher matrix is large. Approximate NGD methods such as KFAC reduce the inversion cost by approximating the Fisher matrix, but the approximations do not reduce the overall running time significantly and lead to less accurate parameter updates and a loss of curvature information. TENGraD improves the time efficiency of NGD by computing Fisher-block inverses with a computationally efficient covariance factorization and reuse scheme. It computes the inverse of each block exactly using the Woodbury matrix identity, preserving curvature information while admitting fast (linear) convergence rates. Experiments on image classification tasks with state-of-the-art deep architectures on CIFAR-10, CIFAR-100, and Fashion-MNIST show that TENGraD significantly outperforms state-of-the-art NGD methods, and often stochastic gradient descent, in wall-clock time.
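The Woodbury step mentioned in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's implementation; it only assumes a Fisher block of the empirical form F = GᵀG/N + λI built from N per-example gradient rows G, and shows how F⁻¹g can be applied by solving an N×N system instead of inverting a d×d matrix. All names (G, g, lam, fisher_block_inverse_apply) are illustrative, not taken from the paper's code.

```python
import numpy as np

def fisher_block_inverse_apply(G, g, lam):
    """Apply F^{-1} to g, where F = G.T @ G / N + lam * I is an (assumed)
    empirical Fisher block built from N per-example gradient rows in G.
    Uses the Woodbury identity
        F^{-1} = (I - G.T @ (N*lam*I + G @ G.T)^{-1} @ G) / lam,
    so only an N x N linear system is solved instead of a d x d inverse."""
    N = G.shape[0]
    K = N * lam * np.eye(N) + G @ G.T          # small N x N matrix
    return (g - G.T @ np.linalg.solve(K, G @ g)) / lam

# Toy usage: a block with d = 512 parameters and N = 32 samples.
rng = np.random.default_rng(0)
G = rng.standard_normal((32, 512))   # per-example gradients, one per row
g = rng.standard_normal(512)         # mini-batch gradient for this block
nat_grad = fisher_block_inverse_apply(G, g, lam=1e-3)

# Sanity check against the explicit d x d matrix (feasible only for small d).
F = G.T @ G / 32 + 1e-3 * np.eye(512)
assert np.allclose(F @ nat_grad, g, rtol=1e-6, atol=1e-6)
```

Because the batch size N is typically far smaller than the number of parameters d in a layer block, the dominant cost drops from O(d³) for a direct inverse to roughly O(N²d + N³), which is the kind of saving that makes exact block inversion practical.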


Related research

02/03/2016 - A Kronecker-factored approximate Fisher matrix for convolution layers
  Second-order optimization methods such as natural gradient descent have ...

12/03/2014 - New insights and perspectives on the natural gradient method
  Natural gradient descent is an optimization method traditionally motivat...

06/14/2021 - NG+ : A Multi-Step Matrix-Product Natural Gradient Method for Deep Learning
  In this paper, a novel second-order method called NG+ is proposed. By fo...

03/19/2015 - Optimizing Neural Networks with Kronecker-factored Approximate Curvature
  We propose an efficient method for approximating natural gradient descen...

12/04/2017 - Natural Langevin Dynamics for Neural Networks
  One way to avoid overfitting in machine learning is to use model paramet...

01/25/2022 - Efficient Approximations of the Fisher Matrix in Neural Networks using Kronecker Product Singular Value Decomposition
  Several studies have shown the ability of natural gradient descent to mi...

08/15/2019 - Accelerated CNN Training Through Gradient Approximation
  Training deep convolutional neural networks such as VGG and ResNet by gr...
