K-FAC
An implementation of K-FAC in Python; see the paper: https://arxiv.org/pdf/1503.05671.pdf
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
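The key idea above can be sketched numerically. The following is a minimal illustration (not the repo's actual code) of the core approximation for one layer: the Fisher block is approximated as a Kronecker product A ⊗ G of two small second-moment matrices, and the identity (A ⊗ G)⁻¹ = A⁻¹ ⊗ G⁻¹ turns inversion of the huge block into two small solves. All variable names and the damping value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes: d_in inputs, d_out outputs, a mini-batch of samples.
d_in, d_out, batch = 4, 3, 256

# a: layer input activations; g: backpropagated derivatives w.r.t. pre-activations.
a = rng.standard_normal((batch, d_in))
g = rng.standard_normal((batch, d_out))

# K-FAC approximates the layer's Fisher block as F ≈ A ⊗ G, where
# A = E[a aᵀ] and G = E[g gᵀ] are small Kronecker factors.
A = a.T @ a / batch                      # (d_in, d_in)
G = g.T @ g / batch                      # (d_out, d_out)

# Damping keeps the factors well-conditioned (value chosen for illustration).
lam = 1e-2
A_d = A + lam * np.eye(d_in)
G_d = G + lam * np.eye(d_out)

# Kronecker identity: (A ⊗ G)⁻¹ = A⁻¹ ⊗ G⁻¹, so applying the approximate
# inverse Fisher to a weight gradient dW (d_out × d_in) costs only two
# small solves instead of inverting a (d_in·d_out) × (d_in·d_out) matrix.
dW = rng.standard_normal((d_out, d_in))
update = np.linalg.solve(G_d, dW) @ np.linalg.inv(A_d)   # G⁻¹ dW A⁻¹

# Sanity check against the explicit (and much more expensive) Kronecker inverse,
# using vec(G⁻¹ dW A⁻¹) = (A ⊗ G)⁻¹ vec(dW) with column-stacking vec.
F = np.kron(A_d, G_d)
update_full = np.linalg.solve(F, dW.T.reshape(-1)).reshape(d_in, d_out).T
assert np.allclose(update, update_full)
```

This is why the cost of storing and inverting the approximation is independent of the amount of data: only the (d_in × d_in) and (d_out × d_out) factors are ever stored or inverted, however large the batch used to estimate them.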
Related research:
- Second-order optimization methods such as natural gradient descent have ... (02/03/2016, Roger Grosse et al.)
- A deep neural network is a hierarchical nonlinear model transforming inp... (08/22/2018, Shun-ichi Amari et al.)
- Natural gradient descent is an optimization method traditionally motivat... (12/03/2014, James Martens et al.)
- Adaptive stochastic gradient methods such as AdaGrad have gained popular... (11/21/2016, Gabriel Krummenacher et al.)
- In this work we develop Curvature Propagation (CP), a general technique ... (06/27/2012, James Martens et al.)
- Optimization algorithms that leverage gradient covariance information, s... (06/11/2018, Thomas George et al.)
- A new method to represent and approximate rotation matrices is introduce... (04/29/2014, Michael Mathieu et al.)