K-FAC
Implementation of K-FAC in Python. Please see the paper https://arxiv.org/pdf/1503.05671.pdf
We propose an efficient method for approximating natural gradient descent in neural networks which we call Kronecker-Factored Approximate Curvature (K-FAC). K-FAC is based on an efficiently invertible approximation of a neural network's Fisher information matrix which is neither diagonal nor low-rank, and in some cases is completely non-sparse. It is derived by approximating various large blocks of the Fisher (corresponding to entire layers) as being the Kronecker product of two much smaller matrices. While only several times more expensive to compute than the plain stochastic gradient, the updates produced by K-FAC make much more progress optimizing the objective, which results in an algorithm that can be much faster than stochastic gradient descent with momentum in practice. And unlike some previously proposed approximate natural-gradient/Newton methods which use high-quality non-diagonal curvature matrices (such as Hessian-free optimization), K-FAC works very well in highly stochastic optimization regimes. This is because the cost of storing and inverting K-FAC's approximation to the curvature matrix does not depend on the amount of data used to estimate it, which is a feature typically associated only with diagonal or low-rank approximations to the curvature matrix.
The problem of training neural networks is one of the most important and highly investigated ones in machine learning. Despite work on layer-wise pretraining schemes, and various sophisticated optimization methods which try to approximate Newton-Raphson updates or natural gradient updates, stochastic gradient descent (SGD), possibly augmented with momentum, remains the method of choice for large-scale neural network training
(Sutskever et al., 2013). From the work on Hessian-free optimization (HF) (Martens, 2010) and related methods (e.g. Vinyals and Povey, 2012)
we know that updates computed using local curvature information can make much more progress per iteration than the scaled gradient. The reasons that HF sees fewer practical applications than SGD are twofold. Firstly, its updates are much more expensive to compute, as they involve running linear conjugate gradient (CG) for potentially hundreds of iterations, each of which requires a matrix-vector product with the curvature matrix (each as expensive to compute as the stochastic gradient on the current mini-batch). Secondly, HF’s estimate of the curvature matrix must remain fixed while CG iterates, and thus the method is able to go through much less data than SGD can in a comparable amount of time, making it less well suited to stochastic optimization.
As discussed in Martens and Sutskever (2012) and Sutskever et al. (2013), CG has the potential to be much faster at local optimization than gradient descent, when applied to quadratic objective functions. Thus, insofar as the objective can be locally approximated by a quadratic, each step of CG could potentially be doing a lot more work than each iteration of SGD, which would result in HF being much faster overall than SGD. However, there are examples of quadratic functions (e.g. Li, 2005)
, characterized by curvature matrices with highly spread-out eigenvalue distributions, where CG will have no distinct advantage over well-tuned gradient descent with momentum. Thus, insofar as the quadratic functions being optimized by CG within HF are of this character, HF shouldn’t in principle be faster than well-tuned SGD with momentum. The extent to which neural network objective functions give rise to such quadratics is unclear, although
Sutskever et al. (2013) provides some preliminary evidence that they do. CG falls victim to this worst-case analysis because it is a first-order method. This motivates us to consider methods which don’t rely on first-order methods like CG as their primary engines of optimization. One such class of methods, which has been widely studied, consists of those which work by directly inverting a diagonal, block-diagonal, or low-rank approximation to the curvature matrix (e.g. Becker and LeCun, 1989; Schaul et al., 2013; Zeiler, 2013; Le Roux et al., 2008; Ollivier, 2013). In fact, a diagonal approximation of the Fisher information matrix is used within HF as a preconditioner for CG. However, these methods provide only a limited performance improvement in practice, especially compared to SGD with momentum (see for example Schraudolph et al., 2007; Zeiler, 2013), and many practitioners tend to forgo them in favor of SGD or SGD with momentum.
We know that the curvature associated with neural network objective functions is highly non-diagonal, and that updates which properly respect and account for this non-diagonal curvature, such as those generated by HF, can make much more progress minimizing the objective than the plain gradient or updates computed from diagonal approximations of the curvature (HF typically requires far fewer updates to adequately minimize most objectives than methods that use diagonal approximations do). Thus, if we had an efficient and direct way to compute the inverse of a high-quality non-diagonal approximation to the curvature matrix (i.e. without relying on first-order methods like CG) this could potentially yield an optimization method whose updates would be large and powerful like HF’s, while being (almost) as cheap to compute as the stochastic gradient.
In this work we develop such a method, which we call Kronecker-factored Approximate Curvature (K-FAC). We show that our method can be much faster in practice than even highly tuned implementations of SGD with momentum on certain standard neural network optimization benchmarks.
The main ingredient in K-FAC is a sophisticated approximation to the Fisher information matrix, which despite being neither diagonal nor low-rank, nor even block-diagonal with small blocks, can be inverted very efficiently, and can be estimated in an online fashion using arbitrarily large subsets of the training data (without increasing the cost of inversion).
This approximation is built in two stages. In the first, the rows and columns of the Fisher are divided into groups, each of which corresponds to all the weights in a given layer, and this gives rise to a block-partitioning of the matrix (where the blocks are much larger than those used by Le Roux et al. (2008) or Ollivier (2013)). These blocks are then approximated as Kronecker products between much smaller matrices, which we show is equivalent to making certain approximating assumptions regarding the statistics of the network’s gradients.
In the second stage, this matrix is further approximated as having an inverse
which is either block-diagonal or block-tridiagonal. We justify this approximation through a careful examination of the relationships between inverse covariances, tree-structured graphical models, and linear regression. Notably, this justification doesn’t apply to the Fisher itself, and our experiments confirm that while the inverse Fisher does indeed possess this structure (approximately), the Fisher itself does not.
The rest of this paper is organized as follows. Section 2 gives basic background and notation for neural networks and the natural gradient. Section 3 describes our initial Kronecker product approximation to the Fisher. Section 4 describes our further block-diagonal and block-tridiagonal approximations of the inverse Fisher, and how these can be used to derive an efficient inversion algorithm. Section 5 describes how we compute online estimates of the quantities required by our inverse Fisher approximation over a large “window” of previously processed mini-batches (which makes K-FAC very different from methods like HF or KSD, which base their estimates of the curvature on a single mini-batch). Section 6 describes how we use our approximate Fisher to obtain a practical and robust optimization algorithm which requires very little manual tuning, through the careful application of various theoretically well-founded “damping” techniques that are standard in the optimization literature. Note that damping techniques compensate both for the local quadratic approximation being implicitly made to the objective, and for our further approximation of the Fisher, and are non-optional for essentially any 2nd-order method like K-FAC to work properly, as is well established by both theory and practice within the optimization literature (Nocedal and Wright, 2006). Section 7 describes a simple and effective way of adding a type of “momentum” to K-FAC, which we have found works very well in practice. Section 8 describes the computational costs associated with K-FAC, and various ways to reduce them to the point where each update is at most only several times more expensive to compute than the stochastic gradient. Section 9 gives complete high-level pseudocode for K-FAC. Section 10 characterizes a broad class of network transformations and reparameterizations to which K-FAC is essentially invariant. Section 11 considers some related prior methods for neural network optimization. 
Proofs of formal results are located in the appendix.
In this section we will define the basic notation for feed-forward neural networks which we will use throughout this paper. Note that this presentation closely follows the one from
Martens (2014). A neural network transforms its input a_0 = x to an output f(x, θ) = a_ℓ through a series of
ℓ layers, each of which consists of a bank of units/neurons. The units each receive as input a weighted sum of the outputs of units from the previous layer, and compute their output via a nonlinear “activation” function. We denote by s_i the vector of these weighted sums for the i-th layer, and by a_i the vector of unit outputs (aka “activities”). The precise computation performed at each layer i ∈ {1, …, ℓ} is given as follows:

s_i = W_i ā_{i−1},  a_i = φ_i(s_i)

where φ_i is an element-wise nonlinear function, W_i is a weight matrix, and ā_i is defined as the vector formed by appending to a_i an additional homogeneous coordinate with value 1. Note that we do not include explicit bias parameters here as these are captured implicitly through our use of homogeneous coordinates. In particular, the last column of each weight matrix W_i
corresponds to what is usually thought of as the “bias vector”. Figure 1 illustrates this definition of the network computation. We will define θ = [vec(W_1)^⊤ vec(W_2)^⊤ ⋯ vec(W_ℓ)^⊤]^⊤, which is the vector consisting of all of the network’s parameters concatenated together, where vec is the operator which vectorizes matrices by stacking their columns together.
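The layer computation just described can be sketched in Python. This is a minimal illustration (not from the paper), assuming tanh activations at every layer; the homogeneous coordinate absorbs the bias:

```python
import numpy as np

def forward_pass(x, weights, phi=np.tanh):
    """Compute the network's output. Each layer computes s_i = W_i @ a_bar_{i-1}
    and a_i = phi(s_i), where a_bar appends a homogeneous coordinate with
    value 1, so the last column of W_i acts as the bias vector."""
    a = np.asarray(x, dtype=float)
    for W in weights:
        a_bar = np.append(a, 1.0)  # homogeneous coordinate
        a = phi(W @ a_bar)
    return a
```

Each weight matrix has one more column than the previous layer has units, reflecting the appended coordinate.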
We let L(y, z) denote the loss function, which measures the disagreement between a prediction z = f(x, θ) made by the network and a target y. The training objective function h(θ) is the average (or expectation) of losses L(y, f(x, θ)) with respect to a training distribution Q̂ over input-target pairs (x, y). h(θ) is a proxy for the objective which we actually care about but don’t have access to, which is the expectation of the loss taken with respect to the true data distribution Q.

We will assume that the loss is given by the negative log probability associated with a simple predictive distribution R_{y|z} for y parameterized by z, i.e. that we have

L(y, z) = −log r(y|z)

where r is R_{y|z}’s density function. This is the case for both the standard least-squares and cross-entropy objective functions, where the predictive distributions are multivariate normal and multinomial, respectively.
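As a quick numerical check of this correspondence (an illustration, not from the paper): in the least-squares case, the negative log density of a unit-covariance Gaussian centered at the prediction differs from the squared error only by an additive constant, so the two induce identical gradients:

```python
import numpy as np

def neg_log_gaussian(y, z):
    """-log N(y; z, I): negative log density of a unit-covariance
    Gaussian predictive distribution centered at the prediction z."""
    d = y.size
    return 0.5 * np.sum((y - z) ** 2) + 0.5 * d * np.log(2 * np.pi)

def squared_error(y, z):
    """Standard least-squares loss."""
    return 0.5 * np.sum((y - z) ** 2)
```

The difference of the two losses between any pair of predictions is identical, which is all that matters for optimization.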
We will let P_{y|x}(θ) denote the conditional distribution over y defined by the neural network, as parameterized by θ, and p(y|x, θ) its density function. Note that minimizing the objective function h(θ) can be seen as maximum likelihood learning of the model P_{y|x}(θ).
For convenience we will define the following additional notation: Dv = dL(y, f(x, θ))/dv for an arbitrary variable v, and g_i = Ds_i (the loss gradient with respect to the i-th layer’s inputs).
Algorithm 1 shows how to compute the gradient Dθ of the loss function of a neural network using standard backpropagation.
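Algorithm 1 is not reproduced here, but a backpropagation pass of this kind can be sketched as follows (a minimal illustration assuming tanh activations and a caller-supplied loss derivative; the names are ours, not the paper’s). The gradient is checked against finite differences:

```python
import numpy as np

def backprop(x, weights, dL_dout, phi=np.tanh, dphi=lambda s: 1.0 - np.tanh(s) ** 2):
    """Forward pass followed by backpropagation for a tanh network with
    homogeneous coordinates. Returns the network output and the per-layer
    gradients DW_i = g_i @ a_bar_{i-1}^T, where g_i = Ds_i."""
    # forward pass, caching the augmented activities and layer inputs s_i
    a, cache = np.asarray(x, dtype=float), []
    for W in weights:
        a_bar = np.append(a, 1.0)
        s = W @ a_bar
        cache.append((a_bar, s))
        a = phi(s)
    # backward pass: start from dL/da_out, then peel off one layer at a time
    g = dL_dout(a) * dphi(cache[-1][1])          # g_l = Ds_l
    grads = [None] * len(weights)
    for i in reversed(range(len(weights))):
        a_bar, _ = cache[i]
        grads[i] = np.outer(g, a_bar)            # DW_i = g_i a_bar_{i-1}^T
        if i > 0:
            # drop the bias column of W_i when propagating to the layer below
            g = (weights[i][:, :-1].T @ g) * dphi(cache[i - 1][1])
    return a, grads
```

Note that each per-layer gradient is the outer product g_i ā_{i−1}^⊤, the structure exploited throughout the rest of the paper.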
Because our network defines a conditional model p(y|x, θ), it has an associated Fisher information matrix (which we will simply call “the Fisher”) which is given by

F = E[Dθ Dθ^⊤]

Here, the expectation is taken with respect to the data distribution Q_x over inputs x, and the model’s predictive distribution P_{y|x}(θ) over y. Since we usually don’t have access to Q_x, and the above expectation would likely be intractable even if we did, we will instead compute F using the training distribution Q̂_x over inputs x.
The well-known natural gradient (Amari, 1998) is defined as F^{-1}∇h. Motivated from the perspective of information geometry (Amari and Nagaoka, 2000), the natural gradient defines the direction in parameter space which gives the largest change in the objective per unit of change in the model, as measured by the KL-divergence. This is to be contrasted with the standard gradient, which can be defined as the direction in parameter space which gives the largest change in the objective per unit of change in the parameters, as measured by the standard Euclidean metric.
The natural gradient also has links to several classical ideas from optimization. It can be shown (Martens, 2014; Pascanu and Bengio, 2014) that the Fisher is equivalent to the Generalized Gauss-Newton matrix (GGN) (Schraudolph, 2002; Martens and Sutskever, 2012) in certain important cases, which is a well-known positive semi-definite approximation to the Hessian of the objective function. In particular, Martens (2014) showed that when the GGN is defined so that the network is linearized up to the loss function, and the loss function corresponds to the negative log probability of observations under an exponential family model with z representing the natural parameters, then the Fisher corresponds exactly to the GGN. (Note that the condition that z represents the natural parameters might require one to formally include the nonlinear transformation usually performed by the final nonlinearity of the network, such as the logistic-sigmoid transform before a cross-entropy error, as part of the loss function instead. Equivalently, one could linearize the network only up to the input of this final nonlinearity when computing the GGN; see Martens and Sutskever (2012).)
The GGN has served as the curvature matrix of choice in HF and related methods, and so in light of its equivalence to the Fisher, these 2nd-order methods can be seen as approximate natural gradient methods. And perhaps more importantly from a practical perspective, natural gradient-based optimization methods can conversely be viewed as 2nd-order optimization methods, which, as pointed out by Martens (2014), brings to bear the vast wisdom that has accumulated about how to make such methods work well in both theory and practice (e.g. Nocedal and Wright, 2006). In Section 6 we productively make use of these connections in order to design a robust and highly effective optimization method using our approximation to the natural gradient/Fisher (which is developed in Sections 3 and 4).
The main computational challenge associated with using the natural gradient is computing F^{-1} (or its product with ∇h). For large networks, with potentially millions of parameters, computing this inverse naively is computationally impractical. In this section we develop an initial approximation of F which will be a key ingredient in deriving our efficiently computable approximation to F^{-1} and the natural gradient.
Note that Dθ = [vec(DW_1)^⊤ vec(DW_2)^⊤ ⋯ vec(DW_ℓ)^⊤]^⊤, and so F = E[Dθ Dθ^⊤] can be viewed as an ℓ by ℓ block matrix, with the (i, j)-th block F_{i,j} given by F_{i,j} = E[vec(DW_i) vec(DW_j)^⊤].

Noting that DW_i = g_i ā_{i−1}^⊤ and that vec(uv^⊤) = v ⊗ u, we have vec(DW_i) = ā_{i−1} ⊗ g_i, and thus we can rewrite F_{i,j} as

F_{i,j} = E[vec(DW_i) vec(DW_j)^⊤] = E[(ā_{i−1} ⊗ g_i)(ā_{j−1} ⊗ g_j)^⊤] = E[ā_{i−1} ā_{j−1}^⊤ ⊗ g_i g_j^⊤]

where A ⊗ B denotes the Kronecker product between A and B, which for an m by n matrix A is given by the block matrix whose (i, j)-th block is [A]_{i,j} B.
Note that the Kronecker product satisfies many convenient properties that we will make use of in this paper, especially the identity (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}. See Van Loan (2000) for a good discussion of the Kronecker product.
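This inverse identity, along with the vec identity (A ⊗ B)vec(X) = vec(BXA^⊤) used later in the paper, is easy to verify numerically, with vec taken to stack columns:

```python
import numpy as np

# vec stacks the columns of a matrix into a single vector
vec = lambda M: M.reshape(-1, order="F")

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((2, 2))
X = rng.standard_normal((2, 3))

# (A kron B) vec(X) = vec(B X A^T)
lhs = np.kron(A, B) @ vec(X)
rhs = vec(B @ X @ A.T)

# (A kron B)^{-1} = A^{-1} kron B^{-1}
inv_kron = np.linalg.inv(np.kron(A, B))
kron_inv = np.kron(np.linalg.inv(A), np.linalg.inv(B))
```

The second identity is what makes inverting Kronecker-factored blocks cheap: two small inverses replace one large one.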
Our initial approximation F̃ to F will be defined by the following block-wise approximation:

F_{i,j} = E[ā_{i−1} ā_{j−1}^⊤ ⊗ g_i g_j^⊤] ≈ E[ā_{i−1} ā_{j−1}^⊤] ⊗ E[g_i g_j^⊤] = Ā_{i−1,j−1} ⊗ G_{i,j} = F̃_{i,j}    (1)

where Ā_{i,j} = E[ā_i ā_j^⊤] and G_{i,j} = E[g_i g_j^⊤].

This gives the approximation F̃, defined as the ℓ by ℓ block matrix with blocks F̃_{i,j} = Ā_{i−1,j−1} ⊗ G_{i,j}, which has the form of what is known as a Khatri-Rao product in multivariate statistics.
[Figure 2 caption: A comparison of the exact Fisher F and our approximation F̃, for the middle 4 layers of a standard deep neural network partially trained to classify a 16x16 down-scaled version of MNIST. The network was trained with 7 iterations of K-FAC in batch mode, achieving 5% error (the error reached 0% after 22 iterations). The network architecture is 256-20-20-20-20-20-10 and uses standard tanh units. On the left is the exact Fisher F, in the middle is our approximation F̃, and on the right is the difference of these. The dashed lines delineate the blocks. Note that for the purposes of visibility we plot the absolute values of the entries, with the white level corresponding linearly to the size of these values (up to some maximum, which is the same in each image).]

The expectation of a Kronecker product is, in general, not equal to the Kronecker product of expectations, and so this is indeed a major approximation to make, and one which likely won’t become exact under any realistic set of assumptions, or as a limiting case in some kind of asymptotic analysis. Nevertheless, it seems to be fairly accurate in practice, and is able to successfully capture the “coarse structure” of the Fisher, as demonstrated in Figure 2 for an example network.

As we will see in later sections, this approximation leads to significant computational savings in terms of storage and inversion, which we will be able to leverage in order to design an efficient algorithm for computing an approximation to the natural gradient.
Consider an arbitrary pair of weights [W_i]_{k1,k2} and [W_j]_{k3,k4} from the network, where [·]_{k1,k2} denotes the value of the (k1, k2)-th entry. We have that the corresponding derivatives of these weights are given by D[W_i]_{k1,k2} = g^{(1)} a^{(1)} and D[W_j]_{k3,k4} = g^{(2)} a^{(2)}, where we denote for convenience a^{(1)} = [ā_{i−1}]_{k2}, a^{(2)} = [ā_{j−1}]_{k4}, g^{(1)} = [g_i]_{k1}, and g^{(2)} = [g_j]_{k3}.
The approximation given by eqn. 1 is equivalent to making the following approximation for each pair of weights:

E[a^{(1)} g^{(1)} a^{(2)} g^{(2)}] ≈ E[a^{(1)} a^{(2)}] E[g^{(1)} g^{(2)}]    (2)

And thus one way to interpret the approximation in eqn. 1 is that we are assuming statistical independence between products of unit activities and products of unit input derivatives.
Another more detailed interpretation of the approximation emerges by considering the following expression for the approximation error (which is derived in the appendix):

κ(a^{(1)}, a^{(2)}, g^{(1)}, g^{(2)}) + κ(a^{(1)}) κ(a^{(2)}, g^{(1)}, g^{(2)}) + κ(a^{(2)}) κ(a^{(1)}, g^{(1)}, g^{(2)})    (3)

Here κ(·) denotes the cumulant of its arguments. Cumulants are a natural generalization of the concept of mean and variance to higher orders, and indeed 1st-order cumulants are means and 2nd-order cumulants are covariances. Intuitively, cumulants of order k measure the degree to which the interaction between variables is intrinsically of order k, as opposed to arising from many lower-order interactions.

A basic upper bound for the approximation error is

|κ(a^{(1)}, a^{(2)}, g^{(1)}, g^{(2)})| + |κ(a^{(1)})| |κ(a^{(2)}, g^{(1)}, g^{(2)})| + |κ(a^{(2)})| |κ(a^{(1)}, g^{(1)}, g^{(2)})|    (4)

which will be small if all of the higher-order cumulants are small (i.e. those of order 3 or higher). Note that in principle this upper bound may be loose due to possible cancellations between the terms in eqn. 3.
Because higher-order cumulants are zero for variables jointly distributed according to a multivariate Gaussian, it follows that this upper bound on the approximation error will be small insofar as the joint distribution over a^{(1)}, a^{(2)}, g^{(1)}, and g^{(2)} is well approximated by a multivariate Gaussian. And while we are not aware of an argument for why this should be the case in practice, it does seem to be the case that for the example network from Figure 2, the size of the error is well predicted by the size of the higher-order cumulants. In particular, the total approximation error, summed over all pairs of weights in the middle 4 layers, is of roughly the same size as the corresponding upper bound, whose size is tied to that of the higher-order cumulants (due to the impossibility of cancellations in eqn. 4).

To the best of our knowledge there is no efficient general method for inverting a Khatri-Rao product like F̃. Thus, we must make further approximations if we hope to obtain an efficiently computable approximation of the inverse Fisher.
In the following subsections we argue that the inverse of F̃ can be reasonably approximated as having one of two special structures, either of which makes it efficiently computable. The second of these will be slightly less restrictive than the first (and hence a better approximation) at the cost of some additional complexity. We will then show how matrix-vector products with these approximate inverses can be efficiently computed, which will thus give an efficient algorithm for computing an approximation to the natural gradient.
Suppose we are given a multivariate distribution whose associated covariance matrix is Σ. Define the matrix B so that for i ≠ j, [B]_{i,j} is the coefficient on the j-th variable in the optimal linear predictor of the i-th variable from all the other variables, and for i = j, [B]_{i,j} = 0. Then define the matrix D to be the diagonal matrix where [D]_{i,i} is the variance of the error associated with such a predictor of the i-th variable.
Pourahmadi (2011) showed that B and D can be obtained from the inverse covariance Σ^{-1} by the formulas

[B]_{i,j} = −[Σ^{-1}]_{i,j} / [Σ^{-1}]_{i,i}  and  [D]_{i,i} = 1 / [Σ^{-1}]_{i,i}

from which it follows that the inverse covariance matrix can be expressed as

Σ^{-1} = D^{-1}(I − B)
Intuitively, this result says that each row of the inverse covariance Σ^{-1} is given by the coefficients of the optimal linear predictor of the corresponding variable from the others, up to a scaling factor. So if the j-th variable is much less “useful” than the other variables for predicting the i-th variable, we can expect that the (i, j)-th entry of the inverse covariance will be relatively small.

Note that “usefulness” is a subtle property as we have informally defined it. In particular, it is not equivalent to the degree of correlation between the i-th and j-th variables, or any such simple measure. As a simple example, consider the case where the j-th variable is equal to the i-th variable plus independent Gaussian noise. Since any linear predictor can achieve a lower variance simply by shifting weight from the j-th variable to the i-th variable, we have that the j-th variable is not useful (and its coefficient will thus be zero) in the task of predicting the k-th variable, for any setting of k other than i or j.
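This phenomenon is easy to reproduce numerically (a small synthetic example of ours, not from the paper): below, the second and third variables are both noisy copies of the first, and are therefore strongly correlated with each other, yet the corresponding entry of the inverse covariance is exactly zero, because neither is useful for predicting the other once the first variable is available:

```python
import numpy as np

# x1 = z, x2 = z + noise (var 0.25), x3 = z + noise (var 0.5), with z ~ N(0, 1)
# and the two noise terms independent of z and of each other.
Sigma = np.array([[1.0, 1.0,  1.0],
                  [1.0, 1.25, 1.0],
                  [1.0, 1.0,  1.5]])

# precision (inverse covariance) matrix
P = np.linalg.inv(Sigma)

# x2 and x3 are strongly correlated ...
corr_23 = Sigma[1, 2] / np.sqrt(Sigma[1, 1] * Sigma[2, 2])
# ... yet P[1, 2] is zero, since x2 and x3 are conditionally
# independent given x1 (the Gaussian Markov property).
```

Here the zero pattern of the precision matrix reflects conditional independence, not marginal correlation, which is exactly the distinction the text draws.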
Noting that the Fisher F is a covariance matrix over Dθ w.r.t. the model’s distribution (because E[Dθ] = 0 by Lemma 4), we can thus apply the above analysis to the distribution over Dθ to gain insight into the approximate structure of F^{-1}, and by extension that of its approximation F̃^{-1}.
Consider the derivative DW_i of the loss with respect to the weights W_i of layer i. Intuitively, if we are trying to predict one of the entries of DW_i from the other entries of Dθ, those entries also in DW_i will likely be the most useful in this regard. Thus, it stands to reason that the largest entries of F^{-1} will be those on the diagonal blocks, so that F^{-1} will be well approximated as block-diagonal, with each block corresponding to a different DW_i.
Beyond the other entries of DW_i, it is the entries of DW_{i+1} and DW_{i−1} (i.e. those associated with adjacent layers) that will arguably be the most useful in predicting a given entry of DW_i. This is because the true process for computing the loss gradient only uses information from the layer below (during the forward pass) and from the layer above (during the backwards pass). Thus, approximating F^{-1} as block-tridiagonal seems like a reasonable and milder alternative to taking it to be block-diagonal. Indeed, this approximation would be exact if the distribution over Dθ were given by a directed graphical model which generated each of the DW_i’s, one layer at a time, from either DW_{i+1} or DW_{i−1}. Or equivalently, if Dθ were distributed according to an undirected Gaussian graphical model with binary potentials only between entries in the same or adjacent layers. Both of these models are depicted in Figure 4.
Now while in reality the DW_i’s are generated using information from adjacent layers according to a process that is neither linear nor Gaussian, it nonetheless stands to reason that their joint statistics might be reasonably approximated by such a model. In fact, the idea of approximating the distribution over loss gradients with a directed graphical model forms the basis of the recent FANG method of Grosse and Salakhutdinov (2015).
Figure 3 examines the extent to which the inverse Fisher is well approximated as block-diagonal or block-tridiagonal for an example network.
In the following two subsections we show how both the block-diagonal and block-tridiagonal approximations of F̃^{-1} give rise to computationally efficient methods for computing matrix-vector products with it. And at the end of Section 4 we present two figures (Figures 5 and 6) which examine the quality of these approximations for an example network.
Approximating F̃^{-1} as block-diagonal is equivalent to approximating F̃ as block-diagonal. A natural choice for such an approximation F̆ of F̃ is to take the block-diagonal of F̆ to be that of F̃. This gives the matrix

F̆ = diag(F̃_{1,1}, F̃_{2,2}, …, F̃_{ℓ,ℓ}) = diag(Ā_{0,0} ⊗ G_{1,1}, Ā_{1,1} ⊗ G_{2,2}, …, Ā_{ℓ−1,ℓ−1} ⊗ G_{ℓ,ℓ})

Using the identity (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1} we can easily compute the inverse of F̆ as

F̆^{-1} = diag(Ā_{0,0}^{-1} ⊗ G_{1,1}^{-1}, Ā_{1,1}^{-1} ⊗ G_{2,2}^{-1}, …, Ā_{ℓ−1,ℓ−1}^{-1} ⊗ G_{ℓ,ℓ}^{-1})

Thus, computing F̆^{-1} amounts to computing the inverses of 2ℓ smaller matrices.

Then to compute u = F̆^{-1}v, we can make use of the well-known identity (A ⊗ B)vec(X) = vec(BXA^⊤) to get

U_i = G_{i,i}^{-1} V_i Ā_{i−1,i−1}^{-1}

where the U_i’s and V_i’s map to u and v (resp.) in an analogous way to how the W_i’s map to θ.
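Concretely, this computation reduces to two small linear solves per layer, and never forms a Kronecker product explicitly. A minimal sketch (the names are ours; the optional damping term is an illustrative stabilizer, separate from the derivation above):

```python
import numpy as np

def kfac_blockdiag_natgrad(grads, A_bars, Gs, damping=0.0):
    """Approximate natural gradient under the block-diagonal approximation:
    for each layer, F_ii^{-1} vec(V_i) = vec(G_ii^{-1} V_i A_{i-1,i-1}^{-1}),
    computed with two small linear solves per layer instead of forming any
    Kronecker product."""
    out = []
    for V, A, G in zip(grads, A_bars, Gs):
        Ad = A + damping * np.eye(A.shape[0])
        Gd = G + damping * np.eye(G.shape[0])
        X = np.linalg.solve(Gd, V)                # G^{-1} V
        out.append(np.linalg.solve(Ad.T, X.T).T)  # (G^{-1} V) A^{-1}
    return out
```

The result agrees with explicitly inverting the Kronecker-factored block, at a small fraction of the cost.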
Note that block-diagonal approximations to the Fisher information have been proposed before in TONGA (Le Roux et al., 2008), where each block corresponds to the weights associated with a particular unit. In our block-diagonal approximation, the blocks correspond to all the parameters in a given layer, and are thus much larger. In fact, they are so large that they would be impractical to invert as general matrices.
Note that unlike in the above block-diagonal case, approximating F̃^{-1} as block-tridiagonal is not equivalent to approximating F̃ as block-tridiagonal. Thus we require a more sophisticated approach to deal with such an approximation. We develop such an approach in this subsection.
To start, we will define F̂ to be the matrix which agrees with F̃ on the tridiagonal blocks, and which satisfies the property that F̂^{-1} is block-tridiagonal. Note that this definition implies certain values for the off-tridiagonal blocks of F̂ which will differ from those of F̃ insofar as F̃^{-1} is not actually block-tridiagonal.
To establish that such a matrix F̂ is well defined and can be inverted efficiently, we first observe that assuming that F̂^{-1} is block-tridiagonal is equivalent to assuming that it is the precision matrix of an undirected Gaussian graphical model (UGGM) over Dθ (as depicted in Figure 4), whose density function is proportional to exp(−½ Dθ^⊤ F̂^{-1} Dθ). As this graphical model has a tree structure, there is an equivalent directed graphical model with the same distribution and the same (undirected) graphical structure (e.g. Bishop, 2006), where the directionality of the edges is given by a directed acyclic graph (DAG). Moreover, this equivalent directed model will also be linear/Gaussian, and hence a directed Gaussian graphical model (DGGM).
Next we will show how the parameters of such a DGGM corresponding to F̂ can be efficiently recovered from the tridiagonal blocks of F̂, so that F̂ is uniquely determined by these blocks (and hence well-defined). We will assume here that the direction of the edges is from the higher layers to the lower ones. Note that a different choice for these directions would yield a superficially different algorithm for computing the inverse of F̂ that would nonetheless yield the same output.
For each i, we will denote the conditional covariance matrix of vec(DW_i) on vec(DW_{i+1}) by Σ_{i|i+1}, and the matrix of linear coefficients from vec(DW_{i+1}) to vec(DW_i) by Ψ_{i,i+1}, so that the conditional distributions defining the model are

vec(DW_i) ∼ N(Ψ_{i,i+1} vec(DW_{i+1}), Σ_{i|i+1})  for i < ℓ,  and  vec(DW_ℓ) ∼ N(0, Σ_ℓ)

Since Σ_ℓ is just the covariance of vec(DW_ℓ), it is given simply by Σ_ℓ = F̃_{ℓ,ℓ}. And for i < ℓ, we can see that Ψ_{i,i+1} is given by

Ψ_{i,i+1} = F̃_{i,i+1} F̃_{i+1,i+1}^{-1} = (Ā_{i−1,i} ⊗ G_{i,i+1})(Ā_{i,i}^{-1} ⊗ G_{i+1,i+1}^{-1}) = Ψ^Ā_{i−1,i} ⊗ Ψ^G_{i,i+1}

where Ψ^Ā_{i−1,i} = Ā_{i−1,i} Ā_{i,i}^{-1} and Ψ^G_{i,i+1} = G_{i,i+1} G_{i+1,i+1}^{-1}.

The conditional covariance Σ_{i|i+1} is thus given by

Σ_{i|i+1} = F̃_{i,i} − Ψ_{i,i+1} F̃_{i+1,i+1} Ψ_{i,i+1}^⊤ = Ā_{i−1,i−1} ⊗ G_{i,i} − (Ψ^Ā_{i−1,i} Ā_{i,i} Ψ^Ā_{i−1,i}^⊤) ⊗ (Ψ^G_{i,i+1} G_{i+1,i+1} Ψ^G_{i,i+1}^⊤)

Following the work of Grosse and Salakhutdinov (2015), we use the block generalization of the well-known “Cholesky” decomposition of the precision matrix of DGGMs (Pourahmadi, 1999), which gives

F̂^{-1} = Ξ^⊤ Λ Ξ

where Ξ is the block unit upper bidiagonal matrix whose diagonal blocks are identity matrices and whose (i, i+1)-th block is −Ψ_{i,i+1}, and Λ is the block-diagonal matrix Λ = diag(Σ_{1|2}^{-1}, Σ_{2|3}^{-1}, …, Σ_{ℓ−1|ℓ}^{-1}, Σ_ℓ^{-1}).
Thus, matrix-vector multiplication with F̂^{-1} amounts to performing matrix-vector multiplication by Ξ, followed by Λ, and then by Ξ^⊤.
As in the block-diagonal case considered in the previous subsection, matrix-vector products with Ξ (and Ξ^⊤) can be efficiently computed using the well-known identity (A ⊗ B)vec(X) = vec(BXA^⊤). In particular, the i-th block of u = Ξv can be computed as

U_i = V_i − Ψ^G_{i,i+1} V_{i+1} Ψ^Ā_{i−1,i}^⊤

and similarly the i-th block of u = Ξ^⊤v can be computed as

U_i = V_i − Ψ^G_{i−1,i}^⊤ V_{i−1} Ψ^Ā_{i−2,i−1}

where the U_i’s and V_i’s are defined in terms of u and v as in the previous subsection.
Multiplying a vector v by Λ amounts to multiplying each vec(V_i) by the corresponding Σ_{i|i+1}^{-1}. This is slightly tricky because Σ_{i|i+1} is the difference of Kronecker products, so we cannot use the straightforward identity (A ⊗ B)^{-1} = A^{-1} ⊗ B^{-1}. Fortunately, there are efficient techniques for inverting such matrices which we discuss in detail in Appendix B.
Figures 5 and 6 examine the quality of the approximations F̆ and F̂ of F̃, which are derived by approximating F̃^{-1} as block-diagonal and block-tridiagonal (resp.), for an example network.
From Figure 5, which compares F̆ and F̂ directly to F̃, we can see that while F̆ and F̂ exactly capture the diagonal and tridiagonal blocks (resp.) of F̃, as they must by definition, F̂ ends up approximating the off-tridiagonal blocks of F̃ very well too. This is likely owed to the fact that the approximating assumption used to derive F̂, that F̃^{-1} is block-tridiagonal, is a very reasonable one in practice (judging by Figure 3).
Figure 6, which compares F̆^{-1} and F̂^{-1} to F̃^{-1}, paints an arguably more interesting and relevant picture, as the quality of the approximation of the natural gradient will be roughly proportional to the quality of approximation of the inverse Fisher. (The error in any approximation of the natural gradient will be roughly proportional to the error in the approximation of the associated inverse Fisher, since the natural gradient is obtained by multiplying the fixed gradient ∇h by the inverse Fisher.) We can see from this figure that due to the approximate block-diagonal structure of F̃^{-1}, F̆^{-1} is actually a reasonably good approximation of F̃^{-1}, despite F̆ being a rather poor approximation of F̃ (based on Figure 5). Meanwhile, we can see that by accounting for the tridiagonal blocks, F̂^{-1} is indeed a significantly better approximation of F̃^{-1} than F̆^{-1} is, even on the diagonal blocks.
Recall that Ā_{i,j} = E[ā_i ā_j^⊤] and G_{i,j} = E[g_i g_j^⊤]. Both approximate Fisher inverses discussed in Section 4 require some subset of these. In particular, the block-diagonal approximation requires them for j = i, while the block-tridiagonal approximation requires them for j ∈ {i, i + 1} (noting that Ā_{j,i} = Ā_{i,j}^⊤ and G_{j,i} = G_{i,j}^⊤).
Since the ā_i’s don’t depend on y, we can take the expectation defining the Ā_{i,j}’s with respect to just the training distribution Q̂_x over the inputs x. On the other hand, the g_i’s do depend on y, and so the expectation defining the G_{i,j}’s must be taken with respect to both Q̂_x and the network’s predictive distribution P_{y|x}. (It is important to note this expectation should not be taken with respect to the training/data distribution over y. Using the training/data distribution for y would perhaps give an approximation to a quantity known as the “empirical Fisher information matrix”, which lacks the previously discussed equivalence to the Generalized Gauss-Newton matrix, and would not be compatible with the theoretical analysis performed in Section 3.1 (in particular, Lemma 4 would break down). Moreover, such a choice would not give rise to what is usually thought of as the natural gradient, and based on the findings of Martens (2010), would likely perform worse in practice as part of an optimization algorithm. See Martens (2014) for a more detailed discussion of the empirical Fisher and reasons why it may be a poor choice for a curvature matrix compared to the standard Fisher.)
While computing matrix-vector products with the G_{i,j}’s could be done exactly and efficiently for a given input x (or small mini-batch of x’s) by adapting the methods of Schraudolph (2002), there doesn’t seem to be a sufficiently efficient method for computing the entire matrices themselves. Indeed, the hardness results of Martens et al. (2012) suggest that this would require, for each example in the mini-batch, work that is asymptotically equivalent to matrix-matrix multiplication involving matrices the same size as the G_{i,j}’s. While a small constant number of such multiplications is arguably an acceptable cost (see Section 8), a number which grows with the size of the mini-batch would not be.
Instead, we will approximate the expectation over $y$ by a standard Monte-Carlo estimate obtained by sampling $y$’s from the network’s predictive distribution $P_{y|x}$ and then rerunning the backwards phase of backpropagation (see Algorithm 1) as if these were the training targets.
Note that computing/estimating the required $\bar{A}_{i,j}$’s/$G_{i,j}$’s involves computing averages over outer products of various $\bar{a}_i$’s from the network’s usual forward pass, and $g_i$’s from the modified backwards pass (with targets sampled as above). Thus we can compute/estimate these quantities on the same input data used to compute the gradient $\nabla h$, at the cost of one or more additional backwards passes, and a few additional outer-product averages. Fortunately, this turns out to be quite inexpensive: we have found that just one modified backwards pass is sufficient to obtain a good quality estimate in practice, and the required outer-product averages are similar to those already used to compute the gradient in the usual backpropagation algorithm.
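As an illustration, the per-layer factor estimates are just batch averages of outer products. The sketch below is a minimal NumPy version, where the array names and shapes are our own illustrative choices (and the sampled-target backwards pass that produces `g` is assumed to happen elsewhere):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_factors(abar, g):
    """Estimate the Kronecker factors for one layer from a mini-batch.

    abar: (m, d_in + 1) homogeneous input activations (bias 1 appended),
          from the network's usual forward pass.
    g:    (m, d_out) pre-activation derivatives from a backwards pass run
          with targets sampled from the model's predictive distribution
          (not the training targets).
    Returns the mini-batch averages of the outer products.
    """
    m = abar.shape[0]
    A = abar.T @ abar / m   # average of abar_i abar_i^T over the batch
    G = g.T @ g / m         # average of g_i g_i^T over the batch
    return A, G

# toy mini-batch with made-up dimensions
abar = rng.standard_normal((64, 5))
g = rng.standard_normal((64, 3))
A, G = estimate_factors(abar, g)
```

Both factors are symmetric positive semi-definite by construction, and forming them costs only two small matrix products per layer, comparable to the work already done inside backpropagation.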
In the case of online/stochastic optimization we have found that the best strategy is to maintain running estimates of the required $\bar{A}_{i,j}$’s and $G_{i,j}$’s using a simple exponentially decaying averaging scheme. In particular, we take the new running estimate to be the old one weighted by $\epsilon$, plus the estimate computed on the new mini-batch weighted by $1 - \epsilon$, for some $0 \le \epsilon < 1$. In our experiments we used $\epsilon = \min\{1 - 1/k,\, 0.95\}$, where $k$ is the iteration number.
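The running-average scheme described above can be sketched as follows (a minimal illustration; the class name and interface are ours):

```python
class FactorEMA:
    """Exponentially decaying running average of a curvature-factor
    estimate, using the schedule epsilon = min(1 - 1/k, 0.95)."""

    def __init__(self):
        self.value = None
        self.k = 0

    def update(self, new_estimate):
        self.k += 1
        eps = min(1.0 - 1.0 / self.k, 0.95)
        if self.value is None:
            self.value = new_estimate       # k = 1: eps = 0, keep the new estimate
        else:
            self.value = eps * self.value + (1.0 - eps) * new_estimate
        return self.value
```

At $k = 1$ the schedule gives $\epsilon = 0$, so the first mini-batch estimate is used as-is; as $k$ grows, $\epsilon$ approaches its cap and older estimates decay geometrically, which is what keeps stale estimates from dominating.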
Note that the more naive averaging scheme where the estimates from each iteration are given equal weight would be inappropriate here. This is because the $\bar{A}_{i,j}$’s and $G_{i,j}$’s depend on the network’s parameters $\theta$, and these will slowly change over time as optimization proceeds, so that estimates computed many iterations ago will become stale.
This kind of exponentially decaying averaging scheme is commonly used in methods involving diagonal or block-diagonal approximations (with much smaller blocks than ours) to the curvature matrix (e.g. LeCun et al., 1998; Park et al., 2000; Schaul et al., 2013). Such schemes have the desirable property that they allow the curvature estimate to depend on much more data than can be reasonably processed in a single mini-batch.
Notably, for methods like HF which deal with the exact Fisher indirectly via matrix-vector products, such a scheme would be impossible to implement efficiently, as the exact Fisher matrix (or GGN) seemingly cannot be summarized using a compact data structure whose size is independent of the amount of data used to estimate it. Indeed, it seems that the only representation of the exact Fisher which would be independent of the amount of data used to estimate it would be an explicit matrix (which is far too big to be practical). Because of this, HF and related methods must base their curvature estimates only on subsets of data that can be reasonably processed all at once, which limits their effectiveness in the stochastic optimization regime.
The idealized natural gradient approach is to follow the smooth path⁴ in the Riemannian manifold (implied by the Fisher information matrix viewed as a metric tensor) that is generated by taking a series of infinitesimally small steps (in the original parameter space) in the direction of the natural gradient (which gets recomputed at each point). While this is clearly impractical as a real optimization method, one can take larger steps and still follow these paths approximately. But in our experience, to obtain an update which satisfies the minimal requirement of not worsening the objective function value, it is often the case that one must make the step size so small that the resulting optimization algorithm performs poorly in practice.

⁴Which has the interpretation of being a geodesic in the Riemannian manifold from the current predictive distribution towards the training distribution, when using a likelihood or KL-divergence based objective function (see Martens (2014)).
The reason that the natural gradient can only be reliably followed a short distance is that it is defined merely as an optimal direction (which trades off improvement in the objective versus change in the predictive distribution), and not a discrete update.
Fortunately, as observed by Martens (2014), the natural gradient can be understood using a more traditional optimization-theoretic perspective which implies how it can be used to generate updates that will be useful over larger distances. In particular, when $R_{y|z}$ is an exponential family model with $z$ as its natural parameters (as it will be in our experiments), Martens (2014) showed that the Fisher $F$ becomes equivalent to the Generalized Gauss-Newton matrix (GGN), which is a positive semi-definite approximation of the Hessian of $h$. Additionally, there is the well-known fact that when the loss is the negative log-likelihood function associated with a given $(x, y)$ pair (as we are assuming in this work), the Hessian $H$ of $h$ and the Fisher $F$ are closely related, in the sense that $H$ is the expected Hessian of the loss under the training distribution $\hat{Q}_{x,y}$, while $F$ is the expected Hessian of the loss under the model’s distribution $P_{x,y}$ (defined by the density $p(x, y) = p(y|x)\hat{q}(x)$).
From these observations it follows that

$M(\delta) = \frac{1}{2}\delta^\top F \delta + \nabla h^\top \delta + h(\theta)$   (5)

can be viewed as a convex approximation of the 2nd-order Taylor series expansion of $h(\theta + \delta)$, whose minimizer $\delta^*$ is the (negative) natural gradient $-F^{-1}\nabla h$. Note that if we add an $\ell_2$ or “weight-decay” regularization term to $h$ of the form $\frac{\eta}{2}\|\theta\|_2^2$, then similarly $F + \eta I$ can be viewed as an approximation of the Hessian of $h$, and replacing $F$ with $F + \eta I$ in $M(\delta)$ yields an approximation of the 2nd-order Taylor series, whose minimizer is a kind of “regularized” (negative) natural gradient $-(F + \eta I)^{-1}\nabla h$, which is what we end up using in practice.
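A minimal numerical sketch of this regularized natural gradient, for a small explicit Fisher matrix (purely illustrative; in practice $F$ is far too large to form explicitly, which is the whole point of the Kronecker-factored approximation):

```python
import numpy as np

def regularized_natural_gradient(F, grad, eta):
    """Return the minimizer of the damped quadratic model
    M(delta) = 0.5 * delta^T (F + eta*I) delta + grad^T delta + h(theta),
    namely delta* = -(F + eta*I)^{-1} grad."""
    n = F.shape[0]
    return -np.linalg.solve(F + eta * np.eye(n), grad)
```

At the returned point the gradient of the quadratic model, $(F + \eta I)\delta + \nabla h$, vanishes, which is easy to check numerically.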
From the interpretation of the natural gradient as the minimizer of $M(\delta)$, we can see that it fails to be useful as a local update only insofar as $M(\delta)$ fails to be a good local approximation to $h(\theta + \delta)$. And so as argued by Martens (2014), it is natural to make use of the various “damping” techniques that have been developed in the optimization literature for dealing with the breakdowns in local quadratic approximations that inevitably occur during optimization. Notably, this breakdown usually won’t occur in the final “local convergence” stage of optimization where the function becomes well approximated as a convex quadratic within a sufficiently large neighborhood of the local optimum. This is the phase traditionally analyzed in most theoretical results, and while it is important that an optimizer be able to converge well in this final phase, it is arguably much more important from a practical standpoint that it behaves sensibly before this phase.
This initial “exploration phase” (Darken and Moody, 1990) is where damping techniques help in ways that are not apparent from the asymptotic convergence theorems alone, which is not to say there are not strong mathematical arguments that support their use (see Nocedal and Wright, 2006). In particular, in the exploration phase it will often still be true that $h$ is accurately approximated by a convex quadratic locally within some region around $\theta$, and that therefore optimization can be most efficiently performed by minimizing a sequence of such convex quadratic approximations within adaptively sized local regions.
Note that well designed damping techniques, such as the ones we will employ, automatically adapt to the local properties of the function, and effectively “turn themselves off” when the quadratic model becomes a sufficiently accurate local approximation of $h$, allowing the optimizer to achieve the desired asymptotic convergence behavior (Moré, 1978).
Successful and theoretically well-founded damping techniques include Tikhonov damping (aka Tikhonov regularization, which is closely connected to the trust-region method) with Levenberg-Marquardt style adaptation (Moré, 1978), line-searches, trust regions, truncation, etc., all of which tend to be much more effective in practice than merely applying a learning rate to the update, or adding a fixed multiple of the identity to the curvature matrix. Indeed, a subset of these techniques was exploited in the work of Martens (2010), and primitive versions of them have appeared implicitly in older works such as Becker and LeCun (1989), and also in many recent diagonal methods like that of Zeiler (2013), although often without a good understanding of what they are doing and why they help.
Crucially, more powerful 2nd-order optimizers like HF and K-FAC, which have the capability of taking much larger steps than 1st-order methods (or methods which use diagonal curvature matrices), require more sophisticated damping solutions to work well, and will usually completely fail without them, which is consistent with predictions made in various theoretical analyses (e.g. Nocedal and Wright, 2006). As an analogy one can think of such powerful 2nd-order optimizers as extremely fast racing cars that need more sophisticated control systems than standard cars to prevent them from flying off the road. Arguably one of the reasons why high-powered 2nd-order optimization methods have historically tended to under-perform in machine learning applications, and in neural network training in particular, is that their designers did not understand or take seriously the issue of quadratic model approximation quality, and did not employ the more sophisticated and effective damping techniques that are available to deal with this issue.
For a detailed review and discussion of various damping techniques and their crucial role in practical 2nd-order optimization methods, we refer the reader to Martens and Sutskever (2012).
Methods like HF which use the exact Fisher seem to work reasonably well with an adaptive Tikhonov regularization technique where $\lambda I$ is added to $F + \eta I$, and where $\lambda$ is adapted according to a Levenberg-Marquardt style adjustment rule. This common and well-studied method can be shown to be equivalent to imposing an adaptive spherical region (known as a “trust region”) which constrains the optimization of the quadratic model (e.g. Nocedal and Wright, 2006). However, we found that this simple technique is insufficient when used with our approximate natural gradient update proposals. In particular, we have found that there never seems to be a “good” choice for $\lambda$ that gives rise to updates which are of a quality comparable to those produced by methods that use the exact Fisher, such as HF.
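For concreteness, a generic Levenberg-Marquardt style adjustment step looks like the following sketch; the thresholds and the factor $\omega$ are illustrative textbook values, not necessarily the ones used in this work:

```python
def adjust_lambda(lam, rho, omega=1.5):
    """One generic Levenberg-Marquardt style adjustment of the damping
    strength lambda. rho is the 'reduction ratio': the actual decrease
    in the objective divided by the decrease predicted by the quadratic
    model for the step just taken."""
    if rho > 0.75:          # model was trustworthy: damp less
        return lam / omega
    if rho < 0.25:          # model over-promised: damp more
        return lam * omega
    return lam              # model was adequate: leave lambda alone
```

The effect is that $\lambda$ shrinks when the quadratic model predicts the objective well (letting the optimizer take larger, more aggressive steps) and grows when it does not.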
One possible explanation for this finding is that, unlike quadratic models based on the exact Fisher (or equivalently, the GGN), the one underlying K-FAC has no guarantee of being accurate up to 2nd-order. Thus, $\lambda$ must remain large in order to compensate for this intrinsic 2nd-order inaccuracy of the model, which has the side effect of “washing out” the small eigenvalues (which represent important low-curvature directions).
Fortunately, through trial and error, we were able to find a relatively simple and highly effective damping scheme, which combines several different techniques, and which works well within K-FAC. Our scheme works by computing an initial update proposal using a version of the above described adaptive Tikhonov damping/regularization method, and then re-scaling it according to the quadratic model computed using the exact Fisher. This second step is made practical by the fact that it only requires a single matrix-vector product with the exact Fisher, and this can be computed efficiently using standard methods. We discuss the details of this scheme in the following subsections.
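One way such a re-scaling step can be realized is to minimize the exact-Fisher quadratic model along the candidate direction, which needs only a single Fisher-vector product. The sketch below assumes a `fisher_vec_prod` routine is available (e.g. built by adapting the methods of Schraudolph (2002)); the closed-form scaling is the 1-D minimizer of the damped quadratic model:

```python
import numpy as np

def rescale_update(delta, grad, fisher_vec_prod, lam, eta):
    """Re-scale a candidate update delta by minimizing the quadratic
    model based on the exact (damped) Fisher along the line alpha*delta:
    alpha* = -grad.delta / (delta.F.delta + (lam+eta)*delta.delta)."""
    Fd = fisher_vec_prod(delta)                      # one Fisher-vector product
    curv = delta @ Fd + (lam + eta) * (delta @ delta)
    alpha = -(grad @ delta) / curv
    return alpha * delta
```

Because the approximate-Fisher proposal already fixes the direction, this step only corrects its length using curvature information from the exact Fisher.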
In the first stage of our damping scheme we generate a candidate update proposal by applying a slightly modified form of Tikhonov damping to our approximate Fisher, before multiplying the gradient by its inverse.
In the usual Tikhonov regularization/damping technique, one adds $(\lambda + \eta) I$ to the curvature matrix (where $\eta$ accounts for the $\ell_2$ regularization), which is equivalent to adding a term of the form $\frac{\lambda + \eta}{2}\|\delta\|_2^2$ to the corresponding quadratic model (given by $M(\delta)$ with $F$ replaced by our approximation). For the block-diagonal approximation of $F$ (from Section 4.2) this amounts to adding $(\lambda + \eta) I$ (for a lower dimensional $I$) to each of the individual diagonal blocks, which gives modified diagonal blocks of the form

$\bar{A}_{i-1,i-1} \otimes G_{i,i} + (\lambda + \eta) I$   (6)
Because this is the sum of two Kronecker products we cannot use the simple identity $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$ anymore. Fortunately however, there are efficient techniques for inverting such matrices, which we discuss in detail in Appendix B.
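One such technique can be sketched as follows: using the eigendecompositions of the two symmetric PSD Kronecker factors, a system involving a Kronecker product plus a multiple of the identity can be solved without ever forming the large matrix. This is a minimal illustration of the idea, not necessarily the exact method of Appendix B:

```python
import numpy as np

def solve_kron_plus_identity(A, G, c, b):
    """Solve (A kron G + c*I) x = b for symmetric PSD A and G.

    With A = Ua diag(sa) Ua^T and G = Ug diag(sg) Ug^T, the big matrix has
    eigenvectors Ua kron Ug and eigenvalues sa_i*sg_j + c, so the solve
    reduces to two small rotations and an elementwise division.
    b has length A.shape[0] * G.shape[0] (row-major vec convention)."""
    sa, Ua = np.linalg.eigh(A)
    sg, Ug = np.linalg.eigh(G)
    B = b.reshape(A.shape[0], G.shape[0])
    W = Ua.T @ B @ Ug                    # rotate into the joint eigenbasis
    W = W / (np.outer(sa, sg) + c)       # divide by eigenvalues sa_i*sg_j + c
    X = Ua @ W @ Ug.T                    # rotate back
    return X.ravel()
```

The cost is dominated by the two small eigendecompositions, rather than anything scaling with the (much larger) Kronecker-product dimension.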
If we try to apply this same Tikhonov technique to our more sophisticated approximation of $F$ (from Section 4.3) by adding $(\lambda + \eta) I$ to each of the diagonal blocks, it is no longer clear how to efficiently invert the resulting matrix. Instead, a solution which we have found works very well in practice (and which we also use for the block-diagonal approximation), is to add $\pi_i \sqrt{\lambda + \eta}\, I$ and $\frac{1}{\pi_i}\sqrt{\lambda + \eta}\, I$ for a scalar constant $\pi_i$ to the individual Kronecker factors $\bar{A}_{i-1,i-1}$ and $G_{i,i}$ (resp.) of each diagonal block, giving

$\left(\bar{A}_{i-1,i-1} + \pi_i \sqrt{\lambda + \eta}\, I\right) \otimes \left(G_{i,i} + \tfrac{1}{\pi_i}\sqrt{\lambda + \eta}\, I\right)$   (7)
As this is a single Kronecker product, all of the computations described in Sections 4.2 and 4.3 can still be used here too, simply by replacing each $\bar{A}_{i-1,i-1}$ and $G_{i,i}$ with their modified versions $\bar{A}_{i-1,i-1} + \pi_i \sqrt{\lambda + \eta}\, I$ and $G_{i,i} + \frac{1}{\pi_i}\sqrt{\lambda + \eta}\, I$.
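A minimal sketch of applying the inverse of such a factored-damped block to a vector, using the Kronecker inverse identity (the function name and interface are our own illustrative choices):

```python
import numpy as np

def damped_block_solve(A, G, lam_plus_eta, v, pi=1.0):
    """Apply the inverse of the factored-damped block
    (A + pi*sqrt(lam+eta)*I) kron (G + sqrt(lam+eta)/pi * I)
    to a vector v, via (P kron Q)^{-1} = P^{-1} kron Q^{-1}."""
    r = np.sqrt(lam_plus_eta)
    P = A + pi * r * np.eye(A.shape[0])
    Q = G + (r / pi) * np.eye(G.shape[0])
    V = v.reshape(A.shape[0], G.shape[0])
    # (P kron Q)^{-1} vec(V) = vec(P^{-1} V Q^{-1})  (Q symmetric here)
    X = np.linalg.solve(P, V)
    X = np.linalg.solve(Q, X.T).T
    return X.ravel()
```

Only two small linear solves are needed, so the damped inverse is no more expensive than the undamped one.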
To see why the expression in eqn. 7 is a reasonable approximation to eqn. 6, note that expanding it gives

$\bar{A}_{i-1,i-1} \otimes G_{i,i} + \pi_i \sqrt{\lambda + \eta}\, I \otimes G_{i,i} + \tfrac{1}{\pi_i}\sqrt{\lambda + \eta}\, \bar{A}_{i-1,i-1} \otimes I + (\lambda + \eta) I$

which differs from eqn. 6 by the residual error expression

$\pi_i \sqrt{\lambda + \eta}\, I \otimes G_{i,i} + \tfrac{1}{\pi_i}\sqrt{\lambda + \eta}\, \bar{A}_{i-1,i-1} \otimes I$
While the choice of $\pi_i = 1$ is simple and can sometimes work well in practice, a slightly more principled choice can be found by minimizing the obvious upper bound (following from the triangle inequality) on the matrix norm of this residual expression, for some matrix norm $\|\cdot\|$. This gives

$\pi_i = \sqrt{\dfrac{\|\bar{A}_{i-1,i-1} \otimes I\|}{\|I \otimes G_{i,i}\|}}$
Evaluating this expression can be done efficiently for various common choices of the matrix norm $\|\cdot\|$. For example, for a general matrix $B$ we have $\|I \otimes B\|_F = \sqrt{d}\,\|B\|_F$, where $d$ is the height/dimension of $I$, and also $\|I \otimes B\|_2 = \|B\|_2$.
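For example, assuming the trace norm is used (for a PSD matrix, $\|A \otimes I\|_{\mathrm{tr}} = \mathrm{tr}(A) \cdot \dim(I)$ and $\|I \otimes G\|_{\mathrm{tr}} = \dim(I) \cdot \mathrm{tr}(G)$), the ratio reduces to a ratio of average eigenvalues and can be computed in time linear in the factor dimensions:

```python
import numpy as np

def pi_trace(A, G):
    """pi_i = sqrt(||A kron I|| / ||I kron G||) under the trace norm of
    PSD matrices: ||A kron I|| = tr(A)*dim(G), ||I kron G|| = dim(A)*tr(G),
    so pi_i reduces to the square root of the ratio of the average
    eigenvalues of A and G."""
    avg_eig_A = np.trace(A) / A.shape[0]
    avg_eig_G = np.trace(G) / G.shape[0]
    return np.sqrt(avg_eig_A / avg_eig_G)
```

Intuitively, $\pi_i$ balances the overall scales of the two factors so that neither damping term $\pi_i \sqrt{\lambda + \eta}\, I \otimes G_{i,i}$ nor $\frac{1}{\pi_i}\sqrt{\lambda + \eta}\, \bar{A}_{i-1,i-1} \otimes I$ dominates the residual.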
In our experience, one of the best and most robust choices for the norm $\|\cdot\|$ is the trace norm.