1 Introduction
The Cholesky decomposition $L = \operatorname{chol}(\Sigma)$ of a symmetric positive definite matrix $\Sigma$ is the unique lower-triangular matrix with positive diagonal elements satisfying $\Sigma = LL^{\top}$. Alternatively, some library routines compute the upper-triangular decomposition $R = \operatorname{chol}(\Sigma)$, satisfying $\Sigma = R^{\top}R$. This note compares ways to differentiate the function $L = \operatorname{chol}(\Sigma)$, and larger expressions containing the Cholesky decomposition (Section 2). We consider compact symbolic results (Section 3) and longer algorithms (Section 4).
Existing computer code that differentiates expressions containing Cholesky decompositions often uses an algorithmic approach proposed by Smith (1995). This approach results from manually applying the ideas behind ‘automatic differentiation’ (e.g. Baydin et al., 2015) to a numerical algorithm for the Cholesky decomposition. Experiments by Walter (2011) suggested that — despite conventional wisdom — computing symbolically-derived results is actually faster. However, these experiments were based on differentiating slow algorithms for the Cholesky decomposition. In this note we introduce ‘blocked’ algorithms for propagating Cholesky derivatives (Section 4), which use cache-friendly and easy-to-parallelize matrix-matrix operations. In our implementations (Appendix A), these are faster than all previously-proposed methods.
2 Computational setup and tasks
This section can be safely skipped by readers familiar with “automatic differentiation”, the $\dot{\Sigma}$ notation for “forward-mode sensitivities”, and the $\bar{\Sigma}$ notation for “reverse-mode sensitivities” (e.g. Giles, 2008).
We consider a sequence of computations,

$$x \;\rightarrow\; \Sigma \;\rightarrow\; L = \operatorname{chol}(\Sigma) \;\rightarrow\; f(L),$$ (1)

that starts with an input $x$, computes an intermediate symmetric positive-definite matrix $\Sigma$, its lower-triangular Cholesky decomposition $L$, and then a final result $f(L)$. Derivatives of the overall computation, $\partial f/\partial x$, can be decomposed into reusable parts with the chain rule. However, there are multiple ways to proceed, some much better than others.
Matrix chain rule: It’s tempting to simply write down the chain rule for the overall procedure:

$$\frac{\partial f}{\partial x} \;=\; \sum_{i\ge j}\, \sum_{k\ge l}\, \frac{\partial f}{\partial L_{ij}}\, \frac{\partial L_{ij}}{\partial \Sigma_{kl}}\, \frac{\partial \Sigma_{kl}}{\partial x},$$ (2)

where we only sum over the independent elements of the symmetric matrix $\Sigma$ and the occupied lower-triangle of $L$. We can also rewrite the same chain rule in matrix form,

$$\frac{\partial f}{\partial x} \;=\; \frac{\partial f}{\partial \operatorname{vech}(L)}\, \frac{\partial \operatorname{vech}(L)}{\partial \operatorname{vech}(\Sigma)}\, \frac{\partial \operatorname{vech}(\Sigma)}{\partial x},$$ (3)

where the $\operatorname{vech}$ operator creates a vector by stacking the lower-triangular columns of a matrix. A derivative $\partial y/\partial z$ is a matrix or vector, with a row for each element of $y$ and a column for each element of $z$, giving a row vector if $y$ is a scalar, and a column vector if $z$ is a scalar. The set of all partial derivatives $\partial L_{ij}/\partial \Sigma_{kl}$, or equivalently the matrix $\partial \operatorname{vech}(L)/\partial \operatorname{vech}(\Sigma)$, contains $O(N^4)$ values for the Cholesky decomposition of an $N\times N$ matrix. Explicitly computing each of the terms in equations (2) or (3) is inefficient, and simply not practical for large matrices.
We give expressions for these derivatives at the end of Section 3 for completeness, and because they might be useful for analytical study. However, the computational primitives we really need are methods to accumulate the terms in the chain rule moving left (forwards) or right (backwards), without creating enormous matrices. We outline these processes now, adopting the ‘automatic differentiation’ notation used by Giles (2008) and others.
Forwards-mode accumulation: We start by computing a matrix of sensitivities for the first stage of the computation, $\dot{\Sigma}$, with elements $\dot{\Sigma}_{ij} = \partial \Sigma_{ij}/\partial x$. If we applied an infinitesimal perturbation $\mathrm{d}x$ to the input $x$, the intermediate matrix $\Sigma$ would be perturbed by $\mathrm{d}\Sigma = \dot{\Sigma}\,\mathrm{d}x$. This change would in turn perturb the output of the Cholesky decomposition by $\mathrm{d}L = \dot{L}\,\mathrm{d}x$, where $\dot{L}_{ij} = \partial L_{ij}/\partial x$. We would like to compute the sensitivities of the Cholesky decomposition, $\dot{L}$, from the sensitivities of the input matrix $\dot{\Sigma}$ and other ‘local’ quantities ($\Sigma$ and/or $L$), without needing to consider where these came from. Finally, we would compute the required result $\dot{f} = \partial f/\partial x$ from $\dot{L}$ and $L$, again without reference to downstream computations (the Cholesky decomposition).
The forwards-mode algorithms in this note describe how to compute the reusable function $\dot{L}(\dot{\Sigma}, L)$, which propagates the effect of a perturbation forwards through the Cholesky decomposition. The computational cost will have the same scaling with matrix size as the Cholesky decomposition. However, if we want the derivatives with respect to $D$ different inputs to the computation, we must perform the whole forwards propagation $D$ times, each time accumulating sensitivities with respect to a different input $x_d$.
Reverse-mode accumulation: We can instead accumulate derivatives by starting at the other end of the computation sequence (1). The effect of perturbing the final stage of the computation is summarized by a matrix $\bar{L}$ with elements $\bar{L}_{ij} = \partial f/\partial L_{ij}$. We need to ‘back-propagate’ this summary to compute the sensitivity of the output with respect to the downstream matrix, $\bar{\Sigma}$, with elements $\bar{\Sigma}_{ij} = \partial f/\partial \Sigma_{ij}$. In turn, this signal is back-propagated to compute $\bar{x} = \partial f/\partial x$, the target of our computation, equal to $\dot{f}$ in the forwards propagation above.
The reverse-mode algorithms in this note describe how to construct the reusable function $\bar{\Sigma}(\bar{L}, L)$, which propagates the effect of a perturbation in the Cholesky decomposition backwards, to compute the effect of perturbing the original positive definite matrix. Like forwards-mode propagation, the computational cost has the same scaling with matrix size as the Cholesky decomposition. Reverse-mode differentiation or ‘back-propagation’ has the advantage that $\bar{\Sigma}$ can be reused to compute derivatives with respect to multiple inputs. Indeed if the input to the sequence of computations (1) is a $D$-dimensional vector $x$, the cost to obtain all $D$ partial derivatives $\partial f/\partial x_d$ scales the same as a single forwards computation of $f$. For $D$-dimensional inputs, reverse-mode differentiation scales a factor of $D$ times better than forwards-mode.
Reverse-mode computations can have greater memory requirements than forwards mode, and are less appealing than forwards-mode if there are more outputs of the computation than inputs.
3 Symbolic differentiation
It is not immediately obvious whether a small, neat symbolic form should exist for the derivatives of some function of a matrix, or whether the forward- and reverse-mode updates are simple to express. For the Cholesky decomposition, the literature primarily advises using algorithmic update rules, derived from the algorithms for numerically evaluating the original function (Smith, 1995; Giles, 2008). However, there are also fairly small algebraic expressions for the derivatives of the Cholesky decomposition, and for forwards- and reverse-mode updates.
Forwards-mode: Särkkä (2013) provides a short derivation of a forwards propagation rule (his Theorem A.1), which we adapt to the notation used here.
An infinitesimal perturbation to the expression $\Sigma = LL^{\top}$ gives:

$$\mathrm{d}\Sigma = \mathrm{d}L\, L^{\top} + L\, \mathrm{d}L^{\top}.$$ (4)

We wish to re-arrange to get an expression for $\mathrm{d}L$. The trick is to left-multiply by $L^{-1}$ and right-multiply by $L^{-\top}$:

$$L^{-1}\, \mathrm{d}\Sigma\, L^{-\top} = L^{-1}\, \mathrm{d}L + \mathrm{d}L^{\top}\, L^{-\top}.$$ (5)

The first term on the right-hand side, $L^{-1}\,\mathrm{d}L$, is now lower-triangular. The second term is the transpose of the first, meaning it is upper-triangular and has the same diagonal. We can therefore remove the second term by applying a function $\Phi$ to both sides, where $\Phi$ takes the lower-triangular part of a matrix and halves its diagonal:

$$\Phi\!\left(L^{-1}\, \mathrm{d}\Sigma\, L^{-\top}\right) = L^{-1}\, \mathrm{d}L, \qquad \text{where}\quad \Phi(A)_{ij} = \begin{cases} A_{ij} & i > j, \\ \tfrac{1}{2}A_{ii} & i = j, \\ 0 & i < j. \end{cases}$$ (6)

Multiplying both sides by $L$ gives us the perturbation of the Cholesky decomposition:

$$\mathrm{d}L = L\, \Phi\!\left(L^{-1}\, \mathrm{d}\Sigma\, L^{-\top}\right).$$ (7)
Substituting the forward-mode sensitivity relationships $\mathrm{d}\Sigma = \dot{\Sigma}\,\mathrm{d}x$ and $\mathrm{d}L = \dot{L}\,\mathrm{d}x$ (Section 2) immediately gives a forwards-mode update rule, which is easy to implement:

$$\dot{L} = L\, \Phi\!\left(L^{-1}\, \dot{\Sigma}\, L^{-\top}\right).$$ (8)
The input perturbation must be a symmetric matrix, $\dot{\Sigma} = \dot{\Sigma}^{\top}$, because $\Sigma$ is assumed to be symmetric for all inputs $x$.
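As a concrete illustration, the forwards-mode update (8) is only a few lines of NumPy. This is a sketch for clarity rather than a tuned implementation: the function names are our own, and the explicit matrix inverse should be replaced by triangular solves in production code.

```python
import numpy as np

def Phi(A):
    """Take the lower-triangular part of A and halve its diagonal."""
    return np.tril(A) - 0.5 * np.diag(np.diag(A))

def chol_symbolic_fwd(L, Sigma_dot):
    """Forward-mode update (8): Ldot = L Phi(L^{-1} Sigma_dot L^{-T})."""
    invL = np.linalg.inv(L)  # illustration only; use triangular solves in real code
    return L @ Phi(invL @ Sigma_dot @ invL.T)
```

The result can be checked against finite differences of `np.linalg.cholesky` for a symmetric perturbation of a positive-definite matrix.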
Reverse-mode: We can also obtain a neat symbolic expression for the reverse-mode updates. We substitute (7) into $\operatorname{Tr}\!\left(\bar{L}^{\top}\,\mathrm{d}L\right)$, and with a few lines of manipulation, rearrange it into the form $\operatorname{Tr}\!\left(\bar{\Sigma}^{\top}\,\mathrm{d}\Sigma\right)$. Brewer (1977)’s Theorem 1 then implies that for a symmetric matrix $\Sigma$, the symmetric matrix containing reverse mode sensitivities will be:

$$\bar{\Sigma} = \tfrac{1}{2}\, L^{-\top}\!\left(P + P^{\top}\right) L^{-1}, \qquad P = \Phi\!\left(L^{\top}\bar{L}\right),$$ (9)

where the function $\Phi$ is still as defined in (6).

Alternatively, a lower-triangular matrix containing the independent elements of $\bar{\Sigma}$ can be constructed as:

$$\bar{\Sigma}' = \Phi\!\left( L^{-\top}\!\left(P + P^{\top}\right) L^{-1} \right).$$ (10)
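In code, the symmetric reverse-mode expression is equally short. Again this is a sketch with our own function names, using an explicit inverse where a production implementation would use triangular solves:

```python
import numpy as np

def Phi(A):
    """Lower-triangular part of A with the diagonal halved."""
    return np.tril(A) - 0.5 * np.diag(np.diag(A))

def chol_symbolic_rev(L, Lbar):
    """Map the sensitivities Lbar of the Cholesky factor to the
    symmetric sensitivities Sigma_bar of the input matrix."""
    P = Phi(L.T @ Lbar)
    invL = np.linalg.inv(L)  # illustration only; use triangular solves in real code
    return 0.5 * invL.T @ (P + P.T) @ invL
```

A quick consistency check: for any symmetric $\dot{\Sigma}$ and lower-triangular $\bar{L}$, the inner product $\operatorname{Tr}(\bar{\Sigma}^{\top}\dot{\Sigma})$ must equal $\operatorname{Tr}(\bar{L}^{\top}\dot{L})$ with $\dot{L}$ from the forwards-mode rule (8).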
Since first writing this section we have discovered two similar reverse-mode expressions (Walter, 2011; Koerber, 2015). It seems likely that other authors have also independently derived equivalent results, although these update rules do not appear to have seen widespread use.
Matrix of derivatives: By choosing the input of interest to be $x = \Sigma_{kl}$, and fixing the other elements of $\Sigma$, the sensitivity $\dot{\Sigma}$ becomes a matrix of zeros, except for ones at positions $(k,l)$ and $(l,k)$. Substituting into (8) gives an expression for all of the partial derivatives $\partial L_{ij}/\partial \Sigma_{kl}$ of the Cholesky decomposition with respect to any chosen element of the covariance matrix. Some further manipulation, expanding matrix products as sums over indices, gives an explicit expression for any element (writing $B = L^{-1}$, with $i \ge j$ and $k \ge l$),

$$\frac{\partial L_{ij}}{\partial \Sigma_{kl}} \;=\; \frac{1}{1+\delta_{kl}} \left( B_{jl}\sum_{a=j}^{i} L_{ia}B_{ak} \;+\; B_{jk}\sum_{a=j}^{i} L_{ia}B_{al} \;-\; L_{ij}B_{jk}B_{jl} \right).$$ (11)

If we compute every element, each one can be evaluated in constant time by keeping running totals of the sums in (11) as we decrement $j$ from $i$ to $1$. Explicitly computing every partial derivative therefore costs $O(N^4)$.
These derivatives can be arranged into a matrix, by ‘vectorizing’ the expression (Magnus and Neudecker, 2007; Minka, 2000; Harmeling, 2013). We use a well-known identity involving the $\operatorname{vec}$ operator, which stacks the columns of a matrix into a vector, and the Kronecker product $\otimes$:

$$\operatorname{vec}(ABC) = \left(C^{\top} \otimes A\right)\operatorname{vec}(B).$$ (12)
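The identity (12) is easy to verify numerically. Note that $\operatorname{vec}$ stacks columns, which is Fortran order in NumPy:

```python
import numpy as np

np.random.seed(0)
A, B, C = np.random.randn(3, 4), np.random.randn(4, 2), np.random.randn(2, 5)

vec = lambda M: M.flatten(order='F')  # stack the columns of a matrix
lhs = vec(A @ B @ C)
rhs = np.kron(C.T, A) @ vec(B)
assert np.allclose(lhs, rhs)
```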
Applying this identity to (7) yields:

$$\operatorname{vec}(\mathrm{d}L) = \left(I \otimes L\right)\operatorname{vec}\!\left( \Phi\!\left(L^{-1}\,\mathrm{d}\Sigma\,L^{-\top}\right) \right).$$ (13)

We can remove the function $\Phi$ by introducing a diagonal matrix $\tilde{\Phi}$, defined such that $\operatorname{vec}(\Phi(A)) = \tilde{\Phi}\operatorname{vec}(A)$ for any matrix $A$. Applying (12) again gives:

$$\operatorname{vec}(\mathrm{d}L) = \left(I \otimes L\right)\tilde{\Phi}\left(L^{-1} \otimes L^{-1}\right)\operatorname{vec}(\mathrm{d}\Sigma).$$ (14)

Using the standard elimination matrix $\mathcal{L}_N$, such that $\operatorname{vech}(A) = \mathcal{L}_N \operatorname{vec}(A)$, and duplication matrix $\mathcal{D}_N$, such that $\operatorname{vec}(\Sigma) = \mathcal{D}_N \operatorname{vech}(\Sigma)$ for symmetric $\Sigma$ (Magnus and Neudecker, 1980), we can convert between the $\operatorname{vec}$ and $\operatorname{vech}$ of a matrix, where $\operatorname{vech}(A)$ is a vector made by stacking the columns of the lower triangle of $A$. The matrix of derivatives in (3) is then:

$$\frac{\partial \operatorname{vech}(L)}{\partial \operatorname{vech}(\Sigma)} = \mathcal{L}_N \left(I \otimes L\right)\tilde{\Phi}\left(L^{-1} \otimes L^{-1}\right)\mathcal{D}_N.$$ (15)
This compact-looking result was stated on MathOverflow (comment from 2014-09-01 by pseudonymous user ‘pete’: http://mathoverflow.net/questions/150427/the-derivative-of-the-cholesky-factor#comment450752_167719). It may be useful for further analytical study, but doesn’t immediately help with scalable computation.
4 Differentiating Cholesky algorithms
We have seen that it is inefficient to compute each term in the chain rule, (2) or (3), applied to a high-level matrix computation. For Cholesky derivatives the cost is $O(N^4)$, compared to $O(N^3)$ for the forward- or reverse-mode updates in (8), (9), or (10). However, evaluating the terms of the chain rule applied to any low-level computation — expressed as a series of elementary scalar operations — gives derivatives with the same computational complexity as the original function (e.g. Baydin et al., 2015). Therefore algorithms for the dense Cholesky decomposition can be mechanically converted into forward- and reverse-mode update algorithms, a process called ‘automatic differentiation’.
Smith (1995) proposed taking this automatic differentiation approach, although he presented hand-derived propagation algorithms that could be easily implemented in any programming environment. Smith also reported applications to sparse matrices, where automatic differentiation inherits the improved complexity of computing the Cholesky decomposition. However, the algorithms that were considered for dense matrices aren’t cache-friendly or easy to parallelize, and will be slow in practice.
Currently-popular numerical packages such as NumPy, Octave, and R (Oliphant, 2006; Eaton et al., 2009; R Core Team, 2012) compute the Cholesky decomposition using the LAPACK library (Anderson et al., 1999). LAPACK implements block algorithms that express computations as cache-friendly, parallelizable ‘Level 3 BLAS’ matrix-matrix operations that are fast on modern architectures. Dongarra et al. (1990) described the Level 3 BLAS operations, including an example block implementation of a Cholesky decomposition. For large matrices, we have sometimes found LAPACK’s routine to be faster than a C or Fortran implementation of the Cholesky algorithm considered by Smith (1995). Precise timings are machine-dependent; however, it’s clear that any large dense matrix computations, including derivative computations, should be implemented using blocked algorithms where possible. (Historical note: it’s entirely reasonable that Smith (1995) did not use blocked algorithms. Primarily, Smith’s applications used sparse computations. In any case, blocked algorithms weren’t universally adopted until later. For example, Matlab didn’t incorporate LAPACK until 2000, http://www.mathworks.com/company/newsletters/articles/matlab-incorporates-lapack.html.)
Block routines, like those in LAPACK, ultimately come down to elementary scalar operations inside calls to BLAS routines. In principle, automatic differentiation tools could be applied. However, the source code and compilation tools for the optimized BLAS routines for a particular machine are not always available to users. Even if they were, automatic differentiation tools would not necessarily create cache-friendly algorithms. For these reasons Walter (2011) used symbolic approaches (Section 3) to provide update rules based on standard matrix-matrix operations.
An alternative approach is to extend the set of elementary routines understood by an automatic differentiation procedure to the operations supported by BLAS. We could then pass derivatives through the Cholesky routine implemented by LAPACK, treating the best available matrix-matrix routines as black-box functions. Giles (2008) provides an excellent tutorial on deriving forward- and reverse-mode update rules for elementary matrix operations, which we found invaluable for deriving the algorithms that follow. (Ironically, Giles (2008) also considered differentiating the Cholesky decomposition but, like Smith (1995), gave slow scalar-based algorithms.) While his results can largely be found in materials already mentioned (Magnus and Neudecker, 2007; Minka, 2000; Harmeling, 2013), Giles emphasised forwards- and reverse-mode update rules, rather than huge objects like (15).
In the end, we didn’t follow an automatic differentiation procedure exactly. While we derived derivative propagation rules from the structure of the Cholesky algorithms (unlike Section 3), we still symbolically manipulated some of the results to make the updates neater and in-place. In principle, a sophisticated optimizing compiler for automatic differentiation could do the same.
4.1 Level 2 routines
LAPACK also provides ‘unblocked’ routines, which use ‘Level 2’ BLAS operations (Dongarra et al., 1988a, b) like matrix-vector products. Although a step up from scalar-based algorithms, these are intended for small matrices only, and as helpers for ‘Level 3’ blocked routines (Section 4.2).
The LAPACK routine DPOTF2 loops over the columns $j = 1 \ldots N$ of an input matrix $A$, replacing the lower-triangular part in-place with its Cholesky decomposition. At each iteration, the algorithm uses a row vector $r$, a diagonal element $d$, a matrix $B$, and a column vector $c$ as follows:
function level2partition($A$, $j$)
$r = A_{j,\,1:j-1}$  # row to the left of the diagonal element
$d = A_{jj}$  # diagonal element
$B = A_{j+1:N,\,1:j-1}$  # block below the row $r$
$c = A_{j+1:N,\,j}$  # column below the diagonal element
return $r$, $d$, $B$, $c$
Here ‘$=$’ creates a view into the matrix $A$, meaning that in the algorithm below, ‘$c \leftarrow \cdots$’ assigns results into the corresponding part of matrix $A$.
function chol_unblocked($A$)
# If $A = \operatorname{tril}(\Sigma)$ at input, $A = L$ at output, where $\Sigma = LL^{\top}$.
for $j = 1$ to $N$:
$r$, $d$, $B$, $c$ = level2partition($A$, $j$)
$d \leftarrow \sqrt{d - r r^{\top}}$
$c \leftarrow \left(c - B r^{\top}\right)/\,d$
return $A$
The algorithm only inspects and updates the lower-triangular part of the matrix. If the upper-triangular part did not start out filled with zeros, then the user will need to zero out the upper triangle of the final array with the function:

$$\operatorname{tril}(A)_{ij} = \begin{cases} A_{ij} & i \ge j, \\ 0 & i < j. \end{cases}$$ (16)

In each iteration, $r$ and $B$ are parts of the Cholesky decomposition that have already been computed, and $d$ and $c$ are updated in place, from their original settings in $\Sigma$, to give another column of the Cholesky decomposition. The matrix-vector multiplication $Br^{\top}$ is a Level 2 BLAS operation. These multiplications are the main computational cost of this algorithm.
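A direct NumPy transcription of this unblocked routine (with zero-based indexing, and our own function name) might read:

```python
import numpy as np

def chol_unblocked(A):
    """Unblocked Cholesky sketch: if A = tril(Sigma) on entry, the lower
    triangle of the result is L, with Sigma = L @ L.T. Only the lower
    triangle of A is read or written."""
    A = A.copy()
    N = A.shape[0]
    for j in range(N):
        r = A[j, :j]                     # row to the left of the diagonal element
        A[j, j] = np.sqrt(A[j, j] - r @ r)
        # column below the diagonal: c <- (c - B r^T) / d
        A[j+1:, j] = (A[j+1:, j] - A[j+1:, :j] @ r) / A[j, j]
    return A
```

The result can be compared directly against `np.linalg.cholesky`.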
Forwards-mode differentiation:
The in-place updates obscure the relationships between parts of the input matrix and its Cholesky decomposition. We could rewrite the updates more explicitly as

$$d = \left(\Sigma_{jj} - r r^{\top}\right)^{1/2},$$ (17)

$$c = \left(\Sigma_{j+1:N,\,j} - B r^{\top}\right)/\,d.$$ (18)

Applying infinitesimal perturbations to these equations gives

$$\mathrm{d}d = \frac{1}{d}\left(\tfrac{1}{2}\,\mathrm{d}\Sigma_{jj} - r\,\mathrm{d}r^{\top}\right),$$ (19)

$$\mathrm{d}c = \frac{1}{d}\left(\mathrm{d}\Sigma_{j+1:N,\,j} - \mathrm{d}B\, r^{\top} - B\,\mathrm{d}r^{\top} - c\,\mathrm{d}d\right).$$ (20)
We then get update rules for the forward-mode sensitivities by substituting their relationships, $\mathrm{d}\Sigma = \dot{\Sigma}\,\mathrm{d}x$ and $\mathrm{d}L = \dot{L}\,\mathrm{d}x$ (Section 2), into the equations above. Mirroring the original algorithm, we can thus convert $\dot{\Sigma}$ to $\dot{L}$ in-place, with the algorithm below:
function chol_unblocked_fwd($L$, $\dot{A}$)
# If $\dot{A} = \operatorname{tril}(\dot{\Sigma})$ at input, $\dot{A} = \dot{L}$ at output, where $L = \operatorname{chol}(\Sigma)$.
for $j = 1$ to $N$:
$r$, $d$, $B$, $c$ = level2partition($L$, $j$)
$\dot{r}$, $\dot{d}$, $\dot{B}$, $\dot{c}$ = level2partition($\dot{A}$, $j$)
$\dot{d} \leftarrow \left(\dot{d}/2 - r\dot{r}^{\top}\right)/\,d$
$\dot{c} \leftarrow \left(\dot{c} - \dot{B}r^{\top} - B\dot{r}^{\top} - c\dot{d}\right)/\,d$
return $\dot{A}$
Alternatively, the Cholesky decomposition and its forward sensitivity can be accumulated in one loop, by placing the updates from this algorithm after the corresponding lines in chol_unblocked.
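The forwards sweep above can be sketched in NumPy as follows; as before, this is an illustration with our own function names rather than a tuned implementation:

```python
import numpy as np

def chol_unblocked_fwd(L, Adot):
    """Unblocked forward-mode sweep: if Adot = tril(Sigma_dot) on entry,
    the lower triangle of the result is Ldot."""
    Adot = Adot.copy()
    N = L.shape[0]
    for j in range(N):
        r, d = L[j, :j], L[j, j]
        B, c = L[j+1:, :j], L[j+1:, j]
        rdot = Adot[j, :j]
        # ddot <- (ddot/2 - r rdot^T) / d
        Adot[j, j] = (Adot[j, j] / 2 - r @ rdot) / d
        # cdot <- (cdot - Bdot r^T - B rdot^T - c ddot) / d
        Adot[j+1:, j] = (Adot[j+1:, j] - Adot[j+1:, :j] @ r
                         - B @ rdot - c * Adot[j, j]) / d
    return Adot
```

Its output agrees with finite differences of `np.linalg.cholesky` under symmetric perturbations.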
Reverse-mode differentiation:
Reverse mode automatic differentiation traverses an algorithm backwards, reversing the direction of loops and the updates within them. At each step, the effect of perturbing an output is ‘back-propagated’ to compute the effects of perturbing the inputs to that step. If the effects of the perturbations are consistent then
$$\operatorname{Tr}\!\left(\bar{c}^{\top}\,\mathrm{d}c\right) = \operatorname{Tr}\!\left(\bar{a}^{\top}\,\mathrm{d}a\right) + \operatorname{Tr}\!\left(\bar{b}^{\top}\,\mathrm{d}b\right), \qquad \text{for each step } c = g(a, b),$$ (21)

and we can find $\bar{a}$ and $\bar{b}$ by comparing coefficients in this equation. If a quantity $a$ is an input to multiple computations ($g_1$, $g_2$, $g_3$, …), then we accumulate its total sensitivity,

$$\bar{a} = \bar{a}^{(g_1)} + \bar{a}^{(g_2)} + \bar{a}^{(g_3)} + \cdots,$$ (22)

summarizing the quantity’s effect on the final computation, $\bar{a}_{ij} = \partial f/\partial a_{ij}$ (as reviewed in Section 2).
Using the standard identities $\operatorname{Tr}(ABC) = \operatorname{Tr}(CAB)$, $\operatorname{Tr}(A) = \operatorname{Tr}(A^{\top})$, and $\operatorname{Tr}(A^{\top}B) = \sum_{ij} A_{ij}B_{ij}$, the perturbations from the final line of the Cholesky algorithm (20) imply:

$$\operatorname{Tr}\!\left(\bar{c}^{\top}\,\mathrm{d}c\right) = \frac{1}{d}\left[ \operatorname{Tr}\!\left(\bar{c}^{\top}\,\mathrm{d}\Sigma_{j+1:N,\,j}\right) - \operatorname{Tr}\!\left((\bar{c}\,r)^{\top}\,\mathrm{d}B\right) - \operatorname{Tr}\!\left(\bar{c}^{\top}B\,\mathrm{d}r^{\top}\right) - \left(\bar{c}^{\top}c\right)\mathrm{d}d \right].$$ (23)
We thus read off that $\bar{\Sigma}_{j+1:N,\,j} = \bar{c}/d$, where the sensitivities $\bar{c}$ include the direct effect on $f$, provided by the user of the routine, and the knock-on effects that changing this column would have on the columns computed to the right. These knock-on effects should have been accumulated through previous iterations of the reverse propagation algorithm. From this equation, we can also identify the knock-on effects that changing $r$, $B$, and $d$ would have through changing column $c$, which should be added on to their existing sensitivities for later.
The perturbation (19) to the other update in the Cholesky algorithm implies:

$$\bar{d}\,\mathrm{d}d = \frac{\bar{d}}{2d}\,\mathrm{d}\Sigma_{jj} - \frac{\bar{d}}{d}\,r\,\mathrm{d}r^{\top}.$$ (24)

Comparing coefficients again, we obtain another output of the reverse-mode algorithm, $\bar{\Sigma}_{jj} = \bar{d}/(2d)$. We also add $-(\bar{d}/d)\,r$ to the running total for the sensitivity $\bar{r}$ for later updates.
The algorithm below tracks all of these sensitivities, with the updates rearranged to simplify some expressions and to make an algorithm that can update the sensitivities in-place.
function chol_unblocked_rev($L$, $\bar{A}$)
# If $\bar{A} = \bar{L}$ at input, $\bar{A} = \operatorname{tril}(\bar{\Sigma})$ at output, where $L = \operatorname{chol}(\Sigma)$.
for $j = N$ down to $1$, in steps of $-1$:
$r$, $d$, $B$, $c$ = level2partition($L$, $j$)
$\bar{r}$, $\bar{d}$, $\bar{B}$, $\bar{c}$ = level2partition($\bar{A}$, $j$)
$\bar{d} \leftarrow \bar{d} - c^{\top}\bar{c}/d$
$\bar{d} \leftarrow \bar{d}/d$; $\quad \bar{c} \leftarrow \bar{c}/d$
$\bar{r} \leftarrow \bar{r} - \bar{d}\,r - \bar{c}^{\top}B$
$\bar{B} \leftarrow \bar{B} - \bar{c}\,r$
$\bar{d} \leftarrow \bar{d}/2$
return $\bar{A}$
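The reverse sweep can be transcribed into NumPy like this; the function name is ours, and the slicing-based views mirror the in-place pseudo-code:

```python
import numpy as np

def chol_unblocked_rev(L, Abar):
    """Unblocked reverse-mode sweep: if Abar = Lbar (lower-triangular) on
    entry, the result is tril(Sigma_bar), holding the sensitivities of the
    independent elements of Sigma."""
    Abar = Abar.copy()
    N = L.shape[0]
    for j in range(N - 1, -1, -1):
        r, d = L[j, :j], L[j, j]
        B, c = L[j+1:, :j], L[j+1:, j]
        cbar = Abar[j+1:, j]                  # a view: updates land in Abar
        Abar[j, j] -= c @ cbar / d
        Abar[j, j] /= d
        cbar /= d
        Abar[j, :j] -= Abar[j, j] * r + cbar @ B
        Abar[j+1:, :j] -= np.outer(cbar, r)
        Abar[j, j] /= 2
    return Abar
```

A finite-difference check of a scalar function of the Cholesky factor confirms the propagated sensitivities.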
4.2 Level 3 routines
The LAPACK routine DPOTRF also updates the lower-triangular part of an array in place with its Cholesky decomposition. However, this routine updates blocks at a time, rather than single column vectors, using the following partitions:
function level3partition($A$, $j$, $k$)
$R = A_{j:k,\,1:j-1}$  # block row to the left of the diagonal block
$D = A_{j:k,\,j:k}$  # diagonal block
$B = A_{k+1:N,\,1:j-1}$  # block below $R$
$C = A_{k+1:N,\,j:k}$  # block below $D$
return $R$, $D$, $B$, $C$
Only the lower-triangular part of $D$, the block on the diagonal, is referenced. The algorithm below loops over each diagonal block $D$, updating it and the matrix $C$ below it. Each diagonal block (except possibly the last) is of size $N_b \times N_b$. The optimal block-size $N_b$ depends on the size $N$ of the matrix, and the machine running the code. Implementations of LAPACK select the block-size with a routine called ILAENV.
function chol_blocked($A$, $N_b$)
# If $A = \operatorname{tril}(\Sigma)$ at input, $A = L$ at output, for integer block-size $N_b \ge 1$.
for $j = 1$ to at most $N$ in steps of $N_b$:
$k = \min(j + N_b - 1,\; N)$
$R$, $D$, $B$, $C$ = level3partition($A$, $j$, $k$)
$D \leftarrow$ chol_unblocked($D - R R^{\top}$)
$C \leftarrow \left(C - B R^{\top}\right) D^{-\top}$
return $A$
The computational cost of the blocked algorithm is dominated by Level 3 BLAS operations: matrix-matrix multiplies for $RR^{\top}$ and $BR^{\top}$, and a triangular solve for applying $D^{-\top}$. The unblocked Level 2 routine from Section 4.1 (DPOTF2 in LAPACK) is also called as a subroutine on a small triangular block. For large matrices it may be worth replacing this unblocked routine with one that performs more Level 3 operations (Gustavson et al., 2013).
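The blocked routine can be sketched in NumPy as below, reusing the unblocked helper on the diagonal blocks. This is an illustration under our own naming and tril-storage conventions; a production code would use TRSM-style triangular solves and in-place storage:

```python
import numpy as np

def chol_unblocked(A):
    """Unblocked helper from Section 4.1 (only the lower triangle is used)."""
    A = A.copy()
    for j in range(A.shape[0]):
        r = A[j, :j]
        A[j, j] = np.sqrt(A[j, j] - r @ r)
        A[j+1:, j] = (A[j+1:, j] - A[j+1:, :j] @ r) / A[j, j]
    return A

def chol_blocked(A, Nb=2):
    """Blocked Cholesky sketch: if A = tril(Sigma) on entry, the lower
    triangle of the result is L. Nb is the block size (a tuning parameter)."""
    A = A.copy()
    N = A.shape[0]
    for j in range(0, N, Nb):
        k = min(j + Nb, N)
        R, B = A[j:k, :j], A[k:, :j]
        A[j:k, j:k] = chol_unblocked(A[j:k, j:k] - R @ R.T)
        D = np.tril(A[j:k, j:k])
        if k < N:
            # C <- (C - B R^T) D^{-T}, i.e. solve D C^T = (C - B R^T)^T
            A[k:, j:k] = np.linalg.solve(D, (A[k:, j:k] - B @ R.T).T).T
    return A
```

The result matches `np.linalg.cholesky`, including when $N$ is not a multiple of the block size.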
Forwards-mode differentiation:
Following the same strategy as for the unblocked case, we obtained the algorithm below. As before, the input sensitivities $\dot{\Sigma}$ can be updated in-place to give $\dot{L}$, the sensitivities of the resulting Cholesky decomposition. Again, these updates could be accumulated at the same time as computing the original Cholesky decomposition.
function chol_blocked_fwd($L$, $\dot{A}$, $N_b$)
# If $\dot{A} = \operatorname{tril}(\dot{\Sigma})$ at input, $\dot{A} = \dot{L}$ at output, where $L = \operatorname{chol}(\Sigma)$.
for $j = 1$ to at most $N$ in steps of $N_b$:
$k = \min(j + N_b - 1,\; N)$
$R$, $D$, $B$, $C$ = level3partition($L$, $j$, $k$)
$\dot{R}$, $\dot{D}$, $\dot{B}$, $\dot{C}$ = level3partition($\dot{A}$, $j$, $k$)
$\dot{D} \leftarrow$ chol_unblocked_fwd($D$, $\dot{D} - \dot{R}R^{\top} - R\dot{R}^{\top}$)
$\dot{C} \leftarrow \left(\dot{C} - \dot{B}R^{\top} - B\dot{R}^{\top} - C\dot{D}^{\top}\right) D^{-\top}$
return $\dot{A}$
The unblocked derivative routine is called as a subroutine. Alternatively, chol_blocked_fwd could call itself recursively with a smaller block size, we could use the symbolic result (8), or we could differentiate other algorithms (e.g. Gustavson et al., 2013).
Minor detail: The standard BLAS operations don’t provide a routine to neatly perform the first update for the lower-triangular $\dot{D}$. One option is to wastefully subtract the full matrix $\dot{R}R^{\top} + R\dot{R}^{\top}$, then zero out the upper triangle of $\dot{D}$, meaning that the upper triangle of $\dot{A}$ can’t be used for auxiliary storage.
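Under the same assumptions as the earlier sketches (NumPy, our own function names, tril-storage conventions, explicit solves in place of TRSM), the blocked forward sweep might look like:

```python
import numpy as np

def chol_unblocked_fwd(L, Adot):
    """Unblocked forward-mode helper from Section 4.1."""
    Adot = Adot.copy()
    for j in range(L.shape[0]):
        r, d = L[j, :j], L[j, j]
        rdot = Adot[j, :j]
        Adot[j, j] = (Adot[j, j] / 2 - r @ rdot) / d
        Adot[j+1:, j] = (Adot[j+1:, j] - Adot[j+1:, :j] @ r
                         - L[j+1:, :j] @ rdot - L[j+1:, j] * Adot[j, j]) / d
    return Adot

def chol_blocked_fwd(L, Adot, Nb=2):
    """Blocked forward-mode sketch: Adot = tril(Sigma_dot) on entry,
    tril of the result is Ldot."""
    Adot = Adot.copy()
    N = L.shape[0]
    for j in range(0, N, Nb):
        k = min(j + Nb, N)
        R, D = L[j:k, :j], np.tril(L[j:k, j:k])
        B, C = L[k:, :j], L[k:, j:k]
        Rdot = Adot[j:k, :j]
        Adot[j:k, j:k] = chol_unblocked_fwd(
            D, Adot[j:k, j:k] - Rdot @ R.T - R @ Rdot.T)
        Ddot = np.tril(Adot[j:k, j:k])
        if k < N:
            X = Adot[k:, j:k] - Adot[k:, :j] @ R.T - B @ Rdot.T - C @ Ddot.T
            Adot[k:, j:k] = np.linalg.solve(D, X.T).T  # X D^{-T}
    return Adot
```

Finite differences of the Cholesky decomposition again provide a check, including for block sizes that don't divide the matrix size.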
Reverse-mode differentiation:
Again, deriving the reverse-mode algorithm and arranging it into a convenient form was more involved. The strategy is the same as the unblocked case however, and still relatively mechanical.
function chol_blocked_rev($L$, $\bar{A}$, $N_b$)
# If $\bar{A} = \bar{L}$ at input, $\bar{A} = \operatorname{tril}(\bar{\Sigma})$ at output, where $L = \operatorname{chol}(\Sigma)$.
for $k = N$ down to no less than $1$ in steps of $N_b$:
$j = \max(k - N_b + 1,\; 1)$
$R$, $D$, $B$, $C$ = level3partition($L$, $j$, $k$)
$\bar{R}$, $\bar{D}$, $\bar{B}$, $\bar{C}$ = level3partition($\bar{A}$, $j$, $k$)
$\bar{C} \leftarrow \bar{C}\, D^{-1}$
$\bar{B} \leftarrow \bar{B} - \bar{C}\, R$
$\bar{D} \leftarrow \bar{D} - \operatorname{tril}\!\left(\bar{C}^{\top} C\right)$
$\bar{D} \leftarrow$ chol_unblocked_rev($D$, $\bar{D}$)
$\bar{R} \leftarrow \bar{R} - \bar{C}^{\top}B - \left(\bar{D} + \bar{D}^{\top}\right) R$
return $\bar{A}$
The partitioning into columns is arbitrary, so the reverse-mode algorithm doesn’t need to select the same set of blocks as the forwards computation. Here, when the matrix size isn’t a multiple of the block-size , we’ve put the smaller blocks at the other edge of the matrix.
As in the blocked forwards-mode update, there is a call to the unblocked routine, which can be replaced with alternative algorithms. In the implementation provided (Appendix A) we use the symbolically-derived update (10).
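A NumPy sketch of the blocked reverse sweep, with the unblocked reverse helper as the subroutine, follows. The function names and storage conventions are ours, and the explicit solves stand in for TRSM calls:

```python
import numpy as np

def chol_unblocked_rev(L, Abar):
    """Unblocked reverse-mode helper from Section 4.1."""
    Abar = Abar.copy()
    for j in range(L.shape[0] - 1, -1, -1):
        d = L[j, j]
        cbar = Abar[j+1:, j]
        Abar[j, j] -= L[j+1:, j] @ cbar / d
        Abar[j, j] /= d
        cbar /= d
        Abar[j, :j] -= Abar[j, j] * L[j, :j] + cbar @ L[j+1:, :j]
        Abar[j+1:, :j] -= np.outer(cbar, L[j, :j])
        Abar[j, j] /= 2
    return Abar

def chol_blocked_rev(L, Abar, Nb=2):
    """Blocked reverse-mode sketch: Abar = Lbar on entry, tril(Sigma_bar)
    on exit. Blocks are taken from the bottom-right corner, so any smaller
    leftover block lands at the top-left."""
    Abar = np.tril(Abar).copy()
    N = L.shape[0]
    for k in range(N, 0, -Nb):
        j = max(k - Nb, 0)
        R, D = L[j:k, :j], np.tril(L[j:k, j:k])
        B, C = L[k:, :j], L[k:, j:k]
        Cbar = Abar[k:, j:k]
        if Cbar.size:
            Cbar[:] = np.linalg.solve(D.T, Cbar.T).T  # Cbar <- Cbar D^{-1}
            Abar[k:, :j] -= Cbar @ R
            Abar[j:k, j:k] -= np.tril(Cbar.T @ C)
        Abar[j:k, j:k] = chol_unblocked_rev(D, np.tril(Abar[j:k, j:k]))
        Dbar = np.tril(Abar[j:k, j:k])
        Abar[j:k, :j] -= Cbar.T @ B + (Dbar + Dbar.T) @ R
    return Abar
```

The same finite-difference check as for the unblocked reverse sweep validates the blocked version.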
5 Discussion and Future Directions
The matrix operations required by the Cholesky algorithms implemented in LAPACK can be implemented with straightforward calls to BLAS. However, the forwards- and reverse-mode updates we have derived from these algorithms give some expressions where only the triangular part of a matrix product is required. There aren’t standard BLAS routines that implement exactly what is required, and our implementations must perform unnecessary computations to exploit the fast libraries available. In future, it would be desirable to have standard fast matrix libraries that offer a set of routines that are closed under the rules for deriving derivative updates.
The automatic differentiation tools that have proved popular in machine learning differentiate high-level array-based code. As a result, these tools don’t have access to the source code of the Cholesky decomposition, and need to be told how to differentiate it. Theano (Bastien et al., 2012; Bergstra et al., 2010), the first tool to be widely-adopted in machine learning, and AutoGrad (Maclaurin et al., 2015) use the algorithm by Smith (1995). TensorFlow (Abadi et al., 2015) in its first release can’t differentiate expressions containing a Cholesky decomposition, but a fork (Hensman and de G. Matthews, 2016) also uses the algorithm by Smith (1995), as previously implemented by The GPy authors (2015). The approaches in this note will be an order of magnitude faster for large matrices than the codes that are in current widespread use. Some illustrative timings are given at the end of the code listing (Appendix A). As the algorithms are only a few lines long, they could be ported to a variety of settings without introducing any large dependencies. The simple symbolic expressions (Section 3) could be differentiated using most existing matrix-based tools. Currently AutoGrad can’t repeatedly differentiate the Cholesky decomposition because of the in-place updates in the Smith (1995) algorithm.
The ‘Level 3’ blocked algorithms (Section 4.2) are the fastest forwards- and reverse-mode update rules for large matrices. However, these require helper routines to perform the updates on small triangular blocks. In high-level languages (Matlab, Octave, Python), the ‘Level 2’ routines — similar to the algorithms that automatic differentiation would provide — are slow, and we recommend using the symbolic updates (Section 3) for the small matrices instead.
It should be relatively easy to provide similar derivative routines for many standard matrix functions, starting with the rest of the routines in LAPACK. However, it would save a lot of work to have automatic tools to help make these routines. Although there are a wide variety of tools for automatic differentiation, we are unaware of practical tools that can currently create algorithms as neat and accessible as those made by hand for this note.
References
- Abadi et al. (2015) M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, R. Jozefowicz, Y. Jia, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, M. Schuster, R. Monga, S. Moore, D. Murray, C. Olah, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng. TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems. White paper, Google Research, 2015. Software available from http://tensorflow.org. TensorFlow is a trademark of Google Inc.
- Anderson et al. (1999) E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen. LAPACK Users’ guide, volume 9. SIAM, 1999. http://www.netlib.org/lapack/.
- Bastien et al. (2012) F. Bastien, P. Lamblin, R. Pascanu, J. Bergstra, I. J. Goodfellow, A. Bergeron, N. Bouchard, and Y. Bengio. Theano: new features and speed improvements. Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop, 2012.
- Baydin et al. (2015) A. G. Baydin, B. A. Pearlmutter, A. A. Radul, and J. M. Siskind. Automatic differentiation in machine learning: a survey, 2015.
- Bergstra et al. (2010) J. Bergstra, O. Breuleux, F. Bastien, P. Lamblin, R. Pascanu, G. Desjardins, J. Turian, D. Warde-Farley, and Y. Bengio. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), 2010.
- Brewer (1977) J. W. Brewer. The gradient with respect to a symmetric matrix. IEEE Transactions on Automatic Control, 22(2):265–267, 1977.
- Dongarra et al. (1988a) J. J. Dongarra, J. Ducroz, S. Hammarling, and R. Hanson. An extended set of fortran basic linear algebra subprograms. ACM Transactions on Mathematical Software, 14(1):1–17, 1988a.
- Dongarra et al. (1988b) J. J. Dongarra, J. Ducroz, S. Hammarling, and R. Hanson. Algorithm 656: An extended set of fortran basic linear algebra subprograms: Model implementation and test programs. ACM Transactions on Mathematical Software, 16(1):1–17, 1988b.
- Dongarra et al. (1990) J. J. Dongarra, J. Du Croz, S. Hammarling, and I. S. Duff. A set of level 3 basic linear algebra subprograms. ACM Transactions on Mathematical Software, 16(1):1–17, 1990.
- Eaton et al. (2009) J. W. Eaton, D. Bateman, and S. Hauberg. GNU Octave version 3.0.1 manual: a high-level interactive language for numerical computations. CreateSpace Independent Publishing Platform, 2009. URL http://www.gnu.org/software/octave/doc/interpreter. ISBN 1441413006.
- Giles (2008) M. B. Giles. An extended collection of matrix derivative results for forward and reverse mode automatic differentiation, 2008.
- Gustavson et al. (2013) F. G. Gustavson, J. Waśniewski, J. J. Dongarra, J. R. Herrero, and J. Langou. Level-3 Cholesky factorization routines improve performance of many Cholesky algorithms. ACM Transactions on Mathematical Software, 39(2):9:1–9:10, 2013.
- Harmeling (2013) S. Harmeling. Matrix differential calculus cheat sheet. Technical Report Blue Note 142, Max Planck Institute for Intelligent Systems, 2013. http://people.tuebingen.mpg.de/harmeling/bn142.pdf.
- Hensman and de G. Matthews (2016) J. Hensman and A. G. de G. Matthews. GPFlow, 2016. As of February 2016, https://github.com/GPflow/GPflow.
- Koerber (2015) P. Koerber. Adjoint algorithmic differentiation and the derivative of the Cholesky decomposition, 2015. Preprint, available at SSRN: http://dx.doi.org/10.2139/ssrn.2703893.
- Maclaurin et al. (2015) D. Maclaurin, D. Duvenaud, M. Johnson, and R. P. Adams. Autograd: Reverse-mode differentiation of native Python, 2015. Version 1.1.3, http://github.com/HIPS/autograd and https://pypi.python.org/pypi/autograd/.
- Magnus and Neudecker (1980) J. R. Magnus and H. Neudecker. The elimination matrix: some lemmas and applications. SIAM Journal on Algebraic and Discrete Methods, 1(4):422–449, 1980.
- Magnus and Neudecker (2007) J. R. Magnus and H. Neudecker. Matrix differential calculus with application in statistics and econometrics. 3rd edition, 2007. Available from http://www.janmagnus.nl/misc/mdc2007-3rdedition and older editions from Wiley.
- Minka (2000) T. Minka. Old and new matrix algebra useful for statistics, 2000. MIT Media Lab note (1997; revised 12/00), http://research.microsoft.com/en-us/um/people/minka/papers/matrix/.
- Oliphant (2006) T. E. Oliphant. Guide to NumPy. Provo, UT, 2006.
- R Core Team (2012) R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria, 2012. URL http://www.R-project.org/. ISBN 3-900051-07-0.
- Särkkä (2013) S. Särkkä. Bayesian filtering and smoothing. Cambridge University Press., 2013.
- Smith (1995) S. P. Smith. Differentiation of the Cholesky algorithm. Journal of Computational and Graphical Statistics, 4(2):134–147, 1995.
- The GPy authors (2015) The GPy authors. GPy: A Gaussian process framework in Python, 2015. Version 0.8.8, http://github.com/SheffieldML/GPy.
- Walter (2011) S. F. Walter. Structured higher-order algorithmic differentiation in the forward and reverse mode with application in optimum experimental design. PhD thesis, Humboldt-Universität zu Berlin, 2011.
Appendix A Illustrative Python code
The rest of the equations and algorithms in this note are illustrated below using Python code that closely follows the equations and pseudo-code. There are differences due to the note using Matlab/Fortran-style ranges, which are one-based and inclusive, so that a range $1{:}3$ contains the indices 1, 2, and 3. In contrast, Python uses zero-based, half-open ranges, so that 0:3 contains the indices 0, 1, and 2. The code is also available as pseudocode_port.py in the source tar-ball for this paper, available from arXiv.
Development of alternative implementations in multiple programming languages is on-going. At the time of writing, Fortran code with Matlab/Octave and Python bindings, and pure Matlab code is available at https://github.com/imurray/chol-rev. The Fortran code is mainly useful for smaller matrices, as for large matrices, the time spent inside BLAS
routines dominates, regardless of the language used. The code repository also contains a demonstration of pushing derivatives through a whole computation (the log-likelihood of the hyperparameters of a Gaussian process).