Auto-Differentiating Linear Algebra

10/24/2017
by Matthias Seeger, et al.

Development systems for deep learning, such as Theano, Torch, TensorFlow, or MXNet, are easy-to-use tools for creating complex neural network models. Since gradient computations are automatically baked in, and execution is mapped to high-performance hardware, these models can be trained end-to-end on large amounts of data. However, it is currently not easy to implement many basic machine learning primitives in these systems (such as Gaussian processes, least squares estimation, principal components analysis, or Kalman smoothing), mainly because they lack efficient support for linear algebra primitives as differentiable operators. We detail how a number of matrix decompositions (Cholesky, LQ, symmetric eigen) can be implemented as differentiable operators. We have implemented these primitives in MXNet, running on CPU and GPU in single and double precision. We sketch use cases for these new operators, such as learning Gaussian process and Bayesian linear regression models. Our implementation is based on BLAS/LAPACK APIs, for which highly tuned implementations are available on all major CPUs and GPUs.
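A minimal sketch of the kind of use case the abstract describes, assuming MXNet 1.x and its mx.nd.linalg operators (potrf, trsm, sumlogdiag, gemm2): a Gaussian process negative log marginal likelihood is built from a kernel matrix and differentiated through the Cholesky factorization with autograd. The variable names and the toy kernel construction below are illustrative, not taken from the paper.

    import mxnet as mx

    # Toy setup: n observations y and a synthetic SPD kernel matrix K.
    # (Variable names and the kernel construction are illustrative only.)
    n = 5
    y = mx.nd.random.normal(shape=(n, 1))
    A = mx.nd.random.normal(shape=(n, n))
    K = mx.nd.linalg.gemm2(A, A, transpose_b=True) + 0.1 * mx.nd.eye(n)

    K.attach_grad()
    with mx.autograd.record():
        L = mx.nd.linalg.potrf(K)            # Cholesky factor: K = L L^T
        z = mx.nd.linalg.trsm(L, y)          # solve L z = y
        # GP negative log marginal likelihood, constants dropped:
        # 0.5 * y^T K^{-1} y + sum(log diag(L))
        nll = 0.5 * mx.nd.sum(z * z) + mx.nd.linalg.sumlogdiag(L)
    nll.backward()

    print(K.grad)  # gradient of nll w.r.t. the kernel matrix, computed through
                   # the backward passes of potrf, trsm and sumlogdiag

Because potrf, trsm, and sumlogdiag are registered as differentiable operators, the gradient flows through the factorization just as it would through any other MXNet operator, so kernel hyperparameters feeding into K can be learned end-to-end.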

