Mixed precision recursive block diagonalization for bivariate functions of matrices

by Stefano Massei et al.

Various numerical linear algebra problems can be formulated as evaluating bivariate functions of matrices. The most notable examples are the Fréchet derivative along a direction, the evaluation of (univariate) functions of Kronecker-sum-structured matrices, and the solution of Sylvester matrix equations. In this work, we propose a recursive block diagonalization algorithm for computing bivariate functions of matrices of small to medium size, for which dense linear algebra is appropriate. The algorithm combines a blocking strategy, as in the Schur-Parlett scheme, with an evaluation procedure for the diagonal blocks. We discuss two implementations of the latter. The first is a natural choice based on Taylor expansions, whereas the second is derivative-free and relies on a multiprecision perturb-and-diagonalize approach. In particular, the appropriate use of multiprecision guarantees backward stability without affecting the efficiency in the generic case, which makes the second approach more robust. The whole method has cubic complexity and is closely related to the well-known Bartels-Stewart algorithm for Sylvester matrix equations when applied to f(x,y)=1/(x+y). We validate the performance of the proposed numerical method on several problems with different conditioning properties.
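To make the connection to Sylvester equations concrete, the following is a minimal sketch (not the paper's algorithm) of the plain perturb-and-diagonalize baseline: evaluate f{A,B}(C) by diagonalizing A and B, and observe that f(x,y)=1/(x+y) recovers the Sylvester solution. All matrix sizes, the diagonal shift, and the function `bivariate_fun` are chosen here for illustration only.

```python
import numpy as np

# Illustration: evaluate a bivariate matrix function f{A,B}(C) by plain
# diagonalization. The shift by 5*I keeps the eigenvalue sums a_i + b_j
# well away from zero so the example is well conditioned (an assumption
# of this sketch, not a feature of the paper's method).
rng = np.random.default_rng(0)
n, m = 6, 5
A = rng.standard_normal((n, n)) + 5.0 * np.eye(n)
B = rng.standard_normal((m, m)) + 5.0 * np.eye(m)
C = rng.standard_normal((n, m))

a, V = np.linalg.eig(A)   # A = V diag(a) V^{-1} (assumed diagonalizable)
b, W = np.linalg.eig(B)   # B = W diag(b) W^{-1}

def bivariate_fun(f):
    # f{A,B}(C) = V (F o (V^{-1} C W)) W^{-1}, with F[i, j] = f(a_i, b_j)
    F = f(a[:, None], b[None, :])
    Y = F * np.linalg.solve(V, C @ W)
    return np.real(V @ Y @ np.linalg.inv(W))  # result is real for real data

# With f(x,y) = 1/(x+y), this is exactly the solution X of AX + XB = C,
# the link to the Bartels-Stewart algorithm noted in the abstract.
X = bivariate_fun(lambda x, y: 1.0 / (x + y))
print(np.allclose(A @ X + X @ B, C))  # True
```

When A or B has clustered or defective eigenvalues, V or W becomes ill conditioned and this naive evaluation loses accuracy; that failure mode is precisely what the blocking strategy and the multiprecision perturb-and-diagonalize procedure in the paper are designed to avoid.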



