A fast and oblivious matrix compression algorithm for Volterra integral operators

03/23/2021 ∙ by Jürgen Dölz, et al. ∙ 0

The numerical solution of dynamical systems with memory requires the efficient evaluation of Volterra integral operators in an evolutionary manner. After appropriate discretisation, the basic problem can be represented as a matrix-vector product with a lower triangular but densely populated matrix. For typical applications, such as fractional diffusion or large-scale dynamical systems with delay, the memory cost of storing the matrix approximations and the complete history of the data would become prohibitive for an accurate numerical approximation. For Volterra integral operators of convolution type, the fast and oblivious convolution quadrature method of Schädle, Lopez-Fernandez, and Lubich allows the discretized evaluation with N time steps to be computed in O(N log N) operations, while requiring only O(log N) active memory to store a compressed version of the complete history of the data. We show that this algorithm can be interpreted as an ℋ-matrix approximation of the underlying integral operator and, consequently, that a further improvement can be achieved, in principle, by resorting to ℋ^2-matrix compression techniques. We formulate a variant of the ℋ^2-matrix-vector product for discretized Volterra integral operators that can be performed in an evolutionary and oblivious manner and requires only O(N) operations and O(log N) active memory. In addition to the acceleration, more general asymptotically smooth kernels can be treated, and the algorithm does not require a priori knowledge of the number of time steps. The efficiency of the proposed method is demonstrated by application to some typical test problems.
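To make the basic problem concrete, the following minimal sketch (not the authors' compressed method) evaluates a discretized Volterra integral operator as a matrix-vector product with a lower triangular, densely populated matrix. The kernel and the rectangle-rule quadrature weights are illustrative assumptions; the naive loop costs O(N^2) operations and must keep the entire O(N) history in memory, which is exactly what the fast and oblivious algorithms avoid.

```python
import numpy as np

def volterra_matvec(kernel, u, h):
    """Naive evaluation of (K u)(t_n) ~ sum_{m<=n} h * k(t_n, t_m) * u(t_m).

    This is a dense lower-triangular matrix-vector product:
    O(N^2) work, and the full history u[0..n] is needed at step n.
    """
    N = len(u)
    out = np.zeros(N)
    for n in range(N):
        # quadrature over the complete history up to t_n (rectangle rule)
        for m in range(n + 1):
            out[n] += h * kernel(n * h, m * h) * u[m]
    return out

# Example with a convolution-type kernel k(t, s) = exp(-(t - s)),
# the setting in which oblivious convolution quadrature applies.
h = 0.01
t = np.arange(100) * h
u = np.sin(t)
v = volterra_matvec(lambda t, s: np.exp(-(t - s)), u, h)
```

The lower triangular structure means that modifying the data at later times cannot change earlier outputs, which is what makes an evolutionary (time-stepping) evaluation possible in the first place.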


