Sparse recovery of elliptic solvers from matrix-vector products

10/11/2021
by Florian Schäfer et al.

In this work, we show that solvers of elliptic boundary value problems in d dimensions can be approximated to accuracy ϵ from only 𝒪(log(N) log^d(N / ϵ)) matrix-vector products with carefully chosen vectors (right-hand sides). The solver is accessed only as a black box, and the underlying operator may be unknown and of arbitrarily high order. Our algorithm (1) has complexity 𝒪(N log^2(N) log^(2d)(N / ϵ)) and represents the solution operator as a sparse Cholesky factorization with 𝒪(N log(N) log^d(N / ϵ)) nonzero entries; (2) allows for embarrassingly parallel evaluation of the solution operator and computation of its log-determinant; and (3) allows for 𝒪(log(N) log^d(N / ϵ))-complexity computation of individual entries of the matrix representation of the solver, which in turn enables its recompression to a representation of 𝒪(N log^d(N / ϵ)) complexity. As a byproduct, our compression scheme produces a homogenized solution operator with near-optimal approximation accuracy. We include rigorous proofs of these results, and to the best of our knowledge, the proposed algorithm achieves the best trade-off between the accuracy ϵ and the number of matrix-vector products required of the original solver.
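The abstract does not spell out the construction, but the basic idea of recovering a sparsely represented solution operator from few black-box solves can be illustrated with a toy "probing" experiment. The sketch below is a hypothetical illustration, not the authors' algorithm: it uses a simple banded sparsity pattern for a toy operator whose inverse decays quickly away from the diagonal, whereas the paper works with a sparse Cholesky factorization of the solution operator. The matrix `A`, the bandwidth, and the coloring-by-stride are assumptions made only for the sake of the example; the shared principle is that columns whose retained entries do not overlap can be grouped into a single right-hand side, so one solve recovers many columns at once.

```python
# Hypothetical sketch: recover a sparse approximation of a black-box solver
# G = A^{-1} from few matrix-vector products with carefully chosen right-hand
# sides. This is an illustration of the probing idea only, not the algorithm
# described in the paper.
import numpy as np

N = 200                     # size of the toy problem (assumption)
bandwidth = 8               # sparsity radius retained per column (assumption)

# Toy operator whose inverse decays exponentially away from the diagonal
# (a reaction-dominated tridiagonal matrix, not a general elliptic operator).
A = (np.diag(3.0 * np.ones(N))
     - np.diag(np.ones(N - 1), 1)
     - np.diag(np.ones(N - 1), -1))

def black_box_solve(b):
    """The solver, accessed only through products G @ b with G = A^{-1}."""
    return np.linalg.solve(A, b)

# "Color" the columns so that the retained row windows of columns sharing a
# color do not overlap: columns of one color are at least 2*bandwidth + 1 apart.
stride = 2 * bandwidth + 1
G_approx = np.zeros((N, N))
num_matvecs = 0

for color in range(stride):
    cols = np.arange(color, N, stride)
    rhs = np.zeros(N)
    rhs[cols] = 1.0                      # sum of unit vectors, one per column
    y = black_box_solve(rhs)             # one solve recovers len(cols) columns
    num_matvecs += 1
    for j in cols:
        rows = np.arange(max(0, j - bandwidth), min(N, j + bandwidth + 1))
        G_approx[rows, j] = y[rows]      # attribute the local window to column j

G_true = np.linalg.inv(A)
print("matrix-vector products used:", num_matvecs, "instead of", N)
print("relative error of the recovered operator:",
      np.linalg.norm(G_approx - G_true) / np.linalg.norm(G_true))
```

Because the toy operator's inverse decays exponentially off the diagonal, both the truncation error (dropped off-band entries) and the cross-contamination between columns grouped into the same right-hand side are of the same small order, so roughly 2·bandwidth + 1 solves suffice instead of N. The paper obtains analogous guarantees for general elliptic solvers by exploiting sparsity of the Cholesky factors of the solution operator rather than a banded pattern.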
