Selection of Sparse Sets of Influence for Meshless Finite Difference Methods

08/05/2019 ∙ by Oleg Davydov et al. ∙ Universität Gießen

We suggest an efficient algorithm for the selection of sparse subsets of a set of influence for the numerical discretization of differential operators on irregular nodes with polynomial consistency of a given order. The algorithm relies on the QR decomposition of an appropriately weighted polynomial collocation matrix, and we prove that the accuracy of the resulting numerical differentiation formulas is comparable with that of the formulas generated on the original set of influence.


1 Introduction

Meshless finite difference methods discretize a boundary value problem

(1)

with the help of numerical differentiation formulas of the type

(2)

on an irregular set of nodes, where the operator to be discretized is a linear differential operator

(3)

with variable coefficients and a given order. Usually, one or more such operators are associated with a given problem (1) by linearizing the operators in (1) and extracting their parts of different character, such as the diffusion or convection term. Both the node under consideration and its set of influence belong to a finite set of (unconnected) nodes that discretizes the whole domain, and a discrete solution is sought as an approximation to the values of the exact solution at these nodes.

For example, the Dirichlet problem for the Poisson equation can be discretized by numerical differentiation of the Laplacian at the interior nodes, with a discrete solution obtained by solving a linear system whose entries vanish whenever the corresponding node does not belong to the set of influence of the node associated with the equation.
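To make the assembly concrete, here is a minimal sketch in Python with SciPy sparse matrices. All names (solve_poisson, stencils, weights, and so on) are ours, and the numerical differentiation weights for the Laplacian are assumed to be precomputed, for instance by one of the routines sketched further below.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def solve_poisson(nodes, interior, boundary, stencils, weights, f, g):
    """Assemble and solve a meshless FD discretization of the Poisson problem.

    nodes: (n, d) array of node coordinates
    interior, boundary: index arrays partitioning range(n)
    stencils[i], weights[i]: set of influence (node indices) and numerical
        differentiation weights for the Laplacian at interior node i
    f, g: right-hand side and Dirichlet boundary data as callables
    """
    n = len(nodes)
    rows, cols, vals = [], [], []
    b = np.zeros(n)
    for i in interior:                      # one discretized equation per interior node
        for j, w in zip(stencils[i], weights[i]):
            rows.append(i)
            cols.append(j)
            vals.append(w)
        b[i] = f(nodes[i])
    for i in boundary:                      # Dirichlet condition: u(node) = g(node)
        rows.append(i)
        cols.append(i)
        vals.append(1.0)
        b[i] = g(nodes[i])
    A = sp.csr_matrix((vals, (rows, cols)), shape=(n, n))
    return spsolve(A, b)                    # discrete solution at all nodes
```

The density of the assembled matrix, and hence the cost of the solve, is governed directly by the sizes of the sets of influence, which motivates the selection problem discussed below.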

Similar to the classical finite difference method, the error of the formula (2) plays the role of the consistency or local discretization error. This error may be reduced by choosing larger sets of influence and a higher order numerical differentiation method, giving rise to a higher convergence order of the numerical solution. It is however important to avoid unnecessarily large sets of influence that do not significantly reduce the consistency error of (2): smaller sets of influence lead to sparser linear systems for the discrete solution. For example, sets of influence consisting of just 7 points are generated by the algorithms suggested in [3, 9] for elliptic problems, which helps to produce adaptive meshless methods that compete with piecewise linear finite elements in terms of both accuracy and sparsity of the system matrix. However, the algorithms of [3, 9] are geometric in nature and therefore seem difficult to extend to higher order methods.

Most work on meshless finite difference methods relies on selecting the sets of influence in a very simple way, by forming them from an ad hoc number of nearest neighbors of the node, see e.g. [1]. This approach works well and produces relatively small sets of influence when the global node set is carefully generated (node generation is the counterpart of mesh generation in mesh-based methods), and several node generation methods have been developed. On the other hand, one of the main goals of meshless methods is to avoid sophisticated mesh generation. It is therefore desirable to develop approaches that let meshless finite differences perform well also on nodes generated by simple methods, allowing local irregularities that would lead to a severe deterioration in the performance of mesh-based methods.
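For illustration, the nearest-neighbor selection itself is straightforward with a k-d tree; the function name and the convention of counting the node itself among its neighbors are our choices, not the paper's.

```python
import numpy as np
from scipy.spatial import cKDTree

def nearest_neighbor_sets(nodes, m):
    """For each node, return the indices of its m nearest neighbors
    (the node itself is included, since it is at distance zero)."""
    tree = cKDTree(nodes)
    _, idx = tree.query(nodes, k=m)   # distances are discarded here
    return idx                        # shape (number of nodes, m)
```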

When node generation is inexpensive and the resulting node set is suboptimal, it is usually possible to obtain an acceptable consistency error by selecting larger sets of influence in the locations affected by irregularities. However, if the sets of influence are controlled by a single number of nearest neighbors, then the method unnecessarily uses many nodes in locations where the neighborhood is more regular than in the worst locations, which leads to an unnecessarily increased density of the system matrix. In this case the number of nearest neighbors that guarantees a good numerical differentiation error may be too high, and the selection of suitable small subsets becomes particularly important.

In this paper we discuss how to reduce the size of a set of influence while keeping essentially the same consistency error as achieved on the original set. In particular, we suggest a new efficient method for the calculation of sparse weights based on a pivoted QR factorization of appropriately weighted polynomial collocation matrices.

2 Consistency error estimates

It has been shown in [4, 5] that the error of the kernel-based formulas (2), as well as of certain (minimal) polynomial-type formulas, can be bounded by the growth function times a factor that depends on the smoothness of the function being differentiated but is independent of the geometry of the set of influence. Here the relevant polynomial space is the space of all multivariate polynomials of order at most a given value, i.e. of total degree less than that value.

In particular, in the polynomial case a duality theorem shows that the growth function is the minimum of

(4)

subject to the exactness condition

(5)

On the other hand, the following error bound holds for any formula (2) satisfying (5), and for all functions with Lipschitz continuous derivatives of the appropriate order,

(6)

where the domain appearing in the bound is any domain that contains the set

(7)

and

(8)

It follows that for the ℓ1-minimal formula of the given order

whose weight vector

is computed by minimizing (4) subject to (5),

(9)

which is the best bound obtainable from (6).

For the ℓ2-minimal formula with weights obtained by minimizing

(10)

subject to (5), the error bound of [5] is worse only by a modest factor,

(11)

Moreover, the growth function coincides with the minimum of (4) subject to (5), as mentioned before, and it can be estimated with the help of the ℓ2-minimal weights:

(12)

Note that ℓ2-minimal formulas can be interpreted as obtained by differentiating a weighted least squares polynomial fit to the data on the set of influence, see [5, Section 5], an approach that has frequently been used in meshless finite difference methods, albeit usually with different weights that do not satisfy the error bound (11). The ℓ1-minimal formulas have been considered in [10, 2] for the Laplacian operator, with an additional requirement of positivity that ensures the M-matrix property of the system matrix for the Poisson problem, but which may only be satisfiable in special situations, see also [5, Section 4].
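To illustrate this least-squares interpretation, the following sketch computes minimum-2-norm weights reproducing the Laplacian of all bivariate polynomials below a given order at a center node. The shift-and-scale preconditioning and the plain (unweighted) 2-norm are our simplifications; the ℓ2-minimal formulas of [5] minimize the distance-weighted functional (10) instead.

```python
import numpy as np

def l2_weights_laplacian(nodes, center, q=4):
    """Minimum-2-norm weights w with sum_i w[i] * p(nodes[i]) = (Laplace p)(center)
    for every bivariate polynomial p of total degree < q.
    Sketch only: the scaling and weighting choices differ from the paper."""
    x = nodes - center                          # shift so that the center is the origin
    h = np.max(np.linalg.norm(x, axis=1))       # scale radius, improves conditioning
    x = x / h
    exps = [(i, s - i) for s in range(q) for i in range(s + 1)]
    # collocation matrix: one row per monomial, one column per node
    M = np.array([x[:, 0] ** i * x[:, 1] ** j for i, j in exps])
    # right-hand side: Laplacian of each monomial, evaluated at the origin
    rhs = np.array([2.0 if (i, j) in ((2, 0), (0, 2)) else 0.0 for i, j in exps])
    # lstsq returns the minimum-norm solution of the underdetermined system
    w, *_ = np.linalg.lstsq(M, rhs, rcond=None)
    return w / h**2                             # undo the scaling of the Laplacian
```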

3 Sparse subsets of sets of influence

Since the growth function is monotonically decreasing with respect to enlargement of the set of influence, that is,

(13)

the estimates (9) and (11) and their kernel-based counterparts in [4] generally improve when larger sets of neighbors are used.

A simple way to produce a set of influence is to select a certain number of nearest neighbors of the node in question, where the size is a sufficiently large number chosen on the basis of experience, depending on the number of variables and the expected convergence order. For numerical differentiation exact for polynomials of a given order, this number is typically chosen to be at least twice the dimension of the polynomial space, since relying on a number of nearest neighbors less than or only slightly exceeding this dimension risks a low consistency order or numerical instability even for geometrically well-distributed node sets. If node generation is performed by less sophisticated algorithms, then the number of nearest neighbors needed to guarantee a good consistency error may be significantly higher still.
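Since the dimension of the space of d-variate polynomials of total degree less than q is the binomial coefficient C(q-1+d, d), the rule of thumb above is easy to code; the doubling factor is just the heuristic mentioned in the text, not a prescription of the paper.

```python
from math import comb

def poly_dim(q, d):
    """Dimension of the space of d-variate polynomials of total degree < q."""
    return comb(q - 1 + d, d)

def rule_of_thumb_stencil_size(q, d, factor=2):
    """Heuristic number of nearest neighbors: a small multiple of poly_dim."""
    return factor * poly_dim(q, d)

# Example: q = 4, d = 2 gives poly_dim == 10 and a suggested stencil size of 20.
```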

Since the density of the system matrix is determined by the sizes of the sets of influence, it is natural to try to reduce these sizes whenever this does not cause a significant increase of the consistency error. In view of the role of the growth function as an indicator of the consistency error obtainable on a given influence set, we consider the problem of finding a significantly smaller subset of a given set of influence such that

(14)

for some small constant.

In fact, ℓ1-minimal formulas already generate a subset whose size does not exceed the dimension of the polynomial space as soon as the original set of influence is larger than that dimension. Indeed, many weights vanish when (4) is minimized subject to (5), because this optimization problem can be interpreted as a linear program; see [5, Section 4] for more details. The papers [10, 2] have applied the sparse positive formulas and subsets obtained in this way in the meshless finite difference method.
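This linear-programming computation can be sketched with scipy.optimize.linprog: splitting the weights into positive and negative parts makes the ℓ1 objective linear, and basic optimal solutions of the resulting program are automatically sparse. The plain ℓ1 norm below is our simplification of the distance-weighted functional of [5].

```python
import numpy as np
from scipy.optimize import linprog

def l1_weights(M, rhs):
    """Sparse weights minimizing sum(|w|) subject to M @ w = rhs.

    Split w = u - v with u, v >= 0, so that sum(u) + sum(v) equals ||w||_1 at
    the optimum; basic solutions of the LP have few nonzero entries."""
    m, n = M.shape
    c = np.ones(2 * n)                       # objective: sum(u) + sum(v)
    A_eq = np.hstack([M, -M])                # exactness: M @ (u - v) = rhs
    res = linprog(c, A_eq=A_eq, b_eq=rhs, bounds=(0, None), method="highs")
    if not res.success:
        raise RuntimeError(res.message)
    u, v = res.x[:n], res.x[n:]
    return u - v
```

A positive stencil in the spirit of [10] would additionally constrain the signs of the weights.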

Moreover, ℓ1-minimal formulas are often of even smaller size if the set of influence allows this, for example when the operator is the Laplacian in two variables, the polynomial order is 3 or 4, and the set of influence contains a sufficiently localized 5-star subset centered at the node. In this case the classical 5-point stencil of the finite difference method is automatically recovered.

Unfortunately, ℓ1-minimal formulas are relatively expensive to compute, and numerical methods for them are not always reliable. Therefore we suggest an alternative, significantly more efficient method of selecting a sparse subset satisfying (14), with the constant estimated in Theorem 1.

We assume without loss of generality a normalization of the problem, which still covers the general setting. Let a basis for the polynomial space be chosen,

with

(15)

In particular, after an appropriate translation and scaling of the coordinate system, the monomial basis may be used, since the growth function is scale invariant [5, Section 2]; see also a discussion of the scalability of numerical differentiation formulas in [5, 6].

The exactness condition (5) is equivalent to the system of linear equations

(16)

which is consistent if and only if the growth function is finite [5, Theorem 9]. Typically (but not necessarily) the number of nodes in the set of influence exceeds the dimension of the polynomial space. An efficient method for computing a sparse solution of a consistent linear system is to employ the QR factorization of the system matrix with column pivoting, see [7, Section 12.2.1]. It produces a solution with at most as many nonzero components as the rank of the matrix, and it has been successfully applied to multivariate Vandermonde matrices in order to select good points for polynomial interpolation on domains [11]. Applied to the collocation matrix directly, this method however does not seem to produce useful sets of influence for meshless finite difference methods.

We suggest applying a pivoted QR factorization after rescaling the system (16) with the help of a suitable diagonal matrix

We assume that (16) has at least one solution. In the first case, we transform the linear system into the form

and compute a QR factorization of the rescaled matrix with column pivoting,

where the factors are a permutation matrix, an orthogonal matrix and an upper triangular, nonsingular matrix [7, Section 5.4.1]. Let the largest index of the nonzero components of the transformed right-hand side be such that

(17)

Since (16) is consistent, the trailing components of the transformed right-hand side must be zero. We rewrite the factorization in the form

(18)

with an upper triangular and nonsingular matrix. Then the equation is equivalent to

(19)

which has a sparse solution with at most this many nonzero components, determined by the conditions

(20)

Then the corresponding weight vector also has at most this many nonzero components and satisfies (16). The subset of the original set of influence that corresponds to the nonzero components of this vector is the selected sparse set of influence.
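One possible reading of this procedure in code form is sketched below. The column scaling by inverse powers of the distance to the center is our guess at the "appropriate" weighting, and the treatment of the center node and the rank tolerance are likewise our own choices, so this is an illustration of the mechanics rather than the paper's exact algorithm.

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def sparse_stencil_weights(M, rhs, dists, k):
    """Select a sparse subset of a set of influence and corresponding weights
    via a pivoted QR factorization of a column-scaled collocation matrix.

    M: one row per basis polynomial, one column per node of the original set
       of influence; rhs: the differential operator applied to the basis
       polynomials at the center node; dists: distances of the nodes to the
       center; k: exponent of the (heuristic) distance weighting."""
    scale = np.where(dists > 0, dists, 1.0) ** (-k)   # avoid division by zero at the center
    A = M * scale                                     # scale the columns
    Q, R, piv = qr(A, pivoting=True)
    tol = np.finfo(float).eps * max(A.shape) * abs(R[0, 0])
    r = int(np.sum(np.abs(np.diag(R)) > tol))         # numerical rank
    y = Q.T @ rhs
    z = solve_triangular(R[:r, :r], y[:r])            # weights of the selected (scaled) columns
    w = np.zeros(M.shape[1])
    w[piv[:r]] = scale[piv[:r]] * z                   # undo the column scaling
    return w, np.flatnonzero(w)                       # weights and sparse subset of influence
```

In exact arithmetic the number of selected nodes is bounded by the rank of the collocation matrix, which matches the size bound discussed above.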

In the second case we have assumed (15). Hence, by (3), the first equation of the system can be treated separately, and we replace (16) by an equivalent set of equations; note that this is possible thanks to (15). After computing a pivoted QR factorization of the reduced matrix in the form (18) and a solution of (19) satisfying (20), we obtain a weight vector that satisfies (16) and has at most as many nonzero components as the largest index of the nonzero components of the corresponding transformed right-hand side. The subset of the original set of influence that corresponds to the nonzero components of this vector is again the selected sparse set of influence.

The rank of the collocation matrix is independent of the choice of the basis of the polynomial space. The following theorem provides a bound of the type (14) for the selected subset.

Theorem 1.

Under the assumptions above, let the sparse subset of the set of influence be constructed with the help of a pivoted QR factorization as described, together with the corresponding weight vector. Then

(21)
(22)

where the matrices are those defined in (18), and the remaining factor is the number of nonzero components of the weight vector.

Proof.

We first consider the first of the two cases above. Let the ℓ2-minimal weight vector be the minimizer of (10) subject to (16). Then it is the minimal 2-norm solution of the rescaled system, and the corresponding transformed vector is the minimal 2-norm solution of (19). We write any solution of (19) in the form

The constraint in (20) is equivalent to a parametrization by a suitable vector, where the columns of an auxiliary matrix form an orthonormal basis for the relevant null space. Hence (19) is equivalent to

Since , we have . Therefore is the minimal 2-norm solution of the equation in the last display, and by [7, Section 5.5.6], we obtain

Since , and , (21) follows. Since the components of corresponding to form the only solution of , where

it follows by (12) that

and (22) follows from (21).

Consider now the second case. By the same arguments we obtain the estimate

where for the minimal 2-norm solution of . Since , we have . Moreover, in view of (15),

Hence, (16) is equivalent to

which implies that the ℓ2-minimal weight vector that minimizes

subject to (16) is given by

since the latter minimizes

This implies the required relation, and (21) follows. The bound (22) is inferred from (21) by the same argument as before. ∎

References

  • [1] V. Bayona, N. Flyer, B. Fornberg, and G. A. Barnett. On the role of polynomials in RBF-FD approximations: II. Numerical solution of elliptic PDEs. Journal of Computational Physics, 332:257–273, 2017.
  • [2] V. Bayona, M. Moscoso, and M. Kindelan. Optimal constant shape parameter for multiquadric based RBF-FD method. J. Comput. Phys., 230(19):7384–7399, 2011.
  • [3] O. Davydov and D. T. Oanh. Adaptive meshless centres and RBF stencils for Poisson equation. J. Comput. Phys., 230:287–304, 2011.
  • [4] O. Davydov and R. Schaback. Error bounds for kernel-based numerical differentiation. Numerische Mathematik, 132(2):243–269, 2016.
  • [5] O. Davydov and R. Schaback. Minimal numerical differentiation formulas. Numerische Mathematik, 140(3):555–592, 2018.
  • [6] O. Davydov and R. Schaback. Optimal stencils in Sobolev spaces. IMA Journal of Numerical Analysis, 39(1):398–422, 2019.
  • [7] G. H. Golub and C. F. Van Loan. Matrix Computations. The Johns Hopkins University Press, third edition, 1996.
  • [8] M. Gu and S. Eisenstat. Efficient algorithms for computing a strong rank-revealing QR factorization. SIAM Journal on Scientific Computing, 17(4):848–869, 1996.
  • [9] D. T. Oanh, O. Davydov, and H. X. Phu. Adaptive RBF-FD method for elliptic problems with point singularities in 2D. Applied Mathematics and Computation, 313:474–497, 2017.
  • [10] B. Seibold. Minimal positive stencils in meshfree finite difference methods for the Poisson equation. Comput. Methods Appl. Mech. Eng., 198(3-4):592–601, 2008.
  • [11] A. Sommariva and M. Vianello. Computing approximate Fekete points by QR factorizations of Vandermonde matrices. Computers & Mathematics with Applications, 57(8):1324–1336, 2009.