Effective grading refinement for locally linearly independent LR B-splines

10/02/2021
by Francesco Patrizi, et al.
Max Planck Society

We present a new refinement strategy for locally refined B-splines which ensures the local linear independence of the basis functions, the spanning of the full spline space on the underlying locally refined mesh, and grading properties which guarantee the preservation of shape regularity and local quasi-uniformity of the elements during the refinement process.

1 Introduction

Locally Refined (LR) B-splines have been introduced in tor as a generalization of tensor product B-splines to achieve adaptivity in the discretization process. By allowing local insertions in the underlying mesh, the approximation efficiency is dramatically improved, as one avoids wasting degrees of freedom: the number of basis functions is increased only where rapid and large variations occur in the analyzed object. Nevertheless, the adoption of LR B-splines for simulation purposes in the Isogeometric Analysis (IgA) framework iga is hindered by the risk of linear dependence relations lindep. Although a complete characterization of linear independence is still not available, the local linear independence of the basis functions is guaranteed when the underlying Locally Refined (LR) mesh has the so-called Non-Nested-Support (NS) property bressan1; bressan2. The local linear independence not only avoids the hurdles of dealing with singular linear systems, but it also improves the sparsity of the matrices when assembling the numerical solution. Furthermore, it allows the construction of efficient quasi-interpolation schemes N2S2. Such a strong property of the basis functions is a rarity, or at least quite onerous to obtain, among the technologies used for adaptive IgA. For instance, it is not available for (truncated) hierarchical B-splines hb; thb, while it can be achieved for PHT-splines pht and Analysis-suitable (and dual-compatible) T-splines ast only by imposing reduced regularity and by accepting a considerable propagation of the refinement ast1, respectively.

In this work we present a new refinement strategy to produce LR meshes with the NS property. In addition to the local linear independence of the associated LR B-splines, the proposed strategy has two further features: the space spanned coincides with the full space of spline functions, and it guarantees smooth grading in the transitions between coarser and finer regions of the LR meshes produced. The former property boosts the approximation power with respect to the degrees of freedom, as the spaces used for the discretization in the IgA context are in general just subsets of the spline space. Such spanning completeness is more demanding to achieve, in terms of meshing constraints and regularity, respectively, for (truncated) hierarchical B-splines and splines over T-meshes thb1; thb2; bressan2; ast2. The grading properties are instead required to theoretically ensure optimal algebraic rates of convergence in adaptive IgA methods axioms; b, even in the presence of singularities in the PDE data or solution, similarly to what happens in Finite Element Methods (FEM) nochetto. More specifically, the LR meshes generated by the proposed strategy satisfy the requirements listed in the axioms of adaptivity axioms in terms of grading and overall appearance. These axioms constitute a set of sufficient conditions guaranteeing convergence at optimal algebraic rate in adaptive methods. Furthermore, mesh grading has been assumed to prove robust convergence, with respect to mesh size and number of iterations, of solvers for linear systems arising in the adaptive IgA framework hendrik. For these reasons, we have called the strategy the Effective Grading (EG) refinement strategy.

The next sections are organized as follows. In Section 2 we recall the definitions of tensor product meshes and B-splines from a perspective that eases the introduction of LR meshes and LR B-splines. In the second part, we define the NS property for LR meshes and provide the characterization of the local linear independence of the LR B-splines. In Section 3 we first define the EG strategy and then prove that it produces meshes with the NS property. The completeness of the space spanned and the grading of the LR meshes are discussed at the end of the section. Finally, in Section 4 we draw conclusions and outline future research.

2 Preliminaries

In this section we recall the definition of Locally Refined (LR) meshes and B-splines and the conditions ensuring the local linear independence of the latter. We stick to the 2D setting for the sake of simplicity; however, many of the following definitions have a direct generalization to any dimension, see tor for details. We assume the reader to be familiar with the definition and main properties of B-splines, in particular with the knot insertion procedure. An introduction to this topic can be found, e.g., in the review papers manni1; manni2 or in the classical books deboor and schumaker.

2.1 LR meshes and LR B-splines

LR meshes and related sets of LR B-splines are constructed simultaneously and iteratively from tensor meshes and sets of tensor B-splines. We recall that a tensor (product) mesh on an axes-aligned rectangular domain can be represented as a triplet where is a collection (with repetitions) of meshlines, which are the segments connecting two (and only two) vertices of a rectangular grid on , is a bidegree, that is, a pair of integers in , and is a map that counts the number of times any meshline appears in . is called the multiplicity of the meshline . Furthermore, the following constraints are imposed on :

  • if are contiguous and aligned,

  • if is vertical and if is horizontal. In particular, we say that has full multiplicity if the equality holds.

A tensor mesh is open if the meshlines on have full multiplicities.

Given an open tensor mesh , consider another tensor mesh where is a sub-collection of meshlines forming a rectangular grid in a sub-domain of vertical lines and horizontal lines, where a line is counted times if the meshlines in it have multiplicity with respect to , as is such that for all . Such vertical and horizontal lines can be parametrized as and with and such that and and with appearing and times at most in and , respectively, because of the constraint C2 on . On and we can define a tensor (product) B-spline, . Then, we have that the support of is and hence is a tensor mesh in . We say that has minimal support on if no line in traverses entirely and on the meshlines of in the interior of . The collection of all the minimal support B-splines on constitutes the B-spline set on . If instead does not have minimal support on , then there exists a line in entirely traversing which either is not in or is in but its meshlines have a higher multiplicity with respect to than . In both cases, such an exceeding line corresponds to extra knots in either the - or the -direction. One could then express with B-splines of minimal support on by performing knot insertions. An example of a B-spline without minimal support on a tensor mesh is reported in Figure 1.

Figure 1: Example of a B-spline without minimal support on a tensor mesh. Consider the tensor mesh as in figure (a). Let also be the B-spline of bidegree whose knot vectors are and whose support and tensor mesh are highlighted in figure (a). does not have minimal support on , as the vertical line placed at value traverses entirely while its meshlines in are not contained in . However, by knot insertion of in we can express in terms of two minimal support B-splines on , and , with . The supports of the latter partially overlap horizontally and are represented in figure (b).
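To make the knot insertion step concrete, here is a minimal sketch of the standard single-knot insertion for one B-spline given by its local knot vector (an illustration in Python with hypothetical function names, not code from the paper): inserting a knot x into the local knot vector t of a degree-p B-spline rewrites it as a combination of the two B-splines obtained by dropping the last and the first knot of the enlarged vector.

```python
def insert_knot(t, p, x):
    """Split the degree-p B-spline with local knot vector t (len(t) == p + 2)
    by inserting the knot x, with t[0] < x < t[-1].  Returns
    ((alpha1, t1), (alpha2, t2)) such that B[t] = alpha1*B[t1] + alpha2*B[t2],
    using the standard knot insertion coefficients."""
    assert len(t) == p + 2 and t[0] < x < t[-1]
    s = sorted(list(t) + [x])                 # knot vector with x inserted
    t1, t2 = s[:-1], s[1:]                    # drop the last / the first knot
    alpha1 = 1.0 if x >= t[p] else (x - t[0]) / (t[p] - t[0])
    alpha2 = 1.0 if x <= t[1] else (t[-1] - x) / (t[-1] - t[1])
    return (alpha1, t1), (alpha2, t2)

# Example: the degree-1 "hat" B-spline on [0, 1, 2] split at x = 0.5
print(insert_knot([0.0, 1.0, 2.0], 1, 0.5))
# -> ((0.5, [0.0, 0.5, 1.0]), (1.0, [0.5, 1.0, 2.0]))
```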

Given now an open tensor mesh and the corresponding B-spline set , assume that we either

  • raise by one the multiplicity of a set of contiguous and collinear meshlines in , which, however, still has to satisfy the constraints C1–C2,

  • insert a new axis-aligned line with endpoints on , traversing the support of at least one B-spline , and extend to the segments connecting the intersection points of and , by setting it equal to 1 for such new meshlines.

Let be the new collection of meshlines and be the multiplicity for . By construction, there exists at least one B-spline that does not have minimal support on . By performing knot insertions we can however replace in the collection with B-splines of minimal support on . This creates a new set of B-splines of minimal support defined on . We are now ready to define (recursively) LR meshes and LR B-splines.

An LR mesh on is a triplet which either is a tensor mesh or it is obtained by applying the procedure R1 or R2 to which, in turn, is an LR mesh. The LR B-spline set on is the B-spline set on if the latter is a tensor mesh or, in case is not a tensor mesh, it is obtained via knot insertions from the LR B-spline set defined on .

In other words, we refine a coarse tensor mesh by inserting new lines (which can possibly have an endpoint in the interior of ), one at a time, or by raising the multiplicity of a line already on the mesh. On the initial tensor mesh we consider the tensor B-splines and, whenever a B-spline in our collection no longer has minimal support during the mesh refinement process, we replace it by using the knot insertion procedure. The LR B-splines are the final set of B-splines produced by this algorithm. In Figure 2 we illustrate the evolution of an LR B-spline throughout such a process.
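As an illustration of procedure R2, the sketch below (Python, hypothetical data layout: each B-spline is stored through its pair of local knot vectors with a coefficient) replaces, via the univariate knot insertion shown earlier, every B-spline whose support is traversed by a newly inserted vertical segment. A full implementation would repeat the check against all lines of the mesh until every B-spline regains minimal support, as in the example of Figure 2.

```python
def split(knots, x):
    """Univariate knot insertion: insert x into a local knot vector of degree
    len(knots) - 2 and return the two children with their coefficients."""
    p = len(knots) - 2
    s = sorted(list(knots) + [x])
    a1 = 1.0 if x >= knots[p] else (x - knots[0]) / (knots[p] - knots[0])
    a2 = 1.0 if x <= knots[1] else (knots[-1] - x) / (knots[-1] - knots[1])
    return [(a1, tuple(s[:-1])), (a2, tuple(s[1:]))]

def insert_vertical_segment(bsplines, c, y0, y1):
    """bsplines: dict mapping (xknots, yknots) -> coefficient of a spline
    expressed in those B-splines (knot vectors stored as tuples).  Replace
    every B-spline whose support is traversed by the segment {x = c} x [y0, y1]
    and which does not already have c among its x-knots.  Only the newly
    inserted segment is checked here."""
    out = {}
    for (xk, yk), coef in bsplines.items():
        traversed = xk[0] < c < xk[-1] and y0 <= yk[0] and yk[-1] <= y1
        if traversed and c not in xk:
            for a, new_xk in split(xk, c):         # B = a1*B1 + a2*B2
                out[(new_xk, yk)] = out.get((new_xk, yk), 0.0) + a * coef
        else:
            out[(xk, yk)] = out.get((xk, yk), 0.0) + coef
    return out
```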

Figure 2: Evolution of an LR B-spline throughout the refinement process of a tensor mesh. Consider the tensor mesh reported in figure (a). Let be the minimal support B-spline whose support and tensor mesh are highlighted in figure (a). Let us insert a first vertical line in (dashed in figure (a)). This line does not traverse , hence is preserved in the B-spline set on the new LR mesh, as shown in figure (b). We then insert a horizontal line (dashed in figure (b)). This time the line traverses and is replaced by the B-splines and involved in the knot insertion. In figure (c) we see the supports and tensor meshes of the latter on the new LR mesh. In particular, we see that (the bottom B-spline in figure (c)) does not have minimal support on the LR mesh, as there is a vertical line traversing its support without being part of its tensor mesh. Thus is replaced as well, via knot insertion, by two other B-splines . Therefore, in the end, we move from , on the tensor mesh, to on the final LR mesh. The supports and tensor meshes of the latter are represented in figure (d).

We conclude this section with a short list of remarks:

  • In general the mesh refinement process producing a given LR mesh is not unique, as the insertion ordering can often be changed. However, the final LR B-spline set is well defined because it is independent of the insertion ordering, as proved in (tor, Theorem 3.4).

  • The LR B-spline set is in general only a subset of the set of minimal support B-splines defined on the LR mesh, although the two sets coincide on the initial tensor mesh. When inserting new lines, the LR B-splines are the result of the knot insertion procedure applied to LR B-splines defined on the previous LR mesh, while some minimal support B-splines could be created from scratch on the new LR mesh. Further details and examples can be found in (lindep, Section 5).

  • We have introduced LR meshes and LR B-splines starting from open tensor meshes and related sets of tensor B-splines. It is actually not necessary for the initial tensor mesh to be open, as long as it is possible to define at least one tensor B-spline on it. The openness was assumed precisely to guarantee this requirement.

  • In the next sections, we always consider tensor and LR meshes with boundary meshlines of full multiplicity and internal meshlines of multiplicity 1, if not specified otherwise. In particular, this means that we update the LR meshes and LR B-spline sets only by performing the procedure R2.

2.2 Local linear independence and NS-property

The LR B-splines coincide with the tensor B-splines when the underlying LR mesh is a tensor mesh, and in general the formulation of LR B-splines remains broadly similar to that of tensor B-splines even though the former address local refinements. As a consequence, in addition to making them one of the most elegant extensions to achieve adaptivity, this similarity implies that many of the B-spline properties are preserved by the LR B-splines. For example, they are non-negative, have minimal support, are piecewise polynomials, and can be expressed in terms of the LR B-splines on finer LR meshes using non-negative coefficients (provided by the knot insertion procedure). Furthermore, it is possible to scale them by means of positive weights so that they also form a partition of unity, see (tor, Section 7).

However, as opposed to tensor B-splines, they may fail to be locally linearly independent. In fact, the set of LR B-splines can even be linearly dependent (examples can be found in tor; lindep; N2S2).

Nevertheless, in bressan1; bressan2 a characterization of the local linear independence of the LR B-splines has been provided in terms of meshing constraints leading to particular arrangements of the LR B-spline supports on the LR mesh. In this section we recall such characterization.

First of all, we introduce the concept of nestedness. Given an LR mesh , let be two different LR B-splines defined on . We say that is nested in if

  • ,

  • for all the meshlines of in .

An LR mesh on which no LR B-spline is nested in another is said to have the Non-Nested-Support property, or NS property for short. Figure 3 shows an example of an LR B-spline nested in another.

Figure 3: Example of nested LR B-splines on the LR mesh shown in (a). All the meshlines of have multiplicity 1 except those on the left edge, highlighted with a double line, which have multiplicity 2. In (b)–(d) we show three LR B-splines defined on , represented by means of their supports and tensor meshes. All the meshlines in and have multiplicity 1, except those on the left edge in , which have multiplicity 2. Therefore, is nested in , while is nested in neither nor , despite that and , because the shared meshlines on the left edge of , and have multiplicity 2 in and multiplicity 1 in and .
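The following sketch (Python, hypothetical data layout; the authoritative definition is the one recalled above and in bressan1; bressan2) spells out the check suggested by Figure 3: support inclusion plus a comparison of the multiplicities on the meshlines shared by the two tensor meshes.

```python
def is_nested(b1, b2):
    """Check whether b1 is nested in b2.  Each B-spline is a dict with
    'support' = (xmin, ymin, xmax, ymax) and 'mult' = {meshline: multiplicity}
    describing its tensor mesh.  Reading of the definition: supp(b1) is
    contained in supp(b2) and, on the meshlines shared by the two tensor
    meshes, the multiplicities in b1 do not exceed those in b2."""
    (x0, y0, x1, y1), (X0, Y0, X1, Y1) = b1["support"], b2["support"]
    if not (X0 <= x0 <= x1 <= X1 and Y0 <= y0 <= y1 <= Y1):
        return False
    shared = set(b1["mult"]) & set(b2["mult"])
    return all(b1["mult"][g] <= b2["mult"][g] for g in shared)

def has_ns_property(lr_bsplines):
    """The LR mesh has the NS property when no LR B-spline is nested in another."""
    return not any(is_nested(b1, b2)
                   for b1 in lr_bsplines for b2 in lr_bsplines if b1 is not b2)
```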

The next result, from (bressan2, Theorem 4), relates the local linear independence of the LR B-splines to the NS property of the LR mesh. In order to present it, we recall that given an LR mesh , induces a box-partition of , that is, a collection of axes-aligned rectangles, called boxes, with disjoint interiors covering . Hereafter, we will just call them boxes of , with an abuse of notation, instead of boxes in the box-partition induced by .

Theorem 2.1.

Let be an LR mesh and let be the related LR B-spline set. The following statements are equivalent.

  1. The elements of are locally linearly independent.

  2. has the NS property.

  3. Any box of is contained in exactly LR B-spline supports, that is,

  4. The LR B-splines in form a partition of unity, without the use of scaling weights.
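Statement 3 of Theorem 2.1 also gives a direct numerical test for local linear independence: count, for each box, how many LR B-spline supports contain it and compare the count with the number of tensor B-splines that are nonzero on a box, namely (p1+1)(p2+1) for bidegree (p1, p2). A minimal sketch with a hypothetical data layout:

```python
def contains(support, box):
    """True if the axis-aligned rectangle `support` contains `box`
    (both given as (xmin, ymin, xmax, ymax))."""
    return (support[0] <= box[0] and support[1] <= box[1]
            and box[2] <= support[2] and box[3] <= support[3])

def satisfies_theorem_2_1(boxes, supports, p1, p2):
    """Check statement 3 of Theorem 2.1: every box of the box-partition is
    contained in exactly (p1 + 1) * (p2 + 1) LR B-spline supports."""
    target = (p1 + 1) * (p2 + 1)
    return all(sum(contains(s, b) for s in supports) == target for b in boxes)
```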

In the next section we present an algorithm to construct LR meshes with the NS property. The resulting LR meshes will furthermore show a nice gradual grading from coarser to finer regions, which avoids both the thinning of the boxes in any direction and the placement of small boxes side by side with large boxes.

3 The Effective grading refinement strategy

3.1 Definition and proof of the NS property

In this section we present a refinement strategy to generate LR meshes with the NS property. We call it the Effective Grading (EG) refinement strategy as the finer regions smoothly fade towards the coarser regions in the resulting LR meshes.

To the best of our knowledge, two other strategies have been proposed so far to build LR meshes with the NS property: the Non-Nested-Support-Structured (NS) mesh refinement N2S2 and the Hierarchical Locally Refined (HLR) mesh refinement bressan2. The NS mesh refinement is a function-based refinement strategy, which means that at each iteration we refine those LR B-splines contributing most to the approximation error, in some norm. The NS mesh strategy does not require any condition on the LR B-splines selected for refinement to ensure the NS property of the resulting LR meshes. On the other hand, no grading has been proved for the final LR meshes, and skinny elements may be present in them. The HLR refinement is instead a box-based strategy, which means that at each iteration the region to refine is identified by those boxes, in the box-partition induced by the LR mesh, in which a larger error is committed, in some norm. The HLR strategy produces nicely graded LR meshes, but it requires the regions to be refined and the maximal resolution to be chosen a priori to ensure the NS property. Usually one does not know in advance where the error will be large and how fine the mesh has to be to reduce the error below a certain tolerance. Therefore, the conditions for the NS property constitute a drawback for the adoption of the HLR strategy in many practical situations.

The EG refinement is a box-based strategy providing LR meshes very similar to those that one gets with the HLR strategy, when fixing the refinement regions and the number of iterations. As we shall show, the LR meshes generated will have the NS property under the reasonable assumption that the region of refinement at iteration is a sub-region of that considered at iteration . The EG strategy inserts new lines only along direction at iteration . In particular, the new lines are orthogonal to those inserted at iteration . The refinement can be schematized as follows.

EG strategy.

Given an LR mesh generated by applications of the EG strategy, the associated set of LR B-splines and a region to be refined, iteration of the EG strategy consists of the following steps.

  1. Extend at both ends all the lines inserted at iteration (along direction ) to intersect more orthogonal meshlines (or possibly up to the domain’s boundary) at each end.

  2. Define as the set of the LR B-splines whose supports intersect the region , .

  3. Halve all the boxes of the tensor meshes associated to the LR B-splines in along the th direction, that is, horizontally if and vertically if ; see the sketch after this list.
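The sketch below (Python, hypothetical records storing each LR B-spline through its local knot vectors; not the paper's implementation) covers steps 2 and 3 of an iteration: it marks the B-splines whose supports meet the region and returns the segments whose insertion halves all the boxes of their tensor meshes along the chosen direction. Step 1, the extension of the lines inserted at the previous iteration, and the insertion itself are left to the LR machinery recalled in Section 2.

```python
def intersects(support, region):
    """Open overlap test between axis-aligned rectangles (xmin, ymin, xmax, ymax)."""
    return (support[0] < region[2] and region[0] < support[2]
            and support[1] < region[3] and region[1] < support[3])

def eg_new_segments(lr_bsplines, region, d):
    """Steps 2-3 of one EG iteration in direction d (1: insert vertical lines,
    2: insert horizontal lines).  Each B-spline is a dict {'xknots': ...,
    'yknots': ...}; its support is the extent of the two knot vectors."""
    segments = set()
    for b in lr_bsplines:
        xk, yk = b["xknots"], b["yknots"]
        support = (xk[0], yk[0], xk[-1], yk[-1])
        if not intersects(support, region):        # step 2: marking
            continue
        if d == 1:                                 # step 3: halve boxes in x
            xs = sorted(set(xk))
            segments |= {("v", (a + c) / 2, yk[0], yk[-1]) for a, c in zip(xs, xs[1:])}
        else:                                      # step 3: halve boxes in y
            ys = sorted(set(yk))
            segments |= {("h", (a + c) / 2, xk[0], xk[-1]) for a, c in zip(ys, ys[1:])}
    return segments
```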

Figure 4: Example of an EG refinement iteration. Assume , odd, and that after iterations of the strategy we still have a tensor mesh, as in figure (a). In (a)–(b) we show the refinement at iteration . In particular, in (a) we show the boxes selected for refinement and in (b) the mesh obtained at the end of the process. In figures (c)–(f) we show iteration in more detail. In (c) we highlight the boxes marked for refinement. Note that these boxes form a sub-region of the region selected in figure (a); this is the condition ensuring local linear independence of the LR B-spline set, as we shall show later. In (d) we extend all the lines inserted at iteration (step 1 of the strategy). In (e) we identify the LR B-spline supports, highlighted in the figure, containing the marked boxes (step 2 of the strategy). Finally, in (f) we halve the boxes in these supports along direction (step 3 of the strategy).

In Figure 4 we visually represent the steps of an iteration of the EG refinement on a given LR mesh. We remark that the LR meshes produced by the EG strategy have boundary meshlines of full multiplicity and internal meshlines of multiplicity 1.

In order to prove that such LR meshes have the NS property under the aforementioned conditions on , we rely on (bressan2, Theorem 11). This theorem states that if there is “enough separation”, in the th direction, between the boundaries of the regions refined at iteration and at iteration , then the NS property is guaranteed on the resulting LR mesh. Let and be these two regions. The required separation between the two boundaries is quantified by the so-called shadow map in the direction of . This map determines a superset of , larger only along the th direction, which (bressan2, Theorem 11) assumes to be contained in to ensure the NS property.

As a first step, we therefore introduce the shadow map of a set in with respect to a tensor mesh, in some direction. The original definition of the shadow map is given in (bressan2, Definition 10). In this paper we provide a more constructive definition, easier to use in practice, and we show in the appendix that the two definitions are equivalent.

Let and be, respectively, a tensor mesh and a set in . We present the horizontal shadow map; the procedure for the vertical one is analogous. For the sake of simplicity, let us assume first that has only one connected component. For any point we consider the two horizontal halflines from , and . Let be the intersection points of with the vertical meshlines of (counting their multiplicities), where is the closest to and the farthest. In particular, note that if lies on a vertical split of , then . We define

(1)

The horizontal shadow of with respect to , , is the set of points in the segment . Then we define the horizontal shadow of with respect to , , as

If has several connected components, , then the shadow is the union of the shadows of the connected components:

Figure 5: Examples of the horizontal shadow map of different sets with respect to a given tensor mesh . The red regions are the sets considered, and the unions of the red and blue regions are their shadows (we refer to the online version of the paper for the colors).

The subscript 1 indicates that is a horizontal shadow map, i.e., in the first direction. The vertical shadow map with respect to is denoted by instead. In Figure 5 we show three examples of horizontal shadow maps, with respect to a given tensor mesh , for three different sets and degree . In particular, the sets considered are unions of boxes of . We made this choice because these are the kinds of sets considered for refinement in practice.
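A sketch of the construction for a single point and for a box, in Python (hypothetical helper names; the number k of crossings counted on each side is degree-dependent in the paper and is kept here as an explicit parameter):

```python
def horizontal_shadow_of_point(x, xlines, k):
    """Endpoints of the horizontal shadow of a point with abscissa x with
    respect to a tensor mesh whose vertical meshlines sit at the sorted
    abscissae `xlines`, repeated according to their multiplicities.  On each
    side we walk to the k-th crossing of the horizontal halfline with the
    vertical meshlines; a line through x itself counts as the first crossing
    on both sides.  If fewer than k crossings exist, we stop at the boundary."""
    left = [v for v in xlines if v <= x][::-1]     # crossings, nearest first
    right = [v for v in xlines if v >= x]
    lo = left[min(k, len(left)) - 1] if left else x
    hi = right[min(k, len(right)) - 1] if right else x
    return lo, hi

def horizontal_shadow_of_box(box, xlines, k):
    """Shadow of an axis-aligned box (xmin, ymin, xmax, ymax): only the
    x-extent is enlarged, the y-extent stays the same."""
    lo, _ = horizontal_shadow_of_point(box[0], xlines, k)
    _, hi = horizontal_shadow_of_point(box[2], xlines, k)
    return (lo, box[1], hi, box[3])
```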

We are now set to prove the NS property of the LR meshes produced by the EG refinement strategy.

Theorem 3.2.

Let be a sequence of LR-meshes such that is the boundary of and is obtained by refining in some region using the EG strategy in direction . If the sequence is such that then each has the NS property.

Proof.

Let be the sequence of tensor meshes with the boundary of and obtained by halving along direction the boxes in . By (bressan2, Theorem 11), we get the statement if in every we can find a sequence of nested regions such that and , where we have denoted as the shadow map with respect to in the direction to simplify the notation. Let be the B-spline sets on the tensor meshes and let

(2)

for any . Then, for a fixed , we set for all , and . We start by showing that for all , which in particular proves that , i.e., , as does not change the inclusion when applied to both sides. If is such that , then as well because . Such has been obtained via knot insertions when halving the boxes in the tensor mesh of some such that . Therefore, and .

We now show that for , that is, that if , where are the shadows in the directions and with respect to and , respectively. First of all, we note that we actually only have to prove that , as the shadow map does not change the inclusions when applied to both sides. We have already proved that . It remains to show that for all . Without loss of generality, we can assume that is horizontal, that is, . Let be on a vertical edge of . Either is in the interior of or is on . In the latter case there is nothing to prove, as would then also be in and in . Let us then focus on the former case. We can assume without loss of generality that we exit when moving horizontally to the left from ; the mirrored case can be treated similarly. As is on a vertical edge of , there exists such that is on the left edge of . is obtained by knot insertions after vertically refining some of the B-splines in . In particular, there exist such that the right edges of and are at most one box-width of apart, see Figure 6.

Figure 6: Supporting figures for the proof of Theorem 3.2. Here we show what happens when and is a curve, for the sake of simplicity; however, the construction holds for any bidegree and . We illustrate the two possible configurations in (a)–(b) and (c)–(d), respectively. In figure (a) we see the mesh restricted to , for , and the region partially overlapping . In particular, partially covers the second half (counting from the right) of a box in the right-most column of boxes of . This fact implies that when we take and then with on the left edge of , the right edges of and are one box-width of apart, as shown in figure (b). On the contrary, if does not touch the second half of any box in the right-most column of boxes in , as shown in figure (c), then the right edges of and coincide.

As has minimal support, it cannot be entirely traversed by a line of that is not in . This means that there are at least more vertical lines in to the left of (counting the multiplicities in case is on the left edge of ). Hence the segment , with on the left horizontal halfline from and defined as in Equation (1), is contained in .

Consider now on a horizontal edge of . Then is either still in or contains the endpoint of such a horizontal edge. As and , we have .

Summing up, we have that and as well, for any . By taking the union over all the , this means that , which implies that .

Finally, for any fixed, is obtained by first extending the refinement applied at iteration from to (step 1. of the EG strategy) and then by halving the boxes of in (step 3. of the EG strategy). This means that . ∎

Remark 3.3.

The LR meshes obtained under the same hypotheses of the theorem, but with the directions swapped between odd and even iterations, that is, from to , also have the NS property.

In Figure 7 we show LR meshes obtained by performing 16 iterations (8 vertical and 8 horizontal insertions) of the EG strategy localized on fixed “random” regions.

Figure 7: Examples of LR meshes after 16 iterations (8 vertical and 8 horizontal insertions) of EG refinement localized in fixed “random” regions, using bidegree .

3.2 Grading and spanning properties

In this section we present further properties of the EG strategy. We first analyze the grading of the mesh and then we identify the space spanned by the related set of LR B-splines. More specifically, we show

  • bounds on the thinning of the boxes throughout the refinement,

  • bounds on the size ratio of adjacent boxes,

  • that the space spanned coincides with the full spline space on the LR mesh.

Without loss of generality, let be a square. Assume that is an LR mesh built using the EG strategy as in the hypotheses of Theorem 3.2. Then the aspect ratio of a box of is either or . Indeed, as the regions of refinement are nested, a box created at iteration can either be split in two at iteration , or it will be preserved until the end of the process. When there is only one box in the LR mesh, that is, , which is a square. When is odd, we have , which means we halve the boxes horizontally. Hence, we produce rectangular boxes, of aspect ratio , from square boxes. When is even, we instead have , which means we halve vertically some of the rectangular boxes produced at the previous iteration, thereby creating square boxes again (of aspect ratio ). Furthermore, we note that, by construction, a box of size in can only be side by side with boxes whose size is double or half in one or both dimensions, i.e., boxes of sizes and . These bounds on the box sizes and on the neighboring boxes prevent thinning throughout the refinement process and guarantee smoothly graded transitions between finer and coarser regions of the LR meshes produced by the EG strategy. In particular, given two adjacent boxes of , calling the square root of the area of and the length of the diagonals in , it holds
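For illustration, denoting by h the square root of the area of a box and by d the length of its diagonals (as above), the bounds just derived can be written in the following form; the constants below follow from the aspect ratios 1 and 2 and from adjacent areas differing at most by a factor 4, and may differ from the exact constants used in (A1)–(A2).

```latex
% Shape regularity (A1) and local quasi uniformity (A2), in a form consistent
% with the bounds described above (illustrative constants).
\begin{align*}
  \text{(A1)}\quad & d_{\beta} \le \sqrt{\tfrac{5}{2}}\, h_{\beta}
     && \text{for every box } \beta,\\
  \text{(A2)}\quad & h_{\beta} \le 2\, h_{\beta'}
     && \text{for all pairs of adjacent boxes } \beta,\ \beta'.
\end{align*}
```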

Inequalities (A1)–(A2) show that the box-partition associated to satisfies the shape regularity and local quasi uniformity conditions, which are two of the so-called axioms of adaptivity: a set of requirements which theoretically ensure an optimal algebraic convergence rate in adaptive FEM and IgA, see axioms and (b, Sections 5–6) for details. In particular, conditions (A1)–(A2) are what is demanded in terms of grading and overall appearance of the mesh used for the discretization.

We now prove another important feature of the EG strategy: the space spanned is the entire spline space. The spline space on a given LR mesh , denoted by , is defined as

In general, the spaces spanned by generalizations of the B-splines addressing adaptivity, such as LR spline spaces, are just subspaces of the spline space on the underlying mesh. The next result ensures that, when using LR meshes generated by the EG strategy, the span of the LR B-splines actually coincides with the entire spline space.

Theorem 3.4.

Let be a sequence of LR-meshes obtained under the hypotheses of Theorem 3.2 and let be the associated sequence of LR B-spline sets. Then for all .

Proof.

When , the LR B-spline set coincides with the tensor B-spline set and the statement is true by the Curry-Schoenberg theorem. Fix now and let again , for , be defined as in Equation (2). We recall that at iteration we have performed refinements along the th direction in . As is constituted of B-spline supports on , the newly inserted lines at iteration traverse at least orthogonal meshlines in , that is, the number of lines in the tensor meshes of such B-splines on along the th direction. By (bressan2, Theorem 12), this “length” of the newly inserted lines, in terms of intersections with the previous LR mesh at each iteration, guarantees that . ∎
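For the base case, the Curry-Schoenberg theorem identifies the tensor B-splines as a basis of the spline space, so the two spaces have the same dimension; in the notation below (introduced here only for illustration), with open knot vectors the count is simply

```latex
% Dimension of the tensor-product spline space of bidegree (p_1, p_2) on an
% open tensor mesh: the number of univariate B-splines in each direction is
% the number of knots (with multiplicity) minus p_d + 1.
\dim \mathbb{S}_{p_1,p_2}(\mathcal{M}_0) \;=\; n_1\, n_2,
\qquad n_d \;=\; \#\{\text{knots in direction } d\} - (p_d + 1),
\quad d = 1,2 .
```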

Remark 3.5.

Also the LR B-splines defined on the LR meshes considered in Remark 3.3 span the entire spline space.

This spanning property is achieved also by using the HLR strategy bressan2.

4 Conclusion

We have presented a simple refinement strategy ensuring the local linear independence of the associated LR B-splines. Furthermore, the width of the regions refined at each iteration of the strategy guarantees that the span of the LR B-splines coincides with the whole spline space on the LR mesh.

We have called it the Effective Grading (EG) strategy as the transition between coarser and finer regions is rather gradual and smooth in the LR meshes produced, with strict bounds on the aspect ratio of the boxes and on the sizes of neighboring boxes. Such a grading ensures that the requirements on the mesh appearance listed in the axioms of adaptivity axioms; b are verified. The latter are a set of sufficient conditions on mesh grading, refinement strategy, error estimates and approximant spaces in an adaptive numerical method which theoretically guarantee an optimal algebraic rate of convergence of the numerical solution to the exact solution. The verification of the remaining axioms will be the topic of future research.

Acknowledgments

This work was partially supported by the European Council under the Horizon 2020 Project Energy oriented Centre of Excellence for computing applications - EoCoE, Project ID 676629. The author is a member of Gruppo Nazionale per il Calcolo Scientifico, Istituto Nazionale di Alta Matematica.

Appendix A Shadow map

In this appendix we show that our definition of shadow map is equivalent to that given in (bressan2, Definition 10). In order to recall the latter, we introduce the separation distance. Given a direction , let be a tensor mesh and be the subcollection in of all the meshlines in the th direction. Given two points the separation distance of and along direction with respect to the tensor mesh is defined as

with the segment along direction between and . Given a set , we define

The definition of shadow map along direction with respect to the tensor mesh given in bressan2 is then

We use the gothic symbol to distinguish it from our definition of the shadow map given in Section 3. We now show that the two are equivalent. Let . Then and . Let instead for some . Then and so . Therefore, we have proved that . We now show the opposite inclusion, that is, . Let . Then . If , then such an infimum is reached for some and so . If instead , the infimum is reached for , which means that . In either case, .
