1 Introduction
Locally Refined (LR) B-splines have been introduced in tor
as a generalization of tensor product B-splines to achieve adaptivity in the discretization process. By allowing local insertions in the underlying mesh, the approximation efficiency is dramatically improved, as one avoids wasting degrees of freedom by increasing the number of basis functions only where rapid and large variations occur in the analyzed object. Nevertheless, the adoption of LR B-splines for simulation purposes in the Isogeometric Analysis (IgA) framework
iga is hindered by the risk of linear dependence relations lindep. Although a complete characterization of linear independence is still not available, the local linear independence of the basis functions is guaranteed when the underlying LR mesh has the so-called Non-Nested-Support (NS) property bressan1; bressan2. Local linear independence not only avoids the hurdles of dealing with singular linear systems, but it also improves the sparsity of the matrices when assembling the numerical solution. Furthermore, it allows the construction of efficient quasi-interpolation schemes
N2S2. Such a strong property of the basis functions is a rarity, or at least quite onerous to gain, among the technologies used for adaptive IgA. For instance, it is not available for (truncated) hierarchical B-splines hb; thb, while it can be achieved for PHT-splines pht and Analysis-suitable (and dual-compatible) T-splines ast only by, respectively, imposing reduced regularity and accepting a considerable propagation of the refinement ast1.

In this work we present a new refinement strategy to produce LR meshes with the NS property. In addition to the local linear independence of the associated LR B-splines, the proposed strategy has two further features: the space spanned coincides with the full space of spline functions, and it guarantees a smooth grading in the transitions between coarser and finer regions of the LR meshes produced. The former property boosts the approximation power with respect to the degrees of freedom, as the spaces used for the discretization in the IgA context are in general just subsets of the spline space. Such spanning completeness is more demanding to achieve, in terms of meshing constraints and regularity respectively, for (truncated) hierarchical B-splines and splines over T-meshes thb1; thb2; bressan2; ast2. The grading properties are instead required to theoretically ensure optimal algebraic rates of convergence in adaptive IgA methods axioms; b, even in the presence of singularities in the PDE data or solution, similarly to what happens in Finite Element Methods (FEM) nochetto. More specifically, the LR meshes generated by the proposed strategy satisfy the requirements listed in the axioms of adaptivity axioms in terms of grading and overall appearance. Such axioms constitute a set of sufficient conditions guaranteeing convergence at optimal algebraic rate in adaptive methods.
Furthermore, mesh grading has been assumed to prove convergence, robust with respect to mesh size and number of iterations, of solvers for the linear systems arising in the adaptive IgA framework hendrik. For these reasons, we have called the strategy the Effective Grading (EG) refinement strategy.
The next sections are organized as follows. In Section 2 we recall the definitions of tensor product meshes and B-splines from a perspective that eases the introduction of LR meshes and LR B-splines. In the second part, we define the NS property for LR meshes and provide the characterization of the local linear independence of the LR B-splines. In Section 3 we first define the EG strategy and then prove that the meshes it produces have the NS property. The completeness of the space spanned and the grading of the LR meshes are discussed at the end of the section. Finally, in Section 4 we draw conclusions and outline future research.
2 Preliminaries
In this section we recall the definitions of Locally Refined (LR) meshes and B-splines and the conditions ensuring the local linear independence of the latter. We stick to the 2D setting for the sake of simplicity; however, many of the following definitions have a direct generalization to any dimension, see tor for details. We assume the reader to be familiar with the definition and main properties of B-splines, in particular with the knot insertion procedure. An introduction to this topic can be found, e.g., in the review papers manni1; manni2 or in the classical books deboor and schumaker.
2.1 LR meshes and LR B-splines
LR meshes and related sets of LR B-splines are constructed simultaneously and iteratively from tensor meshes and sets of tensor B-splines. We recall that a tensor (product) mesh on an axis-aligned rectangular domain can be represented as a triplet where is a collection (with repetitions) of meshlines, which are the segments connecting two (and only two) vertices of a rectangular grid on , is a bidegree, that is, a pair of integers in , and is a map that counts the number of times any meshline appears in . is called the multiplicity of the meshline . Furthermore, the following constraints are imposed on :

if are contiguous and aligned,

if is vertical and if is horizontal. In particular, we say that has full multiplicity if the equality holds.
A tensor mesh is open if the meshlines on have full multiplicities.
Given an open tensor mesh , consider another tensor mesh where is a subcollection of meshlines forming a rectangular grid in a subdomain of vertical lines and horizontal lines, where a line is counted times if the meshlines in it have multiplicity with respect to , as is such that for all . Such vertical and horizontal lines can be parametrized as and with and such that and and with appearing and times at most in and , respectively, because of the constraint C2 on . On and we can define a tensor (product) B-spline, . Then, we have that the support of is and hence is a tensor mesh in . We say that has minimal support on if no line in traverses entirely and on the meshlines of in the interior of . The collection of all the minimal support B-splines on constitutes the B-spline set on . If instead does not have minimal support on , then there exists a line in entirely traversing which either is not in or is in but with meshlines of higher multiplicity with respect to than . In both cases, such an exceeding line corresponds to extra knots in either the - or -direction. One could then express with B-splines of minimal support on by performing knot insertions. An example of a B-spline without minimal support on a tensor mesh is reported in Figure 1.
whose knot vectors are
and whose support and tensor mesh are highlighted in figure (a). does not have minimal support on , as the vertical line placed at value traverses entirely while its meshlines in are not contained in . However, by knot insertion of in we can express in terms of two minimal support B-splines on , and , with . The supports of the latter partially overlap horizontally and are represented in figure (b).

Given now an open tensor mesh and the corresponding B-spline set , assume that we either
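The replacement step sketched above can be made concrete in code. The following Python fragment (a minimal illustration with names of our choosing, independent of any LR B-spline library) evaluates a B-spline from its local knot vector via the Cox–de Boor recursion and splits it, by inserting a single knot `x`, into two B-splines weighted by the standard knot insertion coefficients:

```python
def bspline(knots, p, u):
    """Evaluate the degree-p B-spline with local knot vector `knots`
    (length p + 2) at u, via the Cox-de Boor recursion."""
    if p == 0:
        t0, t1 = knots
        return 1.0 if t0 <= u < t1 else 0.0
    left = right = 0.0
    if knots[p] > knots[0]:
        left = (u - knots[0]) / (knots[p] - knots[0]) * bspline(knots[:-1], p - 1, u)
    if knots[p + 1] > knots[1]:
        right = (knots[p + 1] - u) / (knots[p + 1] - knots[1]) * bspline(knots[1:], p - 1, u)
    return left + right

def insert_knot(knots, p, x):
    """Split B[knots] as a1 * B[t1] + a2 * B[t2] by inserting the knot x,
    with t1 and t2 the two consecutive length-(p+2) subvectors of the
    refined knot vector, and a1, a2 the knot insertion weights."""
    s = sorted(list(knots) + [x])
    t1, t2 = s[:-1], s[1:]
    a1 = 1.0 if x >= knots[p] else (x - knots[0]) / (knots[p] - knots[0])
    a2 = 1.0 if x <= knots[1] else (knots[p + 1] - x) / (knots[p + 1] - knots[1])
    return (a1, t1), (a2, t2)
```

For instance, inserting `x = 1.5` into the degree-2 knot vector `(0, 1, 2, 3)` yields the two overlapping B-splines on `(0, 1, 1.5, 2)` and `(1, 1.5, 2, 3)`, and the identity `B = a1*B1 + a2*B2` can be checked pointwise.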

raise by one the multiplicity of a set of contiguous and collinear meshlines in , which, however, still has to satisfy the constraints C1–C2,

insert a new axis-aligned line with endpoints on , traversing the support of at least one B-spline , and extend to the segments connecting the intersection points of and , by setting it equal to 1 for such new meshlines.
Let be the new collection of meshlines and be the multiplicity for . By construction, there exists at least one B-spline that does not have minimal support on . By performing knot insertions we can however replace in the collection with B-splines of minimal support on . This creates a new set of B-splines of minimal support defined on . We are now ready to define (recursively) LR meshes and LR B-splines.
An LR mesh on is a triplet which either is a tensor mesh or is obtained by applying procedure R1 or R2 to which, in turn, is an LR mesh. The LR B-spline set on is the B-spline set on if the latter is a tensor mesh or, in case is not a tensor mesh, it is obtained via knot insertions from the LR B-spline set defined on .
In other words, we refine a coarse tensor mesh by inserting new lines (which can possibly have an endpoint in the interior of ), one at a time, or by raising the multiplicity of a line already on the mesh. On the initial tensor mesh we consider the tensor B-splines and, whenever a B-spline in our collection no longer has minimal support during the mesh refinement process, we replace it by using the knot insertion procedure. The LR B-splines are the final set of B-splines produced by this algorithm. In Figure 2 we illustrate the evolution of an LR B-spline throughout this process.
We conclude this section with a short list of remarks:

In general the mesh refinement process producing a given LR mesh is not unique, as the insertion ordering can often be changed. However, the final LR B-spline set is well defined because it is independent of the insertion ordering, as proved in (tor, Theorem 3.4).

The LR B-spline set is in general only a subset of the set of minimal support B-splines defined on the LR mesh, although the two sets coincide on the initial tensor mesh. When inserting new lines, the LR B-splines are the result of the knot insertion procedure applied to LR B-splines defined on the previous LR mesh, while some minimal support B-splines could be created from scratch on the new LR mesh. Further details and examples can be found in (lindep, Section 5).

We have introduced LR meshes and LR B-splines starting from open tensor meshes and the related sets of tensor B-splines. It is actually not necessary that the initial tensor mesh be open, as long as it is possible to define at least one tensor B-spline on it. Indeed, openness was assumed only to guarantee this requirement.

In the next sections, unless specified otherwise, we always consider tensor and LR meshes with boundary meshlines of full multiplicity and internal meshlines of multiplicity 1. In particular, this means that we update the LR meshes and LR B-spline sets only by performing procedure R2.
2.2 Local linear independence and NS property
The LR B-splines coincide with the tensor B-splines when the underlying LR mesh is a tensor mesh and, in general, the construction of LR B-splines remains broadly similar to that of tensor B-splines even though the former addresses local refinements. As a consequence, in addition to making them one of the most elegant extensions to achieve adaptivity, this similarity implies that many of the B-spline properties are preserved by the LR B-splines. For example, they are nonnegative, have minimal support, are piecewise polynomials and can be expressed in terms of the LR B-splines on finer LR meshes using nonnegative coefficients (provided by the knot insertion procedure). Furthermore, it is possible to scale them by means of positive weights so that they also form a partition of unity, see (tor, Section 7).
However, as opposed to tensor B-splines, they may fail to be locally linearly independent. In fact, the set of LR B-splines can even be linearly dependent (examples can be found in tor; lindep; N2S2).
Nevertheless, in bressan1; bressan2 a characterization of the local linear independence of the LR B-splines has been provided in terms of meshing constraints, leading to particular arrangements of the LR B-spline supports on the LR mesh. In this section we recall such a characterization.
First of all, we introduce the concept of nestedness. Given an LR mesh , let be two different LR B-splines defined on . We say that is nested in if

,

for all the meshlines of in .
An LR mesh where no LR B-spline is nested is said to have the Non-Nested-Support property, or in short the NS property. Figure 3 shows an example of an LR B-spline nested in another.
The next result, from (bressan2, Theorem 4), relates the local linear independence of the LR B-splines to the NS property of the LR mesh. In order to present it, we recall that given an LR mesh , induces a box-partition of , that is, a collection of axis-aligned rectangles, called boxes, with disjoint interiors covering . Hereafter, with an abuse of notation, we will just call them boxes of instead of boxes in the box-partition induced by .
Theorem 2.1.
Let be an LR mesh and let be the related LR B-spline set. The following statements are equivalent.

The elements of are locally linearly independent.

has the NS property.

Any box of is contained in exactly LR B-spline supports, that is,

The LR B-splines in form a partition of unity, without the use of scaling weights.
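The counting statement of the theorem can be verified directly in the tensor product case, where the LR B-splines are plain tensor B-splines: with open knot vectors, every box is covered by exactly (p1+1)(p2+1) supports. The sketch below (function names are ours) enumerates, for each box of the tensor mesh, the bivariate B-splines whose support contains it:

```python
import itertools

def univariate_locals(U, p):
    """All local knot vectors (length p + 2) of the degree-p B-splines
    on the global knot vector U."""
    return [U[i:i + p + 2] for i in range(len(U) - p - 1)]

def covering_count(U1, U2, p1, p2):
    """For each box of the tensor mesh with knot vectors U1 x U2, count the
    tensor B-splines whose support contains the box.  Returns a dict keyed
    by the lower-left corner of each box."""
    xs, ys = sorted(set(U1)), sorted(set(U2))
    counts = {}
    for (a, b), (c, d) in itertools.product(zip(xs, xs[1:]), zip(ys, ys[1:])):
        n = 0
        for kx in univariate_locals(U1, p1):
            if kx[0] <= a and b <= kx[-1]:          # x-support contains [a, b]
                for ky in univariate_locals(U2, p2):
                    if ky[0] <= c and d <= ky[-1]:  # y-support contains [c, d]
                        n += 1
        counts[(a, c)] = n
    return counts
```

On a genuinely locally refined mesh the same count per box characterizes the NS property, as stated in the theorem.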
In the next section we present an algorithm to construct LR meshes with the NS property. The resulting LR meshes will furthermore exhibit a nice gradual grading from coarser to finer regions, which prevents the boxes from becoming thin in one direction and avoids placing small boxes side by side with large ones.
3 The Effective Grading refinement strategy
3.1 Definition and proof of the NS property
In this section we present a refinement strategy to generate LR meshes with the NS property. We call it Effective Grading (EG) refinement strategy as the finer regions smoothly fade towards the coarser regions in the resulting LR meshes.
To the best of our knowledge, two other strategies have been proposed so far to build LR meshes with the NS property: the Non-Nested-Support-Structured (NS) mesh refinement N2S2 and the Hierarchical Locally Refined (HLR) mesh refinement bressan2. The NS mesh refinement is a function-based refinement strategy, which means that at each iteration we refine those LR B-splines contributing most to the approximation error, in some norm. The NS mesh strategy does not require any condition on the LR B-splines selected for refinement to ensure the NS property of the resulting LR meshes. On the other hand, no grading has been proved for the final LR meshes and skinny elements may appear in them. The HLR refinement is instead a box-based strategy, which means that at each iteration the region to refine is identified by those boxes, in the box-partition induced by the LR mesh, on which a larger error is committed, in some norm. The HLR strategy produces nicely graded LR meshes, but it requires the regions to be refined and the maximal resolution to be chosen a priori to ensure the NS property. Usually one does not know in advance where the error will be large and how fine the mesh has to be to reduce the error below a certain tolerance. Therefore, the conditions for the NS property constitute a drawback for the adoption of the HLR strategy in many practical settings.
The EG refinement is a box-based strategy providing LR meshes very similar to those one gets with the HLR strategy when fixing the refinement regions and the number of iterations. As we shall show, the LR meshes generated will have the NS property under the reasonable assumption that the region of refinement at iteration is a subregion of that considered at iteration . The EG strategy inserts new lines only along direction at iteration . In particular, the new lines are orthogonal to those inserted at iteration . The refinement can be schematized as follows.
EG strategy.
Given an LR mesh generated by applications of the EG strategy, the associated set of LR B-splines and a region to be refined, iteration of the EG strategy consists of the following steps.

Extend at both ends all the lines inserted at iteration (along direction ) to intersect more orthogonal meshlines (or possibly up to the domain’s boundary) at each end.

Define as the set of the LR B-splines whose supports intersect the region , .

Halve all the boxes of the tensor meshes associated to the LR B-splines in along the th direction, that is, horizontally if and vertically if .
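The halving step can be sketched as follows. For illustration only, the fragment below (ours) halves every box whose interior meets the marked region, whereas the actual EG strategy halves the boxes of the tensor meshes of all LR B-splines whose supports meet the region, after extending the lines of the previous iteration as in step 1:

```python
def eg_halve(boxes, region, d):
    """Simplified EG halving step: split along direction d (0 = x-extent,
    1 = y-extent) every box whose interior meets `region`.
    Boxes and region are tuples (x0, x1, y0, y1).
    NOTE: this refines the boxes meeting the region rather than the boxes
    of the B-spline supports meeting it, a simplification for illustration."""
    def meets(b, r):
        return b[0] < r[1] and r[0] < b[1] and b[2] < r[3] and r[2] < b[3]
    out = []
    for b in boxes:
        if meets(b, region):
            x0, x1, y0, y1 = b
            if d == 0:
                m = 0.5 * (x0 + x1)
                out += [(x0, m, y0, y1), (m, x1, y0, y1)]
            else:
                m = 0.5 * (y0 + y1)
                out += [(x0, x1, y0, m), (x0, x1, m, y1)]
        else:
            out.append(b)
    return out
```

Alternating `d` over iterations, with nested regions, reproduces the graded box-partitions discussed in Section 3.2: boxes stay either square or of 2:1 aspect ratio.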
In Figure 4 we visually represent the steps of an iteration of the EG refinement on a given LR mesh. We remark that the LR meshes produced by the EG strategy have boundary meshlines of full multiplicity and internal meshlines of multiplicity 1.
In order to prove that such LR meshes have the NS property under the aforementioned conditions on , we rely on (bressan2, Theorem 11). This theorem states that if there is “enough separation”, in the th direction, between the boundaries of the regions refined at iteration and at iteration , then the NS property is guaranteed on the resulting LR mesh. Let and be these two regions. The required separation between the two boundaries is quantified by the so-called shadow map in the direction of . This map determines a superset of , larger only along the th direction, which (bressan2, Theorem 11) assumes to be contained in to ensure the NS property.
As a first step we therefore introduce the shadow map of a set in with respect to a tensor mesh, in some direction. The original definition of the shadow map is given in (bressan2, Definition 10). In this paper we provide an equivalent, more constructive definition to use in practice. The equivalence of the two definitions is shown in the appendix of this paper.
Let and be, respectively, a tensor mesh and a set in . We present the horizontal shadow map; the procedure for the vertical one is analogous. For the sake of simplicity, let us first assume that has only one connected component. For any point we consider the two horizontal half-lines from , and . Let be the intersection points of with the vertical meshlines of (counting their multiplicities), where is the closest to and the farthest. In particular, note that if lies on a vertical split of , then . We define
(1) 
The horizontal shadow of with respect to is the set of points in the segment . Then we define the horizontal shadow of with respect to , , as
If has several connected components, , then the shadow is the union of the shadows of the connected components:
The subscript 1 indicates that is a horizontal shadow map, i.e., in the 1st direction. The vertical shadow map with respect to is denoted as instead. In Figure 5 we show three examples of horizontal shadow maps, with respect to a given tensor mesh , for three different sets and degree . In particular, the sets considered are unions of boxes of . We made this choice because these are the kinds of sets considered for refinement in practice.
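When the set is a union of boxes, the horizontal shadow can be computed per x-interval. The sketch below is ours: it pushes each endpoint of the interval outwards past a number `k` of vertical mesh lines, clamping at the domain boundary. The exact count is the one prescribed by Equation (1) in terms of the degree, so it is deliberately left as a parameter; multiplicities are handled by listing each line position once per multiplicity.

```python
def horizontal_shadow(interval, vlines, k):
    """Horizontal shadow of the x-interval `interval` = (a, b) with respect
    to a tensor mesh whose vertical lines sit at the sorted positions
    `vlines` (each repeated according to its multiplicity).  Each endpoint
    is moved outwards past k vertical mesh lines, clamping at the domain
    boundary; k is the degree-dependent count of Equation (1)."""
    a, b = interval
    left = [x for x in vlines if x < a]    # lines strictly left of the set
    right = [x for x in vlines if x > b]   # lines strictly right of the set
    lo = left[-k] if len(left) >= k else vlines[0]
    hi = right[k - 1] if len(right) >= k else vlines[-1]
    return (lo, hi)
```

The shadow of a union of boxes is then the union of the shadows of its horizontal slabs, mirroring the component-wise definition above.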
We are now set to prove the NS property of the LR meshes produced by the EG refinement strategy.
Theorem 3.2.
Let be a sequence of LR meshes such that is the boundary of and is obtained by refining in some region using the EG strategy in direction . If the sequence is such that , then each has the NS property.
Proof.
Let be the sequence of tensor meshes with the boundary of and obtained by halving along direction the boxes in . By (bressan2, Theorem 11), we get the statement if in every we can find a sequence of nested regions such that and , where, to simplify the notation, we have denoted by the shadow map with respect to in the direction . Let be the B-spline sets on the tensor meshes and let
(2) 
for any . Then, for a fixed , we set for all , and . We start by showing that for all , which in particular proves that , i.e., , as does not change the inclusion when applied to both sides. If is such that , then as well because . Such a has been obtained via knot insertions when we halved the boxes in the tensor mesh of some such that . Therefore, and .
We now show that for , that is, that if , where are the shadows in the directions and with respect to and , respectively. First of all, we note that we actually only have to prove that , as the shadow map does not change the inclusions when applied to both sides. We have already proved that . It remains to show that for all . Without loss of generality, we can assume that is horizontal, that is, . Let be on a vertical edge of . Either is in the interior of or is on . In the latter case there is nothing to prove, as would also be in and in . Let us then focus on the former case. We can assume without loss of generality that we go out of when moving horizontally to the left from ; the mirrored case can be treated similarly. As is on a vertical edge of , there exists such that is on the left edge of . is obtained by knot insertions after vertically refining some of the B-splines in . In particular, there exist such that the right edges of and are at most one box-width of apart, see Figure 6.
As has minimal support, it cannot be entirely traversed by a line of that is not in . This means that there are at least more vertical lines in on the left of (counting the multiplicities in case is on the left edge of ). Hence the segment , with on the left horizontal half-line from and defined as in Equation (1), is contained in .
Consider now on a horizontal edge of . Then is either still in or contains the endpoint of such a horizontal edge. As and , we have .
Summing up, we have that and as well, for any . By taking the union over all the , this means that , which implies that .
Finally, for any fixed, is obtained by first extending the refinement applied at iteration from to (step 1. of the EG strategy) and then by halving the boxes of in (step 3. of the EG strategy). This means that . ∎
Remark 3.3.
The LR meshes obtained under the same hypotheses of the theorem but with the directions swapped between odd and even iterations, that is, from to , also have the NS property.
In Figure 7 we show LR meshes obtained by performing 16 iterations (8 vertical and 8 horizontal insertions) of the EG strategy localized on fixed “random” regions.
3.2 Grading and spanning properties
In this section we present the further properties of the EG strategy. We first analyze the grading of the mesh and then identify the space spanned by the related set of LR B-splines. More specifically, we show

bounds on the thinning of the boxes throughout the refinement,

bounds on the size ratio of adjacent boxes,

that the space spanned coincides with the full space of spline functions on the LR mesh.
Without loss of generality, let be a square. Assume that is an LR mesh built using the EG strategy under the hypotheses of Theorem 3.2. Then the aspect ratio of a box of is either or . Indeed, as the regions of refinement are nested, a box created at iteration can either be split in two at iteration or be preserved until the end of the process. When there is only one box in the LR mesh, that is, , which is a square. When is odd, we have , which means we halve the boxes horizontally. Hence, we produce rectangular boxes, of aspect ratio , from square boxes. When is even, we instead have , which means we halve vertically some of the rectangular boxes produced at the previous iteration, thereby creating again square boxes (of aspect ratio ). Furthermore, we note that, by construction, a box of size in can be side by side only with boxes of size double/half in one or both dimensions, i.e., boxes of sizes and . These bounds on the box sizes and neighboring boxes prevent thinning throughout the refinement process and guarantee smoothly graded transitions between finer and coarser regions of the LR meshes produced by the EG strategy. In particular, given two adjacent boxes of , denoting by the square root of the area of and by the length of the diagonals in , it holds
Inequalities (A1)–(A2) show that the box-partition associated to satisfies the shape regularity and local quasi-uniformity conditions, which are two of the so-called axioms of adaptivity: a set of requirements which theoretically ensure an optimal algebraic convergence rate in adaptive FEM and IgA, see axioms and (b, Sections 5–6) for details. In particular, conditions (A1)–(A2) are what is demanded in terms of grading and overall appearance of the mesh used for the discretization.
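The alternation of the aspect ratio between odd and even iterations admits a one-line arithmetic illustration. The sketch below (ours, with the width/height convention, so the two admissible ratios read 1/2 and 1) tracks a box that is halved at every one of the first n iterations, horizontally at odd iterations and vertically at even ones:

```python
def aspect_ratio_after(n):
    """Aspect ratio (width / height) of a box halved at each of the first n
    EG iterations, starting from a square: the x-extent is halved at odd
    iterations, the y-extent at even ones."""
    w = h = 1.0
    for i in range(1, n + 1):
        if i % 2 == 1:
            w *= 0.5
        else:
            h *= 0.5
    return w / h
```

The ratio therefore never degenerates, matching the shape regularity bound (A1), regardless of the number of iterations.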
We now prove another important feature of the EG strategy: the space spanned is the entire spline space. The spline space on a given LR mesh , denoted by , is defined as
In general, the spaces spanned by the generalizations of B-splines addressing adaptivity, such as LR spline spaces, are just subspaces of the spline space on the underlying mesh. The next result ensures that, when using LR meshes generated by the EG strategy, the span of the LR B-splines actually fills the entire spline space.
Theorem 3.4.
Let be a sequence of LR meshes obtained under the hypotheses of Theorem 3.2 and let be the associated sequence of LR B-spline sets. Then for all .
Proof.
When the LR B-spline set coincides with the tensor B-spline set and the statement is true by the Curry–Schoenberg Theorem. Now fix and let again, for , be defined as in Equation (2). We recall that at iteration we have performed refinements along the th direction in . As consists of B-spline supports on , the newly inserted lines at iteration traverse at least orthogonal meshlines in , that is, the number of lines in the tensor meshes of such B-splines on along the th direction. By (bressan2, Theorem 12), this “length” of the newly inserted lines, in terms of intersections with the previous LR mesh at each iteration, guarantees that . ∎
Remark 3.5.
The LR B-splines defined on the LR meshes considered in Remark 3.3 also span the entire spline space.
This spanning property is also achieved by the HLR strategy bressan2.
4 Conclusion
We have presented a simple refinement strategy ensuring the local linear independence of the associated LR B-splines. Furthermore, the width of the regions refined at each iteration of the strategy guarantees that the span of the LR B-splines fills the whole spline space on the LR mesh.
We have called it the Effective Grading (EG) strategy as the transition between coarser and finer regions is rather gradual and smooth in the LR meshes produced, with strict bounds on the aspect ratio of the boxes and on the sizes of neighboring boxes. Such a grading ensures that the requirements on the mesh appearance listed in the axioms of adaptivity axioms; b
are verified. The latter are a set of sufficient conditions on mesh grading, refinement strategy, error estimates and approximant spaces in an adaptive numerical method which theoretically guarantee an optimal algebraic convergence rate of the numerical solution to the exact solution. The verification of the remaining axioms will be the topic of future research.
Acknowledgments
This work was partially supported by the European Council under the Horizon 2020 Project “Energy oriented Centre of Excellence for computing applications” (EoCoE), Project ID 676629. The author is a member of the Gruppo Nazionale per il Calcolo Scientifico, Istituto Nazionale di Alta Matematica.
Appendix A Shadow map
In this appendix we show that our definition of the shadow map is equivalent to that given in (bressan2, Definition 10). In order to recall the latter, we introduce the separation distance. Given a direction , let be a tensor mesh and be the subcollection in of all the meshlines in the th direction. Given two points , the separation distance of and along direction with respect to the tensor mesh is defined as
with the segment along direction between and . Given a set , we define
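As a sketch (our illustrative reading of the definition, reduced to one dimension), the separation distance between two points along a given direction can be computed by counting the mesh-line positions, repeated with multiplicity, crossed by the open segment between them:

```python
def separation_distance(p, q, lines):
    """Separation distance of the coordinates p and q along one direction
    with respect to a tensor mesh: the number of mesh-line positions in
    `lines` (each repeated according to its multiplicity) lying strictly
    between p and q.  This 1D reduction is an illustrative reading of the
    definition above, not a verbatim transcription of (bressan2, Def. 10)."""
    lo, hi = min(p, q), max(p, q)
    return sum(1 for x in lines if lo < x < hi)
```

A line of double multiplicity, listed twice in `lines`, is counted twice, consistently with how multiplicities enter the shadow map of Section 3.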
The definition of shadow map along direction with respect to the tensor mesh given in bressan2 is then
We use the gothic symbol to distinguish it from our definition of the shadow map given in Section 3. We now show that the two are equivalent. Let . Then and . Let instead for some . Then and so . Therefore, we have proved that . We now show the opposite inclusion, that is, . Let . Then . If , then the infimum is reached for some and so . If instead , the infimum is reached for , which means that . In either case, .