1 Introduction
Gradient recovery schemes for data defined on Euclidean domains have been intensively investigated [3, 22, 1, 14, 16, 23, 24, 25], and they also find many interesting applications, e.g., [13, 19, 18, 15, 5]. Methods for data on discretized manifolds have been studied more recently, e.g., in [12, 21, 13]. Looking into the literature, many of the recovery algorithms in the Euclidean setting can be generalized to the manifold setting. However, they share the restriction that exact geometry prior (exact vertices, exact normal vectors) is needed either in designing the algorithms or in proving superconvergence.
In the paper [21], Wei, Chen and Huang asked the following two questions: (i) How can one design gradient recovery algorithms given no exact information on the surfaces (i.e., no exact normal fields and no exact vertices)? (ii) If such a recovery can be designed, does it show superconvergence on triangulated meshes whose vertices are located in a neighborhood of the underlying exact surface, where the width of the neighborhood scales with the mesh size? In [10], the authors proposed a recovery scheme called the parametric polynomial preserving recovery (PPPR) method, which does not rely on exact geometry prior, and it was proven to achieve superconvergence under mildly structured meshes, including high-curvature cases. This can be regarded as an answer to the first question. Still, the theoretical proof of superconvergence there requires the vertices to be located on the exact manifold, although numerically superconvergence has been observed even when the vertices do not sit on the exact manifold.
This paper aims to solve this problem. To do so, we introduce the concept of geometric supercloseness, which can be theoretically justified under the vertex condition. Another contribution is that we generalize the idea in [10] of using a parametric domain for polynomial preserving recovery on manifolds to other standard recovery schemes [21]. In this vein, we develop a family of parametric recovery schemes for data on discretized manifolds. It consists of two-level recoveries: one recovers the Jacobian of the local geometric mapping over the parametric domain, and the other recovers the data gradient isoparametrically with respect to the geometric parametrization.
The paper is organized as follows: In Section 2, we describe the general geometric setting of this paper and prove the geometric supercloseness result. In Section 3, we provide the framework for the family of isoparametric gradient recovery schemes. In Section 4, we prove the superconvergence of the recovery scheme. Finally, in Section 5 we present numerical examples which verify the theoretical results.
2 Geometric supercloseness
We first specify some of the geometric objects and notation used in the paper, and then prove a property called geometric supercloseness under the approximation conditions.
2.1 Geometric setting
is a general dimensional smooth compact hypersurface embedded in , and is a polytope approximation of , with maximum triangle diameter . Here and are the index sets for the triangles and vertices of , respectively. We denote by the corresponding curved triangles, which satisfy . Note that the vertices of are not necessarily located on ; therefore and may have no common vertices. In the following, we introduce as the counterpart of with the same number of vertices, all of which are located on . To obtain , we project the vertices of along the unit normal direction of , which yields the vertices of . We then connect these vertices in the same order as in , which gives the triangulation of . denote the corresponding triangles on . We focus on linear finite element methods to better illustrate the idea; thus the nodes consist simply of all the vertices of .
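As a concrete instance of this construction, consider the unit sphere: there the closest-point projection along the normal direction is simply radial normalization. A minimal sketch (the sphere, the sample points, and the function name are our own illustrative assumptions, not the paper's notation):

```python
import numpy as np

def project_to_sphere(vertices):
    """Closest-point projection of vertices onto the unit sphere.

    On the unit sphere the outward normal at a point x is x/|x|, so
    projecting along the normal direction amounts to normalizing each row.
    """
    v = np.asarray(vertices, dtype=float)
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# Vertices perturbed away from the unit sphere (the deviated mesh);
# their projections play the role of the vertices of the on-surface mesh.
perturbed = np.array([[1.10, 0.00, 0.00],
                      [0.00, 0.90, 0.00],
                      [0.60, 0.00, 0.82]])
projected = project_to_sphere(perturbed)
```

For a general surface the projection has no closed form, which is exactly why the analysis below must work with vertices that lie only in a neighborhood of the exact surface.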
In [21], gradient recovery methods are generalized from planar domains to surfaces, but restricted to the case where the vertices are located on the underlying exact surface. That is, only the case where the discretization is given by, in our notation, has been studied. It has, however, been conjectured that the superconvergence of gradient recovery on general discretized surfaces, like , may be proven if the vertices of lie in a neighborhood of the corresponding vertices of . That is the following vertex-deviation condition
(2.1) 
We recall the transform operators between the function spaces on and on (or similarly ). Let and be ansatz function spaces; then we define the transform operators
(2.2) 
where is an affine map from every element in to the corresponding element in . The operators between functions on and are similarly defined.
In the following analysis, at each vertex , a local parametrization function is needed, which maps an open set in the parameter domain to an open set around on the manifold. Note that we take a compact set which is the parameter domain corresponding to the selected patch on around the vertex , or respectively the patch on around the vertex .
In this way, we define local parametrization functions and , respectively; they are piecewise linear functions. In addition, we use and to denote the local parametrizations from the small triangle pairs and to , respectively. Due to the smoothness assumption on , , and are all functions for every and . In particular, this implies that and and , respectively. Condition (2.1) indicates that the triangulated surface converges to as .
2.2 Supercloseness of geometric approximation
In this part, we prove some approximation properties on the paired triangular meshes and . We introduce some relevant definitions first.
Definition 2.1.
Suppose and are two adjacent triangles in , as illustrated in Figure 1. They are said to form an parallelogram if
Definition 2.2.
A triangular mesh is said to satisfy the irregular condition if there exists a partition of and a positive constant such that every two adjacent triangles in form an parallelogram and
Before going further, we make some standard assumptions on the meshes.
Assumption 2.3.

The triangulation is shape regular and quasi-uniform. Moreover, the irregular condition holds for .

and have the same number of triangles and vertices, and every vertex pair of and satisfies the deviation bound in (2.1).
The first condition of Assumption 2.3 is quite standard, and it is crucial for proving superconvergence in the literature, e.g., [3, 22, 21]. However, in the manifold setting, this condition has previously been assumed on , that is, the exact interpolation of . In the following, we show that Assumption 2.3 implies a geometric approximation property which we call geometric supercloseness. We first need some auxiliary results, which are given in the following lemma.

Lemma 2.4.
Let and both be shape regular triangles of diameter , and let the distance between their vertex pairs be of order , where . Let be the affine transformation, and let its Jacobian consist of orthogonal vectors . Then we have the following relations:
(2.3) 
and
(2.4) 
Proof.
Since is a bounded linear map and has a bounded generalized inverse, we have
Here is the identity matrix. Due to the orthogonality, we have
where denotes the diagonal matrix with the elements on its diagonal.
Because of our assumption on the vertices of and , we have that
for some constant , either positive or negative. By definition of the matrix norm, we have . Thus we have
and this gives the relation in (2.3). For (2.4), notice . Then . Due to the area difference of the two triangles and , we have
for sufficiently small. ∎
Note that the orthogonality of
is not essential: if we take other local bases for computing the Jacobian, the new tensor matrix is equivalent to the diagonal matrix. Precisely, we have
for , where is a diagonal matrix, is a symmetric matrix, and is some unitary matrix.
Proposition 2.5.
Let and satisfy Assumption 2.3, then we have

The triangulation is also shape regular and quasi-uniform, and the irregular condition is fulfilled for .

The local piecewise linear parametrization functions, and , satisfy
(2.5) Here is the area functional, and is the patch corresponding to the parameter domain .
Proof.
The first assertion is not hard to verify by considering the definitions of shape regularity and quasi-uniformity, which can be found in many textbooks on finite element methods [4, 6], together with the definition of the irregular condition above.
Now we prove the second assertion. Let be the parameter domain for the patches selected around the vertex . Denote by the local index set associated with the vertices of the selected patch around . We notice that both and are piecewise linear functions defined on . Let be the common parameter domain for the corresponding triangle pairs and for the index . Then and are linear functions on each of the regions for every .
Thus the Jacobians and are constant functions, and we have
Due to the geometric regularity assumption that has bounded curvature, all the elements are uniformly bounded from below and above given . On the other hand, is also a constant function: it is the Jacobian of the linear geometric map which takes to . We denote this map by . Then we have . Therefore we have
Applying the result from Lemma 2.4, we have
Summing up over , we arrive at the following inequality
Taking the square root on both sides gives the second assertion. ∎
The relation in (2.5) in fact expresses some regularity of the approximation of to . It is similar to the supercloseness property of the gradient of finite element solutions relative to the interpolation of the exact solution.
3 Isoparametric gradient recovery schemes on manifolds
Here, we generalize the idea of parametric polynomial preserving recovery proposed in [10] to obtain a general family of recovery methods in the manifold setting. More precisely, the algorithmic framework we provide generalizes the methods introduced in [21], which require exact geometry prior, to the case without exact geometry prior. To do this, we rely on the intrinsic definition of the gradient operator on manifolds. Given a local parametric patch and the parametrization function of this patch, define ; then we have
(3.1) 
Here is the pseudoinverse of the Jacobian . For a brief digest of differential operators on manifolds, we refer to the appendix of [11] and the background part of [10]. One may consult textbooks, e.g., [9], for a more comprehensive introduction to differential geometry. From (3.1), the idea is to recover the gradient on manifolds using a two-level strategy: recover the Jacobian and also isoparametrically on every local patch. We call the new methods isoparametric gradient recovery schemes. They require neither the exact vertices nor the precise tangent spaces. In particular, the PPPR method introduced in [10] can also be put under the same umbrella.
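As a concrete illustration of (3.1), the surface gradient can be assembled from the Jacobian of the local parametrization via its pseudoinverse: the ambient surface gradient is the transposed pseudoinverse of the Jacobian applied to the parametric gradient. A minimal numpy sketch (the function name and the example parametrization are our own illustrative choices):

```python
import numpy as np

def surface_gradient(J, grad_param):
    """Surface gradient from the parametric gradient.

    J          : (3, 2) Jacobian of a local parametrization r(theta).
    grad_param : (2,) gradient of u_hat = u o r in the parameters.
    Returns (J^+)^T grad_param in ambient coordinates, where
    J^+ = (J^T J)^{-1} J^T is the Moore-Penrose pseudoinverse.
    """
    return np.linalg.pinv(J).T @ grad_param

# Example: graph parametrization r(x, y) = (x, y, x^2 + y^2) at the origin,
# where the tangent plane is horizontal; for u_hat(x, y) = x the surface
# gradient is the ambient x-direction.
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])  # Jacobian of r at (0, 0)
g = surface_gradient(J, np.array([1.0, 0.0]))
```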
Precisely, we have the following framework described in Algorithm 1 for a family of recovery methods. For simplicity, we focus on dimensional cases.
Input: Discretized triangular surface and the data (FEM solutions) . Then repeat the following steps for all .

At every , select a local patch around with sufficiently many vertices. Compute the unit normal vectors of all triangle faces in . Compute the simple (weighted) average of these unit normal vectors, and normalize it to obtain . Take the orthogonal complement of as the parametric domain . Shift to the origin of , and choose an orthonormal basis of .

Project all selected vertices of onto the parametric domain from Step , and record the new coordinates as .

Use a planar recovery scheme to recover the surface Jacobian with respect to . Typically, we consider each surface patch as the local graph of some function , that is, . Then the recovered Jacobian at the selected patch is , where is the identity matrix of dimension .

For every , use the same planar recovery scheme to recover its gradient with respect to the parameter domain .

In the spirit of (3.1), use the results from Step and Step to get the recovered gradient at :
(3.2) where . The orthonormal basis is multiplied to unify the coordinates from local ones to a global one in the ambient Euclidean space.
Output: The recovered gradient at the selected nodes . For nodes that are not vertices of triangles, we use the linear finite element basis to interpolate the values at the vertices of each triangle.
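The steps above can be sketched for a single vertex patch. Since the framework leaves the planar recovery scheme open, we use a simple least-squares quadratic fit as a stand-in, so every name and choice below is our own illustrative assumption rather than the paper's prescribed method:

```python
import numpy as np

def recover_at_vertex(verts, face_normals, values):
    """Isoparametric two-level recovery at the patch center verts[0].

    verts        : (m, 3) patch vertices; verts[0] is the center vertex.
    face_normals : (k, 3) unit normals of the patch's triangle faces.
    values       : (m,) finite element values at the patch vertices.
    Returns the recovered gradient at verts[0] in ambient coordinates.
    """
    # Step 1: average the face normals and build an orthonormal tangent basis.
    n = face_normals.mean(axis=0)
    n /= np.linalg.norm(n)
    t1 = np.linalg.svd(n.reshape(1, 3))[2][1]  # a unit vector orthogonal to n
    t2 = np.cross(n, t1)

    # Step 2: coordinates of the shifted vertices over the tangent plane.
    shifted = verts - verts[0]
    xy = np.column_stack([shifted @ t1, shifted @ t2])  # parameter coordinates
    h = shifted @ n                                     # heights over the plane

    # Steps 3-4: least-squares quadratic fits (our stand-in planar recovery
    # scheme) to the geometry height h and to the data; the gradients at the
    # origin are the linear coefficients of the fits.
    A = np.column_stack([np.ones(len(xy)), xy,
                         xy[:, 0] ** 2, xy[:, 0] * xy[:, 1], xy[:, 1] ** 2])
    grad_f = np.linalg.lstsq(A, h, rcond=None)[0][1:3]       # geometry
    grad_u = np.linalg.lstsq(A, values, rcond=None)[0][1:3]  # data

    # Step 5: recovered Jacobian J = [I; grad_f]; push the parametric gradient
    # through (J^+)^T and rotate back to the ambient coordinate system.
    J = np.vstack([np.eye(2), grad_f.reshape(1, 2)])
    g_local = np.linalg.pinv(J).T @ grad_u
    return np.column_stack([t1, t2, n]) @ g_local

# Usage: a flat hexagonal patch in the plane z = 0 carrying the data u = x;
# the recovered surface gradient should then be the ambient x-direction.
angles = np.linspace(0.0, 2.0 * np.pi, 7)[:-1]
ring = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(6)])
verts = np.vstack([np.zeros(3), ring])
normals = np.tile([0.0, 0.0, 1.0], (6, 1))
g = recover_at_vertex(verts, normals, verts[:, 0])
```

The two least-squares fits make the two-level structure explicit: one fit recovers the geometry Jacobian, the other the parametric gradient of the data, and only their combination gives the surface gradient.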
Note that the nodes for the recovered gradients are not necessarily the vertices of the triangles, e.g., if one uses the scheme for the local recovery in Step and Step . On the other hand, since no concrete recovery scheme is specified in Step and Step , almost all local recovery methods for functions on Euclidean domains can be used, though the preferred candidates are the scheme and PPR; the latter yields the PPPR method proposed in [10]. In view of (3.2), one can see that it approximates (3.1) at every node: recovers , and recovers . One can also see intuitively that the result in (2.5) is needed to match the superconvergence of the function gradient recovery with the superconvergence of the geometric Jacobian recovery simultaneously.
In the next section, we will show the superconvergence property of the recovery scheme.
4 Superconvergence analysis with no exact geometryprior
Even though we have presented a general algorithmic framework covering several different methods under the same umbrella, for the theoretical analysis we will focus on the parametric polynomial preserving recovery scheme. It places fewer requirements on the meshes than several other methods; e.g., for simple (weighted) averaging or the generalized ZZ scheme, an additional symmetry condition is required [10]. However, the general idea of the proof extends to the other methods as well.
We take the following Laplace-Beltrami equation as our model problem for the analysis.
(4.1) 
The weak formulation of equation (4.1) is given as follows: Find with such that
(4.2) 
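For reference, the Laplace-Beltrami model problem and its weak formulation in standard notation read as follows (the symbols below are the conventional ones, not necessarily the paper's own):

```latex
% Laplace--Beltrami model problem on a closed surface \mathcal{M}:
-\Delta_{\mathcal M}\, u = f \quad \text{on } \mathcal M,
\qquad \int_{\mathcal M} u \,\mathrm{d}A = 0,
\qquad \int_{\mathcal M} f \,\mathrm{d}A = 0 .
% Weak form: find u \in H^1(\mathcal M) with zero mean such that
\int_{\mathcal M} \nabla_{\mathcal M} u \cdot \nabla_{\mathcal M} v \,\mathrm{d}A
  = \int_{\mathcal M} f\, v \,\mathrm{d}A
  \qquad \text{for all } v \in H^1(\mathcal M).
```

The zero-mean conditions fix the constant in the kernel of the Laplace-Beltrami operator on a closed surface and ensure solvability.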
The regularity of the solutions has been proved in [2, Chapter 4]. In the finite element method, the surface is approximated by the triangulation , which satisfies Assumption 2.3, and the solution is computed in the piecewise linear function space defined over ,
(4.3) 
We first show that there exists an underlying smooth manifold, denoted by , such that can be thought of as an interpolation of it. This intermediate manifold is not needed in the algorithm in practice, but it is helpful for our error analysis.
Proposition 4.1.
Let be the precise manifold, and let and satisfy Assumption 2.3. Then the following statements hold true:

There exists a smooth manifold such that is a linear interpolation of at the vertices. Moreover, the Jacobian of the local geometric mapping at each vertex equals the geometric Jacobian recovered using the gradient recovery method.

Let and be the parametrizations of the curved triangular surfaces and over the triangle
, respectively. Then there is the estimate
(4.4) where is a constant independent of .

Let be functions in , and let be the pullback of to ; then we have
(4.5) for some constants .
Proof.
For the first statement, we design the following algorithm to construct the smooth manifold . To simplify the illustration, we focus again on dimensional manifolds.

At each vertex of , we use the PPR algorithm [23] to recover the local geometry as the graph of a scalar function for .

For each triangle with , we build a local coordinate system taking the barycenter of as the origin, and transfer the recovered functions associated with to the new local coordinates individually. The gradient of at the vertex is also calculated in the new coordinates on .

We use a order polynomial to fit the function and gradient values over . The data are the function values at the vertices (in fact with function value ), and the gradient values of at each vertex, which contribute directional derivative values on the edges of . This gives linearly independent equations.

We add the constraint that the local polynomial value at the barycenter of every triangle matches the value of the function whose graph is the patch , evaluated where the normal line through the barycenter of crosses the patch.

We now have linearly independent equations in total on each triangle ; thus a order local polynomial is uniquely determined.

On every edge of the triangles, the restriction is a one-dimensional order polynomial which is uniquely determined by the vertex values and the directional derivative conditions. Note that polynomials on neighboring triangle edges are invariant under affine coordinate transformations. Therefore the local polynomials match each other along every edge of neighboring triangles.

Going through all the indexes with the described algorithm gives a closed, elementwise order polynomial surface, which we denote by .
To obtain global smoothness, we consider a mollifier operator , and . This in fact guarantees that is smooth.
For the second statement, we notice the following relation:
(4.6)  
Here we take as the parameter domain for both and . and are the local geometric mappings from the linear interpolations of and at the vertices of , respectively. is the local PPR gradient recovery operator.
The first and the third terms on the right-hand side of (4.6) can be estimated using the polynomial preserving property of and the smoothness of the functions and , which gives
(4.7) 
Here is realized using the linear interpolation of the recovered gradients of at every vertex of , transformed under the local coordinates on . The second term on the right-hand side of (4.6) can be estimated from Lemma 4.2 and the boundedness of [17]:
(4.8) 
Since and are both uniformly bounded, combining (4.7) and (4.8) and returning to (4.6) gives the estimate
For the equivalence (4.5), we can use the results in [7, page 811], which show the equivalence on each triangle pair of and , that is,
for some constants and . The equivalence for functions defined on triangle pairs of and is shown similarly. Then we arrive at the following
with constants and . Since as , we also have . This shows that as , which indicates that the constants and are uniformly bounded. We thus obtain the equivalence in (4.5). ∎
In order to prove the superconvergence in the case when the vertices of are not located exactly on , but in a neighborhood around it, we use the following estimate.
Lemma 4.2.
Proof.
Recall (3.1) for the definition of the gradient in the local parametric domain; in particular, we take the local parametric domain to be . Then, for every , we have
(4.10)  
Using the estimate (4.4) from Proposition 4.1, we derive that
Substituting the above estimate into (4.10), we obtain the conclusion by summing over all indices and then taking the square root. ∎
Now we are ready to show the superconvergence of the gradient recovery on , which can be considered an answer to the open question in [21].
Theorem 4.3.
Proof.
This is readily shown using the triangle inequality
(4.12) 
The first term on the right-hand side of (4.12) is estimated using Lemma 4.2:
(4.13) 
Assumption 2.3, Proposition 2.5 and Proposition 4.1 ensure that the geometric assumptions of [10, Theorem 5.3] are satisfied, i.e., the irregular condition holds, and the vertices of are located on , which is smooth. The second term on the right-hand side of (4.12) is then estimated using [10, Theorem 5.3], which gives
The equivalence relation from (4.5) in Proposition 4.1 gives the estimate on , which is
(4.14) 
Using the embedding theorem, the right-hand side of (4.13) can be bounded by the first term on the right-hand side of (4.14). The proof is concluded by combining (4.13) and (4.14). ∎
5 Numerical results
In this section, we present two numerical examples to verify the theoretical analysis. The first example is on the unit sphere, where we add an artificial perturbation to the discretized mesh in order to verify our theoretical results; with this we are able to verify the geometric approximation of to . The second example is on a general surface whose discretization mesh has vertices that are not located on the exact surface. The initial mesh of the general surface was generated using the three-dimensional surface mesh generation module of the Computational Geometry Algorithms Library [20]. To obtain meshes at the other levels, we first perform uniform refinement and then project the newest vertices onto the . In the general case, no explicit projection map is available, so we adopt the first-order approximation of the projection map given in [8]. Thus, in the second example, the vertices of the meshes are not on the exact surface but in a neighborhood of it.
We consider two different members of the family of Algorithm 1: (i) parametric polynomial preserving recovery, denoted by , a generalization of the PPR method, and (ii) parametric superconvergent patch recovery, denoted by , a generalization of the scheme. To simplify the notation, we define:
where is the finite element solution, is the analytical solution, and is the linear finite element interpolation of . We also recall that denotes the exact interpolation of .
5.1 Numerical example on unit sphere
We test with numerical solutions of the Laplace-Beltrami equation on the unit sphere. The right-hand side function is chosen to fit the exact solution . For the unit sphere, it is rather simple to generate meshes, denoted by , whose vertices are located on the exact surface. To validate the geometric supercloseness result and illustrate the performance of the isoparametric gradient recovery methods on discretized manifolds, we artificially add a perturbation along the normal direction at each vertex of and denote the resulting deviated mesh by . In this test example, the magnitude of the perturbation is chosen as , where is the numbering of the node.
We first provide a numerical verification of the geometric supercloseness property. We compute the maximal norm of over all elements of the above two meshes. The numerical errors are displayed in Table 1. Clearly, the maximal norm decays at a rate of . The observed second-order convergence rates match the theoretical result in Proposition 2.5.
Dof  12  42  162  642  2562  10242  40962 

1.18e02  7.07e03  2.07e03  5.40e04  1.36e04  3.42e05  8.55e06  
Order  –  0.82  1.82  1.95  1.99  2.00  2.00 
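The "Order" rows in the tables can be reproduced from consecutive error values: for a quasi-uniform surface mesh, the mesh size scales like the inverse square root of the number of degrees of freedom, which gives the formula below. A small sketch (the function name is our own; the numbers are the first two levels of Table 1):

```python
import math

def observed_order(dof_coarse, err_coarse, dof_fine, err_fine):
    """Observed convergence order between two refinement levels.

    On a quasi-uniform surface mesh h ~ Dof**(-1/2), hence
    order = log(err_coarse/err_fine) / log(h_coarse/h_fine)
          = 2 * log(err_coarse/err_fine) / log(dof_fine/dof_coarse).
    """
    return 2.0 * math.log(err_coarse / err_fine) / math.log(dof_fine / dof_coarse)

# First two levels of Table 1: Dof 12 -> 42, error 1.18e-02 -> 7.07e-03.
order = observed_order(12, 1.18e-02, 42, 7.07e-03)  # about 0.82
```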
Dof  Order  Order  Order  Order  

12  1.04e+00    7.93e01  –  1.26e+00  –  1.26e+00  – 
42  7.46e01  0.53  1.14e01  3.09  6.92e01  0.96  6.92e01  0.96 
162  3.90e01  0.96  3.66e02  1.69  2.07e01  1.79  2.07e01  1.79 
642  1.97e01  0.99  1.05e02  1.82  5.44e02  1.94  5.45e02  1.94 
2562  9.90e02  1.00  2.88e03  1.87  1.39e02  1.98  1.39e02  1.97 
10242  4.95e02  1.00  7.75e04  1.89  3.49e03  1.99  3.55e03  1.97 
40962  2.48e02  1.00  2.06e04  1.91  8.78e04  1.99  9.10e04 