1 Introduction
In multiphysics applications with a partitioned approach, data often must be mapped between different meshes (keyes2013multiphysics). This problem is often referred to as data transfer or remap, and the two meshes are often referred to as the source (or donor) and target (or donee) mesh, respectively. In many of these applications, the data approximate a piecewise smooth function, where discontinuities (or jumps) may be present in the solutions, for example, due to shocks in high-speed fluids or multimaterial solids. In addition, nowadays the physics models are often discretized with third- or higher-order methods, such as finite elements (karniadakis2013spectral), spectral elements (lauritzen2018ncar), discontinuous Galerkin (cockburn2000development), etc. The combination of discontinuities and high-order methods leads to a fundamental challenge for remap: high-order interpolations (such as splines (de1990quasiinterpolants)) or least squares (such as moving least squares (MLS) (wendland2001local)) are prone to spurious oscillations near discontinuities (fornberg2007gibbs; jerri2013gibbs). In particular, $C^0$ discontinuities, a.k.a. (simple) jumps, often lead to oscillations (overshoots or undershoots), analogous to the Gibbs–Wilbraham phenomenon (gibbs1899; wilbraham1848certain; jerri2013gibbs). In addition, $C^1$ discontinuities, a.k.a. derivative jumps, can lead to oscillations, which may accumulate in repeated transfer or in time-dependent settings. Even some second- and lower-order methods, such as projection (jiao2004common) and radial basis function (RBF) interpolation (beckert2001multivariate; flyer2007transport; rendall2008unified), can suffer from such oscillations (fornberg2007gibbs; foster1991gibbs). Among the existing methods, nearest-point and linear interpolation are free of oscillations, but they do not deliver sufficient accuracy and are excessively diffusive in repeated transfer. The objective of this work is to propose a new data-transfer method that is at least third-order accurate for smooth functions, is non-oscillatory at discontinuities, and is as non-diffusive as possible.

Resolution of the Gibbs phenomenon or similar oscillations is an important and challenging subject in numerical analysis and computational physics. Various methods have been developed for it in different contexts; see, e.g., (gottlieb1997gibbs).
Among these, the most closely related ones are probably the so-called reconstruction methods, such as flux reconstructions based on WENO (liu1994weighted; shu2009high) and their variants in finite volume methods. The connection is even closer in the context of hyperbolic conservation laws, in that they are time dependent. However, in the context of hyperbolic PDEs, it typically suffices for the reconstruction to resolve only simple jumps, because small oscillations due to $C^1$ discontinuities can be effectively controlled by the numerical diffusion in flux limiters (leveque1992numerical) and TVD time-integration schemes (gottlieb1998total). In contrast, a general remap technique must be stable without assuming numerical diffusion in the physics models. Hence, it is important to control the oscillations due to both simple and derivative jumps in remap. This requirement makes remap significantly more challenging.

In this work, we introduce a new method, called WLS-ENOR, or Weighted-Least-Squares-based Essentially Non-Oscillatory Remap. WLS-ENOR is based on the WLS-ENO scheme proposed by Liu and Jiao (liu2016wls) for finite volume methods. The original WLS-ENO controls oscillations by adapting the weights in a weighted-least-squares (WLS) approximation, where the weights are approximately based on the inverse of a non-smoothness indicator. However, the indicator in WLS-ENO considers only simple jumps. WLS-ENOR introduces a novel technique to detect both simple and derivative jumps by introducing two novel indicators and a new dual-thresholding strategy. We then apply WLS-ENO near the detected singularities using a new weighting scheme that takes the discontinuities into account. We also optimize WLS-ENOR for smooth functions to achieve superconvergence. As a result, WLS-ENOR is more than fifth-order accurate for smooth functions, non-oscillatory and minimally diffusive near $C^0$ and $C^1$ discontinuities, and highly conservative in repeated transfers.
We compare WLS-ENOR with some commonly used methods, including linear interpolation, RBF interpolation (beckert2001multivariate), projection (jiao2004common), and modified moving least squares (MMLS) (joldes2015modified; slattery2016mesh), and show that WLS-ENOR is more accurate, more stable, and less diffusive in almost all cases. The presentation and results in this work focus on spherical geometries in earth and climate modeling. However, the methodology also applies to other smooth surfaces.
The remainder of this paper is organized as follows. In Section 2, we review some preliminaries on remap and reconstruction methods and the Gibbs phenomenon at discontinuities. In Section 3, we describe WLS fittings for smooth functions that exhibit superconvergence. In Section 4, we describe the detection of discontinuities in data remapping and a new weighting scheme for WLS-ENO at discontinuities. In Section 5, we compare WLS-ENOR with other methods for transferring both smooth and discontinuous functions. Finally, Section 6 concludes the paper with a discussion of future research directions.
2 Preliminaries and related work
In this section, we review some methods for remap and reconstruction, especially the high-order methods based on weighted least squares (WLS). In addition, we review some concepts and methods related to discontinuities, including the Gibbs phenomenon and WLS-ENO.
2.1 Remap and reconstruction methods
The remap problem has been studied extensively in the past few decades, and several software packages have been developed over that time. These include packages developed for climate and earth modeling (such as those in the Earth System Modeling Framework (ESMF) (hill2004architecture), the Community Earth System Model (CESM) (hurrell2013community), and the next-generation Energy Exascale Earth System Model (E3SM), including SCRIP (jones1999first), the Model Coupling Toolkit (MCT) (larson2005model), OASIS (craig2017development), Tempest Remap (ullrich2015arbitrary; ullrich2016arbitrary), etc.), packages for fluid-structure interaction or heat transfer (such as MpCCI (joppich2006mpcci) and preCICE (bungartz2016precice)), and general-purpose remap software (such as PANG (gander2013algorithm) and the Data Transfer Kit (DTK) (slattery2013data; slattery2016mesh)). We first review the methodology behind some of these packages, with a focus on node-to-node transfer, and then review some closely related methods for high-order reconstruction.
2.1.1 Low-order nonconservative and conservative remap
In the 1990s and 2000s, data-transfer methods focused primarily on first- or second-order accuracy, because the prevailing numerical discretization methods in engineering (including finite volume methods and linear finite element methods) only had low-order accuracy. One of the most primitive data-transfer methods is piecewise interpolation. This approach is particularly convenient if the source mesh has an associated finite-element function space, in that one can use the same function space for interpolation. In this case, we refer to the approach as consistent interpolation. If the basis functions are linear (with simplicial elements) or bilinear (with quadrilateral elements), the interpolation is second order. In finite-volume methods and scattered-data interpolation, nearest-point interpolation is sometimes used, which is only first-order accurate. If the basis functions are not known, some interpolation (and more precisely, quasi-interpolation (de1990quasiinterpolants)) methods can use alternative basis functions, such as thin-plate splines (harder1972interpolation) or other RBFs (beckert2001multivariate; bungartz2016precice; flyer2007transport; rendall2008unified).
The major disadvantage of low-order interpolation is its high diffusiveness in repeated transfer; see, e.g., (de2008comparison). To overcome this, one approach is to enforce conservation, so that the integrals over the source and target meshes are equal. Examples of low-order conservative methods include first- and second-order conservative remap (such as those in (chesshire1994scheme; grandy1999conservative; gander2013algorithm; jones1999first)), common-refinement-based projection (jiao2004common), etc. A commonality of these methods is that they all require computing some integrals of the function defined on the source mesh over some control volumes of each target node (or cell), and the numerical integration is computed over the intersections of the elements (or cells) of the source and target meshes. The collection of these intersections forms the common refinement (jiao2004common) or supermesh (farrell2009conservative; farrell2011conservative), whose computation requires sophisticated computational-geometry algorithms for efficiency and for robustness in the presence of truncation and rounding errors (jiao2004overlayingI; jiao2004overlayingII). We also note that some low-order constrained interpolation and conservative remap methods have been developed in the context of Arbitrary Lagrangian–Eulerian (ALE) methods, such as (bochev2005constrained; margolin2003second).
2.1.2 High-order remap methods
Due to the increasing use of high-order discretization methods, more recent development of data-transfer methods has focused on third- and higher-order methods. If the source mesh uses a high-order finite element or spectral element method, one can apply consistent interpolation to achieve high-order accuracy. Such an approach may be implemented with a discretization library, such as MOAB (tautges2004moab; mahadevan2015sigma). More generally, a remap method can be independent of the discretization methods. In this context, the method may be nonconservative or conservative. Examples of the former include modified moving least squares (MMLS) (joldes2015modified; slattery2016mesh), etc. Examples of the latter include the conservative remap in Tempest Remap (ullrich2015arbitrary; ullrich2016arbitrary) and conservative interpolation (adam2016higher). Note that high-order nonconservative remap methods tend to be significantly less diffusive in repeated transfer than low-order nonconservative methods, even though conservation is not enforced explicitly; see, e.g., (slattery2016mesh). However, because high-order methods are more prone to oscillations, special attention is needed to preserve positivity and convexity (ullrich2015arbitrary; ullrich2016arbitrary).
2.1.3 High-order reconstructions
The remap methods, especially the nonconservative ones, are closely related to the reconstruction problem, i.e., reconstructing a piecewise smooth function or its approximations at some discrete points on a mesh, given some known quantities at discrete points or cells on the same mesh. The primary difference between remap and reconstruction is that the former involves two meshes while the latter typically involves only a single mesh. Hence, remap is in general more complicated from a combinatorial point of view. From a numerical point of view, conservation in remap is also more complicated than in reconstruction. However, nonconservative high-order remap and reconstruction share many similarities. In particular, the techniques used in high-order remap, such as spline interpolation (de1990quasiinterpolants), moving least squares (MLS) (lancaster1981surfaces), and variants of MLS (known as MMLS) with regularization (joldes2015modified; slattery2016mesh), etc., originated from high-order reconstruction.
The remap method in this work utilizes two high-order reconstruction techniques, known as CMF (Continuous Moving Frames) and WALF (Weighted Average of Least-squares Fittings) (Jiao2011RHO), both of which are based on weighted least squares (WLS). These techniques were first proposed for reconstructing surfaces and later adapted to reconstruct functions on surfaces (RayWanJia12). CMF shares some similarities with some variants of MMLS (such as that in (slattery2016mesh)), in that CMF constructs a least-squares fitting at each reconstruction point and achieves accuracy and stability through local adaptivity (instead of the global construction of the original MLS for smoothness (Lancaster1981)). However, CMF differs from MLS in terms of the choice of the coordinate systems, stencils, and weighting schemes. WALF constructs a least-squares fitting at each node of a mesh using CMF and then blends the fittings by taking a weighted average using piecewise linear or bilinear basis functions within each element (such as a triangle or quadrilateral). In general, CMF is more accurate than WALF. For meshing applications, WALF tends to be more efficient when many points need to be evaluated within a single element. For remap in multiphysics applications, however, a transfer operator can be built in a preprocessing step and be reused in repeated transfers. Hence, in this work, we use CMF as the basis for remapping smooth regions. From the mathematical point of view, the major challenge is to overcome the potential oscillations (a.k.a. the Gibbs phenomenon) associated with CMF, for which we will leverage WALF to detect singularities.
2.2 Gibbs phenomena at $C^0$ and $C^1$ discontinuities
It is well known that discontinuities require special attention in numerical approximations. The most studied type is the $C^0$ discontinuities, also known as edges (tadmor2007filters), (simple) jump discontinuities (cates2007detecting; hewitt1979gibbs), faults (or vertical faults) (bozzini2013detection; de2002vertical), etc. These discontinuities are the most prominent because they tend to lead to oscillations (or “ringing”), which do not vanish under mesh refinement. These oscillations were first analyzed by Wilbraham in 1848 (wilbraham1848certain), about 50 years before the famous analysis of the overshoots in Fourier series for sawtooth functions by Gibbs in 1899 (gibbs1899) in response to an observation by Michelson in 1898 (michelson1898letter). For this reason, such oscillations are properly referred to as the Gibbs–Wilbraham phenomenon (hewitt1979gibbs), or more commonly as the Gibbs phenomenon. Besides occurring in Fourier series, the oscillations also occur in interpolation or approximation using cubic splines (richards1991gibbs; zhang1997convergence), orthogonal polynomials (gottlieb1997gibbs), RBF (fornberg2007gibbs; jung2007note), least squares (foster1991gibbs), wavelets (kelly1996gibbs; shim1996gibbs), projection (jiao2004common), etc. Some tight constant factors can be established in 1D for some of these problems. However, for least-squares-based approximations in 2D and higher dimensions, it is impractical to establish such precise bounds of the oscillations. Instead, we focus on the asymptotic effect of discontinuities, which is easy to understand from Taylor series expansions and is sufficient for developing robust techniques to detect and resolve them in this work.
Consider a piecewise smooth function $f:\mathbb{R}^d\rightarrow\mathbb{R}$. The $d$-dimensional Taylor series of $f$ about a point $\boldsymbol{x}_0$ is given by

(1)  $f(\boldsymbol{x}_0+h\boldsymbol{u}) = \sum_{k=0}^{p} \frac{h^k}{k!}\,\partial_{\boldsymbol{u}}^{k} f(\boldsymbol{x}_0) + \frac{h^{p+1}}{(p+1)!}\,\partial_{\boldsymbol{u}}^{p+1} f(\boldsymbol{x}_0+\theta h\boldsymbol{u}),$

where $\partial_{\boldsymbol{u}}^{k} f$ denotes the $k$th directional derivative of $f$ along the direction $\boldsymbol{u}$, and $\theta\in[0,1]$; see, e.g., (humpherys2017foundations) for a proof. The summation term in (1) defines the degree-$p$ Taylor polynomial $T_p(h)$ at $\boldsymbol{x}_0$. The last term in (1) is the remainder $R_p(h)$, which corresponds to the residual of the approximation of $f$ by $T_p$. If $f$ is continuously differentiable to at least $(p+1)$st order, the residual is $\mathcal{O}(h^{p+1})$, i.e.,

$f(\boldsymbol{x}_0+h\boldsymbol{u}) - T_p(h) = \mathcal{O}(h^{p+1}).$

However, if $f$ has a $C^0$ discontinuity in the direction $\boldsymbol{u}$ at $\boldsymbol{x}_0$, i.e., $f$ has a jump at $\boldsymbol{x}_0$, then the derivative in the remainder $R_p$ is unbounded. As a result, the approximation of $f$ by $T_p$ in general is expected to lead to overshoots or undershoots (or oscillations) of $\mathcal{O}(1)$ or larger. Note that even if $f$ is approximated by non-polynomials, this argument based on Taylor series still applies with a simple application of the triangle inequality. In particular, let $g$ be a smooth approximation of $f$ and let $\tilde{T}_p$ denote the degree-$p$ Taylor polynomial of $g$ at $\boldsymbol{x}_0$. Then,

$\lvert f - g \rvert \geq \lvert f - \tilde{T}_p \rvert - \lvert g - \tilde{T}_p \rvert,$

where the second term is $\mathcal{O}(h^{p+1})$ under the smoothness assumption of $g$, but the first term is $\mathcal{O}(1)$ or larger and it dominates the overall error. Hence, for $C^0$ discontinuities, we can expect to observe $\mathcal{O}(1)$ oscillations, even when using linear approximations, as in projection using linear polynomials.
From the Taylor series expansions, it is easy to see that overshoots can occur not only at $C^0$ discontinuities but also at $C^1$ and even $C^2$ discontinuities. In general, the oscillations due to $C^k$ discontinuities are expected to be $\mathcal{O}(h^k)$. Theoretically, these oscillations should vanish as $h$ tends to $0$, so they are often ignored. However, their treatment has drawn some attention recently, and they are often referred to as derivative jump discontinuities (cates2007detecting) or oblique (or gradient) faults (bozzini2013detection; de2002vertical). In the context of data transfer in multiphysics coupling, it is important to resolve the oscillations at $C^1$ discontinuities, because they may accumulate over time and lead to oscillations and even instabilities. Although these oscillations may be controlled by numerical diffusion in some numerical schemes, such as the flux limiters and TVD time-integration schemes in FVM for hyperbolic conservation laws, for generality, it is desirable for the data-transfer method to be stable without relying on numerical diffusion in the physics models. In addition, controlling these oscillations also helps safeguard against the accumulation of oscillations near discontinuities.
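As a small numerical illustration of this $\mathcal{O}(1)$ behavior (our own example, not from the analysis above), the following sketch fits a least-squares polynomial to samples of a unit step and measures the resulting overshoot and undershoot near the jump:

```python
import numpy as np

# Fit a degree-4 polynomial to samples of a unit step by least squares
# and measure the overshoot near the jump (the Gibbs phenomenon).
x = np.linspace(-1.0, 1.0, 201)
f = (x >= 0).astype(float)          # unit step: jump of height 1 at x = 0

coeffs = np.polyfit(x, f, deg=4)    # unweighted least-squares fit
p = np.polyval(coeffs, x)

overshoot = p.max() - 1.0           # amount the fit exceeds the upper state
undershoot = 0.0 - p.min()          # amount the fit falls below the lower state
print(f"overshoot = {overshoot:.3f}, undershoot = {undershoot:.3f}")
assert overshoot > 0.05 and undershoot > 0.05  # oscillations that do not vanish
```

Refining the sampling (increasing the number of points) does not reduce the overshoot, consistent with the $\mathcal{O}(1)$ estimate for $C^0$ discontinuities.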
2.3 Resolution of Gibbs oscillations
2.3.1 Resolution of oscillations in 1D
Because the mathematical analysis of the Gibbs phenomenon in the literature has primarily focused on 1D reconstructions, the overwhelming majority of the resolution techniques are also for 1D problems. We briefly mention two classes of classical techniques in 1D. The first class is filtering and mollification, which recover the piecewise smooth solutions in Fourier space and in physical space, respectively. Examples of the former include Fejér averaging, Lanczos-like filtering, Vandeven and Daubechies filters, etc., which are specific to Fourier expansions; see, e.g., (zygmund1959trigonometric) and (jerri2013gibbs, Chapter 2) for some surveys. Examples of the latter include spectral mollifiers based on Gegenbauer polynomials in (gottlieb1997gibbs; jung2004generalization; shizgal2003towards) and adaptive mollifiers in (tadmor2007filters). The fundamental idea of adaptive mollifiers is to detect the edges (gelb1999detection; gelb2000detection) and then apply spectral (or other) mollifiers locally. Similar ideas have been applied to approximating piecewise smooth functions in 1D using splines and moving least squares in (lipman2009approximating).
The second class is the Godunov and ENO-type methods. Examples include the original Godunov's method (godunov1959difference), Van Leer's MUSCL scheme (van1979towards), Colella and Woodward's piecewise parabolic method (PPM) (colella1984piecewise), Harten's subcell-resolution method (harten1989eno), and some WENO reconstructions for FVM (liu1994weighted; jiang1996efficient; shu1999high). These methods were initially developed for finite volume methods (FVM) for hyperbolic problems, and they typically involve reconstructing piecewise smooth functions from cell averages. Among these techniques, the ENO and WENO techniques are the most general, and their key ideas lie at the approximation level, where a nonlinear adaptive procedure is used to automatically choose the locally smoothest stencil, hence avoiding crossing discontinuities in the interpolation process as much as possible. This basic idea can be generalized to reconstructing nodal values, such as in the so-called WENO interpolation in finite difference methods (shu2009high). Similar to mollifiers, the WENO schemes may be applied globally without explicitly identifying the discontinuities, or applied adaptively by coupling with some edge-detection techniques (qiu2005comparison).
2.3.2 Resolution of oscillations in higher dimensions
The two broad classes of 1D techniques may be applied to structured meshes in a dimension-by-dimension fashion. For example, in climate modeling, Lauritzen and Nair (lauritzen2008monotone) developed monotonicity-preserving filters on latitude–longitude and cubed-sphere grids by adapting those developed by Zerroukat et al. using monotonic parabolas (zerroukat2005monotonic) or the parabolic spline method (PSM) (zerroukat2006parabolic). However, when generalized to unstructured meshes or scattered data, most of the techniques in 1D or for structured meshes encounter significant technical difficulties. Nevertheless, some of the basic ideas can still be preserved in higher dimensions. Hence, we also broadly categorize the methods into two classes.
The filtering techniques in 2D and higher dimensions typically rely on postprocessing to resolve oscillations. This requires detecting the discontinuities and then adapting the kernels (i.e., basis functions) accordingly. Various techniques have been developed in the literature to detect discontinuities. The detection of $C^0$ discontinuities is often referred to as edge detection (archibald2008determining; canny1987computational; gelb2006adaptive; petersen2012hypothesis; romani2019edge; viswanathan2008detection; ziou1998edge) or troubled-cell detection (qiu2005comparison; yang2009parameter), and it has many applications in signal and image processing, data compression, shock capturing in CFD, etc. These techniques typically use some indicators for the jump and then compare the indicators against some thresholds. Similarly, detecting $C^1$ discontinuities requires an indicator of the derivative jump followed by some thresholding. The indicators are often motivated by finite differences (such as in (bozzini2013detection; cates2007detecting; de2002vertical)) or Taylor series expansions (such as in the polynomial annihilation edge detection in (archibald2005polynomial; archibald2008determining; saxena2009high)). In (jung2011iterative), Jung et al. detected discontinuities by taking advantage of the instabilities of the expansion coefficients of multiquadric RBF at local jumps. Romani et al. (romani2019edge) developed some variants of the approach using variably scaled kernels (VSK) (bozzini2014interpolation) and Wendland's RBF (wendland1995piecewise). In terms of thresholding, the techniques are often based on some statistical analysis (i.e., outlier detection) (fleishman2005robust) or on some double (or hysteresis) thresholding (canny1987computational). After the discontinuities have been detected, various reconstruction strategies have been used in practice. For reconstructions over structured grids, such as in image processing, it is relatively straightforward to adapt some limiters used in hyperbolic conservation laws to resolve discontinuities in a dimension-by-dimension fashion, such as in (archibald2008determining; gelb2006adaptive). However, such an approach does not generalize to unstructured meshes easily. For unstructured meshes or scattered data points, Rossini (rossini2018interpolating) proposed to use VSK to resolve discontinuities, but the method requires the exact location of the discontinuities in order to shift the radial functions.

In terms of Godunov and ENO-type methods, the generalization of the low-order methods tends to be straightforward within FVM. However, for the high-order ENO and WENO schemes, the generalization to unstructured meshes is challenging. In (hu1999weighted), Hu and Shu proposed a WENO scheme on triangular meshes, and it was further extended to tetrahedral meshes by Zhang and Shu in (zhang2009third). However, some point distributions may lead to negative weights, which in turn may lead to instability. Shi, Hu and Shu (shi2002technique) proposed a technique to mitigate the issue, but the method had limited success due to large condition numbers of the local linear systems. In (xu2011point), Xu et al. proposed a hierarchical reconstruction technique for discontinuous Galerkin and finite volume methods on triangular meshes, and Luo et al. (luo2013reconstructed) extended the ideas to tetrahedral meshes and developed the so-called hierarchical WENO (or HWENO). In (liuzhang2013robust), Liu and Zhang proposed a hybrid of two different reconstruction strategies to improve the stability of WENO reconstruction. In (liu2016wls), Liu and Jiao proposed WLS-ENO, which adapts the weights in a weighted-least-squares formulation instead of taking a convex combination of interpolations. WLS-ENO overcomes the stability issues associated with negative weights, and it was shown to be robust in FVM and easy to implement.
The resolution technique proposed in this work may be considered a hybrid of adaptive mollification and ENO-type methods, in that we detect $C^0$ and $C^1$ discontinuities in high-order CMF and then apply quadratic WLS-ENO as a mollifier (or filter in the physical space) near the discontinuities. Our detection technique differs from others in that it introduces two novel discontinuity indicators and a novel dual-thresholding strategy. The indicators exploit the observation that WALF tends to be more prone to oscillations than CMF near discontinuities, so that we can detect and resolve discontinuities before oscillations manifest themselves in CMF. In addition, we will introduce a new weighting scheme for WLS-ENO based on one of the new discontinuity indicators.
3 Superconvergent weighted least squares (WLS)
In this section, we describe WLS remap for smooth functions on surfaces.
3.1 WLS with Cartesian coordinates
We first derive WLS in Cartesian coordinates using Taylor series. Consider a function $f$ at a given point $\boldsymbol{x}_0$, and assume its value is known at a sample of points $\boldsymbol{x}_i$ near $\boldsymbol{x}_0$, where $1\le i\le m$. We refer to these points as the stencil for WLS. Suppose $f$ is continuously differentiable to $(p+1)$st order for some $p\ge 1$. We approximate $f$ to $(p+1)$st order accuracy about $\boldsymbol{x}_0$ as

(2)  $f(\boldsymbol{x}_0+\delta\boldsymbol{x}) \approx \sum_{\lvert\boldsymbol{\alpha}\rvert\le p} c_{\boldsymbol{\alpha}}\,\delta\boldsymbol{x}^{\boldsymbol{\alpha}},$

which is an alternative form of that in (1), where $\boldsymbol{\alpha}$ is a multi-index and $c_{\boldsymbol{\alpha}} = \partial^{\boldsymbol{\alpha}} f(\boldsymbol{x}_0)/\boldsymbol{\alpha}!$. Suppose there are $n$ coefficients, i.e., $n=\binom{p+2}{2}$ in 2D, and assume $m\ge n$. Let $\delta\boldsymbol{x}_i$ denote $\boldsymbol{x}_i - \boldsymbol{x}_0$. We then obtain a system of $m$ equations

(3)  $\sum_{\lvert\boldsymbol{\alpha}\rvert\le p} c_{\boldsymbol{\alpha}}\,\delta\boldsymbol{x}_i^{\boldsymbol{\alpha}} \approx f(\boldsymbol{x}_i)$

for $1\le i\le m$. The equations can be written in matrix form as $\boldsymbol{V}\boldsymbol{c} \approx \boldsymbol{f}$, where $\boldsymbol{V}$ is an $m\times n$ generalized Vandermonde matrix, $\boldsymbol{c}$ is composed of the $c_{\boldsymbol{\alpha}}$, and $\boldsymbol{f}$ is composed of the $f(\boldsymbol{x}_i)$.
The generalized Vandermonde system from (3) is rectangular, and we solve it by minimizing a weighted norm of the residual vector $\boldsymbol{V}\boldsymbol{c} - \boldsymbol{f}$, i.e.,

(4)  $\min_{\boldsymbol{c}} \lVert \boldsymbol{W}\left(\boldsymbol{V}\boldsymbol{c} - \boldsymbol{f}\right) \rVert_2,$

where $\boldsymbol{W}$ is diagonal. The weighting matrix $\boldsymbol{W}$ plays a fundamental role in WLS, in that each diagonal entry in $\boldsymbol{W}$ assigns a weight to each row in the generalized Vandermonde system. If $m=n$ and both $\boldsymbol{W}$ and $\boldsymbol{V}$ are nonsingular, then $\boldsymbol{W}$ has no effect on the solution. However, if $m>n$ or $\boldsymbol{V}$ is rank deficient, then a different $\boldsymbol{W}$ leads to a different solution; we will address the choice of the weights shortly. Given $\boldsymbol{W}$, we further scale the columns of $\boldsymbol{W}\boldsymbol{V}$ with a diagonal matrix $\boldsymbol{S}$, so that the condition number of the rescaled Vandermonde matrix $\tilde{\boldsymbol{V}} = \boldsymbol{W}\boldsymbol{V}\boldsymbol{S}$ is improved. This process is known as column equilibration (Golub13MC, p. 265). Then, WLS reduces to the least squares problem

(5)  $\min_{\boldsymbol{d}} \lVert \tilde{\boldsymbol{V}}\boldsymbol{d} - \boldsymbol{W}\boldsymbol{f} \rVert_2.$

The coefficients in $\boldsymbol{c}$ are then given by $\boldsymbol{c} = \boldsymbol{S}\boldsymbol{d}$.
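The weighting, column equilibration, and least-squares solve can be sketched as follows in 2D. This is a minimal illustration under our own conventions (monomial ordering, no rank truncation or stencil enlargement), not the authors' implementation:

```python
import numpy as np
from scipy.linalg import qr, solve_triangular

def wls_fit_2d(dx, fvals, w, p=2):
    """Fit a degree-p bivariate Taylor polynomial about the origin by WLS.

    dx    : (m, 2) offsets x_i - x_0 in local coordinates
    fvals : (m,) function values f(x_i)
    w     : (m,) positive weights (the diagonal of W)
    Returns the coefficient vector c, ordered by total degree.
    """
    # Generalized Vandermonde matrix V: columns are monomials u^j v^k, j+k <= p.
    cols = [dx[:, 0]**j * dx[:, 1]**k
            for d in range(p + 1) for j, k in [(d - k, k) for k in range(d + 1)]]
    V = np.column_stack(cols)

    # Apply the weights to the rows: W V c ~= W f.
    WV = w[:, None] * V
    wf = w * fvals

    # Column equilibration: scale each column to unit 2-norm.
    s = np.linalg.norm(WV, axis=0)
    Vt = WV / s

    # Solve the equilibrated system via QR with column pivoting (QRCP).
    Q, R, piv = qr(Vt, mode='economic', pivoting=True)
    d = solve_triangular(R, Q.T @ wf)
    cs = np.empty_like(d)
    cs[piv] = d              # undo the column pivoting
    return cs / s            # undo the column scaling: c = S d
```

For an exact degree-2 polynomial sampled at sufficiently many points, this recovers the Taylor coefficients exactly (up to rounding), regardless of the weights.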
We give two important algorithmic details. First, we solve (5) using truncated QR factorization with column pivoting (QRCP) (Golub13MC). The QRCP is

$\tilde{\boldsymbol{V}}\boldsymbol{P} = \boldsymbol{Q}\boldsymbol{R},$

where $\boldsymbol{Q}$ is $m\times n$ with orthonormal column vectors, $\boldsymbol{R}$ is an $n\times n$ upper-triangular matrix, $\boldsymbol{P}$ is a permutation matrix, and the diagonal entries in $\boldsymbol{R}$ are in descending order in magnitude. Second, we control the condition number of $\tilde{\boldsymbol{V}}$ to avoid numerical instabilities. To this end, we estimate the 1-norm condition number of $\boldsymbol{R}$ (Higham87SCN). If the condition number is large, then we first try to enlarge the stencils and then incrementally drop the rightmost highest-degree terms if the stencils cannot be enlarged further. These strategies add more rows to, and remove columns from, the Vandermonde systems, respectively, both of which reduce the condition number of $\tilde{\boldsymbol{V}}$ and in turn ensure the stability of WLS.

3.2 WLS on discrete surfaces
When applying WLS on surfaces, we use the same construction as in Continuous Moving Frames (CMF) for high-order surface reconstruction (Jiao2011RHO). We adapt CMF to remap between discrete surfaces composed of triangles and quadrilaterals. In the following, we will use WLS and CMF interchangeably, unless otherwise noted.
Given a node $\boldsymbol{x}_0$ on a target mesh, we first construct a local coordinate frame as follows. Let $\hat{\boldsymbol{n}}$ denote an approximate normal at $\boldsymbol{x}_0$, which can be a first-order estimation obtained by averaging face normals or be computed from the analytical surface (such as a sphere in earth modeling). Let $\hat{\boldsymbol{t}}_1$ and $\hat{\boldsymbol{t}}_2$ be an orthonormal basis of the approximate tangent plane orthogonal to $\hat{\boldsymbol{n}}$, which we obtain using Gram-Schmidt orthogonalization. The local coordinate frame is then centered at $\boldsymbol{x}_0$ with axes $\hat{\boldsymbol{t}}_1$, $\hat{\boldsymbol{t}}_2$, and $\hat{\boldsymbol{n}}$.
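A minimal sketch of this frame construction (the function names and seed-vector choice are ours), assuming the approximate normal is given:

```python
import numpy as np

def local_frame(n):
    """Build an orthonormal frame {t1, t2, n} from an approximate normal n,
    using Gram-Schmidt on a seed vector that is not parallel to n."""
    n = n / np.linalg.norm(n)
    # Pick the coordinate axis least aligned with n as the seed vector.
    seed = np.zeros(3)
    seed[np.argmin(np.abs(n))] = 1.0
    t1 = seed - (seed @ n) * n        # Gram-Schmidt: remove the normal component
    t1 /= np.linalg.norm(t1)
    t2 = np.cross(n, t1)              # completes a right-handed orthonormal frame
    return t1, t2, n

def to_local_uv(points, x0, t1, t2):
    """Project 3-D points into the (u, v) tangent-plane coordinates about x0."""
    d = points - x0
    return np.column_stack((d @ t1, d @ t2))
```

The local (u, v) coordinates produced by `to_local_uv` are what the 2D Taylor polynomials of the WLS fitting are expressed in.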
The construction of stencils requires some special attention in remap. Unlike in reconstruction, the stencil for a target node is composed of nodes on the source mesh. To build the stencil for a target node $\boldsymbol{x}$, we first locate the triangle or quadrilateral $\tau$ on the source mesh that contains $\boldsymbol{x}$, and then use the union of the $k$-ring neighborhoods of the nodes of $\tau$ on the source mesh. The specific choice of $k$ depends on the degree of the polynomial and the specific weighting scheme, as we will describe in Section 3.3.
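The union of $k$-ring neighborhoods can be gathered by a breadth-first search over node-to-node adjacency. The sketch below uses a plain adjacency dictionary as a simplification (the half-ring refinements used in the actual construction are omitted):

```python
from collections import deque

def k_ring_stencil(adj, seeds, k):
    """Union of the k-ring neighborhoods of the seed nodes.

    adj   : dict mapping a node id to the list of its neighboring node ids
            (node-to-node adjacency of the source mesh)
    seeds : the nodes of the source element containing the target node
    k     : ring size (integer; half-ring increments are omitted here)
    """
    stencil = set(seeds)
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        node, ring = frontier.popleft()
        if ring == k:
            continue                      # do not expand beyond the k-th ring
        for nb in adj[node]:
            if nb not in stencil:
                stencil.add(nb)
                frontier.append((nb, ring + 1))
    return stencil
```

Because the search starts from all element nodes simultaneously, each mesh node is visited at most once, which is the locality that keeps the amortized cost per target node constant.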
We note some key differences between our WLS remap and the MMLS in DTK when applied to surfaces. First, our approach uses 2D Taylor series within local coordinates in a tangent space, whereas DTK uses the global coordinate system. Note that for degree-$p$ polynomials, the number of coefficients grows quadratically in $p$ in 2D but cubically in 3D. Hence, it is more efficient for CMF to use higher-degree (such as quartic) fittings to achieve higher accuracy than for DTK. Second, DTK is purely meshless, and it finds stencils using $k$-nearest neighbors (a.k.a. KNN), which requires logarithmic-time complexity on average to determine the stencils. In contrast, our approach uses a mesh-based algorithm to compute the stencils by taking advantage of locality (jiao1999mesh), so the amortized cost to determine the stencil of a node is constant.

3.3 Optimizing weights for remap
As shown in (JZ08CCF) and (RayWanJia12), if the condition number of the rescaled Vandermonde system is bounded, then the function values of a smooth function can be reconstructed to $\mathcal{O}(h^{p+1})$ on a discrete surface, where $h$ is proportional to the “radius” of the stencil. Furthermore, if the stencils are (nearly) symmetric about the origin of the local coordinate system, the reconstruction can superconverge at $\mathcal{O}(h^{p+2})$ for even-degree $p$ due to error cancellation (JZ08CCF; Jiao2011RHO), analogous to the error cancellation in centered differences. When using CMF for remap, however, the near-symmetry assumption is in general violated, because a target node may lie anywhere within an element of a source mesh. Hence, it is impractical to achieve $\mathcal{O}(h^{p+2})$ superconvergence. However, with proper choices of stencils and weighting schemes, we can achieve $\mathcal{O}(h^{p+1+\alpha})$ convergence for some $0<\alpha<1$ for even $p$. To maximize this rate, we observe two key criteria for the weights: their asymptotic rates and their smoothness.
In (JZ08CCF), Jiao and Zha proposed to use weights that are approximately inversely proportional to the square root of the residual term associated with each node, i.e.,

(6)  $\omega_i = \left(\frac{\lVert\delta\boldsymbol{x}_i\rVert_2}{h} + \epsilon\right)^{-(p+1)/2},$

where $\delta\boldsymbol{x}_i = \boldsymbol{x}_i - \boldsymbol{x}_0$, $p$ is the degree of the WLS, $\epsilon$ is a safeguard against division by zero, and $h$ is a safeguard against “short circuiting” for coarse meshes. We refer to (6) as the inverse-distance-based weights. These weights perform well with nearly symmetric stencils. However, they are not ideal for remap, because the weighting function has a large gradient near $\boldsymbol{x}_0$, especially if $\epsilon$ is small. Therefore, if the nodes closest to $\boldsymbol{x}_0$ are highly asymmetric, which tends to be the case in remap, then there is no error cancellation for these nodes. To overcome this issue, it is desirable for the weights to be smoother about the origin, or more precisely, the first, second, and even higher-order derivatives of the weight should be bounded at the origin. This analysis led us to the choice of a new weighting scheme, or the scaled Buhmann function,
(7) 
where $\epsilon$ and the safeguard term are the same as those in (6), $c$ is the cutoff radius, and
(8) 
The radial function in (8) is due to Buhmann (buhmann2001new, ), who introduced it as a compactly supported RBF by generalizing the polynomial radial functions due to Wu (wu1995compactly, ) and Wendland (wendland1995piecewise, ). Besides compactness and smoothness, these radial functions are “positive definite,” which is a desirable property for RBFs (buhmann2000radial, ; buhmann2003radial, ) but is irrelevant in our context. In Section 5.1.1, we will compare our proposed scaled Buhmann weights with other weighting functions.
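To make the preceding discussion concrete, the following Python sketch implements one candidate realization of these weights. Since only the qualitative properties matter here, the specific polynomial form of Buhmann's radial function, the exponent of the inverse-distance weights, and the safeguard constants are illustrative assumptions rather than the exact formulas in (6)-(8).

```python
import math

def buhmann_c3(r):
    """A compactly supported radial function due to Buhmann (assumed form):
    phi(r) = 2 r^4 log r - 7/2 r^4 + 16/3 r^3 - 2 r^2 + 1/6 on [0, 1],
    and 0 beyond the support.  phi, phi', phi'', and phi''' all vanish at
    r = 1, so the weight decays very smoothly to zero at the cutoff."""
    if r >= 1.0:
        return 0.0
    if r == 0.0:
        return 1.0 / 6.0  # limit of 2 r^4 log r at r -> 0 is 0
    return (2.0 * r**4 * math.log(r) - 3.5 * r**4
            + 16.0 / 3.0 * r**3 - 2.0 * r**2 + 1.0 / 6.0)

def inv_dist_weight(dist, h, degree, eps=0.01):
    """Inverse-distance weight in the spirit of (6); the exponent
    (degree + 1)/2 and the safeguard eps are assumptions."""
    return (dist / h + eps) ** (-(degree + 1) / 2.0)

def buhmann_weight(dist, c):
    """Scaled Buhmann weight in the spirit of (7): w = phi(dist / c)."""
    return buhmann_c3(dist / c)
```

Unlike the inverse-distance weight, whose gradient is large near the origin for small eps, the Buhmann weight has a vanishing first derivative at the origin and $C^3$ decay at the cutoff, which is the smoothness property motivating its use.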
In (7), the key question is how to choose the cutoff radius $c$. For robustness, this radius must be large enough to allow a sufficient number of nonzero weights in the generalized Vandermonde system. As in (JZ08CCF, ), we require the number of points in the stencil to be at least 1.5 times the number of coefficients in the Taylor polynomial. We define $c = \gamma h$ for some $\gamma > 0$, where $h$ denotes the average edge length, and then choose $\gamma$ to minimize the remap errors via numerical optimization, as we will detail in Section 5.1.1. Through numerical optimization, we obtained the optimal $\gamma$ for degree-2, 4, and 6 WLS, respectively; for example, $\gamma \approx 1.6$ for degree-4 WLS. Figure 1 shows example shapes of the inverse-distance-based and scaled Buhmann weights on a uniform 2D mesh for degree-2, 4, and 6 WLS. For the inverse-distance weights, we used $h$ equal to the average edge length. It can be seen that, due to its continuity, Buhmann’s function is much smoother at the origin. In addition, the different cutoff radii led to different asymptotic behaviors of the scaled Buhmann weights at the tails, similar to those of the inverse-distance-based weights.
In terms of implementation, to select the candidate nodes in the stencil for a target node, we start with the union of the rings of the nodes of the source element that contains the target node. If there are fewer than the required number of points in the stencil, we enlarge the ring size in 0.5 increments as defined in (Jiao2011RHO, ). This adaptive computation of rings requires a proper mesh data structure to achieve constant time per node, for which we use an array-based half-edge data structure (DREJTAHF2014, ). In addition, for efficiency, we construct the transfer operator in the form of a sparse matrix, of which each row corresponds to a node in the target mesh. After obtaining this operator, each remap involves only a matrix-vector multiplication.
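As a sketch of this implementation strategy, the following Python stand-in shows the row-compressed storage of the transfer operator and its application as a single matrix-vector product per remap. The coefficients here come from hypothetical 1D linear interpolation rather than the actual WLS fitting; only the precompute-then-reuse pattern is the point.

```python
import numpy as np

def build_transfer_operator(src_x, tgt_x):
    """Precompute a row-compressed (CSR-like) transfer operator: each row
    holds the interpolation coefficients of one target node with respect to
    its source stencil.  Here we use 1D linear-interpolation coefficients for
    illustration; in WLS-ENOR each row would hold WLS coefficients instead."""
    indptr, indices, data = [0], [], []
    for x in tgt_x:
        j = np.searchsorted(src_x, x)          # locate containing source cell
        j = min(max(j, 1), len(src_x) - 1)
        t = (x - src_x[j - 1]) / (src_x[j] - src_x[j - 1])
        indices.extend([j - 1, j])
        data.extend([1.0 - t, t])
        indptr.append(len(indices))
    return np.array(indptr), np.array(indices), np.array(data)

def apply_transfer(op, f_src):
    """Remap = one sparse matrix-vector product; reused in repeated transfer."""
    indptr, indices, data = op
    return np.array([data[s:e] @ f_src[indices[s:e]]
                     for s, e in zip(indptr[:-1], indptr[1:])])
```

Because the operator depends only on the meshes, it is built once and then reused for every field and every step of a repeated transfer.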
4 Detection and resolution of $C^0$ and $C^1$ discontinuities
Like other high-order methods, WLS with scaled Buhmann weights may suffer from Gibbs phenomena near discontinuities. In this section, we extend the WLS-ENO approach to remap. For accuracy and efficiency, our approach first detects the discontinuous regions and then applies WLS-ENO near the detected discontinuities. To this end, we introduce a robust detector of discontinuities and a new weighting scheme for WLS-ENO.
4.1 Detection of discontinuities
We first describe a robust technique to detect discontinuities in the context of remap. Our approach is composed of four steps. First, we compute element-based discontinuity indicators on the source mesh. Second, we convert these indicators to node-based indicators on the source mesh. Third, we obtain node-based discontinuity markers on the source mesh using a novel dual-thresholding strategy. Finally, we transfer the nodal markers to the target mesh. In the following, we describe the four steps separately, including their derivations.
4.1.1 Element-based indicators on source mesh
We first compute a discontinuity indicator at each element of the source mesh. While most other indicators are based on computing the differences of neighboring values, our approach is different in that it computes the difference between a second-order interpolation and a higher-order reconstruction at the center of the element (i.e., triangle or quadrilateral). For the former, we use linear (or bilinear) interpolation within the element, which is nonoscillatory because it is a convex combination of the nodal values. For the latter, we use quadratic WALF reconstruction, which is known to be more prone to oscillations than CMF near discontinuities (Jiao2011RHO, ).
More precisely, given an element $e$, let $f_{\mathrm{lin}}$ denote the linear/bilinear interpolation at the element center, i.e., the average of the nodal values. Let $f_{\mathrm{WALF}}$ denote the quadratic WALF reconstruction at the element center, i.e., the average of the reconstructed values at the element center from the quadratic CMF fittings using 1.5-ring stencils at the nodes of the element. Then, the indicator value $\alpha_e$ for element $e$ is given by
(9) $\alpha_e = f_{\mathrm{WALF}} - f_{\mathrm{lin}}.$
We store the operator for computing $\alpha_e$ for all the elements as a sparse matrix, of which each row contains the coefficients associated with the nodes in the ring neighborhood of an element. As with the transfer operator for smooth regions, we compute this sparse matrix a priori, so that it can be reused in repeated transfer for efficiency.
The preceding definition of $\alpha_e$ is fairly straightforward, but its numerical values have two important properties. First, its sign is indicative, in that a positive and negative sign indicates a local overshoot and undershoot, respectively. Second, its magnitude has different asymptotic behaviors at smooth regions and at discontinuities. Assume the function is at least $C^3$ within an element $e$, and let $h$ denote the local edge length. In smooth regions, both interpolations converge to the exact value at the element center, and $\alpha_e = \mathcal{O}(h^2)$. Near $C^1$ discontinuities, the reconstructions are only first-order accurate, so $\alpha_e = \mathcal{O}(h)$. Near $C^0$ discontinuities, the linear interpolation has an $\mathcal{O}(1)$ error, so $\alpha_e = \mathcal{O}(1)$. These different asymptotic behaviors lead to clear gaps in the magnitudes of the $\alpha_e$ values at $C^0$ discontinuities, at $C^1$ discontinuities, and at smooth regions, which will be useful for classification.
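A 1D stand-in (hypothetical, assuming a uniform mesh and a quadratic reconstruction through three consecutive nodes) illustrates the two properties. In this simplified setting the indicator reduces to a scaled second difference, which makes the $\mathcal{O}(h^2)$ versus $\mathcal{O}(1)$ gap explicit:

```python
import numpy as np

def element_indicators_1d(f):
    """alpha_e for each interior element [x_j, x_{j+1}] of a uniform 1D mesh:
    the difference between a quadratic reconstruction (through nodes j-1, j,
    j+1) and linear interpolation, both evaluated at the element midpoint.
    Algebraically this is -(1/8) * (f_{j-1} - 2 f_j + f_{j+1}), i.e.,
    O(h^2) where f is smooth and O(1) across a jump.  The sign flags an
    overshoot (+) or undershoot (-) of the reconstruction relative to the
    linear interpolant.  (1D stand-in for the surface WALF version.)"""
    f = np.asarray(f, dtype=float)
    return -0.125 * (f[:-2] - 2.0 * f[1:-1] + f[2:])
```

On a smooth field the indicator shrinks quadratically under refinement, while across a unit jump it stays at $1/8$ regardless of resolution, producing the gap used for classification.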
4.1.2 Node-based indicators on source mesh
After obtaining the element-based $\alpha_e$ values, we then use them to define node-based indicators on the same mesh. Given a node $v$, we define the node-based indicator $\beta_v$ as
(10) 
where $\bar{\alpha}_v$ denotes the average of the $\alpha_e$ values in the 1-ring neighborhood of $v$, i.e., $\bar{\alpha}_v = \frac{1}{k}\sum_e \alpha_e$, where $k$ is the number of elements incident on $v$. The second term in the denominator is a safeguard against division by a too-small value, which may happen if the function is locally linear. In particular, it involves the global range of the function over the mesh and a global measure of the average edge length in the local coordinate system. The last term, realmin, in the denominator denotes the smallest positive normalized floating-point number, which is approximately $2.2251 \times 10^{-308}$ for double precision; it further protects against division by zero if the input function is constant. Note that the numerator and denominator have the same units for smooth regions, so $\beta_v$ is nondimensional.
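The following sketch shows an assumed form of this node-based indicator; the safeguard term `eps * f_range * h**2` is a guess at the structure described above, not the paper's exact expression in (10):

```python
import numpy as np

def node_indicator(alpha_ring, f_range, h, eps=0.01):
    """beta_v in the spirit of (10): the ratio of the mean magnitude of the
    element indicators in the 1-ring of node v to the magnitude of their
    mean.  Same-sign alphas (smooth convex/concave regions) give beta ~ 1;
    sign changes (near discontinuities, or at inflection/saddle points)
    drive the denominator toward the safeguard and make beta large.
    The safeguard form eps * f_range * h**2 (+ realmin) is an assumption."""
    alpha_ring = np.asarray(alpha_ring, dtype=float)
    num = np.mean(np.abs(alpha_ring))
    den = (abs(np.mean(alpha_ring)) + eps * f_range * h**2
           + np.finfo(float).tiny)
    return num / den
```

The ratio is nondimensional and, away from the safeguards, independent of the mesh resolution, since numerator and denominator scale identically with $h$.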
From a practical point of view, given the element-based $\alpha_e$ values, it is efficient to compute the node-based $\beta_v$ values. From a numerical point of view, the definition of $\beta_v$ requires some justification. Let us assume $f$ is sufficiently nonlinear, so that the safeguard terms are negligible. First, note that this quantity is then independent of the mesh resolution. Second, the numerator of $\beta_v$ captures the variation of $\alpha_e$, and the denominator further amplifies this variation when $\alpha_e$ varies in sign, which tends to occur near discontinuities. These properties are very useful in designing the thresholding strategy, which we describe next.
4.1.3 Node-based markers on source mesh via dual thresholding
After obtaining the element-based indicators $\alpha_e$ and node-based indicators $\beta_v$, we then use them in a dual-thresholding strategy for detecting discontinuities. In particular, we mark a node $v$ as discontinuous if it has a large $\beta_v$ value and one of its incident elements has a large $\lvert\alpha_e\rvert$ value, i.e.,
where $\tau_\beta$ and $\tau_\alpha$ are two thresholds. To detect both $C^0$ and $C^1$ discontinuities, we set $\tau_\beta$ to a constant and
(11) 
where the local range denotes the range of $f$ within the ring neighborhood used for WALF reconstruction, $h$ is the local average edge length in the local coordinate system, and the safeguard terms are the same as those in (10). The two constant factors in (11) are parameters, which we determine empirically.
We derived the thresholds based on some asymptotic analysis and numerical experimentation. First, let us focus on $\tau_\beta$. We observe that $\beta_v$ is large near both $C^0$ and $C^1$ discontinuities. This is evident in Figure 2, which shows the $\beta_v$ values for the two example functions (20) and (21) with $C^0$, $C^1$, and $C^2$ discontinuities on the spherical triangulation shown in Figure 8. These bounds can also be derived heuristically from a 1D analysis, which we omit here. For robustness, we chose a value of $\tau_\beta$ that is small enough to identify $C^1$ discontinuities even on relatively coarse meshes but large enough to avoid flagging $C^2$ discontinuities on relatively fine meshes. In addition, if the function is smooth and convex (or concave), then no smooth points will be classified as discontinuities, because the $\alpha_e$ values in these regions have the same sign, so $\beta_v \approx 1$. However, at saddle points or inflection points of a smooth function, the $\alpha_e$ values vary in sign, which would lead to a nearly zero denominator in (10) and in turn a large $\beta_v$ value. This is partially the reason why $\beta_v$ was large in the middle of Figure 2(b). Hence, using $\beta_v$ alone may lead to false positives, which could activate the resolution of discontinuities at inflection points and potentially reduce accuracy and efficiency.
To address the potential false positives, we introduced $\tau_\alpha$ in (11) as a second threshold. The local threshold in (11) is based on the Taylor series analysis in Section 4.1.1: in smooth regions, $\alpha_e = \mathcal{O}(h^2)$. This term separates the $\alpha_e$ values at smooth regions, which are $\mathcal{O}(h^2)$, from those at $C^1$ and $C^0$ discontinuities, which are $\mathcal{O}(h)$ and $\mathcal{O}(1)$, respectively. The use of the local edge length (instead of the global one) makes $\tau_\alpha$ independent of the scaling of the geometry and insensitive to the nonuniformity of the mesh. Similarly, the use of the local range makes $\tau_\alpha$ independent of the scaling of $f$. However, if the function is locally nearly constant, then the local range is nearly zero, so this term would fail to filter out false positives in such cases. The other term in (11) is a safeguard for such cases on quasi-uniform meshes. In terms of the two constant factors, too-large values can result in false negatives, and too-small values can lead to some false positives. For robustness, we chose values that are large enough to avoid virtually all false positives on sufficiently fine meshes without introducing false negatives.
4.1.4 Node-based markers on target mesh
After obtaining the node-based markers, it is straightforward to map them to nodal markers on the target mesh. In particular, given a target node $v$, if any of the source nodes in its stencil is marked as discontinuous, we mark $v$ as discontinuous on the target mesh. This step further improves the robustness of the detection, in that some isolated false negatives on the source mesh can be corrected if any of their neighboring nodes is marked correctly.
With a robust marker of discontinuities, one can apply various limiters near discontinuities. For example, to preserve monotonicity, we can bound the solution at a target node near discontinuities by the local extreme values of its containing element on the source mesh. However, such a limiter alone is insufficient. It is desirable to also adapt the weighting scheme near the detected discontinuities.
4.2 New weighting scheme for WLS-ENO
After detecting the discontinuous regions, we resolve the potential Gibbs phenomena from WLS with scaled Buhmann weights. Our basic idea is motivated by WLS-ENO (liu2016wls, ), which adapts the weights to deprioritize the nodes that are on the other side of a discontinuity. We propose a modification to the weighting scheme to take into account the element-based $\alpha_e$ values. In particular, for the $i$th node in the stencil of a node $v$ on the target mesh, we define the weight to be
(12) 
where $f_i$ denotes the function value at the $i$th node in the source mesh, $f_{\mathrm{lin}}(v)$ denotes the linear interpolation at node $v$ from the source mesh, and the last term serves as a safeguard against division by a too-small value. The scaling by different powers of $h$ in the denominator ensures consistent units of all the terms for smooth functions, so that the weighting scheme is invariant under shifting and scaling of the input function. We let $h$ be the average edge length in the local coordinate system. In the numerator, the parameters are the same as those in the inverse-distance-based weight (6).
The definition of the weight was guided by an asymptotic analysis, which we outline as follows. Without loss of generality, let us first assume the numerator is 1. With certain parameter choices, the denominator is essentially the same as that in (liu2016wls, ) for hyperbolic conservation laws, except that we use linear interpolation to obtain $f_{\mathrm{lin}}$ in remap. With those parameters, however, the weights have the same asymptotic behavior at $C^1$ discontinuities as at smooth regions, so they could not distinguish $C^1$ discontinuities from smooth regions. With our choice of parameters, the weights have different asymptotic behaviors at $C^0$ discontinuities, $C^1$ discontinuities, and smooth regions, respectively. Note that the denominator of (12) does not take into account distances or normals. As in (6), the safeguard in the numerator protects against sharp features or under-resolved high-curvature regions in surface meshes. The inverse-distance part of the numerator introduces a decaying factor, which still allows the weights to distinguish discontinuities from smooth regions while reducing the influence of faraway points and in turn the potential interference of nearby discontinuities.
To obtain the parameters in the weights, we conducted numerical experimentation and found suitable values that worked well in practice. For robustness and efficiency, we use quadratic WLS-ENO near discontinuities with the same ring size as that of quartic WLS in smooth regions. With these parameters, our proposed WLS-ENO scheme is relatively insensitive to false positives among the detected discontinuities. This is because even if all the nodes were marked as discontinuous, the WLS-ENO weights (12) with quadratic WLS would be used, and the method would still be third-order accurate for smooth functions, which is still higher than commonly used low-order methods.
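A sketch of a WLS-ENO-style weight in the spirit of (12) follows; the exact powers and the constants `eps` and `c1` are assumptions, but the normalization makes the value-dependent part invariant under shifting and scaling of $f$, as required:

```python
import numpy as np

def wls_eno_weight(d2, h, f_i, f_lin_v, f_range, degree=2,
                   eps=0.01, c1=0.1):
    """WLS-ENO-style weight sketch in the spirit of (12): nodes whose values
    deviate strongly from the linear interpolation at the target node (i.e.,
    that lie across a discontinuity) are down-weighted.  Normalizing the
    squared deviation by f_range**2 makes this part invariant to shifting
    and scaling of the input function; c1 * h**2 is an assumed safeguard
    against division by a too-small value."""
    # inverse-distance part of the numerator, as in (6)
    num = (np.sqrt(d2) / h + eps) ** (-(degree + 1) / 2.0)
    den = (f_i - f_lin_v) ** 2 / f_range ** 2 + c1 * h ** 2
    return num / den
```

A stencil node on the same side of a discontinuity as the target retains a large weight, while one across the jump is suppressed by orders of magnitude, which is what steers the fitting away from oscillatory reconstructions.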
5 Numerical experimentation
In this section, we report numerical experimentation with WLS-ENOR. Since one of our main targeted application areas is climate and earth-system modeling, we focus on node-to-node transfer between Delaunay triangulations and cubed-sphere meshes of the unit sphere. The former type is the dual of the spherical centroidal Voronoi tessellations (SCVT) (ju2011voronoi, ), which are commonly used with finite volume methods in ocean modeling. In this setting, the nodal values in the Delaunay mesh are equivalent to cell-centered values in the Voronoi tessellation. The latter type is often used in finite volume (putman2007finite, ) or spectral element methods (terai2018atmospheric, ) in atmospheric models. We assess WLS-ENOR for both smooth and discontinuous functions under different mesh resolutions. To this end, we generated a series of these meshes of different resolutions using Ju’s SCVT code (ju2011voronoi, ) and the equidistant gnomonic projection of the sphere (sadourny1972conservative, ), respectively. Figure 3 shows the coarsest meshes used in our tests, which we refer to as the level-1 meshes. The Delaunay mesh has 4,096 nodes and 8,188 triangles, and the cubed-sphere mesh has 1,016 nodes and 1,014 quadrilaterals. For each level of mesh refinement, the number of triangles or quadrilaterals quadruples.
5.1 Transferring smooth functions
We first consider WLSbased transfer for smooth functions. We report some representative results with two functions, namely,
(13) 
and
(14) 
The latter is the real part of an unnormalized spherical harmonic function (wiki:sphereical_harmonics, ), i.e.,
(15) 
where $\theta$ and $\phi$ are the polar (colatitudinal) and azimuthal (longitudinal) angles, respectively. For smooth functions, we consider the discrete norm error. Let $\boldsymbol{e}$ denote the vector composed of the pointwise errors at the nodes of a mesh, and let $n$ denote the number of nodes. The norm error is measured as
(16) $\|\boldsymbol{e}\|_{\mathrm{rms}} = \|\boldsymbol{e}\|_2 / \sqrt{n}.$
Given two meshes of different resolutions with $n_1$ and $n_2$ nodes, where $n_1 < n_2$, let $\boldsymbol{e}_1$ and $\boldsymbol{e}_2$ denote their respective error vectors. Assuming the meshes are uniformly refined, which is the case in our tests, we evaluate the convergence rate between the two meshes as
(17) $\text{rate} = \log\left(\|\boldsymbol{e}_1\| / \|\boldsymbol{e}_2\|\right) / \log\sqrt{n_2 / n_1},$
which is equivalent to the standard definition based on edge length. Although we recommend quartic WLS in smooth regions, we will report results with quadratic, quartic, and sextic WLS for completeness and for comparison.
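Assuming a discrete root-mean-square norm in (16), the error and the rate in (17) can be computed as follows; the $h \propto 1/\sqrt{n}$ relation holds for uniformly refined surface meshes:

```python
import numpy as np

def l2_error(err):
    """Discrete RMS norm error in the spirit of (16): ||e||_2 / sqrt(n).
    (Assumed norm; the derivation is unaffected by the specific choice.)"""
    err = np.asarray(err, dtype=float)
    return np.linalg.norm(err) / np.sqrt(len(err))

def convergence_rate(e1, n1, e2, n2):
    """Rate between two uniformly refined meshes with n1 < n2 nodes.  Using
    h proportional to 1/sqrt(n) on a surface, this is equivalent to the
    standard edge-length definition: log(e1/e2) / log(sqrt(n2/n1))."""
    return np.log(e1 / e2) / np.log(np.sqrt(n2 / n1))
```

For example, quadrupling the node count halves $h$, so a second-order method should roughly quarter the error between consecutive levels.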
5.1.1 Numerical optimization of cutoff radii
One of the key aspects of our WLS transfer is its weighting scheme, namely the scaled Buhmann weights described in Section 3.3. Here, we describe our procedure to optimize the cutoff radius in these weights. Our goal is to minimize the norm error of the transferred solution with respect to the ratio $\gamma = c/h$. It is difficult, if not impossible, to write down a closed-form objective function, because the error depends on the combinatorial structure of the mesh. Fortunately, a precise optimum is unnecessary, so we solve this optimization problem approximately by using the two example functions in (13) and (14) and the Delaunay and cubed-sphere meshes depicted in Figure 3. We transferred the functions between the meshes for $\gamma$ between 1 and 3 with a small increment. Figure 4 shows the norm errors of degree-2, 4, and 6 WLS in transferring from the Delaunay to the cubed-sphere mesh. It is evident that the error profile had a “V” shape with respect to $\gamma$ for both functions, and the minima determined the optimal $\gamma$ for degree-2, 4, and 6 WLS, respectively. As a reference, Figure 4 also shows the errors with the inverse-distance-based weights (6). The scaled Buhmann weights with the optimal $\gamma$ improved the accuracy by nearly two orders of magnitude compared to using (6). It is also worth noting that the optimal $\gamma$ values remained about the same when we transferred other smooth functions, transferred from the cubed-sphere to the Delaunay meshes, or used different mesh resolutions. This shows that the numerical optimization is well posed for smooth functions with a sufficiently smooth weight function.
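The optimization procedure amounts to a brute-force 1D search over the ratio $\gamma$, which suffices because of the observed "V"-shaped error profile. The objective below is an illustrative stand-in with its minimum placed at $\gamma = 1.6$, not the actual remap error:

```python
import numpy as np

def optimize_gamma(remap_error, gammas):
    """Brute-force 1D search: evaluate the remap error at each candidate
    ratio gamma = c / h and keep the minimizer.  A grid search is adequate
    because the observed error profile is 'V'-shaped in gamma."""
    errs = np.array([remap_error(g) for g in gammas])
    return gammas[int(np.argmin(errs))], errs

# Illustrative stand-in objective with a V-shaped profile (assumed, not the
# paper's actual remap error), with its minimum near gamma = 1.6.
gammas = np.arange(1.0, 3.01, 0.1)
g_opt, _ = optimize_gamma(lambda g: abs(g - 1.6) + 0.05 * (g - 1.6) ** 2,
                          gammas)
```

In practice, `remap_error` would perform an actual transfer of the test functions between the two meshes and return the measured norm error.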
We also applied the optimization procedure to other radial functions, including those in (wu1995compactly, ) and (wendland1995piecewise, ). We observed that the well-posedness of our optimization procedure required a sufficient degree of continuity of the weight function. For comparison, Figure 4 also shows the error profiles with the following two weighting schemes:
(18) 
and
(19) 
The radial functions in (18) and (19) are due to Wu (wu1995compactly, ) and Wendland (wendland1995piecewise, ), respectively. It is clear that the scaled Buhmann weights had the smallest optimal cutoff radii, which result in smaller constant factors in the leading error term and also fewer rows in the generalized Vandermonde systems. The functions of Wu and Wendland required larger cutoff radii. Hence, the scaled Buhmann weights are the overall winners in both accuracy and efficiency for WLS-based remapping. We note that DTK contains a collection of weights, which are mostly based on Wu’s and Wendland’s functions, and DTK requires the user to choose the specific weights and the cutoff radius (slattery2016mesh, ). In contrast, our approach is parameter-free from the user’s perspective, in that both the weighting functions and the cutoff radii are pre-optimized.
5.1.2 Accuracy and convergence
We first assess the accuracy and convergence of WLS for smooth functions in comparison with several other data-transfer methods, including linear interpolation, common-refinement-based projection (jiao2004common, ), MMLS in DTK (slattery2016mesh, ), and RBF interpolation (beckert2001multivariate, ). For MMLS, we used the same parameters as recommended in (slattery2016mesh, ), with Wu’s radial function as weights and a cutoff radius of about five times the maximum edge length on the source mesh. For RBF interpolation, we used the implementation in DTK as described in (slattery2016mesh, ). However, we adhered to the original formulation of Beckert and Wendland in (beckert2001multivariate, ) with Wendland’s radial functions (instead of using Wu’s functions as in (slattery2016mesh, )). We set the cutoff radius for RBF to be five times the maximum edge length on both source and target meshes, which was significantly larger than the recommendation in (beckert2001multivariate, ) but was necessary for the solution to be stable and reasonably accurate. For the convergence study, we used four sets of triangular and quadrilateral meshes with different resolutions. Figure 5 shows the norm errors of each of these methods together with quadratic, quartic, and sextic WLS for the two test functions. For completeness, we also show the result with cubic WLS, for which we used its optimized radius ratio. The number associated with each line segment indicates the convergence rate of the corresponding method under mesh refinement.
We make several observations from the results. First, both projection and linear interpolation achieved second-order accuracy. Note that on surface meshes, projection involves geometric errors in its integration, so its accuracy was slightly worse than that of linear interpolation, unlike the 2D results in (jiao2004common, ). The RBF interpolation of Beckert and Wendland produced smaller errors than linear interpolation and projection, but its overall convergence rate was close to zero. Note that MMLS in DTK uses quadratic polynomials in the global coordinate system, which have the same number of coefficients as cubic WLS in the local coordinate system. As a result, MMLS performed slightly better than quadratic WLS on the three coarser meshes but worse than cubic WLS. More importantly, DTK lost convergence on the finest mesh, because the nearly planar stencils on fine meshes lead to nearly singular Vandermonde systems. Although DTK uses truncated SVD (TSVD) to resolve ill-conditioning algebraically, the convergence was lost due to the indiscriminate truncation of low-degree terms by TSVD. In contrast, we use local 2D coordinate systems within the local tangent space, solve the linear system using QR with column pivoting (QRCP), and truncate the highest-degree terms in the presence of ill-conditioning. Hence, WLS is both stable and accurate, even when using degree-6 polynomials, whose solution achieved near machine precision on the finest mesh.
5.1.3 Accuracy and conservation in repeated transfer
The preceding section focused on accuracy and convergence in a single-step transfer. In multiphysics applications, data must be exchanged between meshes repeatedly. Hence, it is important to take into account the accuracy and convergence in such a setting. Figure 6 compares the accuracy when transferring smooth functions back and forth 1000 times between the Delaunay and cubed-sphere meshes. Because MMLS was unstable and RBF was computationally too expensive on the finest mesh, we only report results on the three coarser meshes. Quartic and sextic WLS clearly stand out in their accuracy. Among the other methods, linear interpolation became far less accurate in repeated transfer. RBF interpolation performed better than linear interpolation, but its convergence rate was close to zero. Projection was remarkably accurate compared to all the other third- or lower-order methods, especially on coarse meshes, due to its conservation property. Note that MMLS exhibited better accuracy than quadratic WLS but similar accuracy to cubic WLS in repeated transfer. This is because MMLS with quadratic polynomials in the global coordinate system has the same number of monomial basis functions as WLS with cubic polynomials in the local coordinate system.
Besides accuracy, another important consideration for repeated transfer is conservation (de2008comparison, ; jiao2004common, ). Unfortunately, the conservation error is not uniquely defined, especially for high-order methods. In this work, we measure the conservation error as the difference between the integral of the exact function and that of the final reconstructed function. We use numerical quadrature to compute the integrals over the exact spherical geometry, where we evaluate the values at the quadrature points using the same degree of WLS for quartic and sextic WLS and using quadratic WLS reconstruction for all the other methods. Figure 7 compares the conservation errors on the level-2 and level-3 meshes. Linear interpolation performed the worst due to its low-order accuracy and nonconservation. Projection was not strictly conservative, primarily because its numerical integration is inexact on spheres, but it was nevertheless more conservative than MMLS and WLS2, although less conservative than WLS3. RBF interpolation was competitive with projection in terms of conservation, despite its lower accuracy. Quartic and sextic WLS were more conservative than projection on surfaces, although they do not enforce conservation explicitly.
5.2 Transferring discontinuous functions
To assess the treatment of discontinuities, we use two piecewise smooth functions, namely
(20) 
and
(21) 
Function $u_1$ is an axisymmetric function with both $C^0$ and $C^1$ discontinuities in the latitudinal direction, which are similar to an intermediate solution of two interacting blast waves (jiang1996efficient, ). Hence, we refer to $u_1$ as interacting waves. Function $u_2$ has a richer structure of discontinuities, including $C^0$, $C^1$, and $C^2$ discontinuities in the latitudinal direction, which intersect with the discontinuities in the longitudinal direction. Hence, we refer to $u_2$ as crossing waves. The two functions have disparate ranges, which allows us to demonstrate the independence of our method of scaling and shifting of the function values. Figure 8 shows these functions on the level-4 Delaunay mesh. Figure 9 shows the detected discontinuities on the level-4 cubed-sphere mesh, computed from the indicators on the Delaunay mesh. All the $C^0$ and $C^1$ discontinuities were detected correctly on this mesh. We note that they were also detected correctly on the coarser meshes. These results demonstrate the low (nearly zero) false-negative rate of our detection technique. In addition, our technique also has a low false-positive rate, which is evident in Figure 9, where the $C^2$ discontinuities were not marked. We also tested our discontinuity indicators for the smooth functions in (13) and (14) on all the meshes, and no false discontinuity was marked on any of them. Hence, our detection technique is robust in terms of both false-negative and false-positive rates.
5.2.1 Justification of WLS-ENO degree and weights
In WLS-ENOR, we use quartic WLS in smooth regions and quadratic WLS-ENO near discontinuities. We refer to this combination as WLS-ENOR(4,2). As we have shown in Section 5.1, quartic WLS delivers high accuracy and conservation for smooth functions. We now justify the use of quadratic polynomials by comparing it with WLS-ENOR(4,1) and WLS-ENOR(4,3), i.e., using quartic WLS in smooth regions but linear or cubic WLS-ENO near discontinuities. For WLS-ENOR($p$,$q$), we used the ring size corresponding to degree $p$ in smooth regions and to degree $q$ near discontinuities. We transferred the function back and forth between the level-3 Delaunay and cubed-sphere meshes. Figure 10 shows the results after 500 and 1000 iterations along a great circle passing through the poles, where the dotted line shows the “exact” solution. It is clear that WLS-ENOR(4,1) is overly diffusive at the global minimum, although it worked well at other discontinuities. On the other hand, WLS-ENOR(4,3) is less diffusive than WLS-ENOR(4,2) near discontinuities, but it had a significant undershoot at the global minimum, where two discontinuities interact. Note that this undershoot vanishes on the level-4 meshes, indicating that WLS-ENOR(4,3) requires finer meshes than WLS-ENOR(4,2) to separate discontinuities. Overall, WLS-ENOR(4,2) delivered a good balance between robustness on coarser meshes and accuracy on finer meshes, so we use it as the default.
In WLS-ENOR, we introduced a new weighting scheme (12), which takes into account $C^0$ discontinuities through a dedicated term in its denominator. To demonstrate its importance, Figure 11 compares WLS-ENOR with and without this term enabled after 500 and 1000 steps of repeated transfer on the level-4 mesh. It can be seen that when the discontinuities are not taken into account (i.e., with the term disabled), the solution is less controlled near discontinuities. With our default weighting, the remap is more robust, and the solution approximates the exact solution very well even after 1000 steps of repeated transfer.
5.2.2 Gibbs phenomena in one-step transfer
We now compare WLS-ENOR with some other methods in transferring discontinuous functions. In particular, we compare WLS-ENOR(4,2) with the same methods used in Section 5.1 by transferring from the level-3 Delaunay to the level-3 cubed-sphere mesh. Figure 12 shows the result on the cubed-sphere mesh along a great circle passing through the poles. We plot WLS-ENOR together with linear interpolation in Figure 12(a), because both are nonoscillatory. As can be seen in Figure 12(b–d), MMLS had visible overshoots and undershoots, although they were less severe than those of projection and RBF interpolation.
5.2.3 Gibbs phenomena and diffusion in repeated transfer
We now compare repeated transfer of discontinuous functions with the least oscillatory methods in the preceding section, namely, WLS-ENOR, linear interpolation, and MMLS; we refer readers to (slattery2016mesh, ) for some comparisons of MMLS, projection, and RBF interpolation for repeated transfer of discontinuous functions. Figure 13 shows the results after 500 and 1000 steps of repeated transfer between the level-3 Delaunay and cubed-sphere meshes. WLS-ENOR performed the best in terms of both preserving monotonicity and minimizing diffusion. In contrast, linear interpolation was excessively diffusive, whereas MMLS was oscillatory and had a severe undershoot at the global minimum and a noticeable overshoot at the global maximum.
Finally, we report a comparison of WLS-ENOR using different degrees of polynomials in smooth regions. Figure 14 shows the results with quadratic, quartic, and sextic WLS (i.e., WLS-ENOR($p$,2) with $p = 2$, 4, and 6) after 1000 steps of repeated transfer on the level-2 and level-4 meshes. On the level-2 mesh, quadratic WLS was significantly more diffusive than the others near the global minimum, where two discontinuities interact. It also had some overshoots near discontinuities. This is because quadratic WLS does not have sufficient accuracy on coarser meshes. Quartic and sextic WLS produced similar results in smooth regions and near discontinuities. However, sextic WLS is more expensive, and it was more diffusive than quartic WLS on the level-2 mesh. This is because sextic WLS requires about 20 cells between discontinuities for them not to interfere with each other, whereas quartic WLS requires about 16 cells. These results show that WLS-ENOR(4,2) strikes a good balance among accuracy, efficiency, and robustness.
6 Conclusions
In this paper, we proposed a new method, called WLS-ENO Remap or WLS-ENOR in short, for transferring data between nonmatching meshes. The method utilizes quartic WLS reconstruction in smooth regions, and it selectively applies quadratic WLS-ENO reconstruction near discontinuities. For smooth functions, we proposed a new weighting scheme based on Buhmann’s radial function and optimized it to enable superconvergence of WLS remap. For discontinuous functions, we proposed a robust technique to detect $C^0$ and $C^1$ discontinuities with two novel indicators and a new dual-thresholding strategy. We also proposed a new weighting scheme for WLS-ENO that takes into account $C^0$ discontinuities for robustness. Overall, WLS-ENOR achieves higher than fifth-order accuracy and high conservation in smooth regions, and it is nonoscillatory and minimally diffusive near discontinuities. It compares favorably with other commonly used node-to-node remap techniques in accuracy, stability, conservation, and monotonicity preservation for almost all of our test cases.
In this work, we primarily focused on node-to-node transfer on spheres, which is most relevant to climate and earth-system modeling. We plan to extend the work to cell-averaged data, which are commonly used in finite volume methods. Future work will also include extending our method to more general surfaces, including those with geometric discontinuities, such as ridges and corners, as well as those with boundaries.
Acknowledgments
This work was supported under the Scientific Discovery through Advanced Computing (SciDAC) program in the US Department of Energy’s Office of Science, Office of Advanced Scientific Computing Research through subcontract #462974 with Los Alamos National Laboratory.
References
 [1] A. Adam, D. Pavlidis, J. R. Percival, P. Salinas, Z. Xie, F. Fang, C. C. Pain, A. Muggeridge, and M. D. Jackson. Higher-order conservative interpolation between control-volume meshes: Application to advection and multiphase flow problems with dynamic mesh adaptivity. J. Comput. Phys., 321:512–531, 2016.
 [2] R. Archibald, A. Gelb, and J. Yoon. Polynomial fitting for edge detection in irregularly sampled signals and images. SIAM J. Numer. Anal., 43(1):259–279, 2005.
 [3] R. Archibald, A. Gelb, and J. Yoon. Determining the locations and discontinuities in the derivatives of functions. Appl. Numer. Math., 58(5):577–592, 2008.
 [4] A. Beckert and H. Wendland. Multivariate interpolation for fluid-structure-interaction problems using radial basis functions. Aerosp. Sci. Technol., 5(2):125–134, 2001.
 [5] P. Bochev and M. Shashkov. Constrained interpolation (remap) of divergence-free fields. Comput. Meth. Appl. Mech. Engrg., 194(2-5):511–530, 2005.
 [6] M. Bozzini, L. Lenarduzzi, M. Rossini, and R. Schaback. Interpolation with variably scaled kernels. IMA J. Numer. Anal., 35(1):199–219, 2014.
 [7] M. Bozzini and M. Rossini. The detection and recovery of discontinuity curves from scattered data. J. Comput. Appl. Math., 240:148–162, 2013.
 [8] M. Buhmann. A new class of radial basis functions with compact support. Math. Comput., 70(233):307–318, 2001.
 [9] M. D. Buhmann. Radial basis functions. Acta Numer., 9:1–38, 2000.
 [10] M. D. Buhmann. Radial Basis Functions: Theory and Implementations, volume 12. Cambridge University Press, 2003.
 [11] H.-J. Bungartz, F. Lindner, B. Gatzhammer, M. Mehl, K. Scheufele, A. Shukaev, and B. Uekermann. preCICE – a fully parallel library for multiphysics surface coupling. Comput. Fluids, 141:250–258, 2016.

 [12] J. Canny. A computational approach to edge detection. In Readings in Computer Vision, pages 184–203. Elsevier, 1987.
 [13] D. Cates and A. Gelb. Detecting derivative discontinuity locations in piecewise continuous functions from Fourier spectral data. Numer. Algo., 46(1):59–84, 2007.
 [14] G. Chesshire and W. D. Henshaw. A scheme for conservative interpolation on overlapping grids. SIAM J. Sci. Comput., 15(4):819–845, 1994.
 [15] B. Cockburn, G. E. Karniadakis, and C.-W. Shu. The Development of Discontinuous Galerkin Methods. Springer, 2000.
 [16] P. Colella and P. R. Woodward. The piecewise parabolic method (PPM) for gasdynamical simulations. J. Comput. Phys., 54(1):174–201, 1984.
 [17] A. Craig, S. Valcke, and L. Coquart. Development and performance of a new version of the OASIS coupler, OASIS3-MCT_3.0. Geosci. Model Dev., 10(9):3297–3308, 2017.
 [18] A. de Boer, A. H. van Zuijlen, and H. Bijl. Comparison of conservative and consistent approaches for the coupling of non-matching meshes. Comput. Meth. Appl. Mech. Engrg., 197(49-50):4284–4297, 2008.
 [19] C. de Boor. Quasi-interpolants and approximation power of multivariate splines. In Computation of Curves and Surfaces, pages 313–345. Springer, 1990.
 [20] M. C. L. de Silanes, M. C. Parra, and J. J. Torrens. Vertical and oblique fault detection in explicit surfaces. J. Comput. Appl. Math., 140(1-2):559–585, 2002.
 [21] V. Dyedov, N. Ray, D. Einstein, X. Jiao, and T. Tautges. AHF: Array-based half-facet data structure for mixed-dimensional and non-manifold meshes. In J. Sarrate and M. Staten, editors, Proceedings of the 22nd International Meshing Roundtable, pages 445–464. Springer International Publishing, 2014.
 [22] P. Farrell and J. Maddison. Conservative interpolation between volume meshes by local Galerkin projection. Comput. Meth. Appl. Mech. Engrg., 200(14):89–100, 2011.
 [23] P. Farrell, M. Piggott, C. Pain, G. Gorman, and C. Wilson. Conservative interpolation between unstructured meshes via supermesh construction. Comput. Meth. Appl. Mech. Engrg., 198(33-36):2632–2642, 2009.
 [24] S. Fleishman, D. Cohen-Or, and C. T. Silva. Robust moving least-squares fitting with sharp features. ACM Trans. Graph., 24(3):544–552, 2005.
 [25] N. Flyer and G. B. Wright. Transport schemes on a sphere using radial basis functions. J. Comput. Phys., 226(1):1059–1084, 2007.
 [26] B. Fornberg and N. Flyer. The Gibbs phenomenon for radial basis functions. The Gibbs Phenomenon in Various Representations and Applications, pages 201–224, 2007.
 [27] J. Foster and F. Richards. The Gibbs phenomenon for piecewise-linear approximation. Amer. Math. Monthly, 98(1):47–49, 1991.
 [28] M. J. Gander and C. Japhet. Algorithm 932: PANG: software for non-matching grid projections in 2D and 3D with linear complexity. ACM Trans. Math. Softw., 40(1):6, 2013.
 [29] A. Gelb and E. Tadmor. Detection of edges in spectral data. Appl. Comput. Harmon. Anal., 7(1):101–135, 1999.
 [30] A. Gelb and E. Tadmor. Detection of edges in spectral data II. Nonlinear enhancement. SIAM J. Numer. Anal., 38(4):1389–1408, 2000.
 [31] A. Gelb and E. Tadmor. Adaptive edge detectors for piecewise smooth data based on the minmod limiter. J. Sci. Comput., 28(2-3):279–306, 2006.
 [32] J. W. Gibbs. Letter to the editor. Nature, pages 200 and 606, 1899.
 [33] S. K. Godunov. A difference method for numerical calculation of discontinuous solutions of the equations of hydrodynamics. Mat. Sb., 89(3):271–306, 1959.
 [34] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins, 4th edition, 2013.
 [35] D. Gottlieb and C.-W. Shu. On the Gibbs phenomenon and its resolution. SIAM Rev., 39(4):644–668, 1997.
 [36] S. Gottlieb and C.-W. Shu. Total variation diminishing Runge-Kutta schemes. Math. Comput., 67(221):73–85, 1998.
 [37] J. Grandy. Conservative remapping and region overlays by intersecting arbitrary polyhedra. J. Comput. Phys., 148(2):433–466, 1999.
 [38] R. L. Harder and R. N. Desmarais. Interpolation using surface splines. J. Aircraft, 9(2):189–191, 1972.
 [39] A. Harten. ENO schemes with subcell resolution. J. Comput. Phys., 83(1):148–184, 1989.
 [40] E. Hewitt and R. E. Hewitt. The GibbsWilbraham phenomenon: an episode in Fourier analysis. Arch. Hist. Exact Sci., 21(2):129–160, 1979.
 [41] N. J. Higham. A survey of condition number estimation for triangular matrices. SIAM Rev., 29:575–596, 1987.
 [42] C. Hill, C. DeLuca, M. Suarez, A. Da Silva, et al. The architecture of the earth system modeling framework. Comput. Sci. Engrg., 6(1):18, 2004.
 [43] C. Hu and C.-W. Shu. Weighted essentially non-oscillatory schemes on triangular meshes. J. Comput. Phys., 150(1):97–127, 1999.
 [44] J. Humpherys, T. J. Jarvis, and E. J. Evans. Foundations of Applied Mathematics, Volume I: Mathematical Analysis. SIAM, 2017.
 [45] J. W. Hurrell, M. M. Holland, P. R. Gent, S. Ghan, J. E. Kay, P. J. Kushner, J.-F. Lamarque, W. G. Large, D. Lawrence, K. Lindsay, et al. The community earth system model: a framework for collaborative research. B. Am. Meteorol. Soc., 94(9):1339–1360, 2013.
 [46] A. J. Jerri. The Gibbs Phenomenon in Fourier Analysis, Splines and Wavelet Approximations, volume 446. Springer, 2013.
 [47] G.-S. Jiang and C.-W. Shu. Efficient implementation of weighted ENO schemes. J. Comput. Phys., 126(1):202–228, 1996.
 [48] X. Jiao, H. Edelsbrunner, and M. T. Heath. Mesh association: formulation and algorithms. In International Meshing Roundtable, pages 75–82. Citeseer, 1999.
 [49] X. Jiao and M. T. Heath. Common-refinement-based data transfer between non-matching meshes in multiphysics simulations. Int. J. Numer. Meth. Engrg., 61(14):2402–2427, 2004.
 [50] X. Jiao and M. T. Heath. Overlaying surface meshes, part I: Algorithms. Int. J. Comput. Geom. Appl., 14(06):379–402, 2004.
 [51] X. Jiao and M. T. Heath. Overlaying surface meshes, part II: Topology preservation and feature matching. Int. J. Comput. Geom. Appl., 14(06):403–419, 2004.
 [52] X. Jiao and D. Wang. Reconstructing highorder surfaces for meshing. Engrg. Comput., 28:361–373, 2012.
 [53] X. Jiao and H. Zha. Consistent computation of first and secondorder differential quantities for surface meshes. In ACM Solid and Physical Modeling Symposium, pages 159–170. ACM, 2008.
 [54] G. R. Joldes, H. A. Chowdhury, A. Wittek, B. Doyle, and K. Miller. Modified moving least squares with polynomial bases for scattered data approximation. Appl. Math. Comput., 266:893–902, 2015.
 [55] P. W. Jones. First and secondorder conservative remapping schemes for grids in spherical coordinates. Mon. Weather Rev., 127(9):2204–2210, 1999.
 [56] W. Joppich and M. Kürschner. MpCCI – a tool for the simulation of coupled applications. Concurr. Comput.: Pract. Exper., 18(2):183–192, 2006.
 [57] L. Ju, T. Ringler, and M. Gunzburger. Voronoi tessellations and their application to climate and global modeling. In Numerical Techniques for Global Atmospheric Models, pages 313–342. Springer, 2011.
 [58] J.-H. Jung. A note on the Gibbs phenomenon with multiquadric radial basis functions. Appl. Numer. Math., 57(2):213–229, 2007.
 [59] J.-H. Jung, S. Gottlieb, and S. O. Kim. Iterative adaptive RBF methods for detection of edges in two-dimensional functions. Appl. Numer. Math., 61(1):77–91, 2011.
 [60] J.-H. Jung and B. D. Shizgal. Generalization of the inverse polynomial reconstruction method in the resolution of the Gibbs phenomenon. J. Comput. Appl. Math., 172(1):131–151, 2004.
 [61] G. Karniadakis and S. Sherwin. Spectral/hp Element Methods for Computational Fluid Dynamics. Oxford University Press, 2013.
 [62] S. E. Kelly. Gibbs phenomenon for wavelets. Appl. Comput. Harmon. Anal., 3(1):72–81, 1996.
 [63] D. E. Keyes, L. C. McInnes, C. Woodward, W. Gropp, E. Myra, M. Pernice, J. Bell, J. Brown, A. Clo, J. Connors, et al. Multiphysics simulations: challenges and opportunities. Int. J. High Perform. Comput. Appl., 27(1):4–83, 2013.
 [64] P. Lancaster and K. Salkauskas. Surfaces generated by moving least squares methods. Math. Comput., 37(155):141–158, 1981.
 [66] J. Larson, R. Jacob, and E. Ong. The model coupling toolkit: a new Fortran90 toolkit for building multiphysics parallel coupled models. Int. J. High Perform. Comput. Appl., 19(3):277–292, 2005.
 [67] P. H. Lauritzen and R. D. Nair. Monotone and conservative cascade remapping between spherical grids (CaRS): Regular latitude–longitude and cubed-sphere grids. Mon. Weather Rev., 136(4):1416–1432, 2008.
 [68] P. H. Lauritzen, R. D. Nair, A. Herrington, P. Callaghan, S. Goldhaber, J. Dennis, J. Bacmeister, B. Eaton, C. Zarzycki, M. A. Taylor, et al. NCAR release of CAM-SE in CESM2.0: A reformulation of the spectral element dynamical core in dry-mass vertical coordinates with comprehensive treatment of condensates and energy. J. Adv. Model Earth Sy., 10(7):1537–1570, 2018.
 [69] R. J. LeVeque. Numerical Methods for Conservation Laws, volume 132. Springer, 1992.
 [70] Y. Lipman and D. Levin. Approximating piecewise-smooth functions. IMA J. Numer. Anal., 30(4):1159–1183, 2009.
 [71] H. Liu and X. Jiao. WLS-ENO: Weighted-least-squares based essentially non-oscillatory schemes for finite volume methods on unstructured meshes. J. Comput. Phys., 314:749–773, 2016.
 [72] X.-D. Liu, S. Osher, and T. Chan. Weighted essentially non-oscillatory schemes. J. Comput. Phys., 115(1):200–212, 1994.
 [73] Y. Liu and Y.-T. Zhang. A robust reconstruction for unstructured WENO schemes. J. Sci. Comput., 54(2-3):603–621, 2013.
 [74] H. Luo, Y. Xia, S. Spiegel, R. Nourgaliev, and Z. Jiang. A reconstructed discontinuous Galerkin method based on a hierarchical WENO reconstruction for compressible flows on tetrahedral grids. J. Comput. Phys., 236:477–492, 2013.
 [75] V. Mahadevan, I. R. Grindeanu, N. Ray, R. Jain, and D. Wu. SIGMA release v1.2: capabilities, enhancements and fixes. Technical report, Argonne National Laboratory (ANL), Argonne, IL (United States), 2015.
 [76] L. Margolin and M. Shashkov. Second-order sign-preserving conservative interpolation (remapping) on general grids. J. Comput. Phys., 184(1):266–298, 2003.
 [77] A. A. Michelson. Letter to the editor. Nature, 58:544–545, 1898.
 [78] A. Petersen, A. Gelb, and R. Eubank. Hypothesis testing for Fourier based edge detection methods. J. Sci. Comput., 51(3):608–630, 2012.
 [79] W. M. Putman and S.-J. Lin. Finite-volume transport on various cubed-sphere grids. J. Comput. Phys., 227(1):55–78, 2007.
 [80] J. Qiu and C.-W. Shu. A comparison of troubled-cell indicators for Runge–Kutta discontinuous Galerkin methods using weighted essentially non-oscillatory limiters. SIAM J. Sci. Comput., 27(3):995–1013, 2005.
 [81] N. Ray, D. Wang, X. Jiao, and J. Glimm. High-order numerical integration over discrete surfaces. SIAM J. Numer. Anal., 50:3061–3083, 2012.
 [82] T. Rendall and C. Allen. Unified fluid–structure interpolation and mesh motion using radial basis functions. Int. J. Numer. Meth. Engrg., 74(10):1519–1559, 2008.
 [83] F. Richards. A Gibbs phenomenon for spline functions. J. Approx. Theory, 66(3):334–351, 1991.
 [84] L. Romani, M. Rossini, and D. Schenone. Edge detection methods based on RBF interpolation. J. Comput. Appl. Math., 349:532–547, 2019.
 [85] M. Rossini. Interpolating functions with gradient discontinuities via variably scaled kernels. Dolomites Research Notes on Approximation, 11(2), 2018.
 [86] R. Sadourny. Conservative finite-difference approximations of the primitive equations on quasi-uniform spherical grids. Mon. Weather Rev., 100(2):136–144, 1972.
 [87] R. Saxena, A. Gelb, and H. Mittelmann. A high order method for determining the edges in the gradient of a function. Comm. Comput. Phys., 5(2-4):694–711, 2009.
 [88] J. Shi, C. Hu, and C.-W. Shu. A technique of treating negative weights in WENO schemes. J. Comput. Phys., 175(1):108–127, 2002.
 [89] H.-T. Shim and H. Volkmer. On the Gibbs phenomenon for wavelet expansions. J. Approx. Theory, 84(1):74–95, 1996.
 [90] B. D. Shizgal and J.-H. Jung. Towards the resolution of the Gibbs phenomena. J. Comput. Appl. Math., 161(1):41–65, 2003.
 [91] C.-W. Shu. High order ENO and WENO schemes for computational fluid dynamics. In High-Order Methods for Computational Physics, pages 439–582. Springer, 1999.
 [92] C.-W. Shu. High order weighted essentially non-oscillatory schemes for convection dominated problems. SIAM Rev., 51(1):82–126, 2009.
 [93] S. Slattery, P. Wilson, and R. Pawlowski. The Data Transfer Kit: A geometric rendezvousbased tool for multiphysics data transfer. In International Conference on Mathematics & Computational Methods Applied to Nuclear Science & Engineering (M&C 2013), pages 5–9, 2013.
 [94] S. R. Slattery. Meshfree data transfer algorithms for partitioned multiphysics problems: Conservation, accuracy, and parallelism. J. Comput. Phys., 307:164–188, 2016.
 [95] E. Tadmor. Filters, mollifiers and the computation of the Gibbs phenomenon. Acta Numer., 16:305–378, 2007.
 [96] T. J. Tautges, C. Ernst, C. Stimpson, R. J. Meyers, and K. Merkley. MOAB: a mesh-oriented database. Technical report, Sandia National Laboratories, 2004.
 [97] C. R. Terai, P. M. Caldwell, S. A. Klein, Q. Tang, and M. L. Branstetter. The atmospheric hydrologic cycle in the ACME v0.3 model. Clim. Dyn., 50(9-10):3251–3279, 2018.
 [98] P. A. Ullrich, D. Devendran, and H. Johansen. Arbitrary-order conservative and consistent remapping and a theory of linear maps: Part II. Mon. Weather Rev., 144(4):1529–1549, 2016.
 [99] P. A. Ullrich and M. A. Taylor. Arbitrary-order conservative and consistent remapping and a theory of linear maps: Part I. Mon. Weather Rev., 143(6):2419–2440, 2015.
 [100] B. Van Leer. Towards the ultimate conservative difference scheme. V. A second-order sequel to Godunov’s method. J. Comput. Phys., 32(1):101–136, 1979.
 [101] A. Viswanathan, D. Cochran, A. Gelb, and D. M. Cates. Detection of signal discontinuities from noisy Fourier data. In 2008 42nd Asilomar Conference on Signals, Systems and Computers, pages 1705–1708. IEEE, 2008.
 [102] H. Wendland. Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Adv. Comput. Math., 4(1):389–396, 1995.
 [103] H. Wendland. Local polynomial reproduction and moving least squares approximation. IMA J. Numer. Anal., 21(1):285–300, 2001.
 [104] Wikipedia contributors. Table of spherical harmonics — Wikipedia, the free encyclopedia, 2019. [Online; accessed 24 July 2019].
 [105] H. Wilbraham. On a certain periodic function. The Cambridge and Dublin Mathematical Journal, 3:198–201, 1848.
 [106] Z. Wu. Compactly supported positive definite radial functions. Adv. Comput. Math., 4(1):283, 1995.
 [107] Z. Xu, Y. Liu, H. Du, G. Lin, and C.-W. Shu. Pointwise hierarchical reconstruction for discontinuous Galerkin and finite volume methods for solving conservation laws. J. Comput. Phys., 230(17):6843–6865, 2011.

 [108] M. Yang and Z.-J. Wang. A parameter-free generalized moment limiter for high-order methods on unstructured grids. In 47th AIAA Aerospace Sciences Meeting, page 605, 2009.
 [109] M. Zerroukat, N. Wood, and A. Staniforth. A monotonic and positive-definite filter for a semi-Lagrangian inherently conserving and efficient (SLICE) scheme. Quart. J. Roy. Meteor. Soc., 131(611):2923–2936, 2005.
 [110] M. Zerroukat, N. Wood, and A. Staniforth. The parabolic spline method (PSM) for conservative transport problems. Int. J. Numer. Meth. Fluids, 51(11):1297–1318, 2006.
 [111] Y.-T. Zhang and C.-W. Shu. Third order WENO scheme on three dimensional tetrahedral meshes. Comm. Comput. Phys., 5(2-4):836–848, 2009.
 [112] Z. Zhang and C. F. Martin. Convergence and Gibbs’ phenomenon in cubic spline interpolation of discontinuous functions. J. Comput. Appl. Math., 87(2):359–371, 1997.
 [113] D. Ziou and S. Tabbone. Edge detection techniques–an overview. Pattern Recognition and Image Analysis, 8:537–559, 1998.
 [114] A. Zygmund. Trigonometric Series. Cambridge Press, 1959.