1 Introduction
In recent years, interest in so-called metamaterials has grown rapidly. These materials are typically engineered at the nano- or microscale to exhibit unusual properties, such as band gaps and negative refraction [26, 39, 27]. A common example is photonic crystals, which are composed of periodic cell structures interacting with light.
To model photonic crystals and other metamaterials properly, a numerical method is needed that can handle such heterogeneous (or multiscale) media efficiently. It is well known that classical finite element methods struggle to produce good approximations in this case, unless the mesh width is sufficiently small to resolve the microscopic variations intrinsic to the problem. Indeed, the mesh typically needs to be fine enough to resolve all features in the medium. This leads to issues with computational cost and available memory, which can often only be overcome by so-called multiscale methods. These methods are based on solving local, decoupled problems on the microscale, whose solutions are used to correct the original model and turn it into a homogenized macroscale model that can be solved cheaply. In the context of electromagnetic waves in multiscale media, corresponding methods were proposed in, e.g., [8, 24, 40, 35, 41, 20, 10, 25, 9].
In [17] a method based on the Localized Orthogonal Decomposition (LOD) framework was proposed for multiscale curl-curl problems with essential boundary conditions. The LOD technique was first introduced in [32] for elliptic equations and was later developed to cover many other types of equations and spaces, see, e.g., [2, 7, 18, 21, 29, 30, 31, 36, 33]
and the references therein. The technique relies on a splitting of the solution space into a coarse and a fine part, using a suitable (quasi-)interpolation operator. The coarse part is typically a classical finite element space, and the fine part contains all the details not captured by the coarse space. The basis functions of the coarse space are then enhanced by computing so-called correctors in the detail space. Since these correctors decay exponentially, they are quasi-local and can be computed efficiently by solving small problems. The resulting space (consisting of the corrected coarse basis functions) can be used in a Galerkin approach to find approximations that exhibit superior approximation properties in the energy norm when compared with the original coarse space. These results hold without assumptions on the structure of the problem, such as periodicity, scale separation or symmetry.
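Schematically, and in notation we introduce only for this sketch (full solution space $V$, coarse finite element space $V_H$, stable quasi-interpolation $\pi_H$, corrector operator $Q$), the splitting underlying the LOD can be written as

```latex
% Illustrative sketch of the LOD splitting; the symbols V, V_H, \pi_H, W, Q
% are placeholders chosen for exposition, not necessarily the paper's notation.
V = V_H \oplus W, \qquad W := \ker(\pi_H),
\qquad
\tilde{\lambda}_j := \lambda_j - Q \lambda_j \in V_H \oplus W,
\qquad
V_H^{\mathrm{LOD}} := \operatorname{span}\{ \tilde{\lambda}_j \},
```

where each coarse basis function $\lambda_j$ is enhanced by its corrector $Q\lambda_j \in W$, and the Galerkin method is then posed in the corrected space $V_H^{\mathrm{LOD}}$.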
Despite the wide range of successful applications, a construction of the LOD for conforming Nédélec finite element spaces turned out to be very challenging. The reason was the absence of suitable projection operators, which are at the heart of the method (since corrections are always constructed in the kernel of such operators). The required projections need to be computable, local, stable, and they need to commute with exterior derivatives. With the groundbreaking work by Falk and Winther [14], such a projection was finally found and paved the way for extending the LOD framework to such spaces [17]. However, so far, the construction by Falk and Winther is only explicitly available for de Rham complexes with natural boundary conditions, whereas the theoretical results in [17] are obtained for curl-curl problems with essential boundary conditions. This discrepancy (which turns out to be a nontrivial issue) prevented a practical implementation of the LOD until now. Hence, the main objective of this paper is to explain how the method can indeed be used for practical computations and to demonstrate its performance in numerical experiments. Here we have several goals. First, we show how the LOD can be extended from essential boundary conditions to natural boundary conditions. This enables us to use the explicitly known Falk–Winther projection constructed in [14], which we used in our implementation. For the arising method we prove that linear convergence is obtained whenever the source term fulfills an additional regularity condition. For general right-hand sides we prove convergence of reduced order. In order to restore full linear convergence for general sources, we propose the use of so-called source-term correctors. As it turns out, such source-term correctors can also be a helpful tool for solving the problem with essential boundary conditions, since they allow the use of the same Falk–Winther projection as for the case of natural boundary conditions.
With this, we are able to practically solve the same types of problems as considered in [17]. All the proposed methods were implemented, and we include numerical examples to support our theoretical findings. To the authors’ knowledge, this is also the first time that the edge-based Falk–Winther projection has been implemented; the code for the projection (using the FEniCS software [28]) is available at [1]. Finally, we also demonstrate numerically that if we use a projection that does not locally commute with exterior derivatives, then the arising LOD suffers from reduced convergence rates. This shows that the required features of the projection are not mere artifacts of the proof technique but are central to obtaining a converging method.
Another method based on the LOD framework was recently proposed in [37]. A different projection is constructed by utilizing the discrete Helmholtz decomposition of the fine detail space. Numerical experiments show that the proposed projection has the localization property, but an analytical proof of this property remains open.
The paper is organized as follows. In Section 2, the curl-curl problem is introduced, and in Section 3 the method for natural boundary conditions is described. Essential boundary conditions are briefly discussed in Section 4. Finally, numerical experiments are presented in Section 5. The construction of the Falk–Winther projection and a discussion of how to implement it are deferred to the appendix.
2 Problem setting and notation
Given a computational domain , we consider the curlcurl problem of the following form. Find such that
(1)  
where is the outward unit normal to . In the context of the time-harmonic Maxwell’s equations, describes the (unknown) electric field and the right-hand side is a source term related to current densities. The coefficients and are typically complex-valued scalar parameter functions that describe material properties. In particular, they depend on the electric permittivity and the conductivity of the medium through which the electric field propagates. We implicitly assume that the propagation medium is a highly variable “multiscale medium”, i.e. the coefficients and are rapidly varying on a fine scale. In this work we mainly focus on natural boundary conditions (or Neumann-type boundary conditions), which are of the form on . In the context of Maxwell’s equations this corresponds to a boundary condition for (a 90 degree rotation of) the tangential component of the magnetic field.
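For orientation, a prototypical strong form of such a curl-curl problem with natural boundary conditions can be sketched as follows; the symbols $\mu$, $\kappa$, $u$, $f$ and $n$ are placeholders chosen here for illustration and need not coincide with the notation of (1):

```latex
\nabla \times \big( \mu \, \nabla \times u \big) + \kappa \, u = f
  \quad \text{in } \Omega,
\qquad
\big( \mu \, \nabla \times u \big) \times n = 0
  \quad \text{on } \partial\Omega .
```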
In order to state a variational formulation of problem (1), we introduce a few spaces. For that purpose, let denote an arbitrary bounded Lipschitz domain. We define the space of functions on by
The space is equipped with the standard inner product
where is the complex inner product. Furthermore, we define the space of functions by
with corresponding inner product
The restriction of to functions with a zero normal trace is given by
If we drop the dependency on , we always mean the corresponding space with . With this we consider the following variational formulation of problem (1): find such that
(2) 
Observe that the natural boundary condition on is intrinsically incorporated in the variational formulation.
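In placeholder notation (coefficients $\mu$, $\kappa$ and source $f$, chosen here only for illustration), a variational formulation of this type typically reads: find $u \in H(\mathrm{curl}, \Omega)$ such that

```latex
\mathcal{B}(u, v) :=
  \big( \mu \, \nabla \times u, \nabla \times v \big)_{L^2(\Omega)}
  + \big( \kappa \, u, v \big)_{L^2(\Omega)}
  = (f, v)_{L^2(\Omega)}
  \qquad \text{for all } v \in H(\mathrm{curl}, \Omega).
```

No boundary term appears in the sesquilinear form, which is precisely why the natural condition is incorporated intrinsically.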
For problem (2) to be welldefined, we shall also make a set of assumptions on the data, which read as follows.

(A1) is an open, bounded, contractible polyhedron with a Lipschitz boundary.

(A2) The coefficients fulfill and .

(A3) The source term fulfills .

(A4) The coefficients are such that the sesquilinear form given by (2) is elliptic on , i.e. there exists such that

We assume that this property is independent of the computational domain (i.e. we can restrict the sesquilinear form to subdomains of and still have ellipticity).
Under the above assumptions (A1)–(A4), we have that (2) is well-defined by the Lax–Milgram–Babuška theorem [5]. Note that none of the assumptions is very restrictive and that they are fulfilled in many physical setups (cf. [17, 16] for discussions and examples). In [40] it was also demonstrated that the coercivity can be relaxed to an inf-sup stability condition.
3 Numerical homogenization for natural boundary conditions
In this section, we follow the arguments presented in [17] for the case of essential boundary conditions (i.e. on ) and transfer them to the case of natural boundary conditions as given in (1).
3.1 Basic discretization and multiscale issues
Let us start with some basic notation that is used for a conventional discretization of (1). This setting will also be used to exemplify why such a standard discretization of multiscale problems is problematic.
Throughout this paper, we make the following assumptions on the discretization.

(B1) is a shape-regular and quasi-uniform partition of , in particular we have .

(B2) The elements of are (closed) tetrahedra and are such that any two distinct elements are either disjoint or share a common vertex, edge or face.

(B3) The mesh size (i.e. the maximum diameter of an element of ) is denoted by .
On we consider lowest-order , and conforming elements. Here, denotes the space of piecewise affine and continuous functions (i.e. scalar-valued first-order Lagrange finite elements) and we set as the subset with a zero trace on . The lowest-order Nédélec finite element (cf. [34, Section 5.5]) is denoted by
Later on, we will also need the lowest-order Raviart–Thomas finite element space, which is given by
With this, a standard conforming discretization of problem (2) is to find such that
(3) 
As a Galerkin approximation, is a quasi-best approximation in the norm and fulfills the classical estimate
(4) 
A serious issue arises if and are multiscale coefficients that are rapidly oscillating with a characteristic length scale of the oscillations that is of order . In this case, both and will blow up, even for differentiable coefficients and . Even worse, if and are discontinuous, then the required regularity might not be available at all (cf. [11, 12, 6]). In such a setting the estimate (4) becomes meaningless, as convergence rates can become arbitrarily slow. On top of that, convergence can only be observed in the fine-scale regime . This imposes severe restrictions on the mesh resolution, and the cost of solving (3) can become tremendous. It is worth mentioning that the picture does not change if we look at the error in the norm. This is because and are typically of the same order, due to the large kernel of the curl operator. In particular, for large mesh sizes the space does not contain any good approximations of . This is a crucial difference to elliptic problems, where good approximations on coarse meshes are at least theoretically available (though they are typically not found by standard Galerkin methods).
3.2 De Rham complex and subscale correction operators
As discussed in the previous subsection, a conventional discretization of the problem (2) is problematic as it requires very fine meshes and hence very highdimensional discrete spaces. Motivated by this issue, the question posed in [17] is whether it is possible to construct a quasilocal subscale correction operator that acts on and which only depends on (but not on the source term ), such that the original multiscale problem can be replaced by a numerically homogenized equation. By “numerically homogenized equation” we mean an equation that can be discretized with a Galerkin method in the standard (coarse) space , such that the corresponding (homogenized) solution is a good coarse scale approximation of (in the dual space norm , cf. [17] for details) and such that (i.e. homogenized solution plus corrector) yields a good approximation of in the norm. The (numerically) homogenized equation takes the form
and provided that is available, this can be solved cheaply in a coarse (and hence lowdimensional) space . In [17], such a homogenized problem was obtained for essential boundary conditions using the technique of Localized Orthogonal Decompositions (LOD).
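Schematically, and in notation we introduce only for this sketch (sesquilinear form $\mathcal{B}$, corrector operator $\mathcal{C}$, coarse Nédélec space $V_H$), such a numerically homogenized problem has the shape: find $u_H \in V_H$ with

```latex
\mathcal{B}\big( (\mathrm{id} + \mathcal{C})\, u_H, \; (\mathrm{id} + \mathcal{C})\, v_H \big)
  = \big( f, \, (\mathrm{id} + \mathcal{C})\, v_H \big)_{L^2(\Omega)}
  \qquad \text{for all } v_H \in V_H,
```

so that the fine-scale information enters only through the precomputed corrector $\mathcal{C}$.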
The central tool in the construction of a suitable corrector operator is the existence of local and stable projection operators that commute with exterior derivatives. To be precise, we consider the de Rham complex with natural boundary conditions of the form
and the corresponding finite element subcomplex (of lowest-order spaces)
where is the space of piecewise constant functions. In this setting (and in more general form), Falk and Winther [14] proved the existence of stable and local projections between any Hilbert space of the de Rham complex and the corresponding discrete space of the finite element subcomplex, so that the following diagram commutes:
Here, stands for vertices, for edges, for faces and for tetrahedra. In particular (and essential for our setting) is the commutation
(5) 
The operators are local (i.e. local information can only spread in small nodal environments) and they admit local stability estimates, for , of the form
(6) 
for all , where stands for either of the spaces , or , the seminorm is given by where stands for the exterior derivative on and stands for the corresponding projection operator from the commuting diagram. The constant in the estimate is generic and the patch is a small environment of that consists of elements from and which has a diameter of order . We refer the reader to [14, Theorem 2.2] for further details on this aspect.
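Writing $\pi^E_h$ and $\pi^F_h$ for the edge- and face-based Falk–Winther projections (labels of our choosing, for illustration), the commutation property used in (5) has the form

```latex
\nabla \times \big( \pi^E_h v \big) = \pi^F_h \big( \nabla \times v \big)
  \qquad \text{for all } v \in H(\mathrm{curl}, \Omega),
```

i.e. projecting and then differentiating gives the same result as differentiating and then projecting.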
The idea proposed in [17] is now to correct/enrich the conventional discrete spaces by information from the kernel of the projections. Due to the direct and stable decomposition
where , we know that the exact solution to the curlcurlproblem (2) can be uniquely decomposed into a coarse contribution in and a fine (or detail) contribution in , i.e.
(7) 
In order to identify and characterize the components of this decomposition, a Green’s corrector operator
can be introduced. For it is defined by the equation
(8) 
It is well-defined by the Lax–Milgram–Babuška theorem due to assumption (A4) and since is a closed subspace, being the kernel of a stable projection. The main feature of the Green’s corrector is that it allows us to characterize the stable decomposition of the exact solution to problem (2) as
(9) 
where is the coarse part. The operator is the differential operator associated with , i.e. .
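In placeholder notation (projection $\pi^E_h$ onto the coarse space, Green's corrector $\mathcal{G}$, associated differential operator $\mathcal{A}$, detail space $W$), the decomposition (9) can be sketched as

```latex
u \;=\; \underbrace{\pi^E_h u}_{\text{coarse part in } V_H}
  \;+\; \underbrace{\mathcal{G}\big( f - \mathcal{A}\, \pi^E_h u \big)}_{\text{fine part in } W},
```

which expresses the fine part of the solution through the Green's corrector applied to the residual of the coarse part.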
3.3 Idealized multiscale method for problems
In the previous subsection we saw that the exact solution can be decomposed into , where . This motivates the definition of the ideal corrector operator as
and to consider a numerical homogenized equation of the form: find such that
(10) 
Since this is a Galerkin approximation, Céa’s lemma implies that is a quasi-best approximation of in and hence
In fact, if and are self-adjoint, it can be shown that and hence the error has the exact characterization (this can be proved analogously to [17, Theorem 10]).
As we just saw, in order to quantify the discretization error , all we have to do is to estimate . Since is a contractible domain, we know that admits a regular decomposition (cf. [22, 23]) of the form
(11) 
and with
(12) 
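In generic notation (our choice, for illustration), a regular decomposition of an $H(\mathrm{curl})$ function $w$ has the shape

```latex
w = z + \nabla \theta, \qquad z \in H^1(\Omega)^3, \quad \theta \in H^1(\Omega),
```

i.e. $w$ is split into a smooth vector field carrying the curl and a gradient part lying in the kernel of the curl operator.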
Exploiting the commuting property of the Falk–Winther projections, we can write as
(13) 
By the Bramble–Hilbert lemma together with the local stability estimates (6), we easily see that for any it holds
(14) 
and analogously
(15) 
Again, is a generic environment of that consists of a few (i.e. ) elements from . Observe that this yields a local regular decomposition of interpolation errors in the spirit of Schöberl (cf. [38, Theorem 1] and [17, Lemma 4]).
Next, we use the identity (13) in the definition of the Green’s corrector to obtain
(16)  
Using the interpolation error estimates (14) and (15) for the Falk–Winther projections, we immediately have
(17) 
and
(18) 
It remains to estimate the last term. For that, let denote the set of faces (from our triangulation) that are located on the domain boundary . We obtain with the local trace inequality
(20) 
Here we used the regularity of the mesh and the fact that each neighborhood contains only elements. Combining the estimates (16), (17), (18) and (20), we obtain the following conclusion.
Conclusion 3.1.
From Conclusion 3.1 we see that in general we expect the error to be dominated by the term . Note that in the case of essential boundary conditions, the problematic term will always vanish since the corresponding regular decomposition guarantees that on (cf. [22, 38]).
Before we discuss how to restore the linear convergence for source terms with arbitrary normal trace, we need to discuss how to localize the corrector operator .
3.4 Localized multiscale method for problems
From a computational point of view, the ideal multiscale method (10) is not feasible, as it requires the computation of global correctors for each basis function of the Nédélec space . The equation that describes a corrector reads
This problem has the same computational complexity as the original curl-curl problem. Solving it for every basis function would hence multiply the original computing costs. To avoid this issue, we first split the global corrector into a set of element correctors . For a tetrahedron , the corresponding element corrector is defined as
(21) 
where . Obviously, we have
(22) 
The advantage of this formulation is that we can now exploit the strong localization of the element correctors . In fact, they show an extremely fast decay to zero away from the element . The decay can be quantified and is exponential in units of . This is a consequence of the local support of the source term (i.e. ) and the fact that maps into the kernel of the Falk–Winther projection. This allows us to restrict the computational domain in (21) to small environments of that have a diameter of order . The decay was first proved in [17, Theorem 14] for the case of essential boundary conditions (i.e. for on ). For natural boundary conditions we can proceed similarly (with minor modifications) to obtain the following result.
Lemma 3.2.
Given , we define the (open) layer neighborhood recursively by
On these patches (for ) we define the truncated element correctors
as solutions to
(23) 
For , an approximation of the global corrector is given (in the spirit of (22)) by
The error between the truncated approximation and the ideal corrector can be bounded by
where and are generic constants that are independent of , or the speed of the variations of and .
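The recursive layer neighborhoods from Lemma 3.2 can be computed with a simple breadth-first growth over element adjacency. The following Python sketch (toy data structures of our own choosing, not tied to any particular FE library) illustrates how an m-layer patch around an element grows; here two elements count as adjacent when they share at least one vertex:

```python
# Toy sketch: m-layer element patches via vertex adjacency (illustrative only).
from collections import defaultdict

def element_patches(elements, T, m):
    """Return the set of element indices in the m-layer neighborhood of element T.

    `elements` is a list of vertex tuples; two elements are adjacent if they
    share at least one vertex, so one iteration adds one layer of neighbors.
    """
    # Map each vertex to the set of elements containing it.
    vertex_to_elems = defaultdict(set)
    for idx, verts in enumerate(elements):
        for v in verts:
            vertex_to_elems[v].add(idx)

    patch = {T}
    for _ in range(m):  # grow the patch one layer at a time
        grown = set(patch)
        for e in patch:
            for v in elements[e]:
                grown |= vertex_to_elems[v]
        patch = grown
    return patch

# 1D chain of "elements" sharing endpoints: element i has vertices (i, i+1).
chain = [(i, i + 1) for i in range(10)]
print(sorted(element_patches(chain, 5, 0)))  # → [5]
print(sorted(element_patches(chain, 5, 1)))  # → [4, 5, 6]
print(sorted(element_patches(chain, 5, 2)))  # → [3, 4, 5, 6, 7]
```

On a tetrahedral mesh the same loop applies verbatim with 4-tuples of vertex indices; the patch diameter then grows like m times the mesh size, matching the scaling in the lemma.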
Remark 3.3 (Boundary conditions of the truncated element correctors).
Observe that the truncated element correctors given by (23) have mixed boundary conditions. On they fulfill essential boundary conditions, i.e. , and on they fulfill natural boundary conditions, i.e. . This is in contrast to the ideal element correctors given by (21) which only involve natural boundary conditions.
From Lemma 3.2 we see that we can approximate the global corrector efficiently by computing a set of element correctors with a small local support . With this we can formulate the following main theorem on the overall accuracy of the localized multiscale scheme (which belongs to the class of LOD methods, cf. [17, 19, 32]).
Theorem 3.4.
Let denote the exact solution to the curl-curl problem with natural boundary conditions. Furthermore, let denote the corrector approximation from Lemma 3.2 and let denote the unique solution to the numerically homogenized equation
Then the following error estimate holds
(24) 
In particular, if and , then we have optimal order convergence in , i.e. it holds
Proof.
Theorem 3.4 shows that we can still achieve the same order of accuracy as for the ideal multiscale method (10) if the number of layers is selected proportional to . Practically, this means that in order to compute a sufficiently accurate corrector , we need to solve local problems on small patches with a diameter of order . The number of these local problems equals the number of all tetrahedra times the number of edges of a tetrahedron (which is equal to the number of basis functions of that have support on a tetrahedron ). Hence, the total number of local problems is . All these problems are small, cheap to solve and fully independent of each other, which allows for parallelization in a straightforward way.
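Since the local corrector problems are mutually independent, the corrector computation parallelizes trivially. A schematic Python sketch of this control flow follows; the local solve is a stand-in stub of our own invention (a real implementation would assemble and solve the truncated patch problem (23) for the given element):

```python
# Schematic: element correctors are independent, so they can run in parallel.
from concurrent.futures import ThreadPoolExecutor

def solve_element_corrector(T):
    """Stand-in for solving the local patch problem attached to element T.

    A real implementation would assemble the truncated corrector problem on
    the patch around T and solve it; here we return a tag so that only the
    control flow is visible.
    """
    return (T, f"corrector_{T}")

elements = range(8)  # toy element indices
with ThreadPoolExecutor() as pool:
    correctors = dict(pool.map(solve_element_corrector, elements))

# The global corrector is then assembled by summing the element contributions.
print(correctors[3])  # → corrector_3
```

The same pattern works with a process pool or MPI, since no data is shared between the local solves until the final summation.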
There are two more issues to be discussed. First, we need a computational representation of the Falk–Winther projection operator . As this summarizes the results of the previous works [14, 17] and in particular [15], we postpone it to the appendix. The second remaining issue is whether and how we can restore full linear convergence for the error in (24) if does not have a vanishing normal trace. We investigate this question in the next subsection.
3.5 Source term and boundary corrections
In Theorem 3.4 we have seen that we cannot in general expect full linear convergence in the mesh size , unless . The estimate (24) suggests a convergence of order , which we can also confirm numerically (cf. Section 5.2). Hence, as the full linear rate is typically not achieved for arbitrary source terms, the question arises whether we can modify the method accordingly.
One popular way to handle the influence of dominating source terms or general (inhomogeneous) boundary conditions is to compute source and boundary corrections (cf. [19] for details in the context of elliptic problems). Here we recall decomposition (9), which says that we can write the exact solution as
Practically, we can approximate in the spirit of Lemma 3.2 by a corrector based on localization. This means that for each , we solve for an element source corrector with
The global source corrector is then given by
After this, we can solve the corrected homogenized problem which is of the following form: find such that
for all . The final approximation is given by
and it is possible to prove that the error converges exponentially fast in to zero:
The error estimate can be proved using similar arguments as in the proof of Theorem 3.4. Here we recall that is independent of and . By selecting , we can achieve the convergence rate for any . For very large (i.e. ) the method is exact.
Remark 3.5 (General boundary conditions).
Note that this strategy can also be valuable for handling general non-homogeneous boundary conditions. For example, considering the boundary condition on (which can be straightforwardly incorporated into the variational formulation), it is possible to compute boundary correctors to increase the accuracy. This works analogously to the case of source correctors, i.e. one needs to compute element boundary correctors for each boundary edge . The edge boundary correctors solve problems of the form
Though the above strategy can potentially yield any degree of accuracy, it involves a computational overhead, since it additionally requires computing the whole set of element source correctors . One way to reduce this overhead is to only compute correctors for which is close to the boundary. For example, we can set
In this case we have for the error
Here, the first three terms show an exponential convergence with order , whereas the last term is not polluted by the boundary influence and converges with order , since .
4 Numerical homogenization for essential boundary conditions
In the following we consider the curl-curl problem with essential boundary conditions, as originally done in [17], i.e. we seek such that
(25)  
For the variational formulation we denote the restriction of to functions with a vanishing tangential trace on by
Hence, the weak formulation of (25) becomes: find with
Assuming again (A1)–(A4), this problem is well-posed.
To discretize the problem, let the subspace of containing functions with a homogeneous tangential trace be given by
With this, a multiscale approximation analogous to the case of natural boundary conditions (cf. Section 3) can be obtained by considering Falk–Winther projections between the spaces with vanishing traces (cf. [17, 14]), i.e. local and stable projections
with the commuting property for all . In this setting, the following result can be proved (see [17]):
Recall the notation from Section 3.4. For and , let the local detail space be given by
and the element correctors be the solutions to
(26) 
We define the global corrector as usual by . With this, the final multiscale approximation is given by with
(27) 
and the following error estimate holds