1 Introduction
Motivation.
It is well-known that an accurate numerical solution to high-frequency wave problems is difficult to obtain. Acoustic or electromagnetic wave propagation problems with high frequency in the time-harmonic setting lead to partial differential equations (PDEs) with an indefinite Helmholtz or Maxwell operator, respectively. Therefore, many advanced solver techniques for Hermitian positive definite discrete systems are not directly applicable to these problems. Additionally, finite element discretizations typically depend on satisfying the Nyquist criterion, implying that all propagating wave frequencies must be “resolved” to a certain extent in order to obtain a stable discretization. Even though advanced finite element methods such as the discontinuous Petrov–Galerkin (DPG) method can circumvent the stability problem and deliver a robust discretization for any wave number [demkowicz2012wavenumber, petrides2019phd, petrides2017multigrid], they do not eliminate the numerical pollution error in multiple dimensions. Pollution can manifest itself in different forms: commonly, we observe a diffusive effect causing wave attenuation and/or a dispersive effect resulting in a phase shift. Controlling the pollution error is critical for obtaining accurate results.
Background and literature.
Numerical pollution in wave propagation has been studied extensively for the Bubnov–Galerkin finite element method, as well as for discontinuous Galerkin methods, least-squares methods, and various other approaches (see [babuska1997pollution, ihlenburg1997finite, babuska1999dispersion, ainsworth2006dispersive, melenk2011conv, demkowicz2012wavenumber] and references therein); however, to the best of our knowledge, numerical studies have mostly been limited to acoustic wave problems with a moderate number of wavelengths. In this paper, we report numerical results for the DPG method applied to the 3D vectorial time-harmonic Maxwell problem in long waveguides with more than 8000 wavelengths and high order of approximation. We attempt to corroborate theoretical results by Melenk and Sauter [melenk2011conv] and provide guidance as to how to best discretize wave problems with high frequency.
Outline.
First, we introduce the DPG methodology and discuss how the pollution error enters the DPG formulation as a perturbation parameter in the best approximation, in contrast to the classical Bubnov–Galerkin method, where the perturbation parameter is present in the stability constant. We briefly go over the mathematical setting for the time-harmonic Maxwell equations in a linear waveguide in the context of the broken ultraweak variational formulation. For the numerical pollution study, we report experiments for a simple case with one propagating mode in a rectangular waveguide, with the intent of eliminating other effects unrelated to pollution. We discuss our findings in light of theoretical pollution error estimates from the literature. For waveguides with multiple propagating modes, we discuss different adaptive refinement strategies; in particular, we show numerical results for multimode fiber simulations illustrating the need for mesh adaptivity in the numerical solution of such problems. For distributed-memory parallel simulations, we emphasize the importance of dynamic load balancing for this particular problem.
2 The DPG methodology
The DPG method of Demkowicz and Gopalakrishnan [demkowicz2011part2, demkowicz2012part3, demkowicz2017dpg] has been used to solve applications in viscoelasticity [fuentes2017viscoelasticity], acoustic and electromagnetic wave propagation [petrides2017multigrid, nagaraj2018raman, henneking2019fiber], compressible fluid dynamics [chan2014dpg], and linear elasticity [keith2016elasticity]. Its stability properties make it particularly applicable to high-frequency wave propagation problems, where preasymptotic stability is essential for driving efficient adaptivity [petrides2019phd]. Through on-the-fly computation of problem-dependent optimal test functions, the DPG method guarantees a stable discretization for any well-posed linear variational problem.
2.1 The ideal DPG method
Consider an abstract variational problem of the form:
(2.1) Find $u \in \mathcal{U}$ such that $b(u,v) = \ell(v)$ for all $v \in \mathcal{V}$,
where $\mathcal{U}$ (trial space) and $\mathcal{V}$ (test space) are Hilbert spaces, and $b(\cdot,\cdot)$ is a continuous bilinear (sesquilinear) form on $\mathcal{U} \times \mathcal{V}$ (with continuity constant $M$),
(2.2) $|b(u,v)| \leq M \, \|u\|_{\mathcal{U}} \, \|v\|_{\mathcal{V}}, \quad u \in \mathcal{U},\ v \in \mathcal{V},$
that satisfies the continuous inf-sup condition (with inf-sup constant $\gamma$),
(2.3) $\inf_{u \in \mathcal{U} \setminus \{0\}} \ \sup_{v \in \mathcal{V} \setminus \{0\}} \ \frac{|b(u,v)|}{\|u\|_{\mathcal{U}} \|v\|_{\mathcal{V}}} = \gamma > 0.$
Additionally, the continuous linear (antilinear) form $\ell \in \mathcal{V}'$ satisfies the compatibility condition,
(2.4) $\ell(v) = 0 \quad \forall v \in \mathcal{V}_0 := \{ v \in \mathcal{V} : b(u,v) = 0 \ \forall u \in \mathcal{U} \}.$
Let $\mathcal{U}'$ and $\mathcal{V}'$ denote the spaces of continuous linear (antilinear) functionals on $\mathcal{U}$ and $\mathcal{V}$, respectively. By the Babuška–Nečas theorem, the variational problem (2.1) is well-posed, i.e., there exists a unique solution $u \in \mathcal{U}$ that depends continuously upon the data,
(2.5) $\|u\|_{\mathcal{U}} \leq \frac{1}{\gamma} \, \|\ell\|_{\mathcal{V}'}.$
Consider finite-dimensional subspaces $\mathcal{U}_h \subset \mathcal{U}$ and $\mathcal{V}_h \subset \mathcal{V}$, where $\dim \mathcal{U}_h = \dim \mathcal{V}_h$, and the corresponding discrete abstract variational problem:
(2.6) Find $u_h \in \mathcal{U}_h$ such that $b(u_h, v_h) = \ell(v_h)$ for all $v_h \in \mathcal{V}_h$.
If the discrete inf-sup condition is satisfied, i.e.,
(2.7) $\inf_{u_h \in \mathcal{U}_h \setminus \{0\}} \ \sup_{v_h \in \mathcal{V}_h \setminus \{0\}} \ \frac{|b(u_h,v_h)|}{\|u_h\|_{\mathcal{U}} \|v_h\|_{\mathcal{V}}} = \gamma_h > 0,$
then the discrete problem (2.6) is well-posed; and by Babuška’s theorem [babuska1971error],
(2.8) $\|u - u_h\|_{\mathcal{U}} \leq \frac{M}{\gamma_h} \, \inf_{w_h \in \mathcal{U}_h} \|u - w_h\|_{\mathcal{U}},$
where $u$ is the exact solution of (2.1), $M/\gamma_h$ is the stability constant, and the best approximation error is measured in the trial norm $\|\cdot\|_{\mathcal{U}}$. The continuous inf-sup condition (2.3) does not in general imply the discrete inf-sup condition (2.7).
In the DPG method, the issue of discrete stability is solved by finding a unique test space, called the optimal test space $\mathcal{V}^{\mathrm{opt}}$. Given any trial space $\mathcal{U}_h$, define its optimal test space by,
(2.9) $\mathcal{V}^{\mathrm{opt}} := T(\mathcal{U}_h),$
where the trial-to-test operator $T : \mathcal{U} \to \mathcal{V}$ is defined by,
(2.10) $(Tu, v)_{\mathcal{V}} = b(u,v) \quad \forall v \in \mathcal{V}.$
For any $u \in \mathcal{U}$, equation (2.10) uniquely defines $Tu \in \mathcal{V}$ by the Riesz representation theorem. Let $B : \mathcal{U} \to \mathcal{V}'$ denote the linear operator induced by the form $b$,
(2.11) $\langle Bu, v \rangle_{\mathcal{V}' \times \mathcal{V}} := b(u,v),$
where $\langle \cdot, \cdot \rangle_{\mathcal{V}' \times \mathcal{V}}$ denotes the duality pairing on $\mathcal{V}' \times \mathcal{V}$. Then, $T = R_{\mathcal{V}}^{-1} B$, where $R_{\mathcal{V}} : \mathcal{V} \to \mathcal{V}'$ is the Riesz map. In other words, for every trial function $u_h \in \mathcal{U}_h$, the trial-to-test operator defines a unique optimal test function $Tu_h \in \mathcal{V}$. The optimal test functions realize the supremum in the inf-sup condition. Indeed,
(2.12) $\sup_{v \in \mathcal{V} \setminus \{0\}} \frac{|b(u_h,v)|}{\|v\|_{\mathcal{V}}} \geq \frac{|b(u_h, Tu_h)|}{\|Tu_h\|_{\mathcal{V}}} = \|Tu_h\|_{\mathcal{V}} = \|Bu_h\|_{\mathcal{V}'} \geq \gamma \, \|u_h\|_{\mathcal{U}}.$
Therefore, $\gamma_h \geq \gamma$, i.e., discrete stability is guaranteed by construction.
2.2 Breaking the test space
In the discussion so far, we have neglected computational aspects of the DPG method. One question that arises immediately in the context of practicality is the cost of the inversion of the global Riesz map $R_{\mathcal{V}}$. Let $\Omega$ denote the global domain of interest, with Lipschitz boundary $\partial\Omega$, and let $\mathcal{T}_h$ denote a suitable finite element triangulation of $\Omega$, with mesh skeleton $\Gamma_h$. By “breaking” the test space, i.e., employing a larger discontinuous test space $\mathcal{V}(\mathcal{T}_h) \supset \mathcal{V}$, the inversion of the Riesz map on $\mathcal{V}(\mathcal{T}_h)$ is localized and can be carried out independently element by element. The element-local computational costs are still significant, but they can be parallelized efficiently, and fast integration techniques can be implemented to accelerate the computation by more than one order of magnitude [mora2019fast, badger2019fast]. By reducing the regularity of the test space, new interface terms arise on the mesh skeleton with interface unknowns $\hat{u}$. The resulting variational problem is:
(2.13) Find $u \in \mathcal{U}$, $\hat{u} \in \hat{\mathcal{U}}$ such that $b(u,v) + \langle \hat{u}, v \rangle_{\Gamma_h} = \ell(v)$ for all $v \in \mathcal{V}(\mathcal{T}_h)$,
where $\langle \cdot, \cdot \rangle_{\Gamma_h}$ denotes an appropriate duality pairing on the mesh skeleton. The new interface unknowns may be interpreted as Lagrange multipliers [demkowicz2017dpg].
The stability of the formulation with broken test spaces is inherited from the continuous problem. In particular, the broken formulation (2.13) is well-posed with a mesh-independent stability constant of the same order as the stability constant of the continuous problem [carstensen2016breaking].
2.3 The practical DPG method
Until now, the trial-to-test operator has only been defined in the infinite-dimensional setting (2.10). To compute optimal test functions in practice, the inversion of the Riesz map must be approximated on a truncated finite-dimensional test space $\mathcal{V}^r \subset \mathcal{V}$, $\dim \mathcal{V}^r \geq \dim \mathcal{U}_h$, also called the enriched test space.
With the Riesz map defined on the truncated test space, $R_{\mathcal{V}^r} : \mathcal{V}^r \to (\mathcal{V}^r)'$, the approximate trial-to-test operator $T^r : \mathcal{U} \to \mathcal{V}^r$ is defined by,
(2.14) $T^r := R_{\mathcal{V}^r}^{-1} \, \iota' \, B,$
where $\iota : \mathcal{V}^r \to \mathcal{V}$ is the inclusion map and $\iota'$ its transpose.
Consequently, the practical DPG method with optimal test functions solves:
(2.15) Find $u_h \in \mathcal{U}_h$ such that $b(u_h, T^r w_h) = \ell(T^r w_h)$ for all $w_h \in \mathcal{U}_h$,
with the additional interface term from (2.13) when breaking the test space.
Because the optimal test functions are approximated, some stability loss is inevitable. Several papers have addressed the issue of controlling and quantifying the stability loss in the DPG method [gopala2014practical, nagaraj2017fortin].
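In matrix terms, the practical method reduces to element-level dense linear algebra: the Riesz map on the enriched test space becomes its Gram matrix $G$, the approximate trial-to-test operator becomes $G^{-1}B$, and the DPG system becomes the normal equations $B^{\mathsf{T}}G^{-1}B\,u = B^{\mathsf{T}}G^{-1}\ell$. The following minimal real-valued sketch (synthetic matrices; all names are ours, not from the paper) illustrates this, including the Riesz representation of the residual used later as an error indicator:

```python
import numpy as np

def dpg_solve(B, G, l):
    """Solve the discrete DPG normal equations (illustrative sketch).

    B : (m, n) stiffness matrix b(u_j, v_i) w.r.t. an enriched test basis, m >= n
    G : (m, m) symmetric positive definite Gram matrix of the test inner product
    l : (m,)   load vector l(v_i)

    Returns the trial coefficients u and the residual norm ||psi||_V, where
    psi = G^{-1}(l - B u) is the Riesz representation of the residual.
    """
    Ginv_B = np.linalg.solve(G, B)      # columns represent the optimal test functions
    Ginv_l = np.linalg.solve(G, l)
    A = B.T @ Ginv_B                    # normal-equation matrix B^T G^{-1} B (SPD if B injective)
    u = np.linalg.solve(A, B.T @ Ginv_l)
    psi = Ginv_l - Ginv_B @ u           # psi = G^{-1} (l - B u)
    return u, np.sqrt(psi @ G @ psi)    # ||psi||_V^2 = psi^T G psi

# tiny synthetic example: m = 6 enriched test DOFs, n = 3 trial DOFs
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
G = M @ M.T + 6.0 * np.eye(6)           # SPD Gram matrix
B = rng.standard_normal((6, 3))
u_exact = np.array([1.0, -2.0, 0.5])
l = B @ u_exact                         # consistent data: exact solution is u_exact
u, err = dpg_solve(B, G, l)
```

With consistent data $\ell = Bu$, the normal equations reproduce $u$ up to round-off and the residual vanishes; in an actual finite element code, $B$, $G$, and $\ell$ are assembled element by element with broken test spaces.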
3 Pollution study
We first introduce the functional setting for the broken ultraweak DPG formulation of the time-harmonic Maxwell equations in a linear waveguide. Then, we discuss how the frequency enters the DPG error analysis and how an $hp$-refinement strategy can effectively control the pollution error. We report numerical results for the propagation of the fundamental mode in a rectangular waveguide.
3.1 Function spaces
For our time-harmonic Maxwell model problem, we define the following standard energy spaces on a bounded domain $\Omega$ with Lipschitz boundary $\partial\Omega$:
(3.1) $(L^2(\Omega))^3$, and $H(\mathrm{curl}, \Omega) := \{ E \in (L^2(\Omega))^3 : \nabla \times E \in (L^2(\Omega))^3 \}$ with $\|E\|^2_{H(\mathrm{curl},\Omega)} := \|E\|^2 + \|\nabla \times E\|^2$,
where $\|\cdot\|$ is the $L^2(\Omega)$ norm.
Suppose $\Omega$ is partitioned into a set $\mathcal{T}_h$ of open disjoint elements $K$ with Lipschitz element boundaries $\partial K$. The broken energy spaces defined on the finite element mesh are:
(3.2) $H(\mathrm{curl}, \mathcal{T}_h) := \{ E \in (L^2(\Omega))^3 : E|_K \in H(\mathrm{curl}, K) \ \forall K \in \mathcal{T}_h \},$
and analogously for the other energy spaces. Additionally, we must define energy spaces for the trace unknowns that arise from breaking the test space. These spaces are defined on the mesh skeleton $\Gamma_h := \bigcup_{K \in \mathcal{T}_h} \partial K$. In the model problem, we need the following trace space:
(3.3) $H^{-1/2}(\mathrm{curl}, \Gamma_h) := \mathrm{tr}^{\mathrm{curl}} \big( H(\mathrm{curl}, \Omega) \big),$
where the continuous and surjective tangential trace operator $\mathrm{tr}^{\mathrm{curl}}$ is defined elementwise [carstensen2016breaking]:
(3.4) $\mathrm{tr}^{\mathrm{curl}}_K E := n_K \times (E \times n_K) \big|_{\partial K}, \quad E \in H(\mathrm{curl}, K).$
Finally, we introduce the minimum energy extension norm for the trace unknowns:
(3.5) $\|\hat{E}\|_{H^{-1/2}(\mathrm{curl},\Gamma_h)} := \inf \big\{ \|E\|_{H(\mathrm{curl},\Omega)} : E \in H(\mathrm{curl},\Omega), \ \mathrm{tr}^{\mathrm{curl}} E = \hat{E} \big\}.$
3.2 Problem formulation
We consider the linear, isotropic time-harmonic Maxwell problem:
(3.6) $\nabla \times E + i\omega\mu H = 0 \ \text{in } \Omega, \qquad \nabla \times H - i\omega\epsilon E = J \ \text{in } \Omega, \qquad n \times E = 0 \ \text{on } \partial\Omega,$
where $E$ and $H$ are the complex vector-valued time-harmonic electric and magnetic fields, respectively; $\omega$ is the angular wave frequency; $n$ is the outward unit normal; $J$ is the impressed current; and $\epsilon$ and $\mu$ are the scalar-valued electric permittivity and magnetic permeability, respectively. We assume sufficiently regular boundary data.
3.2.1 Ultraweak formulation
The ultraweak variational formulation is obtained by testing (3.6) with test functions $(F, G) \in H(\mathrm{curl},\Omega) \times H(\mathrm{curl},\Omega)$, integrating over $\Omega$, and relaxing both equations:
(3.7) Find $E, H \in (L^2(\Omega))^3$: $\quad (E, \nabla \times F) + i\omega\mu \, (H, F) + (H, \nabla \times G) - i\omega\epsilon \, (E, G) = (J, G).$
3.2.2 Broken ultraweak formulation
By breaking the test space, we introduce new trace unknowns $\hat{E}, \hat{H}$ on the mesh skeleton $\Gamma_h$. The broken ultraweak variational formulation is:
(3.8) Find $E, H \in (L^2(\Omega))^3$ and traces $\hat{E}, \hat{H}$: $\quad b\big((E,H),(F,G)\big) + \hat{b}\big((\hat{E},\hat{H}),(F,G)\big) = \ell(F,G) \quad \forall (F,G),$
where $\nabla_h \times$ denotes the elementwise application of the curl, and,
(3.9) $b\big((E,H),(F,G)\big) := (E, \nabla_h \times F) + i\omega\mu \, (H, F) + (H, \nabla_h \times G) - i\omega\epsilon \, (E, G),$
(3.10) $\hat{b}\big((\hat{E},\hat{H}),(F,G)\big) := \langle n \times \hat{E}, F \rangle_{\Gamma_h} + \langle n \times \hat{H}, G \rangle_{\Gamma_h},$
with $\ell(F,G) := (J, G)$ and broken test functions $F, G \in H(\mathrm{curl}, \mathcal{T}_h)$.
3.3 Pollution estimates
The mathematical setting for different variational formulations of the time-harmonic Maxwell equations in the context of DPG is analyzed in detail in [carstensen2016breaking]. We recap a few points that are relevant for our discussion of the ultraweak formulation.
Define the following group variables:
(3.11) $\mathfrak{u} := (E, H), \qquad \mathfrak{v} := (F, G).$
The Maxwell operator from (3.6) can be written as:
(3.12) $\mathsf{A}\mathfrak{u} := \big( \nabla \times E + i\omega\mu H, \ \nabla \times H - i\omega\epsilon E \big);$
then, the bilinear form corresponding to the ultraweak formulation (3.7) is:
(3.13) $b(\mathfrak{u}, \mathfrak{v}) = (\mathfrak{u}, \mathsf{A}^* \mathfrak{v}),$
with the formal adjoint operator $\mathsf{A}^*$ defined by:
(3.14) $\mathsf{A}^* \mathfrak{v} = \big( \nabla \times F + i\omega\epsilon G, \ \nabla \times G - i\omega\mu F \big).$
In the ideal ultraweak DPG method with unbroken test spaces, the adjoint graph norm, $\|\mathfrak{v}\|^2_{\mathsf{A}^*} := \|\mathfrak{v}\|^2 + \|\mathsf{A}^*\mathfrak{v}\|^2$, for the test space yields the optimal test norm; with this norm, the method delivers the $L^2$ projection. For the broken formulation (3.8) we define an additional trace group variable, $\hat{\mathfrak{u}} := (\hat{E}, \hat{H})$, and obtain the bilinear form:
(3.15) $b\big((\mathfrak{u}, \hat{\mathfrak{u}}), \mathfrak{v}\big) = (\mathfrak{u}, \mathsf{A}_h^* \mathfrak{v}) + \langle \hat{\mathfrak{u}}, \mathfrak{v} \rangle_{\Gamma_h},$
where $\mathsf{A}_h^*$ denotes the elementwise application of $\mathsf{A}^*$. The broken ultraweak variational problem (3.8) can then be written as:
(3.16) Find $\mathfrak{u} \in (L^2(\Omega))^6$, $\hat{\mathfrak{u}} \in \hat{\mathcal{U}}$ such that $b\big((\mathfrak{u},\hat{\mathfrak{u}}), \mathfrak{v}\big) = \ell(\mathfrak{v})$ for all $\mathfrak{v} \in \mathcal{V}(\mathcal{T}_h)$.
The optimal test norm is not localizable, but we can augment it with an additional term to obtain a quasi-optimal test norm: $\|\mathfrak{v}\|^2_{\mathcal{V}} := \|\mathsf{A}_h^*\mathfrak{v}\|^2 + \alpha \|\mathfrak{v}\|^2$, with scaling parameter $\alpha > 0$. The quasi-optimal test norm is robustly equivalent to the optimal test norm, i.e., independent of the frequency $\omega$, and the robust stability constant is maintained in the broken formulation [carstensen2016breaking]. This implies that the approximation error is bounded by the best approximation error (BAE) uniformly in $\omega$:
(3.17) $\|\mathfrak{u} - \mathfrak{u}_h\| + \|\hat{\mathfrak{u}} - \hat{\mathfrak{u}}_h\|_{\hat{\mathcal{U}}} \leq C \Big( \inf_{\mathfrak{w}_h \in \mathcal{U}_h} \|\mathfrak{u} - \mathfrak{w}_h\| + \inf_{\hat{\mathfrak{w}}_h \in \hat{\mathcal{U}}_h} \|\hat{\mathfrak{u}} - \hat{\mathfrak{w}}_h\|_{\hat{\mathcal{U}}} \Big),$
where the constant $C$ is independent of the mesh and frequency $\omega$, and $\|\cdot\|_{\hat{\mathcal{U}}}$ refers to the minimum energy extension norm defined in (3.5). The estimate also implies that the best approximation is pollution-free because it is independent of $\omega$. In one dimension, the BAE for the traces is zero; thus, the method is in fact pollution-free [demkowicz2012wavenumber, petrides2017multigrid]. In multiple dimensions, however, the BAE for the traces, measured in the minimum energy extension norm, does depend on the frequency $\omega$, and the method exhibits numerical pollution.
The corresponding estimate for the standard Galerkin method, on the other hand, has a stability constant that is not independent of $\omega$; thus, the Galerkin discretization is not robustly stable. The DPG method hides the perturbation parameter in the best approximation and, by doing so, yields a stable discretization for any wavenumber. This can be exploited in practice by starting the computation on a coarse mesh where the pollution error is high, and driving adaptivity with the DPG error indicator. This approach yields superior meshes for resolving localized waves [petrides2017multigrid, petrides2019phd].
A wavenumber-explicit analysis for the Helmholtz equation is presented for the DPG method in [demkowicz2012wavenumber] and for the Galerkin method in [melenk2011conv]. For the Galerkin discretization, Melenk and Sauter show that quasioptimality is obtained under the conditions that $kh/p$ is sufficiently small and $p$ is at least $O(\log k)$, where $h$ is the mesh size, $p$ the polynomial order of approximation, and $k$ the wavenumber [melenk2011conv]. Based on these estimates, the best approach to dealing with the pollution error may be an $hp$ strategy that preferably increases the polynomial order $p$ while keeping $kh$ constant for increasing frequency.
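To make this scaling concrete, the following sketch (our own illustration; the constants c1 and c2 are placeholders, not the constants from the theory) computes the smallest order satisfying conditions of the form $kh/p \le c_1$ and $p \ge c_2 \log k$:

```python
import math

def min_order(k, h, c1=0.25, c2=1.0):
    """Smallest polynomial order p with k*h/p <= c1 and p >= c2*log(k).

    Illustrates the Melenk--Sauter-type scaling only; c1, c2 are
    placeholder constants chosen for demonstration.
    """
    p = 1
    while k * h / p > c1 or p < c2 * math.log(k):
        p += 1
    return p

# doubling the wavenumber at a fixed number of elements per wavelength
# (k*h constant) raises the required order only additively
orders = [min_order(k, 1.0 / k) for k in (10, 100, 1000, 10000)]
```

Under these placeholder constants, the required order grows like $\log k$ once the fixed-$kh$ condition is met, which mirrors the behavior observed in the experiments below.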
In the next section, we study the pollution error with numerical experiments for many wavelengths and discuss the observations with regard to the suggested strategy and its applicability to the DPG method for the timeharmonic Maxwell problem.
3.4 Numerical results
The propagation of an electromagnetic field in a waveguide is governed by the Maxwell equations. We assume that the time-harmonic setting is justified, that the waveguide medium is nonmagnetic and dielectric, and that no free charges are present. While nonlinear effects and anisotropic, inhomogeneous material properties play important roles in research on fiber optics, for the purpose of this pollution study we assume the waveguide medium is linear, isotropic, and homogeneous. If we prescribe idealized PEC boundary conditions, then under these assumptions the Maxwell equations reduce to (3.6). Note that in fact we prescribe nonzero Dirichlet boundary conditions (BCs) at the input to excite the waveguide and impedance BCs at the output, but PEC BCs everywhere else. In this simplified setting, the propagating field in a waveguide can be described as a superposition of guided modes. These modes are eigenfunction solutions to the reduced scalar Helmholtz problem, which can be solved analytically for simple domains (e.g., see [griffiths, jackson]). Consider a rectangular waveguide domain in Cartesian coordinates, where $L$ is the length of the waveguide. The fundamental mode in this waveguide is the transverse electric mode depicted in Fig. 1. The fundamental mode is not very oscillatory in the transverse direction. At the waveguide end, we employ an absorbing impedance boundary condition that matches the wave impedance of the fundamental mode. In the rectangular waveguide experiments, the cross-section is modeled with two hexahedral elements, which is justified by the simple transverse mode profile (cf. Fig. 1).
In our first experiment, we analyze the relative field error, measured in the $L^2$ norm, for the propagating fundamental mode in waveguides of different length $L$. The smallest waveguide has a length equivalent to one wavelength of the fundamental mode, and the longest one has 8192 wavelengths. As we increase the length $L$, we keep the number of elements per wavelength (i.e., degrees of freedom (DOFs) per wavelength) constant. In particular, we choose a discretization with four elements per wavelength. Fig. 2 shows the relative field error for these waveguides for uniform order of approximation, ranging from to . In all numerical experiments, we are using the enrichment order for the test space to approximate optimal test functions.
We make several observations: for a fixed number of wavelengths, higher polynomial order yields significantly smaller (more than one order of magnitude) errors, as expected; for every order of approximation, the field error starts to increase once the waveguide is long enough, despite keeping the DOFs per wavelength constant; and for higher $p$, this pollution effect “kicks in” at a later point, i.e., more wavelengths can be computed with higher order before the pollution error is measurable. Furthermore, to maintain some desired accuracy, one needs to increase the polynomial order at nearly regular intervals. For example, to achieve accuracy for 4 wavelengths, it is sufficient to use ; at 64 wavelengths, is needed; with , computing up to 1024 wavelengths is feasible with this error margin; and would most likely be sufficient for 16384 wavelengths. Upon closer inspection, these intervals resemble a logarithmic dependency on the polynomial order $p$. In other words, these results corroborate the theoretical estimates by Melenk and Sauter predicting that control of the pollution error requires increasing $p$ logarithmically with the wavenumber.
In our experiments, the pollution was primarily a diffusive error causing wave attenuation. This is in agreement with previous observations for the DPG method [demkowicz2012wavenumber]. A practical way of measuring this diffusivity in waveguide applications is to compute the power flux through the cross-section of the waveguide at different points in $z$. In a linear, dielectric waveguide with PEC boundary conditions, the fundamental mode should be carried without loss of power. Fig. 3 shows the measured power loss between the waveguide input and output for different polynomial orders. Note that has less than loss of power in all tested waveguides. The pollution error is clearly visible in terms of power loss. We also observe the same logarithmic dependency for increasing polynomial order, illustrated by the near-equidistant, parallel character of the lines.
Moving on to additional experiments, we keep our focus on the same rectangular waveguide but consider different potential approaches to dealing with the pollution error. It may seem reasonable to assume that, since the wave is propagating in one direction (along $z$), it is sufficient to increase the order of approximation anisotropically or to increase the number of elements through anisotropic refinements in $z$. Exploring both of these options (cf. Fig. 4), we find that neither approach yields satisfactory results. First, in Fig. 4(a), we use fifth-order polynomials in the radial discretization of the waveguide and increase the anisotropic order from to . While the error decreases initially, it begins stagnating at (note that the error coincides almost exactly with ). The same observation is made for uniform order with varying numbers of elements per wavelength (ranging from 2 to 16 elements). Our findings indicate that the pollution error comes from an interplay between the mode resolution (radial discretization) and the wave resolution in the direction of propagation. In other words, increasing the number of DOFs anisotropically does not suffice asymptotically to control the pollution error.
Finally, we measure the loss of power for both anisotropic refinement cases, plotted in Fig. 5. As expected, we observe the same stagnation in the diffusive pollution, consistent with the errors measured in the previous plot.
We have conducted these experiments on different waveguides (rectangular waveguides, circular waveguides, and step-index fibers) with various propagating modes, and the observations are all consistent with those presented above; we therefore omit additional numerical results for those cases.
4 Adaptivity study
In our adaptivity study, we focus on a different aspect of resolving the propagating wave. We have shown that the interplay between resolving the wave along the direction of propagation and resolving the transverse mode profile is important in controlling the pollution error. For that reason, the finite element mesh should be sensitive to different mode profiles and adapt to resolve them appropriately. This is especially important in waveguide applications where significant transfer of power occurs between different guided modes. Our tool for adapting the mesh “on-the-fly” is the DPG residual, which serves as an error estimator in the energy norm. We briefly recap the important points in the derivation of this indicator and proceed with numerical experiments in multimode step-index fibers. Lastly, we look at the load imbalance that results from adapting the mesh to different propagating modes.
4.1 The DPG error indicator
The DPG method can be reformulated as a minimum residual method [demkowicz2017dpg],
(4.1) $u_h = \arg\min_{w_h \in \mathcal{U}_h} \tfrac{1}{2} \, \|\ell - B w_h\|^2_{\mathcal{V}'},$
where the residual is minimized in the dual test norm $\|\cdot\|_{\mathcal{V}'}$. Taking the Gâteaux derivative, we obtain a minimum residual formulation,
(4.2) $\big( R_{\mathcal{V}}^{-1}(\ell - B u_h), \, R_{\mathcal{V}}^{-1} B w_h \big)_{\mathcal{V}} = 0 \quad \forall w_h \in \mathcal{U}_h.$
Furthermore, we define the energy norm on the trial space by,
(4.3) $\|u\|_E := \|B u\|_{\mathcal{V}'} = \sup_{v \in \mathcal{V} \setminus \{0\}} \frac{|b(u,v)|}{\|v\|_{\mathcal{V}}}.$
We define $\psi$ as the Riesz representation of the residual,
(4.4) $\psi := R_{\mathcal{V}}^{-1}(\ell - B u_h).$
Notice that when $u_h$ minimizes the residual (4.1), then,
(4.5) $b(w_h, \psi) = 0 \quad \forall w_h \in \mathcal{U}_h.$
We arrive at a mixed Galerkin formulation,
(4.6) Find $\psi \in \mathcal{V}$, $u_h \in \mathcal{U}_h$ such that $(\psi, v)_{\mathcal{V}} + b(u_h, v) = \ell(v)$ for all $v \in \mathcal{V}$, and $b(w_h, \psi) = 0$ for all $w_h \in \mathcal{U}_h$.
The error measured in the energy norm can be computed explicitly,
(4.7) $\|u - u_h\|_E = \|B(u - u_h)\|_{\mathcal{V}'} = \|\ell - B u_h\|_{\mathcal{V}'} = \|\psi\|_{\mathcal{V}};$
hence $\|\psi\|_{\mathcal{V}}$ offers a built-in a posteriori error indicator. Finally, note that the choice of the test norm is critical, as it dictates the norm in which the method converges. The natural choice for the test norm in the ultraweak formulation is the adjoint graph norm [demkowicz2017dpg].
4.2 Multimode stepindex fibers
We consider a dielectric step-index waveguide. More precisely, consider a weakly-guiding large mode area (LMA) step-index fiber made of silica glass. See Tab. 1 for a description of the model parameters of the fiber. For weakly-guiding fibers, the refractive indices of core and cladding are nearly equal, and the guided modes are linearly polarized (LP) modes. The V-number is a “normalized frequency” that determines how many guided modes are supported by the particular fiber. For example, if $V < 2.405$, then the fiber is single-mode. LMA fibers have a relatively large core radius and support multiple modes; the fiber we are using has , and it supports four guided modes: . The fiber axis is assumed to be aligned with the $z$-axis, and the length of the fiber is .
For multimode propagation, we use a perfectly matched layer (PML) absorbing boundary condition. We refer to [astaneh2018pml] for details on the derivation and implementation of a PML for the DPG method.
Description  Symbol  Value
Wavelength    nm
Core radius
Cladding radius
Core refractive index
Cladding refractive index
Numerical aperture  NA
V-number
For this particular fiber, the power confinement (amount of energy confined to the core region) of each mode is,
(4.8) 
Clearly, the optimal discretization of the fiber cross-section is different for each of these modes. That is, to capture the oscillations of the higher-order modes near the core-cladding interface, a finer discretization is needed than for the fundamental mode. In particular, higher-order modes demand more refinements (or degrees of freedom) outside of the fiber core compared to the fundamental mode, which is mostly confined to the core region. Therefore, no single discretization can be optimal for capturing every propagating mode.
Suppose we are interested in simulating the transverse mode instability (TMI) phenomenon [eidam2011experimental] in active gain fiber amplifiers. TMI is characterized by a chaotic transfer of energy between the fundamental mode and the higher-order modes. One challenge in computing a numerical solution to the resulting nonlinear Maxwell problem is to capture modes accurately when they occur. With mode instabilities, it is not known a priori which modes will be propagating in which parts of the fiber. Refining the initial geometry globally to better resolve higher-order modes increases the computational cost dramatically and may render the computation infeasible for large problem instances. Adaptivity, on the other hand, can be used to refine the mesh only where it is needed for capturing these modes locally, keeping the overall computational cost significantly lower.
4.3 Numerical experiments
In the following experiments, we aim to establish the efficacy of adaptivity based on the DPG residual for resolving higher-order modes. In the broken DPG setting, the residual is computed through elementwise contributions, i.e.,
(4.9) $\|\psi\|^2_{\mathcal{V}} = \sum_{K \in \mathcal{T}_h} \|\psi_K\|^2_{\mathcal{V}(K)},$
where $\psi_K := \psi|_K$ denotes the contribution of element $K$. After each solve, elements are marked for refinement if they satisfy a certain criterion. We use a strategy for marking elements that is based on Dörfler’s marking [dorfler1996marking]:

Sort the element residuals $r_K := \|\psi_K\|_{\mathcal{V}(K)}$ in descending order, i.e.,
(4.10) $r_{K_1} \geq r_{K_2} \geq \dots \geq r_{K_N},$
Mark elements $K_1, \dots, K_n$, where $n$ is the smallest integer for which the following is true:
(4.11) $\sum_{j=1}^{n} r_{K_j}^2 \geq \theta \sum_{j=1}^{N} r_{K_j}^2,$ where $\theta \in (0, 1]$.
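The two steps above can be sketched in a few lines (our own illustration; the per-element residual norms are assumed given):

```python
def doerfler_mark(residuals, theta=0.5):
    """Doerfler (bulk) marking: return indices of the smallest set of
    elements whose squared residuals sum to at least theta * total.

    residuals : per-element residual norms r_K = ||psi_K||_V(K)
    theta     : bulk parameter in (0, 1]
    """
    order = sorted(range(len(residuals)), key=lambda k: residuals[k], reverse=True)
    total = sum(r * r for r in residuals)
    marked, acc = [], 0.0
    for k in order:
        marked.append(k)
        acc += residuals[k] ** 2
        if acc >= theta * total:
            break
    return marked
```

Sorting first guarantees that the marked set is the smallest one achieving the bulk criterion, which concentrates refinement work on the elements with the largest residual contributions.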
At this point, with some choice of $\theta$, elements have been marked for refinement. What is less clear is how to optimally refine each marked element when the mesh supports anisotropic adaptive refinements in both element size $h$ and polynomial order $p$. The choice will ultimately be problem-dependent.
In our experiments, we choose an initial mesh with uniform polynomial order, two elements per wavelength in the $z$-direction (direction of propagation), and a radial (transverse) hybrid discretization using curvilinear hexahedral and prismatic elements.
Fig. 7 illustrates the initial geometry discretization in the fiber cross-section (not drawn to scale): four prisms model the center of the fiber core, surrounded by three layers of four hexahedra each. We refer to these layers as “domains” and enumerate them from 1 to 4, moving radially outward from the center of the fiber to the cladding boundary. The choice of the initial discretization was informed by the fiber geometry, by the fact that all guided modes decay exponentially in the cladding region, and by numerical tests conducted primarily with the fundamental mode. For a relatively short fiber of 16 wavelengths, this initial geometry captures the fundamental mode very well with regard to several physical quantities of interest (e.g., conservation of power, mode confinement). Higher-order modes are not captured quite as well, and for fibers with many wavelengths (i.e., several hundred or a few thousand wavelengths), we observe more significant pollution errors in the propagation of these modes. For example, the errors manifest as small oscillations of the mode powers along the fiber, diffusive pollution effects, or an unsteady power confinement ratio.
We use DPG to perform multiple adaptive mesh refinements, each based on the respective previous solution and residual, to test the residual error indicator for capturing different modes. As a test case, we look at the 16-wavelength fiber. The goal is to observe the sensitivity of the adaptive refinements to the specific modes that are propagating. We choose the parameter $\theta$ in (4.11) and proceed with four adaptive refinement steps after the initial solution. Note that mesh regularity requirements may trigger some additional refinements to “close the mesh”, i.e., to obtain a mesh that is 1-irregular.
First, we apply isotropic refinements to marked elements. Fig. 8 shows the domains of refinement in the fiber. Each plot illustrates how the mesh is successively refined for one particular guided mode propagating in the fiber. For the fundamental mode, the error indicator marks elements for refinement primarily in the fiber core, where most of the energy is located; the first three refinement steps exclusively refine the outer and inner core region. None of the adaptive refinements for higher-order modes refine inside the inner core region. It is notable how sensitive the error indicator is to these different modes. In the case of the mode, the code primarily refines in the outer cladding region; this is likely because the initial cladding discretization is too coarse to handle the exponential decay of the remaining energy in the transverse field.
Next, we repeat the experiment with anisotropic (radial) refinements. Radial refinements are of interest because higher-order modes are only more oscillatory in the transverse field; they are not more oscillatory in the direction of propagation. In other words, the guided modes have very similar propagation constants (in fact, higher-order modes oscillate slightly more slowly than the fundamental mode). Therefore, if the numerical pollution is low for the fundamental mode, we may assume that the resolution in the direction of propagation is “good enough” for approximating any higher-order guided modes. Then, radial refinements (in the transverse directions) are the more economical way of capturing these modes. For anisotropic adaptive refinements, depicted in Fig. 9, a similar pattern emerges for the higher-order modes, but the picture is quite different for the fundamental mode: it repeatedly refines elements in the same domain, indicating that the anisotropic refinements do not decrease the local residuals in the way the isotropic ones did. This suggests that the fundamental mode is already well approximated in the transverse field and that the residual demands refinements in the $z$-direction for better accuracy of the numerical solution. This interplay between the resolution in different directions is critical when studying the pollution error for guided modes.
Fig. 10 shows how the total residual evolves in both scenarios: we observe that the higher-order modes benefit greatly from anisotropic refinements, making these the preferred choice for improving the numerical solution with fewer degrees of freedom. For the fundamental mode, we find that the residual does not decrease further through anisotropic refinements, indicating that the mode is already captured quite well by the initial geometry.
4.4 Load balancing
In the parallel computation of the fiber problem, we partition the geometry into subdomains, each owned by one distinct MPI process. The rank of each MPI process is the ID of the subdomain it owns. Initially, we partition the fiber directly based on geometric cuts orthogonal to the fiber axis. Fig. 11 illustrates this partitioning for four subdomains.
This initial static partitioning is a good choice because it keeps the interfaces between subdomains small, making it possible to compute large fibers in parallel with a nested dissection solver and obtain good weak scaling. As the adaptive mesh refinements proceed, we must dynamically repartition the domain to retain load balance. A large number of different repartitioners are available in open-source software packages, such as Zoltan [ZoltanOverviewArticle2002]. We believe that in this instance, graph partitioning that strives for minimum cuts is a good choice because it keeps the subdomain interfaces relatively small. In the broken ultraweak Maxwell formulation, we use the element interior DOFs (electromagnetic fields) as weights for the graph vertices, and the trace DOFs on faces (electromagnetic traces) as weights for the graph edges. We omit connectivities from edge degrees of freedom to obtain a sparser graph and accelerate partitioning. ParMETIS or PT-Scotch can be used for approximating the solution of the partitioning problem. As an alternative, we use a custom dynamic fiber repartitioner that forces orthogonal cuts through the domain while trying to maximize load balance and minimize data migration, similar to recursive coordinate bisection partitioners. This custom repartitioner can perform orders of magnitude faster than graph partitioning because it relies primarily on geometry information.
We study how the workload in different subdomains changes without repartitioning. Fig. 12 and Fig. 13 show the workload per MPI rank in a fiber of 16 wavelengths, partitioned into 8 subdomains, with adaptive isotropic and anisotropic refinements, respectively. Both plots show results for the two higher-order modes. Here, the workload is simply shown as the number of subdomain interior DOFs, i.e., all solution DOFs that are part of a subdomain, excluding the trace DOFs on the subdomain interfaces.
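The coordinate-based repartitioning idea can be sketched as follows (our own illustration under assumed names, not the actual implementation): sort elements by their position along the fiber axis and fill subdomains with contiguous slabs of roughly equal total weight.

```python
def partition_along_axis(z_centers, weights, num_parts):
    """Assign elements to subdomains via cuts orthogonal to the fiber axis,
    balancing the summed weights (e.g. interior DOF counts per element).

    Elements stay contiguous in z, so subdomain interfaces remain small.
    Returns a subdomain id for each element (in the original ordering).
    """
    order = sorted(range(len(z_centers)), key=lambda k: z_centers[k])
    total = float(sum(weights))
    parts = [0] * len(z_centers)
    acc, part = 0.0, 0
    for k in order:
        # move to the next subdomain once the current weight quota is filled
        if part < num_parts - 1 and acc >= (part + 1) * total / num_parts:
            part += 1
        parts[k] = part
        acc += weights[k]
    return parts
```

Because only a sort along one coordinate is required, such a scheme avoids building and partitioning a global graph, which is consistent with the speed advantage of the geometry-based repartitioner described above.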
As expected, isotropic refinements lead to higher load imbalance because each isotropic refinement increases the local number of DOFs by another factor of two compared to the anisotropic refinement. Two observations stand out: first, the subdomains closer to the fiber input exhibit higher residuals, and hence more refinements are performed in that region; second, toward the end of the fiber many refinements are picked up in the sixth subdomain, and almost none in the very last one. The latter observation is an effect of the PML boundary layer at the fiber end. In this short fiber, the PML is active in the last two subdomains (i.e., the layer encompasses about four wavelengths). When the wave enters the boundary layer, it decays exponentially due to the coordinate stretching. This initial decay must be captured accurately by the numerical solution. We see that the DPG residual recognizes the need for more refinements in this region and marks elements in the sixth subdomain. By the time the wave enters the last subdomain, owned by rank seven, it has decayed so much that the residual remains fairly small and almost none of the elements are marked by the adaptive procedure.
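The imbalance shown in such plots can be quantified by the standard ratio of maximum to mean workload. The helper below is a hypothetical sketch, not taken from the paper's code, and the DOF counts in the example are illustrative values only.

```python
def load_imbalance(dofs_per_rank):
    """Imbalance factor: maximum workload divided by mean workload.
    A value of 1.0 means perfect balance; workload is measured in
    subdomain-interior DOFs per MPI rank."""
    mean = sum(dofs_per_rank) / len(dofs_per_rank)
    return max(dofs_per_rank) / mean

# Illustrative: refinements concentrated near the fiber input (rank 0).
print(load_imbalance([8e4, 5e4, 4e4, 4e4, 4e4, 6e4, 4e4, 2e4]))
```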
It is evident that dynamic load balancing is critical in the simulation of the TMI phenomenon and other applications with energy transfer between guided modes. Through repartitioning of the fiber domain, we obtain a significant speed-up in the solve, especially for our parallel nested dissection solver.
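The custom dynamic fiber repartitioner described above can be sketched as a one-dimensional prefix-sum balancing of per-slab workloads. This is a hypothetical Python sketch assuming per-slab DOF counts are available; it illustrates the idea of moving the orthogonal cuts to equalize cumulative load, not the actual implementation.

```python
import numpy as np

def rebalance_axial_cuts(slab_weights, num_subdomains):
    """Reassign thin axial slabs of the fiber to subdomains so that
    the cumulative workload is balanced, keeping all cuts orthogonal
    to the fiber axis. Each slab is mapped to the subdomain whose
    ideal load interval contains the slab's cumulative-load midpoint."""
    w = np.asarray(slab_weights, dtype=float)
    total = w.sum()
    # Cumulative load at the midpoint of each slab.
    midpoints = np.cumsum(w) - 0.5 * w
    ids = np.floor(midpoints * num_subdomains / total).astype(int)
    return np.clip(ids, 0, num_subdomains - 1)

# Refinements piled up near the fiber input: first slab is 4x heavier.
print(rebalance_axial_cuts([4, 1, 1, 1, 1], 2))  # -> [0 1 1 1 1]
```

Because only prefix sums over geometric slabs are needed, this runs far faster than a general graph partitioner, at the cost of being restricted to orthogonal cuts.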
5 Summary
We have studied the effect of numerical pollution for guided wave problems with many wavelengths. We found that the pollution error in the ultraweak DPG setting manifests primarily as a diffusive effect that causes attenuation of the propagating wave. This is consistent with previous observations for the DPG method, where the phase error appears to be relatively small. Furthermore, we were able to corroborate theoretical estimates by Melenk and Sauter indicating a logarithmic dependency of the pollution error on the polynomial order of approximation. Based on our numerical results, we agree that the best strategy for resolving many wavelengths is one based on refinements that first “capture” the wave (discretizing each wavelength by a few elements with moderate polynomial order p so that hk is sufficiently small) and subsequently increase the polynomial order p with log k if needed while keeping h constant. For the 3D waveguide problem, where the wave is truly a superposition of transverse modes propagating in one direction, we emphasize that, perhaps counterintuitively, controlling the pollution error asymptotically requires additional degrees of freedom in both the transverse direction and the direction of propagation.
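The resolution conditions of Melenk and Sauter [melenk2011conv] alluded to here can be stated schematically as follows; the constants are problem-dependent and are written out only for illustration.

```latex
% Quasi-optimality of the Galerkin solution, with the pollution error
% under control, holds provided the mesh size h and polynomial order p
% satisfy (k denotes the wave number):
\[
  \frac{k h}{p} \le C_1
  \qquad \text{and} \qquad
  p \ge C_2 \log k ,
\]
% i.e., a fixed number of DOFs per wavelength suffices only if the
% polynomial order grows logarithmically with the wave number.
```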
For controlling the error in a multimode waveguide, we described suitable adaptive refinement strategies. The error indicator we used is based on the Riesz representation of the residual in the ultraweak DPG formulation. With broken test spaces, this indicator is computed locally and in parallel. We have shown the importance of adaptive refinements in capturing different propagating modes in a fiber waveguide. For such problems, the efficacy of the DPG residual is remarkable, as it sharply recognizes where the “energy” of the solution is located. The numerical results for the weakly-guiding optical fiber waveguide indicate that anisotropic adaptive refinements can be a much more efficient choice than isotropic adaptive refinements for these problems.
In a distributed-memory computation, the repartitioning of the multimode fiber becomes essential to maintain load balance. We observed that the PML boundary layer requires additional refinements, especially as the wave enters the PML region. Through dynamic repartitioning, the load imbalance can be nearly eliminated, and the time to solution decreases significantly. This is especially important in problems with localized solution features, such as the transverse mode instability in fiber amplifiers.
Acknowledgement
This work was partially supported by AFOSR grants FA9550-17-1-0090 and 18RDCOR018.