1 Introduction
Around 2004, processor clock speeds ceased to increase due to physical limits. Instead, we have since experienced exponential growth in the number of processing units: modern computing clusters feature between one thousand and more than ten million CPU cores as of 2019 (https://www.top500.org/, accessed 2019). Meanwhile, applications that are parallel in space frequently fail to utilize more than a few thousand CPUs efficiently (depending on the problem size). To exploit this processing power, massively parallel algorithms are required. For time-dependent simulations the time dimension offers potential for further parallelization; however, the classical time-stepping concept is fundamentally sequential. Primarily for parabolic evolution equations, time-parallel methods have been developed to break this "curse of sequentiality". After the first concept for ordinary differential equations more than 55 years ago
[24], a whole family of time-parallel methods of all kinds has blossomed; see [8] for a historical overview. Despite still being sequential in time, the method [10] by Hackbusch laid some groundwork for time-parallel multigrid schemes and departed from the notion of solving the problem exactly forward in time. Lubich and Ostermann took a different approach by presenting a waveform relaxation type space-time multigrid method [18]. An in-depth discussion of waveform relaxation methods can be found in the monograph [38]. A further variant, described as space-time concurrent multigrid waveform relaxation with cyclic reduction, can be found in [13].
A contemporary method similar to ours is multigrid reduction in time [5], which has been actively expanded by a team centered around the Lawrence Livermore National Laboratory [5, 4, 6, 17, 9, 27]. Further results can be found on the XBraid project website (github.com/XBraid/xbraid/wiki/ProjectPublications). Another very active group is at Forschungszentrum Jülich, with diverse publications about time-parallel methods [29, 30, 32, 31, 36, 40, 25, 33, 20, 16, 26, 34, 1, 35, 2].
A precursor to the method used here is space-time multigrid with pointwise relaxation [14]. The method used in this paper [7] replaces the pointwise relaxation with block relaxation, so it can be described as space-time multigrid with block relaxation.
Recent publications demonstrated that this method can also be applied to stochastic differential equations [22] and in combination with isogeometric analysis methods [12]. This motivated us to investigate the application of our method to the eddy-current equation. Furthermore, we generalize concepts to expand our scope towards more complex problems.
The eddy-current equations are used to study low-frequency electromagnetic applications, like electric motors and generators, induction furnaces, non-destructive testing, and many more. In practice, time-harmonic approaches are often preferred over transient models due to lower computational costs. However, time-harmonic approaches struggle with nonlinear materials (saturation effects) and moving geometry. As a stepping stone towards these more complex problems, we only consider the linear case with constant geometry in this paper.
The remainder of this paper is organized as follows. In Section 2 the space-time parallel method is introduced and the concept of a coarsening rule is presented and expanded. Section 3 applies the method to the eddy-current equation and pinpoints the core problem of the naive approach. We then determine a remedy for this problem, namely a "nodal auxiliary space correction", which can be used to define a hybrid smoother. In Section 4, we perform a local Fourier analysis of the introduced hybrid smoother for the 2D curl-curl equation. In Section 5, we mimic the local Fourier analysis with numerical experiments for the 3D eddy-current equation. We further investigate the validity of a new generalized coarsening rule. Weak scaling results of the improved method are given in Section 6. Conclusions are drawn in Section 7, where we also discuss possible implications of our findings for related models and methods.
2 The Method
2.1 The Space-Time Parallel Method
For simplicity we reiterate the space-time parallel multigrid method presented in [7] and [28] only for the case of the implicit Euler scheme with uniform time step width. Note that we are not concerned with the approximation quality; we only want to solve a given discretized problem. After the usual discretization steps for a PDE in space and time (see Section 3 for an example) we need to solve an equation of the form
(1) 
By defining we can rewrite the time-stepping scheme as a huge linear system
This system can be compactly written as
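The compact all-at-once form has block lower bidiagonal Kronecker structure. The following small NumPy sketch assembles it explicitly; the names M, K and tau for the spatial mass matrix, stiffness matrix and time step are our illustrative choices, not notation fixed by the paper:

```python
import numpy as np

def all_at_once_system(M, K, tau, N):
    """Assemble the block lower bidiagonal all-at-once matrix for
    implicit Euler: (M + tau*K) u_n - M u_{n-1} = tau*f_n for N steps.
    The Kronecker structure (time operator x space operator) is exactly
    what makes the method parallel in space and time."""
    A = M + tau * K          # diagonal block: one implicit Euler step
    I_t = np.eye(N)          # identity over the time steps
    L_t = np.eye(N, k=-1)    # subdiagonal coupling to the previous step
    return np.kron(I_t, A) - np.kron(L_t, M)
```

Solving this system in one go is equivalent to sequential time stepping, which the sketch can be checked against.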
The standard smoother in this method is a block-Jacobi preconditioned Richardson scheme with
(2) 
The overall method consists of pre-smoothing, a recursive coarse time-grid correction and post-smoothing. This V-cycle over the time meshes is guaranteed to converge geometrically, as long as the application of is sufficiently well approximated and
is positive semidefinite with real eigenvalues.
The presented smoother is simultaneously parallel in space and time. Because all operations are Kronecker products of a time operation and a spatial operation, all spatial operations can be executed with the usual space-parallel approaches. On top of this, the time mesh can also be subdivided among groups of processors. Only the application of requires forward communication between neighboring time nodes.
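As a sketch of how such a sweep parallelizes over time steps, a damped block-Jacobi preconditioned Richardson update might look as follows. M, K, tau and the damping factor omega are illustrative assumptions, and the dense solves stand in for the (approximate) spatial solvers of the paper:

```python
import numpy as np

def block_jacobi_sweep(M, K, tau, u, rhs, omega=0.5):
    """One damped block-Jacobi sweep for the all-at-once implicit Euler
    system.  u and rhs have shape (N, n): N time steps, n spatial dofs.
    Each residual only touches steps n-1 and n, so the loop can run
    concurrently over time steps (and in space within each solve)."""
    A = M + tau * K                      # diagonal block of the system
    res = np.empty_like(u)
    for n in range(u.shape[0]):          # embarrassingly parallel over n
        res[n] = rhs[n] - A @ u[n]
        if n > 0:                        # forward coupling to step n-1
            res[n] += M @ u[n - 1]
    # block-diagonal (block-Jacobi) preconditioner: solve with A per step
    return u + omega * np.linalg.solve(A, res.T).T
```

Repeated sweeps reduce the error of the all-at-once solution; in the method above they are interleaved with the coarse time-grid correction.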
2.2 Spatial Coarsening
For performance reasons it is desirable to use not only a coarser time mesh for the coarse grid correction, but also a coarser spatial mesh. For example, without spatial coarsening the coarse grid correction is responsible for approximately half the computational effort and is the less parallelizable part of the algorithm. In 3D the use of a coarser spatial mesh reduces the effort for the coarse grid correction by a factor of , almost doubling the overall speed of the method.
However, various inhibitors can prevent proper convergence rates under spatial coarsening. The "degree of anisotropy" in the discrete problem (denoted by , coined in [14]) is very significant for the behavior of the method. Assuming properly working spatial solvers, the degree of anisotropy is indeed the only potential inhibitor for the heat equation. In fact, if is above a certain threshold and no other inhibitor is active, then spatial coarsening has no negative effect on the convergence rate. Therefore we use spatial coarsening only if the coarsening rule is fulfilled
with being the time step size and the spatial mesh size. The threshold depends on the time-stepping scheme, the spatial discretization method, the smoothing steps, the dimension and the underlying differential equation. For practical applications this rule needs to be generalized to generic parabolic PDEs with a generic differential operator
By a variable substitution argument we can account for the parameters, and by defining a local degree of anisotropy we can account for the variation of these values over the spatial domain. The final coarsening rule is defined as
(3) 
So for each element of the spatial mesh this condition needs to be satisfied for spatial coarsening. In practice, with piecewise constant parameters and other assumptions, simplifications for checking this rule can be made, as explained in [28]. To measure we use the length of the longest edge of the tetrahedron; equivalent choices would be reflected in the value of .
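In code, checking the generalized rule element by element could look like the sketch below. The local anisotropy formula nu*tau/(sigma*h**2) is our reading of the rule for an operator of the form sigma*u_t - div(nu*grad u), and the critical threshold is problem dependent, as stated above:

```python
def spatial_coarsening_allowed(elements, tau, critical):
    """Return True if the local coarsening rule holds on every element.
    elements: iterable of (nu, sigma, h) per element, with h the length
    of the longest edge of the tetrahedron (as chosen in the paper).
    critical: the threshold, to be determined by LFA or experiments."""
    return all(nu * tau / (sigma * h ** 2) >= critical
               for nu, sigma, h in elements)
```

With piecewise constant parameters only one check per material region and mesh level is actually needed, matching the simplifications mentioned above.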
The determination of in each case can be done either by a local Fourier analysis (LFA) or by numerical experiments. Both of these approaches are demonstrated in this paper.
3 Eddy Currents
After the successful application of this scheme to the heat equation [7], we attempt to solve the eddy-current equation
(4)  σ ∂_t u + ∇×(ν ∇×u) = f
under a regularization assumption, to avoid dealing with differential algebraic equations for now. We also always use homogeneous Dirichlet or Neumann boundary conditions and a homogeneous initial condition. Applying an implicit Euler scheme and lowest order Nédélec elements [21] to the semi-variational formulation
yields the discrete time-stepping scheme (1), where the occurring matrices are defined by the basis of the Nédélec finite elements
3.1 Troubles with Spatial Coarsening
As is positive semidefinite and has real eigenvalues, it is not surprising that the pure time multigrid method without spatial coarsening works reasonably well, with convergence rates mostly below . For the idealized case of an exact solver for , this rate is proven by inspecting the test equation (5) as in [23, ch. 4.2]. However, for the method as described so far we come to the conclusion: a coarse grid correction with spatial coarsening is completely ineffective regardless of . This indicates another inhibitor for spatial coarsening, which will turn out to be the nontrivial kernel of the curl operator.
This puzzling finding made us question whether the implementation was really bug-free, while the explanation is in fact purely mathematical.
3.2 An Informal Explanation
This subsection explains the underlying mechanics in an intuitive, informal way. The basis for this explanation is the test equation
(5)  ∂_t u(t) + λ u(t) = f(t),  λ ≥ 0
The case is the worst case for the convergence of our time-parallel scheme, assuming some fixed step size. For the smoother converges fast without a coarse grid correction. The semi-discretized eddy-current equation
can be diagonalized by the Eigenvectors of
, which leads with to the test equation with the eigenvalues in place of . This implies that the smoother itself will damp error components with high eigenvalues. As shown by the local Fourier analysis for differential operators [37], high eigenvalues are associated with high spatial frequencies. Damping those high spatial frequencies smooths the error, which enables spatial coarsening during the coarse grid correction. For simply connected domains, the eddy-current equation can be viewed through the scope of the Helmholtz decomposition [39]
By inserting the solenoidal part into the eddy-current equation (4) without parameters and by using the identities , we get the resulting equation
Analogously, inserting the nodal part and using results in
(6)  ∂_t (∇φ) = f
We can conclude that the solenoidal part behaves like a heat equation and should be handled well by our method. In contrast, the nodal part is equivalent to the worst case of the test equation (5). As a result, high frequencies of the gradient part are practically not damped by the smoother; there is no spatial smoothing at all, so coarsening in space renders the coarse grid correction useless.
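The Helmholtz argument can be written out explicitly. The following sketch uses the parameter-free equation ∂_t u + ∇×∇×u = f, the identity ∇×∇× = ∇(∇·) − Δ, and splits the right-hand side into solenoidal and nodal parts f_sol and f_nod (our notation):

```latex
u = \nabla\times\psi + \nabla\varphi, \qquad
\nabla\cdot(\nabla\times\psi) = 0, \qquad
\nabla\times\nabla\varphi = 0,
\\[4pt]
\text{solenoidal part:}\quad
\partial_t(\nabla\times\psi) + \nabla\times\nabla\times(\nabla\times\psi)
  = \partial_t(\nabla\times\psi) - \Delta(\nabla\times\psi) = f_{\mathrm{sol}},
\\[4pt]
\text{nodal part:}\quad
\partial_t(\nabla\varphi) + \underbrace{\nabla\times\nabla\times(\nabla\varphi)}_{=0}
  = \partial_t(\nabla\varphi) = f_{\mathrm{nod}}.
```

The solenoidal line is a heat equation; the nodal line reproduces (6), whose high frequencies the smoother cannot damp.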
However, equation (6) can be solved by just integrating the right-hand side. Conceptually, the error in the nodal component can be calculated by the operator
(7) 
The interpretation of this procedure is as follows.

- : Calculate the residual.
- : Eliminate the solenoidal part.
- : Revert for the nodal part.
- : Integrate the equation.
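A minimal sketch of these four steps for the discrete all-at-once system follows; it assumes, as in Section 3.3 below, a discrete gradient matrix G and a nodal Laplace matrix L, with a running sum over time steps as the discrete integration. The exact operator symbols and scalings are elided in the text, so this is our illustrative reading:

```python
import numpy as np

def nodal_auxiliary_correction(apply_system, G, L, u, f):
    """Correct the nodal (gradient) error component of the iterate u.
    apply_system: applies the all-at-once operator to u (shape (N, n)).
    G: discrete gradient matrix, L: nodal Laplace stiffness matrix.
    The steps mirror the list above: residual, eliminate the solenoidal
    part, revert for the nodal part, integrate over time."""
    r = f - apply_system(u)              # 1) calculate the residual
    p = np.linalg.solve(L, G.T @ r.T).T  # 2)+3) project onto nodal potentials
    p = np.cumsum(p, axis=0)             # 4) integrate: sum over time steps
    return u + (G @ p.T).T               # map potentials back to edge dofs
```

For a pure-gradient right-hand side and a trivial time-stepping operator this single correction is exact, matching the observation that (6) is solved by integrating the right-hand side.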
This can be seen as an extension of the method [11] by an additional time component. In this notation, the nodal auxiliary space correction by R. Hiptmair would be denoted by
3.3 Nodal Auxiliary Space Correction
With the previous insights in mind we can properly implement the nodal auxiliary correction method. If Nédélec elements of any order are used, an appropriate conforming finite element space can be chosen according to [42]. We define as the discrete gradient matrix from the FE space to the Nédélec FE space and as the nodal Laplace stiffness matrix for the operator . If we also define as the operator which sums up all previous time steps and the current one,
then the correction can be expressed as
(8) 
In our notation the spatial operators above are applied to each time step separately, so we can omit the usual tensor products for clarity. Furthermore, through a time-multigrid approach, the already trivial operator could be made fully time-parallel.

4 Hybrid Smoother
The aforementioned nodal auxiliary correction (8), denoted by , can be combined with the standard smoother (2), denoted by , to form a hybrid smoother. Like matrix products, the application is read from right to left, so is one application of followed by an application of . We restrict ourselves to the same smoother during pre- and post-smoothing, with reversed order. The following local Fourier analysis indicates that more than one application of in the hybrid smoother has no apparent benefit, therefore we restrict ourselves to applying it just once.
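Read right to left like a product of operators, the hybrid smoother is just a composition. A tiny sketch, where S and A stand for one standard sweep and one auxiliary correction, both as callables updating the iterate:

```python
def hybrid_smoother(S, A):
    """Build the hybrid 'SAS'-style pre-smoother: one standard sweep S,
    one nodal auxiliary correction A, one further sweep S, read right
    to left.  The post-smoother uses the factors in reversed order,
    which for this symmetric pattern is the same composition."""
    return lambda u: S(A(S(u)))
```

For example, hybrid_smoother(S, A)(u0) applies S, then A, then S to the iterate u0.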
4.1 Local Fourier Analysis
The local Fourier analysis is based on the LFA for the 1D heat equation performed in [7], extended with LFA symbols for the 2D curl-curl equation from [3]. The implementation is done in Mathematica [41]. The full methodology and more detailed results can be found in [28, chp. 5]. Additionally, the nodal auxiliary correction is implemented. In the LFA we assume exact solutions of the spatial problems (implicit Euler step and Laplace problem); however, the numerical experiments in Section 5 show that this is not required. Further, we assume that the coarse grid correction is solved exactly.
In Figure 1 we observe how the addition of the nodal auxiliary correction introduces spatial smoothing, whereas the standard smoother features no spatial smoothing at all regardless of (3). This reaffirms our observations in Section 3.2.
Consequently, we measure the maximal spectral radius over all frequencies to get an upper bound for the convergence rates. The three investigated methods use either coarsening in space ("SemiSpace"), in time ("SemiTime"), or both ("Full"). To relate to the coarsening rule, we vary over the horizontal axis. The results in Figure 2 confirm that with the auxiliary smoother in place, using a coarser spatial mesh is possible again. Furthermore, we can observe that the "Full" method behaves identically to the "SemiTime" method for big enough and deteriorates for lower . Therefore it makes sense to use the coarsening rule (3) with . Not only does the addition of allow spatial coarsening, it also improves the convergence rates for "SemiTime", which would otherwise converge with a worse rate independent of .
Different orders like or typically perform slightly worse than , therefore we prefer putting in between. The convergence rates of or squared are approximately the same as for , which means that two V-cycles with have the same impact as one V-cycle with . In this setting is therefore strictly inferior to . On the other end, does not manage to square the convergence rate compared to (0.2 vs. 0.3 for low ) except for . In the case of low we use "SemiTime", so the coarse grid correction is responsible for half the computational effort, meaning one V-cycle of is worth V-cycles of . If was used only on the finest grid, then this would mean approximately times the computational effort. This idealized rough estimate could set on par with in certain cases; maybe could also be a good choice. In conclusion, we will focus on as a solid choice.

5 Numerical Experiments
Our practical implementation is based on the implementation of [7] and uses MFEM [19] and HYPRE [15] to deal with the arising spatial problems. The implementation is parallel in time, and everything except the auxiliary method is also parallel in space (parallelizing the latter was only prevented by time constraints). Therefore the implementation is close to being fully parallel in space and time.
The modification of the existing multigrid framework to support a different spatial problem was seamless due to the non-intrusive design of the method. Specifically, only the application of spatial prolongation/restriction, the discrete gradient matrix and functionality for
need to be provided through an interface. For example, swapping out the eddy-current equation for a heat equation is a matter of minutes, and matrix-free methods could be incorporated as well. The multigrid method itself fully relies on the interface for applying and solving spatial operators. The only hardcoded part is the vector class used, which could also be replaced by a template-based approach. There are other methods which aim for even less intrusiveness [REF reduced integration], but we are confident that for many preexisting applications the required interface could be implemented with ease.
5.1 Determining Convergence Rates and the Parameter
As we are not aware of any available local Fourier analysis for the 3D curl-curl equation, we recreate Figure 2 by using our implementation of the multigrid method with a smoother. In addition, we want to showcase how a fitting can be determined with nothing but a working implementation. To determine convergence rates, the power iteration method was applied to a test case. This means that we apply up to 200 iterations of the method to a random initial guess (values between 0 and 1) with a right-hand side. During this process we measure the reduction of the norm of the vector, which converges to the spectral radius of the method. The maximal occurring error reduction factor is taken as the convergence rate, because these factors are almost monotonically increasing. In turn this means that our method shows no worse convergence during a pre-asymptotic phase. Because we use no rescaling, the iteration is stopped if the total error reduction falls below to avoid a double underflow.
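The measurement procedure can be sketched as follows. The 200-iteration cap and the random initial guess in [0, 1) follow the description above; the concrete underflow guard value and passing the method in as a generic error-propagation callable are our assumptions:

```python
import numpy as np

def measure_convergence_rate(apply_cycle, n, max_iters=200,
                             floor=1e-250, seed=0):
    """Power-iteration style estimate of the convergence rate: apply
    the V-cycle's error propagation to a random error vector and keep
    the largest per-iteration norm reduction factor, stopping early
    once the total reduction would risk a double underflow."""
    rng = np.random.default_rng(seed)
    err = rng.random(n)                    # random values in [0, 1)
    rate, total = 0.0, 1.0
    for _ in range(max_iters):
        norm_old = np.linalg.norm(err)
        err = apply_cycle(err)
        factor = np.linalg.norm(err) / norm_old
        rate = max(rate, factor)           # factors increase almost monotonically
        total *= factor
        if total < floor:                  # no rescaling, so guard underflow
            break
    return rate
```

For a linear error propagation the recorded factor converges to the spectral radius, so the maximum observed factor is a safe upper bound on the asymptotic rate.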
We compare the time multigrid method without coarsening in space ("SemiTime") with a variant which uses just two levels of the spatial mesh and applies full coarsening on the highest level ("Full"), as in Table 1. The reason is that the local Fourier analysis assumes the coarse grid correction to be solved exactly (two-grid cycle), so we mimic this by not using further spatial coarsening within the correction. For "Full" we want to use the spatial coarsening right on the highest level, because potentially bad convergence rates on lower levels would be obscured on the highest level, where the convergence rates are measured.
Space/Time level | 9 | 8 | 7 | 6 | 5 | 4 | 3 | 2 | 1 | 0
2                | x |   |   |   |   |   |   |   |   |
1                |   | x | x | x | x | x | x | x | x | x
0                |   |   |   |   |   |   |   |   |   |
The basic mesh consists of two tetrahedra, which are uniformly refined several times. The problem features homogeneous Neumann boundary conditions. We compare the results for different numbers of uniform refinements, so we can observe convergent behavior with an increasingly fine mesh. The spatial degrees of freedom range from 230 to 630240, with 512 time steps. The material parameters are set to 1 and is varied by using different values for . For solving the spatial problems we rely on 3 iterations of the preconditioned conjugate gradient method. The preconditioners are algebraic multigrid methods from HYPRE [15]: BoomerAMG was used for the Laplace problem and AMS for . Quite remarkably, turning off the gradient smoothing in AMS has no impact on convergence, because the gradient part is handled by the auxiliary correction. At the same time this cuts down total computation times by roughly .
Our main goal is to confirm that the method behaves similarly to the local Fourier analysis for the 2D curl-curl equation. Furthermore, we want to determine the value of in this setting by observing where the two methods "Full" and "SemiTime" start to differ.
In Figure 3 we can observe that the important parts of the curves converge with an increasing number of refinements. The only non-convergent feature is the transition from the high region with a convergence rate of to the plateau of the "SemiTime" method with a convergence rate of .
This means that our method performs better than expected in this regard. As a side note, the smoother on its own shifts to the right in the same fashion and is a suitable solver for sufficiently high and small problems. In the case of the heat equation the smoother never converges on its own and none of this shifting happens.
Figure 4 omits all cases except the most refined one for the sake of a less cluttered plot. The point where the "Full" method departs from the "SemiTime" method can be located at 0.1. When the diagonal line is extrapolated, it even appears to be around 0.075. This means that a choice of will safely guarantee that the optimal coarsening strategy is selected. This holds for the case of the 3D eddy-current equation, lowest order Nédélec elements, implicit Euler time stepping, the smoother, and the longest edge as mesh size measure.
5.2 Coarsening Rule for Locally Different Parameters
In practical applications, vastly different material parameters are very common. For example, the ratio can differ by a factor of between iron and copper, while between iron and weak conductors this gap can be far bigger. The limit case of perfect insulators would even cause an infinitely large local degree of anisotropy . This motivates the question whether the local nature of the coarsening rule (3) can indeed handle different materials. As a side note, we are not concerned with whether the preconditioner can handle jumping coefficients, because our space-time parallel method can use any suitable method to solve the spatial problems.
We assign two different material parameters to the two original tetrahedra of the test mesh. After the uniform refinements, two regions with two distinct degrees of anisotropy are given. As our method mostly benefits from increasing , it is sufficient to only deal with the smallest , as reflected in the coarsening rule (3).
In contrast, the method in [14] chooses semi-coarsening in time if and semi-coarsening in space if . If this rule is violated, both semi-coarsening in time (when ) and semi-coarsening in space (when ) yield bad convergence rates (). Therefore we are concerned that in the case of two materials with , neither coarsening approach would yield useful convergence rates. Vastly different can also arise during adaptive refinement, where the size of the elements can differ significantly.
For this "Two " test case we use mostly the same setup as for the previous test case, but we only investigate the "Full" method, use only 4 uniform refinements, allow a maximum of 100 iterations and vary on one of the two subdomains. This has no effect on the minimal , as it only increases . The minimal is again manipulated through changing .
0.01  

1  0.121  0.19  0.23  0.252  0.372  0.533  0.688  0.77 
0.083  0.159  0.213  0.242  0.34  0.498  0.662  0.753  
0.083  0.158  0.212  0.242  0.34  0.5  0.664  0.755  
0.083  0.158  0.212  0.242  0.34  0.5  0.665  0.756  
0.083  0.158  0.212  0.243  0.34  0.501  0.665  0.756  
0.083  0.157  0.212  0.243  0.34  0.501  0.665  0.756  
0.083  0.157  0.213  0.242  0.34  0.501  0.665  0.756 
6 Parallel Performance
To investigate the parallel performance of the method, we perform a weak scaling test with respect to time. In contrast to other weak scaling tests, we expand the time domain instead of refining it. This is done to keep the degree of anisotropy constant at , which allows for immediate spatial coarsening. For comparison we also run a test series without spatial coarsening.

- Spatial domain , 7768 DoF
- time-parallel processors
- Time domain: time steps with size
- Constant
- Material parameters = 1
- Residual reduction tolerance
- SAS smoother
The "Full" method always finished in 12 iterations, while the "SemiTime" method needed 14 iterations. Without some of the idealizations assumed in the analysis, it is quite possible that the "Full" method has a marginally better convergence rate than the "SemiTime" method. The corresponding runtimes can be observed in Table 3. The Adjusted Ratio is the ratio of the runtimes multiplied by 14/12 to adjust for the differing iteration counts.
#Processors    | 1     | 2     | 4     | 8     | 16    | 32    | 64    | 128   | 256   | 512
Full           | 35.3  | 35.3  | 35.4  | 39.3  | 49.8  | 49.7  | 50.2  | 50.2  | 50.4  | 52.1
SemiTime       | 74.6  | 75.8  | 77.4  | 87.6  | 112.3 | 114.1 | 115.6 | 117.6 | 118.9 | 120.8
Adjusted Ratio | 0.552 | 0.543 | 0.534 | 0.523 | 0.517 | 0.508 | 0.507 | 0.498 | 0.495 | 0.503
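The Adjusted Ratio row can be reproduced directly from the table; a one-liner, with the iteration counts 12 and 14 as reported above:

```python
def adjusted_ratio(t_full, t_semi, it_full=12, it_semi=14):
    """Runtime ratio per iteration: (t_full/it_full) / (t_semi/it_semi),
    i.e. the raw runtime ratio multiplied by 14/12 to account for the
    different iteration counts of the two methods."""
    return (t_full / it_full) / (t_semi / it_semi)
```

For the single-processor column this gives 35.3/12 divided by 74.6/14 ≈ 0.552, matching the first table entry.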
We can clearly observe that the "Full" method scales better, with barely any increase in computation times in the upper range. The sudden increase around the 16-core mark can most likely be attributed to the architecture of the cluster, which has a node size of 16 cores.
It would have been interesting to run the scaling tests on even bigger clusters, where possibly a more severe discrepancy between the two methods would surface.
7 Conclusion
This paper demonstrates the difficulties in solving the eddy-current equation with a space-time multigrid method that also uses spatial coarsening. The nontrivial kernel of the curl operator is identified as the cause of these problems, and a nodal auxiliary space correction with time integration is devised to restore good convergence rates. This procedure is analogous to the method in [11], but with an additional time component.
The local Fourier analysis of the original and the modified method confirms the effectiveness of the auxiliary correction, and in fact led the authors of this paper to devise the auxiliary correction in the first place. From the gained insights, we expect similar problems to occur whenever the spatial operator features a nontrivial kernel. Furthermore, we expect other space-time multigrid methods to experience similar problems and to profit from an implementation of the described universal correction (7). Even multigrid methods without spatial coarsening could benefit from this modification, because it also improves the convergence rates of our pure time-multigrid method.
We proceed to determine convergence rates and with numerical tests. This result holds for the 3D eddy-current equation and one specific configuration of the method. The coarsening rule passes a basic test for robustness, which involves solving test problems with vastly different local degrees of anisotropy .
We finally demonstrate with the help of weak scaling tests, how spatial coarsening can improve parallel performance.
In conclusion, the explored "nodal auxiliary space correction" is a highly useful tool to enhance space-time multigrid methods for the eddy-current equation.
Acknowledgment
The research was funded by the Austrian Science Fund (FWF) under the grant W1214-N15, project DK8. We further want to thank the RICAM institute for enabling computations on the RADON1 cluster and the people associated with the NUMA institute for helpful comments and insights. We especially want to thank Ulrich Langer for fruitful discussions. Thanks also go to Bianca Klammer for additional proofreading.
References
 [1] (2017) A multigrid perspective on the parallel full approximation scheme in space and time. Numer. Linear Algebra Appl. 24 (6), pp. e2110, 24. External Links: ISSN 10705325, Document, Link, MathReview (Quoc Thong Le Gia) Cited by: §1.
 [2] (2018) Asymptotic convergence of the parallel full approximation scheme in space and time for linear problems. Numer. Linear Algebra Appl. 25 (6), pp. e2208, 22. External Links: ISSN 10705325, Document, Link, MathReview (Benjamin WiLian Ong) Cited by: §1.
 [3] (2008) Local Fourier analysis of multigrid for the curl-curl equation. SIAM J. Sci. Comput. 30 (4), pp. 1730–1755. External Links: ISSN 10648275, Document, Link, MathReview (Marco Donatelli) Cited by: §4.1.
 [4] (2017) Twolevel convergence theory for multigrid reduction in time (MGRIT). SIAM J. Sci. Comput. 39 (5), pp. S501–S527. External Links: ISSN 10648275, Document, Link, MathReview (Benjamin WiLian Ong) Cited by: §1.
 [5] (2014) Parallel time integration with multigrid. SIAM J. Sci. Comput. 36 (6), pp. C635–C661. External Links: ISSN 10648275, Document, Link, MathReview (Bruno Carpentieri) Cited by: §1.
 [6] (2017) Multigrid reduction in time for nonlinear parabolic problems: a case study. SIAM J. Sci. Comput. 39 (5), pp. S298–S322. External Links: ISSN 10648275, Document, Link, MathReview Entry Cited by: §1.
 [7] (2016) Analysis of a new space-time parallel multigrid algorithm for parabolic problems. SIAM J. Sci. Comput. 38 (4), pp. A2173–A2208. External Links: ISSN 10648275, Document, Link, MathReview (Anaïs Crestetto) Cited by: §1, §2.1, §3, §4.1, §5.
 [8] (2015) 50 years of time parallel time integration. In Multiple shooting and time domain decomposition methods, Contrib. Math. Comput. Sci., Vol. 9, pp. 69–113. External Links: MathReview Entry Cited by: §1.
 [9] (2018) A non-intrusive parallel-in-time adjoint solver with the XBraid library. Comput. Vis. Sci. 19 (34), pp. 85–95. External Links: ISSN 14329360, Document, Link, MathReview Entry Cited by: §1.
 [10] (1984) Parabolic multigrid methods. In Computing methods in applied sciences and engineering, VI (Versailles, 1983), pp. 189–197. External Links: MathReview Entry Cited by: §1.
 [11] (1999) Multigrid method for Maxwell’s equations. SIAM J. Numer. Anal. 36 (1), pp. 204–225. External Links: ISSN 00361429, Document, Link, MathReview (Jacques Rappaz) Cited by: §3.2, §7.
 [12] (2018) Time-multipatch discontinuous Galerkin space-time isogeometric analysis of parabolic evolution problems. Electron. Trans. Numer. Anal. 49, pp. 126–150. External Links: Document, Link, MathReview (H. T. Lau) Cited by: §1.

 [13] (1995) An algorithm with polylog parallel complexity for solving parabolic partial differential equations. SIAM J. Sci. Comput. 16 (3), pp. 531–541. External Links: ISSN 10648275, Document, Link, MathReview Entry Cited by: §1.
 [14] (1995) A space-time multigrid method for parabolic partial differential equations. SIAM J. Sci. Comput. 16 (4), pp. 848–864. External Links: ISSN 10648275, Document, Link, MathReview (E. G. Dyakonov) Cited by: §1, §2.2, §5.2.
 [15] (2018) HYPRE: scalable linear solvers and multigrid methods. Note: https://computing.llnl.gov/projects/hyprescalablelinearsolversmultigridmethods Cited by: §5.1, §5.
 [16] (2015) Numerical simulation of skin transport using Parareal. Comput. Vis. Sci. 17 (2), pp. 99–108. External Links: ISSN 14329360, Document, Link, MathReview Entry Cited by: §1.
 [17] (2016) A parallel multigrid reduction in time method for power systems. In 2016 IEEE Power and Energy Society General Meeting (PESGM), pp. 1–5. External Links: Document, ISSN Cited by: §1.
 [18] (1987) Multigrid dynamic iteration for parabolic equations. BIT 27 (2), pp. 216–234. External Links: ISSN 00063835, Document, Link, MathReview (Randall J. LeVeque) Cited by: §1.
 [19] MFEM: modular finite element methods library. Note: mfem.org External Links: Document Cited by: §5.
 [20] (2015) Interweaving PFASST and parallel multigrid. SIAM J. Sci. Comput. 37 (5), pp. S244–S263. External Links: ISSN 10648275, Document, Link, MathReview Entry Cited by: §1.
 [21] (2003) Finite element methods for Maxwell’s equations. Numerical Mathematics and Scientific Computation, Oxford University Press, New York. External Links: ISBN 0198508883, Document, Link, MathReview (Per Lötstedt) Cited by: §3.
 [22] (2018) A fully parallelizable space-time multilevel Monte Carlo method for stochastic differential equations with additive noise. SIAM J. Sci. Comput. 40 (3), pp. C388–C400. External Links: ISSN 10648275, Document, Link, MathReview (Mikhail Urusov) Cited by: §1.
 [23] (2013) Spacetime methods: fast solvers and applications. Ph.D. Thesis, Graz University of Technology. Cited by: §3.1.
 [24] (1964) Parallel methods for integrating ordinary differential equations. Comm. ACM 7, pp. 731–733. External Links: ISSN 00010782, Document, Link, MathReview Entry Cited by: §1.
 [25] (2013) Parareal for diffusion problems with space- and time-dependent coefficients. In Domain Decomposition Methods in Science and Engineering XXII, Lecture Notes in Computational Science and Engineering, Vol. 104, pp. 3–10. External Links: Link Cited by: §1.
 [26] (2016) Spectral deferred corrections with fastwave slowwave splitting. SIAM J. Sci. Comput. 38 (4), pp. A2535–A2557. External Links: ISSN 10648275, Document, Link, MathReview (Bülent Karasözen) Cited by: §1.
 [27] (2018) Parallel-in-time solution of power systems with scheduled events. In 2018 IEEE Power Energy Society General Meeting (PESGM), pp. 1–5. External Links: Document, ISSN Cited by: §1.
 [28] (2018) A space-time parallel multigrid method for the transient eddy-current equation. Master's Thesis, JKU Linz. Note: https://www.numa.unilinz.ac.at/Teaching/Diplom/Finished/schwalsberger External Links: Link Cited by: §2.1, §2.2, §4.1, §5.2.
 [29] (2012) A massively space-time parallel N-body solver. In SC '12: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, pp. 1–11. External Links: Document, ISSN Cited by: §1.
 [30] (2013) A space-time parallel solver for the three-dimensional heat equation. In Parallel Computing: Accelerating Computational Science and Engineering (CSE), Advances in Parallel Computing, Vol. 25, pp. 263–272. External Links: Document, Link Cited by: §1.
 [31] (2015) A multilevel spectral deferred correction method. BIT 55 (3), pp. 843–867. External Links: ISSN 15729125, Document, Link Cited by: §1.
 [32] (2012) Integrating an N-Body Problem with SDC and PFASST. Lecture Notes in Computational Science and Engineering, Vol. 98, pp. 637–645. External Links: ISBN 9783319057880 (print), Document, Link Cited by: §1.
 [33] (2013) Inexact spectral deferred corrections. In Domain Decomposition Methods in Science and Engineering XXII, Lecture Notes in Computational Science and Engineering, Vol. 104, pp. 127–133. External Links: Link Cited by: §1.
 [34] (2017) Toward faulttolerant parallelintime integration with PFASST. Parallel Comput. 62, pp. 20–37. External Links: ISSN 01678191, Document, Link, MathReview Entry Cited by: §1.
 [35] (2018) Parallelizing spectral deferred corrections across the method. Comput. Vis. Sci. 19 (34), pp. 75–83. External Links: ISSN 14329360, Document, Link, MathReview Entry Cited by: §1.
 [36] (2013) Convergence of Parareal for the Navier-Stokes Equations Depending on the Reynolds Number. In Numerical Mathematics and Advanced Applications - ENUMATH 2013, Lecture Notes in Computational Science and Engineering, Vol. 103, Cham, pp. 195–202. External Links: ISBN 9783319107042 (print), Document, Link Cited by: §1.
 [37] (2001) Multigrid. Academic Press, Inc., San Diego, CA. Note: With contributions by A. Brandt, P. Oswald and K. Stüben External Links: ISBN 012701070X, MathReview (Dietrich Braess) Cited by: §3.2.
 [38] (1993) Parallel multigrid waveform relaxation for parabolic problems. Teubner Skripten zur Numerik. [Teubner Scripts on Numerical Mathematics], B. G. Teubner, Stuttgart. External Links: ISBN 3519027178, Document, Link, MathReview (Erwin Schechter) Cited by: §1.
 [39] (1858) Über Integrale der hydrodynamischen Gleichungen, welche den Wirbelbewegungen entsprechen. J. Reine Angew. Math. 55 (3), pp. 25–55. External Links: ISSN 00754102 Cited by: §3.2.
 [40] (2015) A highorder Boris integrator. J. Comput. Phys. 295, pp. 456–474. External Links: ISSN 00219991, Document, Link, MathReview (Vasile Drăgan) Cited by: §1.
 [41] Mathematica, Version 12.0. Note: Champaign, IL, 2019 External Links: Link Cited by: §4.1.
 [42] (2006) High order finite element methods for electromagnetic field computation. Ph.D. Thesis, JKU Linz. Note: https://www.numa.unilinz.ac.at/Teaching/PhD/Finished/zaglmayr.de.shtml External Links: Link Cited by: §3.3.