
A parallel space-time multigrid method for the eddy-current equation

We expand the applicabilities and capabilities of an already existing space-time parallel method based on a block Jacobi smoother. First we formulate a more detailed criterion for spatial coarsening, which enables the method to deal with unstructured meshes and varying material parameters. Further we investigate the application to the eddy-current equation, where the non-trivial kernel of the curl operator causes severe problems. This is remedied with a new nodal auxiliary space correction. We proceed to identify convergence rates by local Fourier analysis and numerical experiments. Finally, we present a numerical experiment which demonstrates its excellent scaling properties.





1 Introduction

Around 2004, processor speed ceased to increase due to physical limits; instead, we have since experienced exponential growth in the number of processing units. This manifests in modern processing clusters featuring from 1000 to more than ten million CPUs as of 2019. Meanwhile, applications that are parallel in space frequently fail to utilize more than a few thousand CPUs efficiently (depending on problem size). To exploit this processing power, massively parallel algorithms are required. For time-dependent simulations the time dimension offers potential for further parallelization; however, the classical time-stepping concept is fundamentally sequential. Primarily for parabolic evolution equations, time-parallel methods have been developed to break this "curse of sequentiality". After the first concept for ordinary differential equations more than 55 years ago [24], a whole family of time-parallel methods of all kinds has blossomed; see [8] for a historical overview.

Despite still being sequential in time, the method [10] by Hackbusch laid some groundwork for time-parallel multigrid schemes and defied the notion that the problem must be solved exactly forward in time. Lubich and Ostermann took a different approach by presenting a waveform relaxation type space-time multigrid method [18]. An in-depth discussion of waveform relaxation methods can be found in the monograph [38]. A further variant, described as space-time concurrent multigrid waveform relaxation with cyclic reduction, can be found in [13].

A contemporary method similar to ours is multigrid reduction in time [5], which has been actively expanded by a team centered around the Lawrence Livermore National Laboratory [5, 4, 6, 17, 9, 27]. Further results can be found on the XBraid project website. Another very active group is at Forschungszentrum Jülich, with diverse publications about time-parallel methods [29, 30, 32, 31, 36, 40, 25, 33, 20, 16, 26, 34, 1, 35, 2].

The precursor to the method used here is space-time multigrid with point-wise relaxation [14]. The method used in this paper [7] replaces the point-wise relaxation with block relaxation; it can thus be described as space-time multigrid with block relaxation. Recent publications demonstrated that this method can also be applied to stochastic differential equations [22] and in combination with isogeometric analysis methods [12]. This motivated us to investigate the application of our method to the eddy-current equation. Furthermore, we generalize concepts to expand our scope towards more complex problems.

The eddy-current equations are used to study low-frequency electromagnetic applications, like electric motors and generators, induction furnaces, non-destructive testing, and many more. In practice, time-harmonic approaches are often preferred over transient models due to lower computational costs. However, time-harmonic approaches struggle with nonlinear materials (saturation effects) and moving geometry. As a stepping stone to these more complex problems, we only consider the linear case with constant geometry in this paper.

The remainder of this paper is organized as follows. In Section 2 the space-time parallel method is introduced and the concept of a coarsening rule is presented and expanded. Section 3 applies the method to the eddy-current equation and pinpoints the core problem of the naive approach. We then derive a remedy to this problem, namely a "nodal auxiliary space correction", which can be used to define a hybrid smoother. In Section 4, we perform a local Fourier analysis of the introduced hybrid smoother for the 2D curl-curl equation. In Section 5, we mimic the local Fourier analysis with numerical experiments for the 3D eddy-current equation, and further investigate the validity of a new generalized coarsening rule. Weak scaling results of the improved method are given in Section 6. Conclusions are drawn in Section 7, where we also discuss possible implications of our findings for related models and methods.

2 The Method

2.1 The Space-Time Parallel Method

For simplicity we reiterate the space-time parallel multigrid method presented in [7] and [28] only for the case of the implicit Euler scheme with uniform step width $\tau$. It should be noted that we are not concerned with the approximation quality; we only want to solve a given discretized problem. After the usual discretization steps for a PDE in space and time (see Section 3 for an example) we need to solve an equation of the form

$$(M_h + \tau K_h)\,\mathbf{u}_n = M_h \mathbf{u}_{n-1} + \tau \mathbf{f}_n, \qquad n = 1, \dots, N, \tag{1}$$

with mass matrix $M_h$ and stiffness matrix $K_h$. By defining $A_\tau := M_h + \tau K_h$ we can rewrite the time-stepping scheme as a huge linear system

$$\begin{pmatrix} A_\tau & & & \\ -M_h & A_\tau & & \\ & \ddots & \ddots & \\ & & -M_h & A_\tau \end{pmatrix} \begin{pmatrix} \mathbf{u}_1 \\ \mathbf{u}_2 \\ \vdots \\ \mathbf{u}_N \end{pmatrix} = \begin{pmatrix} M_h \mathbf{u}_0 + \tau \mathbf{f}_1 \\ \tau \mathbf{f}_2 \\ \vdots \\ \tau \mathbf{f}_N \end{pmatrix}.$$

This system can be compactly written as $\mathcal{L}\mathbf{u} = \mathbf{f}$.

The standard smoother in this method is a block Jacobi preconditioned Richardson scheme with damping parameter $\omega$,

$$\mathbf{u}^{k+1} = \mathbf{u}^k + \omega\,(I_N \otimes A_\tau)^{-1}\,(\mathbf{f} - \mathcal{L}\mathbf{u}^k). \tag{2}$$

The overall method consists of pre-smoothing, a recursive coarse time-grid correction and post-smoothing. This V-cycle over the time meshes is guaranteed to converge geometrically, as long as the application of $A_\tau^{-1}$ is sufficiently well approximated and $M_h^{-1} K_h$ is positive semidefinite with real eigenvalues.

The presented smoother is simultaneously parallel in space and time. Because all operations are a Kronecker product of a time operation and a spatial operation, all spatial operations can be executed with the usual space-parallel approaches. On top of this, the time mesh can also be subdivided among groups of processors. Only the application of $\mathcal{L}$ requires forward communication between neighboring time nodes.
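As an illustration, one step of the block Jacobi smoother takes only a few lines of code. The following toy implementation for a 1D heat equation is our own sketch under the reconstructed notation above ($A_\tau = M_h + \tau K_h$, damping $\omega = 1/2$), not the authors' implementation; note that the block solves are independent across time steps and hence time parallel.

```python
import numpy as np

def heat_matrices(nx, h):
    """Lumped mass matrix M and stiffness matrix K for a 1D Laplacian."""
    M = h * np.eye(nx)
    K = (1.0 / h) * (2 * np.eye(nx) - np.eye(nx, k=1) - np.eye(nx, k=-1))
    return M, K

def block_jacobi_smoother(u, f, M, A_tau, omega=0.5):
    """One damped block Jacobi step (2) on the block lower bidiagonal
    space-time system: diagonal blocks A_tau, subdiagonal blocks -M.
    u and f hold one spatial vector per time step, shape (nt, nx)."""
    nt = u.shape[0]
    r = f.copy()
    r[0] -= A_tau @ u[0]
    for n in range(1, nt):                    # residual f - L u
        r[n] -= A_tau @ u[n] - M @ u[n - 1]
    # invert only the diagonal blocks: every time step independently
    corr = np.linalg.solve(A_tau, r.T).T
    return u + omega * corr

nx, nt, h, tau = 31, 16, 1.0 / 32, 1e-3
M, K = heat_matrices(nx, h)
A_tau = M + tau * K
u = np.random.default_rng(0).random((nt, nx))   # random error, zero RHS
f = np.zeros((nt, nx))
for _ in range(50):
    u = block_jacobi_smoother(u, f, M, A_tau)
print(np.abs(u).max())   # the error contracts towards the exact solution 0
```

With a zero right-hand side the exact solution is zero, so the printed value directly measures the remaining error after 50 smoothing steps.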

2.2 Spatial Coarsening

For performance reasons it is desirable to use not only a coarser time mesh for the coarse grid correction, but also a coarser spatial mesh. Without spatial coarsening, for example, the coarse grid correction is responsible for approximately half the computational effort and is the less parallel part of the algorithm. In 3D the use of a coarser spatial mesh reduces the effort for the coarse grid correction by a factor of approximately 8, almost doubling the overall speed of the method.

However, various inhibitors can prevent proper convergence rates under spatial coarsening. The "degree of anisotropy" in the discrete problem (denoted by $\lambda$, coined in [14]) is very significant for the behavior of the method. Assuming properly working spatial solvers, the degree of anisotropy is indeed the only potential inhibitor for the heat equation. In fact, if $\lambda$ is above a certain threshold and no other inhibitor is active, then spatial coarsening has no negative effect on the convergence rate. Therefore we use spatial coarsening only if the coarsening rule

$$\lambda = \frac{\tau}{h^2} \geq \lambda_c$$

is fulfilled, with $\tau$ being the time step size and $h$ the spatial mesh size. The threshold $\lambda_c$ depends on the used time-stepping scheme, the spatial discretization method, the smoothing steps, the dimension and the underlying differential equation. This rule needs to be generalized for practical applications to generic parabolic PDEs with a generic differential operator and coefficients such as $\sigma$ and $\nu$. By a variable substitution argument we can account for the parameters, and by defining a local degree of anisotropy

$$\lambda_T = \frac{\nu_T\,\tau}{\sigma_T\,h_T^2}$$

on each element $T$ we can account for variation of the values over the spatial domain. The final coarsening rule is defined as

$$\min_T \lambda_T \geq \lambda_c. \tag{3}$$

So for each element $T$ of the spatial mesh this condition needs to be satisfied for spatial coarsening. In practice, however, with piecewise constant parameters and other assumptions, simplifications for checking this rule can be made, as explained in [28]. To measure $h_T$ we decide to use the length of the longest edge of the tetrahedron; equivalent choices would simply be reflected in the value of $\lambda_c$.

The determination of $\lambda_c$ in each case can be done either by a local Fourier analysis (LFA) or by numerical experiments. Both of these approaches are demonstrated in this paper.
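A minimal sketch of how rule (3) might be checked in code. The formula for $\lambda_T$ follows the reconstruction above, and the default threshold $\lambda_c = 0.125$ anticipates the value determined experimentally in Section 5; both the helper name and the element representation are our own assumptions, not the authors' implementation.

```python
# Hypothetical helper for the generalized coarsening rule (3):
# coarsen in space only if the local degree of anisotropy
#   lambda_T = nu_T * tau / (sigma_T * h_T**2)
# exceeds the threshold lambda_c on *every* element.

def allow_spatial_coarsening(elements, tau, lambda_c=0.125):
    """elements: iterable of (h_T, sigma_T, nu_T), with h_T the
    longest edge of the tetrahedron T."""
    lam_min = min(nu * tau / (sigma * h * h) for h, sigma, nu in elements)
    return lam_min >= lambda_c

# Two materials on one mesh: the smallest local anisotropy decides.
mesh = [(0.1, 1.0, 1.0), (0.1, 5.0, 1.0)]     # (h_T, sigma_T, nu_T)
print(allow_spatial_coarsening(mesh, tau=0.01))   # lam_min = 0.2  -> True
print(allow_spatial_coarsening(mesh, tau=0.001))  # lam_min = 0.02 -> False
```

Note how halving the time step can flip the decision: the rule is a property of the discretization, not of the mesh alone.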

3 Eddy-Currents

After the successful application of this scheme to the heat equation [7], we attempt to solve the eddy-current equation

$$\sigma\,\partial_t \mathbf{u} + \operatorname{curl}(\nu \operatorname{curl} \mathbf{u}) = \mathbf{f} \tag{4}$$

under the regularization assumption $\sigma > 0$ to avoid dealing with differential algebraic equations for now. We also always use homogeneous Dirichlet ($\mathbf{u} \times \mathbf{n} = 0$ on $\partial\Omega$) or Neumann ($(\nu \operatorname{curl} \mathbf{u}) \times \mathbf{n} = 0$ on $\partial\Omega$) boundary conditions and a homogeneous initial condition ($\mathbf{u} = 0$ for $t = 0$). Applying an implicit Euler scheme and lowest order Nedelec elements [21] to the semi-variational formulation

$$\int_\Omega \sigma\,\partial_t \mathbf{u} \cdot \mathbf{v} \,\mathrm{d}x + \int_\Omega \nu \operatorname{curl}\mathbf{u} \cdot \operatorname{curl}\mathbf{v} \,\mathrm{d}x = \int_\Omega \mathbf{f} \cdot \mathbf{v} \,\mathrm{d}x$$

yields the discrete time-stepping scheme (1), where the occurring matrices are defined via the basis functions $\{\varphi_i\}$ of the Nedelec finite elements:

$$M_h[i,j] = \int_\Omega \sigma\,\varphi_j \cdot \varphi_i \,\mathrm{d}x, \qquad K_h[i,j] = \int_\Omega \nu \operatorname{curl}\varphi_j \cdot \operatorname{curl}\varphi_i \,\mathrm{d}x.$$


3.1 Troubles with Spatial Coarsening

As $M_h^{-1} K_h$ is positive semidefinite and has real eigenvalues, it is not surprising that the pure time multigrid method without spatial coarsening works reasonably well, with uniformly bounded convergence rates. For the idealized case of an exact solver for $A_\tau$, this convergence rate can be proven by inspecting the test equation (5) as in [23, ch. 4.2]. However, for the method as it is we come to the conclusion: a coarse grid correction with spatial coarsening is completely ineffective regardless of $\lambda$. This indicates another inhibitor for spatial coarsening, which will turn out to be the non-trivial kernel of the curl operator.

This puzzling finding made us question whether the implementation was really bug-free, while the explanation is in fact purely mathematical.

3.2 An Informal Explanation

This subsection explains the underlying mechanics in an intuitive, informal way. The basis for this explanation is the test equation

$$\partial_t u(t) + \mu\,u(t) = f(t), \qquad \mu \geq 0. \tag{5}$$

The case $\mu = 0$ is the worst case for the convergence of our time parallel scheme, assuming some fixed step size. For large $\mu$ the smoother converges fast without a coarse grid correction. The semi-discretized eddy-current equation

$$M_h\,\partial_t \mathbf{u}_h(t) + K_h\,\mathbf{u}_h(t) = \mathbf{f}_h(t)$$

can be diagonalized by the eigenvectors of $M_h^{-1} K_h$, which leads to the test equation with the eigenvalues $\mu_i$ of $M_h^{-1} K_h$ in place of $\mu$. This implies that the smoother itself will dampen error components with high eigenvalues. As shown by the local Fourier analysis for differential operators [37], high eigenvalues are associated with high spatial frequencies. Dampening those high spatial frequencies smooths the error, which enables spatial coarsening during the coarse grid correction.
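The diagonalization argument can be made concrete with a toy computation: for each eigenvalue $\mu$ of $M_h^{-1}K_h$, one implicit Euler step multiplies the corresponding error mode by $1/(1 + \tau\mu)$, so a singular stiffness matrix ($\mu = 0$, mimicking the kernel of the curl operator) leaves that mode completely undamped. The small matrix below is a stand-in of our own choosing, not a real curl-curl discretization.

```python
import numpy as np

# K with a non-trivial kernel (a path-graph Laplacian: constants are in
# ker K), playing the role of the curl-curl stiffness matrix.
tau = 0.1
K = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
M = np.eye(3)

mus = np.linalg.eigvalsh(np.linalg.solve(M, K))   # eigenvalues of M^{-1} K
damping = 1.0 / (1.0 + tau * mus)                 # per-step mode damping
for mu, d in zip(mus, damping):
    print(f"mu = {mu:5.2f}   per-step damping = {d:.3f}")
# the mu ~ 0 mode is damped by a factor ~ 1.000: no smoothing at all
```

Modes with large $\mu$ are visibly damped in every step, while the kernel mode survives unchanged; this is exactly the mechanism that defeats spatial coarsening for the gradient part.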

For simply connected domains, the eddy-current equation can be viewed through the scope of the Helmholtz decomposition [39]

$$\mathbf{u} = \nabla \phi + \operatorname{curl} \mathbf{w}.$$

By inserting the solenoidal part ($\mathbf{u}_s = \operatorname{curl}\mathbf{w}$) into the eddy-current equation (4) without parameters and by using the identities $\operatorname{div}\operatorname{curl} = 0$ and $\operatorname{curl}\operatorname{curl} = \nabla\operatorname{div} - \Delta$, we get the resulting equation

$$\partial_t \mathbf{u}_s - \Delta \mathbf{u}_s = \mathbf{f}_s.$$

Analogously inserting the nodal part ($\mathbf{u}_\nabla = \nabla\phi$) and using $\operatorname{curl}\nabla = 0$ results in

$$\partial_t \mathbf{u}_\nabla = \mathbf{f}_\nabla. \tag{6}$$

We can conclude from this that the solenoidal part behaves like a heat equation and should be handled well by our method. In contrast, the nodal part is equivalent to the worst case $\mu = 0$ of the test equation (5). As a result, high frequencies of the gradient part are practically not dampened by the smoother, so there is no spatial smoothing at all, such that coarsening in space renders the coarse grid correction useless.

However, equation (6) can be solved by just integrating the right hand side. Conceptually, the error in the nodal component can be calculated by the operator

$$\mathbf{e}_\nabla(t) = \nabla \int_0^t \Delta^{-1} \operatorname{div} \mathbf{r}(s) \,\mathrm{d}s. \tag{7}$$

The interpretation of this procedure is as follows.

  • $\mathbf{r}$: Calculate the residual.

  • $\operatorname{div}$: Eliminate the solenoidal part.

  • $\Delta^{-1}$: Revert $\operatorname{div}\nabla = \Delta$ for the nodal part.

  • $\int_0^t \cdot \,\mathrm{d}s$: Integrate the equation.

This can be seen as an expansion of the method [11] with an additional time component. In this notation the nodal auxiliary space correction by R. Hiptmair would be denoted by $\nabla \Delta^{-1} \operatorname{div}$.

3.3 Nodal Auxiliary Space Correction

With the previous insights in mind we can properly implement the nodal auxiliary correction method. If Nedelec elements of any order are used, an appropriate $H^1$-conforming finite element space can be chosen according to [42]. We define $G$ as the discrete gradient matrix from the $H^1$-conforming FE-space to the Nedelec FE-space and $L_h$ as the nodal Laplace stiffness matrix for the corresponding Laplace operator. If we also define $\Sigma$ as the operator which sums up all previous and the current time step,

$$(\Sigma\,\mathbf{v})_n = \sum_{m=1}^{n} \mathbf{v}_m,$$

then the correction can be expressed as

$$\mathbf{u}^{k+1} = \mathbf{u}^k + G\,\Sigma\,L_h^{-1}\,G^{T}\,(\mathbf{f} - \mathcal{L}\mathbf{u}^k). \tag{8}$$

In our notation the spatial operators above are applied to each time step separately, so we can omit the usual tensor products ($I_N \otimes G$, etc.) for more clarity. Furthermore, through a time-multigrid approach, the already trivial operator $\Sigma$ could be made fully time parallel.
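In matrix terms, one application of the correction amounts to four small steps. The following schematic sketch shows the data flow in our reconstructed notation; `G`, `L_h` and the residual are random placeholders of the right shapes, not actual finite element matrices, and the function name is our own.

```python
import numpy as np

def nodal_auxiliary_correction(residual, G, L_h):
    """Schematic data flow of the correction (8), read right to left:
    G^T (map to the nodal space) -> L_h^{-1} (nodal Laplace solve) ->
    Sigma (sum over previous and current time steps) -> G (back to edges).
    residual: (nt, n_edges); G: (n_edges, n_nodes); L_h: (n_nodes, n_nodes).
    """
    nodal = residual @ G                    # G^T r, time step by time step
    phi = np.linalg.solve(L_h, nodal.T).T   # independent nodal solves
    phi = np.cumsum(phi, axis=0)            # Sigma: a running sum in time
    return phi @ G.T                        # gradients back to edge space

# shapes only: 4 time steps, 6 "edges", 3 "nodes"
rng = np.random.default_rng(1)
G = rng.standard_normal((6, 3))
L_h = G.T @ G + np.eye(3)                   # any SPD stand-in
r = rng.standard_normal((4, 6))
corr = nodal_auxiliary_correction(r, G, L_h)
print(corr.shape)                           # (4, 6): one update per time step
```

The running sum is a prefix sum, which is why the time integration could be parallelized with a parallel scan, matching the remark above that $\Sigma$ could be made fully time parallel.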

4 Hybrid Smoother

The aforementioned auxiliary nodal correction (8), denoted by $A$, can be combined with the standard smoother (2), denoted by $S$, to form a hybrid smoother. Like for matrices, the application is read from right to left, so $AS$ is one application of $S$ followed by an application of $A$. We restrict ourselves to having the same smoother during pre- and post-smoothing, with reversed order. The following local Fourier analysis indicates that more than one application of $A$ in the hybrid smoother has no apparent benefits, therefore we restrict ourselves to using $A$ just once.

4.1 Local Fourier Analysis

The local Fourier analysis is based on the LFA for the 1D heat equation performed in [7], extended with LFA symbols for the 2D curl-curl equation from [3]. The implementation is done in Mathematica [41]. The full methodology and more detailed results can be found in [28, chp. 5]. Additionally, the nodal auxiliary correction is implemented. In the LFA we assume exact solutions to the spatial problems (implicit Euler step and Laplace problem); however, the numerical experiments in Section 5 show that this is not required. Further we assume that the coarse grid correction is solved exactly.

Figure 1: Comparison between the hybrid smoother $SAS$ and the standard smoother $S$. The spectral radius for each frequency is plotted. Only the hybrid smoother has spatial smoothing properties.

In Figure 1 we observe how the addition of the auxiliary nodal correction introduces spatial smoothing, whereas the standard smoother features no spatial smoothing at all, regardless of $\lambda$ (3). This reaffirms our observations in Section 3.2.

Consequently we measure the maximal spectral radius over all frequencies to get an upper bound for convergence rates. The three investigated methods use either coarsening in space ("Semi-Space"), in time ("Semi-Time"), or both ("Full"). To relate to the coarsening rule, we vary $\lambda$ over the horizontal axis. The results in Figure 2 confirm that with the auxiliary smoother in place, using a coarser spatial mesh is possible again. Furthermore, we can observe that the "Full" method behaves identically to the "Semi-Time" method for big enough $\lambda$ and deteriorates for lower $\lambda$. Therefore it makes sense to use the coarsening rule (3) with a suitable threshold $\lambda_c$. Not only does the addition of $A$ allow spatial coarsening, but it also improves the convergence rates for "Semi-Time", which would otherwise converge with a worse rate independent of $\lambda$.

Different orders like $SA$ or $AS$ typically perform slightly worse than $SAS$, therefore we prefer putting $A$ in between. The convergence rates of $SA$ or $AS$ squared are approximately the same as for $SAS$, which means that two V-cycles with $SA$ have the same impact as one V-cycle with $SAS$ at a higher cost; in this setting $SA$ is thus strictly inferior to $SAS$. On the other end, a longer smoother such as $SASAS$ does not manage to square the convergence rate compared to $SAS$ (0.2 vs. 0.3 for low $\lambda$). In the case of low $\lambda$ we use "Semi-Time", where the coarse grid correction is responsible for half the computational effort, so the extra smoothing steps add a noticeable fraction of the total cost. This idealized rough estimate could set the longer smoother on par with $SAS$ in certain cases. In conclusion we will focus on $SAS$ as a solid choice.

Figure 2: Convergence rates for the two-grid cycle with the $SAS$ smoother as a function of $\lambda$.

5 Numerical Experiments

Our practical implementation is based on the implementation from [7] and uses MFEM [19] and HYPRE [15] to deal with the arising spatial problems. The implementation is parallel in time, while everything except the auxiliary method is also parallel in space; parallelizing the auxiliary method as well was prevented only by time constraints. Therefore the implementation is close to being parallel in space and time.

The modification of the existing multigrid framework to support a different spatial problem was seamless due to the non-intrusive design of the method. Specifically, only the application of spatial prolongation/restriction, the discrete gradient matrix $G$ and functionality for solving the spatial problems need to be provided through an interface. For example, swapping out the eddy-current equation for a heat equation is a matter of minutes, and matrix-free methods could be incorporated as well. The multigrid method itself fully relies on the interface for applying and solving spatial operators. The only hard-coded part is the used vector class, which could also be replaced by a template-based approach. There are other methods which aim for even less intrusiveness [REF reduced integration], but we are confident that for many preexisting applications the required interface could be implemented with ease.

5.1 Determining Convergence Rates and the Parameter $\lambda_c$

As we are not aware of any available local Fourier analysis for the 3D curl-curl equation, we recreate Figure 2 by using our implementation of the multigrid method with a $SAS$ smoother. In addition we want to showcase how a fitting $\lambda_c$ can be determined with nothing but a working implementation. To determine convergence rates, the power iteration method was applied to a test case. This means that we apply up to 200 iterations of the method to a random initial guess (values between 0 and 1) with a zero right hand side. During this process we measure the reduction of the norm of the vector, which converges to the spectral radius of the method. The maximal occurring error reduction factor is considered as the convergence rate, because these factors are almost monotonically increasing. This in turn means that our method has no worse convergence during a pre-asymptotic phase. Because we use no rescaling, the iteration is stopped if the total error reduction becomes small enough to risk an underflow of the double precision numbers.
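The measurement procedure described above can be sketched generically; `apply_method` stands in for one V-cycle of the space-time multigrid method, and the helper name and underflow floor are our own choices for illustration.

```python
import numpy as np

def measure_rate(apply_method, n_dof, max_iter=200, floor=1e-250, seed=0):
    """Apply the iteration to a random initial guess with zero RHS and
    return the maximal per-step error reduction factor."""
    u = np.random.default_rng(seed).random(n_dof)   # values in [0, 1)
    rate, norm, total = 0.0, np.linalg.norm(u), 1.0
    for _ in range(max_iter):
        u = apply_method(u)
        new_norm = np.linalg.norm(u)
        rate = max(rate, new_norm / norm)   # factors are ~ increasing
        total *= new_norm / norm
        norm = new_norm
        if total < floor:                   # avoid a double underflow
            break
    return rate

# sanity check with a toy linear iteration of known spectral radius 0.6:
T = np.diag([0.6, 0.3, 0.1])
print(round(measure_rate(lambda v: T @ v, 3), 3))   # -> close to 0.6
```

For a diagonalizable iteration the per-step factors approach the spectral radius from below, which is why taking the maximum over all steps is a safe estimate of the asymptotic rate.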

We compare the time multigrid method without coarsening in space ("Semi-time") with a variant which uses just two levels of the spatial mesh and applies full coarsening on the highest level ("Full"), as in Table 1. The reason is that the local Fourier analysis assumes the coarse grid correction to be solved exactly (two-grid cycle), so we mimic this by not using further spatial coarsening within the correction. For "Full" we want to use the spatial coarsening right on the highest level, because potentially bad convergence rates on lower levels would be obscured on the highest level, where the convergence rates are measured.

Time level      9 8 7 6 5 4 3 2 1 0
Space level 2   x
Space level 1     x x x x x x x x x
Table 1: Coarsening strategy example for the "Full" method test case.

The basic mesh consists of two tetrahedra, which are uniformly refined several times. The problem features homogeneous Neumann boundary conditions. We compare the results for different numbers of uniform refinements, so we can observe convergent behavior with an increasingly fine mesh. The spatial degrees of freedom range from 230 to 630240, with 512 time steps. The material parameters $\sigma$ and $\nu$ are set to 1 and $\lambda$ is varied by using different values for $\tau$.

For solving the spatial problems we rely on 3 iterations of the preconditioned conjugate gradient method. The preconditioners are algebraic multigrid methods from HYPRE [15]: BoomerAMG for the Laplace problem and AMS for $A_\tau$. Quite remarkably, turning off the gradient-smoothing in AMS has no impact on convergence, because the gradient part is handled by the auxiliary correction. At the same time this cuts down total computation times considerably.

Our main goal is to confirm that the method behaves similarly to the local Fourier analysis for the 2D curl-curl equation. Furthermore we want to determine the value of $\lambda_c$ in this setting by observing where the two methods "Full" and "Semi-time" start to differ.

Figure 3: Convergence rates for the two-grid cycle with the $SAS$ smoother as a function of $\lambda$. Comparison between "Full" and "Semi-time" for different numbers of uniform refinements.
Figure 4: Convergence rates for the two-grid cycle with the $SAS$ smoother as a function of $\lambda$. Only the most refined cases with 630240 DoF in space are shown; $\lambda_c$ is determined as approximately 0.1.

In Figure 3 we can observe that the important parts of the curves converge with an increasing number of refinements. The only non-convergent feature is the transition from the high-$\lambda$ region to the plateau of the "Semi-time" method. This means that our method performs better than expected in this regard. On a side note, the smoother on its own shifts to the right in the same fashion and is a suitable solver for sufficiently high $\lambda$ and small problems. In the case of the heat equation, the smoother never converges on its own and none of this shifting happens.

Figure 4 omits all cases except the most refined one for the sake of a less cluttered plot. The point where the "Full" method departs from the "Semi-time" method can be located at 0.1. When the diagonal line is extrapolated, it even appears to be around 0.075. This means that choosing $\lambda_c$ slightly above this point will safely guarantee that the optimal coarsening strategy is selected. This holds for the case of the 3D eddy-current equation, lowest order Nedelec elements, implicit Euler time stepping, the $SAS$ smoother, and the longest-edge measure for $h_T$.

5.2 Coarsening Rule for Locally Different Parameters

In practical applications, vastly different material parameters are very common. For example, the ratio $\nu/\sigma$ differs considerably between iron and copper, while between iron and weak conductors this gap can be far bigger. The limit case of perfect insulators ($\sigma = 0$) would even cause an infinitely large local degree of anisotropy $\lambda_T$. This motivates the question whether the local nature of the coarsening rule (3) can indeed handle different materials. On a side note, we are not concerned with whether the preconditioner can handle jumping coefficients, because our space-time parallel method can use any suitable method to solve the spatial problems.

We assign two different material parameters to the two original tetrahedra of the test mesh. After the uniform refinements, two regions with two distinct degrees of anisotropy are given. As our method mostly benefits from increasing $\lambda_T$, it is sufficient to only deal with the smallest $\lambda_T$, as reflected in the coarsening rule (3).

In contrast, the method in [14] chooses between semi coarsening in time and semi coarsening in space depending on which side of a threshold $\lambda$ lies. If this rule is violated, both semi coarsening in time and semi coarsening in space yield bad convergence rates in the respective opposite regime. Therefore we are concerned that in the case of two materials whose local degrees of anisotropy lie on opposite sides of this threshold, neither coarsening approach would yield useful convergence rates. Vastly different $\lambda_T$ can also arise during adaptive refinements, where the size of the elements can differ significantly.

For this "Two $\lambda$" test case we use mostly the same setup as for the previous test case, but we only investigate the "Full" method, only use 4 uniform refinements, have a maximum of 100 iterations and vary the material parameter on one of the two subdomains. This has no effect on the minimal $\lambda_T$, as it only increases the larger $\lambda_T$. The minimal $\lambda_T$ is again manipulated through changing $\tau$.

1 0.121 0.19 0.23 0.252 0.372 0.533 0.688 0.77
0.083 0.159 0.213 0.242 0.34 0.498 0.662 0.753
0.083 0.158 0.212 0.242 0.34 0.5 0.664 0.755
0.083 0.158 0.212 0.242 0.34 0.5 0.665 0.756
0.083 0.158 0.212 0.243 0.34 0.501 0.665 0.756
0.083 0.157 0.212 0.243 0.34 0.501 0.665 0.756
0.083 0.157 0.213 0.242 0.34 0.501 0.665 0.756
Table 2: Maximum error reduction factor for the "Two $\lambda$" test case using the "Full" method (10744 DoF in space); each row corresponds to one value of the parameter ratio between the two subdomains, starting with ratio 1, and each column to one value of the minimal $\lambda_T$. After an initial boost to convergence, the rates remain stable for increasing ratio. Again $\lambda_c$ appears to be located between 0.06125 and 0.125.

We can observe in Table 2 that only the minimal $\lambda_T$ has a notable impact on convergence, while increasing the bigger $\lambda_T$ can only improve the convergence rate. With this and previous results [28], our method is one step closer to real applications with varying materials.

6 Parallel Performance

To investigate the parallel performance of the method, we perform a weak scaling test with respect to time. In contrast to other weak scaling tests we expand the time domain instead of refining it. This is done to keep the degree of anisotropy constant, which allows for instant spatial coarsening. For comparison we also run a test series without spatial coarsening.

  • Spatial domain with 7768 DoF

  • 1 to 512 time-parallel processors

  • Time domain growing with the number of processors, time steps of constant size $\tau$

  • Constant degree of anisotropy $\lambda$

  • Material parameters $\sigma = \nu = 1$

  • Residual reduction tolerance as stopping criterion

  • $SAS$ smoother

The "Full" method always finished in 12 iterations, while the "Semi-Time" method needed 14 iterations. Without some of the idealizations assumed in the analysis, it is very well possible that the "Full" method has a marginally better convergence rate than the "Semi-Time" method. The corresponding run-times can be observed in Table 3. The Adjusted Ratio is the ratio of the run-times multiplied by 14/12 to adjust for the iteration counts.

#Processors 1 2 4 8 16 32 64 128 256 512
Full 35.3 35.3 35.4 39.3 49.8 49.7 50.2 50.2 50.4 52.1
Semi-Time 74.6 75.8 77.4 87.6 112.3 114.1 115.6 117.6 118.9 120.8
Adjusted Ratio 0.552 0.543 0.534 0.523 0.517 0.508 0.507 0.498 0.495 0.503
Table 3: Weak scaling results in seconds for the "Full" and the "Semi-Time" method. Adjusted ratio is Full/Semi-Time*14/12
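The Adjusted Ratio row is simple arithmetic and can be checked directly from the run-times in Table 3:

```python
# Reproducing the "Adjusted Ratio" row of Table 3: run-time ratio
# Full/Semi-Time, rescaled by 14/12 to account for the different
# iteration counts of the two methods.
full      = [35.3, 35.3, 35.4, 39.3, 49.8, 49.7, 50.2, 50.2, 50.4, 52.1]
semi_time = [74.6, 75.8, 77.4, 87.6, 112.3, 114.1, 115.6, 117.6, 118.9, 120.8]
adjusted = [round(f / s * 14 / 12, 3) for f, s in zip(full, semi_time)]
print(adjusted)  # matches the table row up to rounding
```

Every adjusted ratio stays close to 0.5, i.e. one "Full" iteration costs about half of one "Semi-Time" iteration, consistent with the coarse grid correction accounting for roughly half the effort when no spatial coarsening is used.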

We can clearly observe how the "Full" method scales better, with barely any increase in computation times in the upper range. The sudden increase around the 16-core mark can most likely be attributed to the architecture of the cluster, with a node size of 16 cores.

It would have been interesting to run the scaling tests on even bigger clusters, where possibly a more severe discrepancy between the two methods would surface.

7 Conclusion

This paper demonstrates the difficulties in solving the eddy-current equation with a space-time multigrid method that also uses spatial coarsening. The non-trivial kernel of the curl operator is identified as the cause of these problems, and a nodal auxiliary space correction with time integration is devised to restore good convergence rates. This procedure is analogous to the method in [11], but with an additional time component.

The local Fourier analysis of the original and the modified method confirms the effectiveness of the auxiliary correction, and in fact led the authors of this paper to devise it. From the gained insights, we expect similar problems to occur whenever the spatial operator features a non-trivial kernel. Furthermore, we expect other space-time multigrid methods to experience similar problems and to profit from an implementation of the described universal correction (7). Even multigrid methods without spatial coarsening could benefit from this modification, because it also improves the convergence rates of our pure time-multigrid method.

We proceed to determine convergence rates and $\lambda_c$ with numerical tests. This result holds for the 3D eddy-current equation and one specific configuration of the method. The coarsening rule passes one basic test for robustness, which involves solving test problems with vastly different local degrees of anisotropy $\lambda_T$.

We finally demonstrate with the help of weak scaling tests, how spatial coarsening can improve parallel performance.

In conclusion the explored "nodal auxiliary space correction" is a highly useful tool to enhance space-time multigrid methods for the eddy-current equation.


The research was funded by the Austrian Science Fund (FWF) under the grant W1214-N15, project DK8. We further want to thank the RICAM institute for enabling computations on the RADON1 cluster and the people associated with the NUMA institute for helpful comments and insights. We want to especially thank Ulrich Langer for fruitful discussions. Thanks also go to Bianca Klammer for additional proofreading.


  • [1] M. Bolten, D. Moser, and R. Speck (2017) A multigrid perspective on the parallel full approximation scheme in space and time. Numer. Linear Algebra Appl. 24 (6), e2110. Cited by: §1.
  • [2] M. Bolten, D. Moser, and R. Speck (2018) Asymptotic convergence of the parallel full approximation scheme in space and time for linear problems. Numer. Linear Algebra Appl. 25 (6), e2208. Cited by: §1.
  • [3] T. Boonen, J. Van lent, and S. Vandewalle (2008) Local Fourier analysis of multigrid for the curl-curl equation. SIAM J. Sci. Comput. 30 (4), pp. 1730–1755. Cited by: §4.1.
  • [4] V. A. Dobrev, Tz. Kolev, N. A. Petersson, and J. B. Schroder (2017) Two-level convergence theory for multigrid reduction in time (MGRIT). SIAM J. Sci. Comput. 39 (5), pp. S501–S527. Cited by: §1.
  • [5] R. D. Falgout, S. Friedhoff, Tz. V. Kolev, S. P. MacLachlan, and J. B. Schroder (2014) Parallel time integration with multigrid. SIAM J. Sci. Comput. 36 (6), pp. C635–C661. Cited by: §1.
  • [6] R. D. Falgout, T. A. Manteuffel, B. O'Neill, and J. B. Schroder (2017) Multigrid reduction in time for nonlinear parabolic problems: a case study. SIAM J. Sci. Comput. 39 (5), pp. S298–S322. Cited by: §1.
  • [7] M. J. Gander and M. Neumüller (2016) Analysis of a new space-time parallel multigrid algorithm for parabolic problems. SIAM J. Sci. Comput. 38 (4), pp. A2173–A2208. Cited by: §1, §2.1, §3, §4.1, §5.
  • [8] M. J. Gander (2015) 50 years of time parallel time integration. In Multiple Shooting and Time Domain Decomposition Methods, Contrib. Math. Comput. Sci., Vol. 9, pp. 69–113. Cited by: §1.
  • [9] S. Günther, N. R. Gauger, and J. B. Schroder (2018) A non-intrusive parallel-in-time adjoint solver with the XBraid library. Comput. Vis. Sci. 19 (3-4), pp. 85–95. Cited by: §1.
  • [10] W. Hackbusch (1984) Parabolic multigrid methods. In Computing Methods in Applied Sciences and Engineering, VI (Versailles, 1983), pp. 189–197. Cited by: §1.
  • [11] R. Hiptmair (1999) Multigrid method for Maxwell's equations. SIAM J. Numer. Anal. 36 (1), pp. 204–225. Cited by: §3.2, §7.
  • [12] C. Hofer, U. Langer, M. Neumüller, and I. Toulopoulos (2018) Time-multipatch discontinuous Galerkin space-time isogeometric analysis of parabolic evolution problems. Electron. Trans. Numer. Anal. 49, pp. 126–150. Cited by: §1.
  • [13] G. Horton, S. Vandewalle, and P. Worley (1995) An algorithm with polylog parallel complexity for solving parabolic partial differential equations. SIAM J. Sci. Comput. 16 (3), pp. 531–541. Cited by: §1.
  • [14] G. Horton and S. Vandewalle (1995) A space-time multigrid method for parabolic partial differential equations. SIAM J. Sci. Comput. 16 (4), pp. 848–864. Cited by: §1, §2.2, §5.2.
  • [15] (2018) HYPRE: scalable linear solvers and multigrid methods. Cited by: §5.1, §5.
  • [16] A. Kreienbuehl, A. Naegel, D. Ruprecht, R. Speck, G. Wittum, and R. Krause (2015) Numerical simulation of skin transport using Parareal. Comput. Vis. Sci. 17 (2), pp. 99–108. Cited by: §1.
  • [17] M. Lecouvez, R. D. Falgout, C. S. Woodward, and P. Top (2016) A parallel multigrid reduction in time method for power systems. In 2016 IEEE Power and Energy Society General Meeting (PESGM), pp. 1–5. Cited by: §1.
  • [18] Ch. Lubich and A. Ostermann (1987) Multigrid dynamic iteration for parabolic equations. BIT 27 (2), pp. 216–234. Cited by: §1.
  • [19] MFEM: modular finite element methods library. Note: External Links: Document Cited by: §5.
  • [20] M. L. Minion, R. Speck, M. Bolten, M. Emmett, and D. Ruprecht (2015) Interweaving PFASST and parallel multigrid. SIAM J. Sci. Comput. 37 (5), pp. S244–S263. External Links: ISSN 1064-8275, Document, Link, MathReview Entry Cited by: §1.
  • [21] P. Monk (2003) Finite element methods for Maxwell’s equations. Numerical Mathematics and Scientific Computation, Oxford University Press, New York. External Links: ISBN 0-19-850888-3, Document, Link, MathReview (Per Lötstedt) Cited by: §3.
  • [22] M. Neumüller and A. Thalhammer (2018) A fully parallelizable space-time multilevel Monte Carlo method for stochastic differential equations with additive noise. SIAM J. Sci. Comput. 40 (3), pp. C388–C400. External Links: ISSN 1064-8275, Document, Link, MathReview (Mikhail Urusov) Cited by: §1.
  • [23] M. Neumüller (2013) Space-time methods: fast solvers and applications. Ph.D. Thesis, Graz University of Technology. Cited by: §3.1.
  • [24] J. Nievergelt (1964) Parallel methods for integrating ordinary differential equations. Comm. ACM 7, pp. 731–733. External Links: ISSN 0001-0782, Document, Link, MathReview Entry Cited by: §1.
  • [25] D. Ruprecht, R. Speck, and R. Krause (2013-09-16) Parareal for diffusion problems with space- and time-dependent coefficients. In Domain Decomposition Methods in Science and Engineering XXII, Lecture Notes in Computational Science and Engineering, Vol. 104, pp. 3–10. External Links: Link Cited by: §1.
  • [26] D. Ruprecht and R. Speck (2016) Spectral deferred corrections with fast-wave slow-wave splitting. SIAM J. Sci. Comput. 38 (4), pp. A2535–A2557. External Links: ISSN 1064-8275, Document, Link, MathReview (Bülent Karasözen) Cited by: §1.
  • [27] J. B. Schroder, R. D. Falgout, C. S. Woodward, P. Top, and M. Lecouvez (2018-08) Parallel-in-time solution of power systems with scheduled events. In 2018 IEEE Power Energy Society General Meeting (PESGM), Vol. , pp. 1–5. External Links: Document, ISSN Cited by: §1.
  • [28] M. Schwalsberger (2018) A space-time parallel multigrid method for the transient eddy-current equation. Master’s Thesis, JKU Linz. Note: External Links: Link Cited by: §2.1, §2.2, §4.1, §5.2.
  • [29] R. Speck, D. Ruprecht, R. Krause, M. Emmett, M. Minion, M. Winkel, and P. Gibbon (2012-11) A massively space-time parallel n-body solver. In SC ’12: Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis, Vol. , pp. 1–11. External Links: Document, ISSN Cited by: §1.
  • [30] R. Speck, D. Ruprecht, M. Emmett, M. Minion, M. Bolten, and R. Krause (2013-09-10) A space-time parallel solver for the three-dimensional heat equation. In Parallel Computing: Accelerating Computational Science and Engineering (CSE), Advances in Parallel Computing, Vol. 25, pp. 263 – 272. External Links: Document, Link Cited by: §1.
  • [31] R. Speck, D. Ruprecht, M. Emmett, M. Minion, M. Bolten, and R. Krause (2015) A multi-level spectral deferred correction method. BIT 55 (3), pp. 843–867. External Links: ISSN 1572-9125, Document, Link Cited by: §1.
  • [32] R. Speck, D. Ruprecht, R. Krause, M. Emmett, M. Minion, M. Winkel, and P. Gibbon (2012-06-25) Integrating an N-Body Problem with SDC and PFASST. Lecture Notes in Computational Science and Engineering, Vol. 98, pp. 637 – 645. External Links: ISBN 978-3-319-05788-0 (print), Document, Link Cited by: §1.
  • [33] R. Speck, D. Ruprecht, M. Minion, M. Emmett, and R. Krause (2013-09-16) Inexact spectral deferred corrections. In Domain Decomposition Methods in Science and Engineering XXII, Lecture Notes in Computational Science and Engineering, Vol. 104, pp. 127–133. External Links: Link Cited by: §1.
  • [34] R. Speck and D. Ruprecht (2017) Toward fault-tolerant parallel-in-time integration with PFASST. Parallel Comput. 62, pp. 20–37. External Links: ISSN 0167-8191, Document, Link, MathReview Entry Cited by: §1.
  • [35] R. Speck (2018) Parallelizing spectral deferred corrections across the method. Comput. Vis. Sci. 19 (3-4), pp. 75–83. External Links: ISSN 1432-9360, Document, Link, MathReview Entry Cited by: §1.
  • [36] J. Steiner, D. Ruprecht, R. Speck, and R. Krause (2013-08-26) Convergence of Parareal for the Navier-Stokes Equations Depending on the Reynolds Number. In Numerical Mathematics and Advanced Applications - ENUMATH 2013, Lecture Notes in Computational Science and Engineering, Vol. 103, Cham, pp. 195 – 202. External Links: ISBN 978-3-319-10704-2 (print), Document, Link Cited by: §1.
  • [37] U. Trottenberg, C. W. Oosterlee, and A. Schüller (2001) Multigrid. Academic Press, Inc., San Diego, CA. Note: With contributions by A. Brandt, P. Oswald and K. Stüben External Links: ISBN 0-12-701070-X, MathReview (Dietrich Braess) Cited by: §3.2.
  • [38] S. Vandewalle (1993) Parallel multigrid waveform relaxation for parabolic problems. Teubner Skripten zur Numerik. [Teubner Scripts on Numerical Mathematics], B. G. Teubner, Stuttgart. External Links: ISBN 3-519-02717-8, Document, Link, MathReview (Erwin Schechter) Cited by: §1.
  • [39] H. von Helmholtz (1858) Über Integrale der hydrodynamischen Gleichungen, welche den Wirbelbewegungen entsprechen. J. Reine Angew. Math. 55 (3), pp. 25–55. External Links: ISSN 0075-4102 Cited by: §3.2.
  • [40] M. Winkel, R. Speck, and D. Ruprecht (2015) A high-order Boris integrator. J. Comput. Phys. 295, pp. 456–474. External Links: ISSN 0021-9991, Document, Link, MathReview (Vasile Drăgan) Cited by: §1.
  • [41] Wolfram Research, Inc. Mathematica, Version 12.0. Note: Champaign, IL, 2019 External Links: Link Cited by: §4.1.
  • [42] S. Zaglmayr (2006) High order finite element methods for electromagnetic field computation. Ph.D. Thesis, JKU Linz. Note: External Links: Link Cited by: §3.3.