A multigrid solver to the Helmholtz equation with a point source based on travel time and amplitude

December 17, 2017 · Eran Treister, et al. · Ben-Gurion University of the Negev and The University of British Columbia

The Helmholtz equation arises when modeling wave propagation in the frequency domain. The equation is discretized as an indefinite linear system, which is difficult to solve at high wave numbers. In many applications, the solution of the Helmholtz equation is required for a point source. In this case, it is possible to reformulate the equation as two separate equations: one for the travel time of the wave and one for its amplitude. The travel time is obtained by a solution of the factored eikonal equation, and the amplitude is obtained by solving a complex-valued advection-diffusion-reaction (ADR) equation. The reformulated equation is equivalent to the original Helmholtz equation, and the differences between the numerical solutions of these equations arise only from discretization errors. We develop an efficient multigrid solver for obtaining the amplitude given the travel time, which can be efficiently computed. This approach is advantageous because the amplitude is typically smooth in this case, and hence, more suitable for multigrid solvers than the standard Helmholtz discretization. We demonstrate that our second order ADR discretization is more accurate than the standard second order discretization at high wave numbers, as long as there are no reflections or caustics. Moreover, we show that using our approach, the problem can be solved more efficiently than using the common shifted Laplacian multigrid approach.


1 Introduction

The acoustic Helmholtz equation is used to model the propagation of a wave within a heterogeneous medium. Assuming constant density, the equation is given by

Δu(x) + ω²κ(x)² u(x) = q(x),    (1.1)

where u is the pressure wave function in the frequency domain, ω is the angular frequency and κ(x) is the “slowness” of the medium—the inverse of its velocity. The right-hand side q is used to incorporate sources into the equation. In this work we consider the case where q(x) = δ(x − x_s), which models a point source at location x_s (δ is the Dirac delta function). The Helmholtz equation with a point source is common in geophysical applications, e.g. seismic modeling and full-waveform inversion [1, 2, 3, 4, 5, 6, 7].

The Helmholtz equation is accompanied by boundary conditions, which can be, for example, Neumann or Dirichlet. In many cases the equation is equipped with absorbing boundary conditions that mimic the propagation of a wave in an open domain. One option for this is the Sommerfeld boundary condition

∇u·n + iωκ u = 0  on the boundary.    (1.2)

A more effective way to absorb the waves is by using a boundary layer. This can be achieved by either a perfectly matched layer (PML) [8] or an absorbing boundary layer [9, 10, 11]. To implement the latter, for example, we add an attenuation term to Eq. (1.1):

Δu + ω²κ²(1 − iγ)u = q,

where γ(x) ≥ 0 is a function that grows quadratically from 0 towards the boundaries of the domain, which attenuates the waveform towards these boundaries [11]. The thickness of this layer is usually chosen to be about one wavelength. The same parameter γ can also be used to impose attenuation over the whole domain, but it is quite small in most realistic scenarios. To ease the derivations in this paper, we henceforth ignore the boundary layer and attenuation, i.e., assume that γ = 0, and focus on Eq. (1.1).
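As an illustration of how such an attenuation profile can be set up, here is a minimal sketch of our own (not code from the paper); the grid size, layer width, and the maximal value gamma_max are hypothetical parameters:

```julia
# A sketch of a quadratic absorbing-boundary-layer attenuation profile in 1D.
# `n` grid points, `pad` points in the layer (roughly one wavelength), and
# `gamma_max` (a hypothetical maximal attenuation value) are assumed inputs.
function abl_profile(n::Int, pad::Int, gamma_max::Float64)
    gamma = zeros(n)
    for i in 1:n
        d = max(pad - (i - 1), i - (n - pad), 0)   # how deep we are inside the layer
        gamma[i] = gamma_max * (d / pad)^2         # quadratic ramp from 0 in the interior
    end
    return gamma
end

gamma = abl_profile(128, 16, 0.1)   # e.g., a 16-point layer on each side
```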

We are mostly interested in problems where the frequency ω (or the wave number ωκ) is high. In this case, the resulting linear system which arises from the discretization of (1.1) is highly indefinite. While 2D solutions can be obtained using direct methods, solving the discretized equation in 3D is challenging. That is because the discretization of the problem requires a very fine mesh and a large number of unknowns, in addition to the indefiniteness of the associated matrix [12, 13, 14].

In recent years, there has been a great effort to develop efficient solvers for systems arising from (1.1), using several different approaches to tackle the problem. One of the most common approaches is the shifted Laplacian multigrid preconditioner [11, 15, 16, 17, 12, 18, 19, 20, 21], which modifies the equation by adding complex values to the diagonal of the matrix. The modified system is then solved using a multigrid method, and is used as a preconditioner for the non-shifted system to obtain the solution of the problem. Another approach was recently proposed in [22, 13] for solving (1.1) in 2D and 3D respectively. The approach can be viewed as a domain decomposition method with particular boundary conditions, and can be effective in terms of iterations, but requires a large setup time and storage. This can impose challenges if the solution of (1.1) is required for multiple frequencies. Other iterative approaches include [23, 24], and [25, 26, 14, 27] which are multigrid-based.

In this paper we develop a new approach for solving the Helmholtz equation based on [14]. Rather than solving the discretized (1.1) directly, we reformulate the problem by using the Rytov decomposition of the solution (the Rytov decomposition is usually given by u = v·e^{iωτ}; here we use its complex conjugate, which is equivalent to using the standard decomposition):

u(x) = v(x)·e^{−iωτ(x)}.    (1.3)

Here, the waveform u is decomposed into an amplitude v, which changes slowly, and a phase e^{−iωτ} that oscillates, involving the travel time τ and the frequency ω. To reformulate (1.1) according to (1.3), we first use the chain rule to obtain

Δu = e^{−iωτ}( Δv − 2iω ∇τ·∇v − iω Δτ v − ω²|∇τ|² v ).

Then, after plugging this into (1.1) and multiplying the equation by e^{iωτ}, we get the following complex-valued advection-diffusion-reaction (ADR) equation:

Δv − 2iω ∇τ·∇v − iω Δτ v + ω²(κ² − |∇τ|²) v = q̃,    (1.4)

where q̃ = e^{iωτ} q. This equation is also equivalent to

Δv − iω( ∇·(v ∇τ) + ∇τ·∇v ) + ω²(κ² − |∇τ|²) v = q̃,    (1.5)

which does not contain the term Δτ.

We note that the function v does not fully correspond to a real amplitude, as usually assumed when using the Rytov decomposition. For example, v is not necessarily positive or even real valued. In the process above we artificially doubled the unknowns of Eq. (1.1) using the Rytov decomposition. Hence, for any given τ and frequency ω, the function v is unique according to (1.3), and the resulting Eqs. (1.4)-(1.5) are equivalent to (1.1). Their numerical solutions using (1.3) are equivalent to the numerical solution of (1.1) up to discretization and roundoff errors only, given equivalent boundary conditions (to be discussed later). For example, if we choose τ = 0, then we retain the equation (1.1), and v = u. The work in [14] aims to get a multigrid preconditioner for (1.1), and chooses the phase τ to be a plane (a linear function, so that ∇τ is constant), so that the problem (1.4) becomes positive definite and easy to solve. In this work we suggest new discretizations for the Helmholtz problem based on (1.4) or (1.5). Our first goal is to choose τ such that the amplitude v is as smooth as possible, so that we get small numerical errors when we discretize (1.4) or (1.5). Our second goal is to get a linear system which can be solved efficiently by multigrid methods.

(a) The waveform
(b) The travel time
(c) The amplitude
(d) The phase
Figure 1: The solution of the Helmholtz equation with a homogeneous medium for a point source with absorbing boundary conditions. The oscillatory waveform is given by u = v·e^{−iωτ}, while the v and τ that are calculated numerically are smooth.

To fulfil our first goal, we aim to capture most of the oscillatory behaviour of the solution by choosing τ appropriately, so that the corresponding amplitude v is smooth. As motivation, consider the solution of (1.1) with a point source for a constant medium κ(x) = κ₀. In 3D, the analytical solution is u(x) = e^{−iωκ₀r}/(4πr), where r = ‖x − x_s‖ is the Euclidean distance from the source x_s. In this case, if we choose τ = κ₀r we get v = 1/(4πr), which is a rather smooth function (except at x_s). Figure 1 demonstrates this case in 2D. For a heterogeneous medium, a similar result can be obtained by solving the eikonal equation

‖∇τ(x)‖ = κ(x),    (1.6)

where ‖·‖ is the Euclidean norm. This is an advection equation that requires a known initial value at some sub-region, and in the context of wave propagation, the solution has the meaning of travel time, or first arrival time, of the wave. In this work we consider the wave propagation from a point source at location x_s, for which the travel time is 0, and hence τ(x_s) = 0. By choosing τ according to (1.6), we eliminate the last term on the left-hand side of (1.4).
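As a quick sanity check of this choice for the constant 3D medium above: with τ = κ₀r and v = 1/(4πr) we have, away from the source,

    ∇τ = κ₀(x − x_s)/r,   ‖∇τ‖ = κ₀,   Δτ = 2κ₀/r,   ∇v = −(x − x_s)/(4πr³),   Δv = 0,

so the left-hand side of (1.4) reduces to −2iω ∇τ·∇v − iω Δτ v = 2iωκ₀/(4πr²) − 2iωκ₀/(4πr²) = 0, as required with a zero right-hand side away from the source.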

The idea of using travel time and amplitude for modeling wave propagation from a point source has recently been studied using a different approach than the one taken in this paper. The Rytov decomposition (1.3) is used to get a geometrical-optics ansatz for the solution of the Helmholtz equation in the high frequency regime [28, 29, 30, 31]. There, similarly to our approach, the eikonal equation is used to eliminate the O(ω²) term in (1.4), yielding the travel time, and the amplitude is obtained by eliminating the O(ω) term in (1.4), by solving the real-valued transport equation

2 ∇τ·∇v + Δτ v = 0.    (1.7)

The resulting approximation includes only the first arrival information of the wave propagation, and since (1.7) does not depend on ω, this approximation aims to be valid for multiple frequencies. Our approach is different in the way that the amplitude is defined: according to (1.4) instead of (1.7). This way, the amplitude is specific for a given frequency and contains all the information of the wave propagation, e.g. it includes reflections and interferences.

To compute the phase τ for (1.3), the eikonal equation is solved. This equation is nonlinear, and hence may have many solutions. Out of these, the solution that corresponds to the first arrival time can be computed efficiently [32, 33]. One of the most effective ways to compute it is by Fast Marching methods [34, 35, 36], which solve (1.6) directly using first or second order schemes in O(n log n) operations (where n is the number of grid points), based on the monotonicity of the solution along the characteristics. Alternatively, (1.6) can be solved iteratively by Fast Sweeping methods with first or higher order of accuracy [37, 38, 39]. The equation (1.6) can also be solved using a Lax-Friedrichs scheme [40, 41], which involves adding artificial viscosity to (1.6).

However, the methods mentioned above are not suitable for solving (1.6) for our purpose. To get an accurate solution for the amplitude in (1.7), the numerical approximation for τ has to be very accurate [30]. Because the analytical τ is non-smooth at the point source, the numerical solution of Eq. (1.6) is polluted with errors when it is computed using the aforementioned standard finite-difference methods [42]. To overcome this, [29, 30] use the factored version of the eikonal equation, which was originally suggested in [43]. The new equation is obtained by substituting τ = τ₀τ₁ in (1.6), where the function τ₀ is known and its derivatives are computed analytically. Using the chain rule we get the factored eikonal equation for τ₁

‖τ₁ ∇τ₀ + τ₀ ∇τ₁‖ = κ(x).    (1.8)

The most common choice for τ₀ is the distance function from the point source, i.e., τ₀(x) = ‖x − x_s‖. That is the analytical solution of (1.6) in the case where κ ≡ 1. The function τ₀ is non-smooth at the location of the source, but the factor τ₁, which needs to be computed numerically, is expected to be very smooth in the surroundings of the source. Similarly to its original version, Eq. (1.8) can be solved directly by the Fast Marching method [44], or iteratively by Fast Sweeping methods with first order accuracy [42, 45, 46], or by a Lax-Friedrichs scheme of up to third order of accuracy [30, 29, 46]. The works [46, 47] suggest hybrid schemes where the factored eikonal equation is solved in the neighborhood of the source, and the standard eikonal equation is solved in the rest of the domain.
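For concreteness, the chain rule gives ∇τ = τ₁ ∇τ₀ + τ₀ ∇τ₁, which is how (1.8) follows from (1.6), and the analytic derivatives of the distance factor that are used later (Section 2.1) are

    ∇τ₀ = (x − x_s)/‖x − x_s‖,   ‖∇τ₀‖ = 1,   Δτ₀ = (d − 1)/‖x − x_s‖,

where d is the space dimension.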

Similarly to [30], in this work we use the factored eikonal equation (1.8) to get an accurate solution for the Helmholtz equation based on (1.4)-(1.5). We apply the Fast Marching method suggested in [44] for solving (1.8), but in principle all the methods mentioned earlier for solving the factored eikonal equation can be used in our approach. Note that the equations for the amplitude that include Δτ now require, in the factored case, a numerical approximation of the Laplacian of the factor τ₁, which may be non-smooth in areas away from the source due to discontinuities in κ or due to caustics. Its gradient is not continuous there, but it is bounded because of (1.8). Approximating the second derivative of a non-smooth function may yield high numerical errors. For this reason, a third order Lax-Friedrichs scheme, which adds smoothness to τ, is used in [30, 29]. However, both the equations (1.7) and (1.5) show that the problems can be formulated without Δτ, and there is no real need to approximate it numerically.


To summarize, in this work we reformulate the Helmholtz equation using ideas from [14, 29, 30, 44] to allow a more efficient numerical solution of the equation for a point source. We aim for the majority of the solution to be represented by smooth functions, which have the physical meanings of amplitude and travel time. Similarly to [14], we use the full ADR equation (1.4) or (1.5) for the amplitude and solve it by multigrid methods. However, instead of choosing the travel time as a plane, we solve (1.8) to find it, so that the amplitude is smooth. The smoothness of v contributes to the efficiency of multigrid methods for solving (1.4), as smooth functions can be approximated well on coarser grids. The rest of the paper is organized as follows: in Section 2 we present the options that we consider for discretizing the ADR equation, and examine the similarity between the discretized Helmholtz and ADR equations. Then, in Section 3 we present the multigrid methods that we use to solve the ADR and Helmholtz equations. Finally, in Section 4 we present numerical results that first compare the accuracy of the two approaches for the Helmholtz problem, and then compare the computational effort needed to solve the equations using multigrid.

2 The discretization of the reformulated advection-diffusion-reaction equation

The Helmholtz equation (1.1) is usually discretized using the finite difference method on a regular mesh. The standard approach involves a second order scheme, resulting in five- and seven-point stencils for two and three dimensions, respectively. We denote the corresponding linear system by

A_h u_h = q_h.    (2.9)

We note that using this discretization, one has to use a mesh that is fine enough, having at least 10-15 grid points per wavelength [11, 14, 12, 13]; otherwise, the numerical solution is polluted by dispersion errors. This requirement leads to rather large matrices, and therefore, higher order discretizations were developed to minimize the numerical dispersion phenomenon and require fewer grid points per wavelength, at the expense of larger stencils [48, 49, 50, 51]. In this work we discretize (1.4)-(1.5) using second order stencils as in [14], but note that extensions to higher order stencils can be obtained as well. In particular, we consider two types of discretizations for the ADR equations, which differ in the scheme used for the advection operator in (1.4). The first discretization uses a central difference scheme for the advection operator. We denote the resulting linear system

B_h^c v_h = q̃_h.    (2.10)

In the second discretization we use a second order upwind scheme for the advection term, and denote the resulting linear system

B_h^u v_h = q̃_h.    (2.11)

We now present how we define (2.10)-(2.11), and later show a relation between the Helmholtz matrix A_h and the ADR matrices. We present all the finite difference operators in 1D; extensions to 2D and 3D are straightforward.

2.1 The coefficients of the equation

First, we define the coefficients of Eqs. (1.4)-(1.5), involving the travel time τ. For this, we assume that the travel time factor τ₁ is calculated by Fast Marching [44] with second order accuracy on the same mesh. Using the chain rule, we define

∇τ = τ₁ ∇τ₀ + τ₀ ∇τ₁,   Δτ = τ₁ Δτ₀ + 2 ∇τ₀·∇τ₁ + τ₀ Δτ₁,    (2.12)

where τ₀, ∇τ₀, and Δτ₀ are the known analytic solution of (1.6) for a constant medium and its analytic derivatives. To approximate ∇τ₁ in (2.12), we use the second order central difference operator

(∂_x τ₁)_i ≈ (τ₁(x_{i+1}) − τ₁(x_{i−1}))/(2h),

while reverting to first order operators on the domain boundaries. As an alternative, we may use the discrete gradients that were used to calculate τ₁ in the factored Fast Marching algorithm. To approximate Δτ₁ in (2.12) we use the standard second order central difference operator, and use the one-sided second order second derivative operator on the boundaries, as there are no natural boundary conditions for τ₁.
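The following is a minimal 2D sketch in Julia of assembling these coefficients from a given factor τ₁ on a regular grid; it is our own illustration, not the paper's implementation, and the function name and the simplified boundary handling are ours:

```julia
# A minimal 2D sketch of forming the ADR coefficients from a travel-time factor
# tau1 on a regular grid, following the chain rule (2.12):
# grad(tau) = tau1*grad(tau0) + tau0*grad(tau1). Here tau0 = kappa0*r and its
# derivatives are analytic; in practice tau1 comes from factored Fast Marching,
# while below it is simply set to 1 to keep the example self-contained.
function adr_coefficients(tau1::Matrix{Float64}, h::Float64,
                          xs::Tuple{Float64,Float64}, kappa0::Float64)
    n1, n2 = size(tau1)
    gx = zeros(n1, n2); gy = zeros(n1, n2); lap = zeros(n1, n2)
    for j in 1:n2, i in 1:n1
        x = (i - 1) * h - xs[1]; y = (j - 1) * h - xs[2]
        r = max(sqrt(x^2 + y^2), 1e-12)            # avoid dividing by zero at the source
        t0  = kappa0 * r                           # analytic tau0 and its derivatives
        t0x = kappa0 * x / r;  t0y = kappa0 * y / r
        t0l = kappa0 / r                           # Laplacian of kappa0*r in 2D
        # central differences for tau1 (crude one-sided fallback at the boundary)
        ir = min(i + 1, n1); il = max(i - 1, 1)
        ju = min(j + 1, n2); jd = max(j - 1, 1)
        t1x = (tau1[ir, j] - tau1[il, j]) / ((ir - il) * h)
        t1y = (tau1[i, ju] - tau1[i, jd]) / ((ju - jd) * h)
        t1l = (tau1[ir, j] - 2 * tau1[i, j] + tau1[il, j]) / h^2 +
              (tau1[i, ju] - 2 * tau1[i, j] + tau1[i, jd]) / h^2
        # chain rule (2.12)
        gx[i, j]  = tau1[i, j] * t0x + t0 * t1x
        gy[i, j]  = tau1[i, j] * t0y + t0 * t1y
        lap[i, j] = tau1[i, j] * t0l + 2 * (t0x * t1x + t0y * t1y) + t0 * t1l
    end
    return gx, gy, lap
end

# constant medium: tau1 == 1, so grad(tau) has norm kappa0 away from the source
gx, gy, lap = adr_coefficients(ones(65, 65), 1 / 64, (0.5, 0.5), 1.0)
```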

2.2 The discretization of the ADR equation

Once the coefficients of (1.4)-(1.5) are known, we discretize the equations using second order finite difference operators. The Laplacian operator is discretized using a standard second order central difference operator, just as in the second order discretization of (1.1). As noted before, for the advection term we have two options:

  1. Using a second order central difference:

     (∂_x v)_i ≈ (v_{i+1} − v_{i−1})/(2h).    (2.13)

     This operator is used in (2.10).

  2. Using a second order upwind scheme:

     (∂_x v)_i ≈ (3v_i − 4v_{i−1} + v_{i−2})/(2h)  if (∂_x τ)_i > 0,
     (∂_x v)_i ≈ (−3v_i + 4v_{i+1} − v_{i+2})/(2h)  if (∂_x τ)_i < 0.    (2.14)

     This operator is used in (2.11).

One important observation should be made regarding (2.14): in the immediate neighborhood of the source point (distance of one node) we cannot use the second order upwind scheme, because the components of ∇τ change sign there. Therefore, for those nodes we simply use (2.13). This situation is similar to the treatment of the same points when calculating τ₁ using Fast Marching. There, the algorithm reverts to first order operators for these points, but still achieves second order accuracy overall; see [44]. For discretizing the advection term in (1.5) we use similar operators.
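For illustration, here is a minimal 1D Julia sketch of the two advection stencils (2.13)-(2.14) for a generic advection coefficient a, which plays the role of a component of 2iω∇τ; this is our own illustration, and boundary and source-neighborhood handling are omitted:

```julia
# A 1D illustration of the two advection discretizations for a term of the form
# a * dv/dx. Interior points only; boundaries and the immediate source
# neighborhood (where the text reverts to the central scheme) are not handled.

# second order central difference, as in (2.13)
central_dx(v, i, h) = (v[i+1] - v[i-1]) / (2 * h)

# second order upwind, as in (2.14): the one-sided stencil is biased toward the
# direction the wave arrives from, i.e., according to the sign of a ~ dtau/dx
function upwind_dx(v, i, h, a)
    if a >= 0
        return ( 3 * v[i] - 4 * v[i-1] + v[i-2]) / (2 * h)   # backward-biased
    else
        return (-3 * v[i] + 4 * v[i+1] - v[i+2]) / (2 * h)   # forward-biased
    end
end

# quick check on a smooth function: both stencils are second order accurate
h = 0.01; x = 0:h:1; v = sin.(2pi .* x); i = 51
println(central_dx(v, i, h), "  ", upwind_dx(v, i, h, 1.0), "  ", 2pi * cos(2pi * x[i]))
```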

2.2.1 Boundary conditions

The boundary conditions of the ADR equation (1.4) are identical to those of (1.1), and are defined using the chain rule. For example, for Neumann BC we have

∇u·n = e^{−iωτ}( ∇v − iω v ∇τ )·n = 0,  i.e.,  ∇v·n − iω (∇τ·n) v = 0.    (2.15)

The Sommerfeld BC in Eq. (1.2) is treated in a similar way, and the absorbing boundary layer attenuation γ is kept as a mass term, similarly to the case of (1.1). The boundary condition in Eq. (2.15), or its Sommerfeld version, is needed for the Laplacian operator in (1.4)-(1.5), and can also be used for the advection term in this equation. Alternatively, the advection term can be discretized without boundary conditions. Since the source is inside the domain, the wave propagates from the source outwards, and the upwind discretization at the boundary points will always use internal neighboring points in (2.14). Therefore, the upwind advection term in (2.14) does not require boundary conditions. This principle is used in Fast Marching to calculate τ₁. On the other hand, the scheme (2.13) does require a different treatment at the boundaries, and may be replaced with the upwind operators there (which is what we do here). Other boundary conditions for (1.1) can be adapted to (1.4) similarly to (2.15).

2.3 The relation between the discretized versions of the ADR and Helmholtz equations

The equations (1.1) and (1.4) are diagonally scaled versions of each other [14]. For the matrix A_h in (2.9) and both matrices B_h in (2.10)-(2.11), we have that

B_h ≈ D A_h D^{−1},    (2.16)

where D is a diagonal matrix such that D_{jj} = e^{iωτ(x_j)}. We now show a comparison between the discrete matrices A_h and B_h in 1D; again, the extensions to 2D and 3D are straightforward.

Assume that we know a travel time τ for which (1.6) holds. A general equation (row) of the system (2.9) in 1D is given by

(u_{i−1} − 2u_i + u_{i+1})/h² + ω²κ_i² u_i = q_i.    (2.17)

Using the Taylor expansion of τ around x_i, we substitute u_j = v_j e^{−iωτ_j} and get

(2.18)

Next, we use the Taylor expansion of the exponential, for which we take several terms because the expression inside the exponent is of magnitude ωκh, which is typically small but much larger than h. We get

(2.19)

while neglecting terms whose relative magnitude, compared to the entries of the matrix, is small.

The first and second lines of (2.19) are identical to those in our discretization of (1.4) using (2.13). The second line suggests that the mass matrix that multiplies the reaction term should be discretized with an averaging operator if we wish to make the two discretizations more similar. Looking into the third row of (2.19) and neglecting everything except the leading terms, we get a discrete form of |∇τ|² = κ², which leads to the eikonal equation. If we assume that the eikonal equation holds, we can replace this term with the discrete Laplacian of τ multiplied by some factor. To conclude, if we neglect the third term of the Taylor expansion for the exponential, then the discretization of (1.4) that is closest to (2.9), given an exact τ, is

(2.20)

In light of this comparison, we use an averaging mass matrix in the mass term ω²(κ² − |∇τ|²)v.

The comparison above shows that there is an O(ωκh) difference between the two discretizations, which means that the solutions of the two systems will be similar only if ωκh is small enough. This is also a requirement that we have for discretizing (1.1) using standard methods. In fact, [52] states that the discretization error in the Helmholtz computations is of size (ωκ)³h², which imposes an even stronger requirement on the mesh size than a small ωκh.
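To illustrate the diagonal-scaling relation (2.16), here is a small self-contained 1D Julia sketch of ours; the simplified Helmholtz stencil and the constant-medium travel time are assumptions made for brevity. It scales a Helmholtz matrix exactly by D and checks that the scaled system acting on the amplitude reproduces the original system acting on the waveform:

```julia
using SparseArrays, LinearAlgebra

# A 1D illustration of the diagonal scaling behind (2.16): with
# D_jj = exp(i*omega*tau_j), the matrix D*A*D^{-1} acts on the amplitude
# v = D*u exactly as A acts on the waveform u. The directly discretized ADR
# matrices differ from this exactly scaled matrix only by discretization errors.
# The Helmholtz stencil below is simplified (no absorbing layer).
n = 201; h = 1 / (n - 1); omega = 40.0; kappa = 1.0
x = collect(range(0.0, 1.0, length = n)); xs = 0.5
A = spdiagm(-1 => fill(1.0, n - 1), 0 => fill(-2.0, n), 1 => fill(1.0, n - 1)) / h^2 +
    omega^2 * kappa^2 * sparse(I, n, n)

tau = kappa .* abs.(x .- xs)                       # 1D travel time from the source
d   = exp.(im .* omega .* tau)                     # the diagonal of D
B   = spdiagm(0 => d) * A * spdiagm(0 => 1.0 ./ d) # exactly scaled "ADR" matrix

u = exp.(-im .* omega .* tau) ./ (1.0 .+ abs.(x .- xs))  # some test waveform
v = d .* u                                         # the corresponding amplitude
println(norm(B * v - d .* (A * u)))                # ~ machine precision: equivalent systems
```

The discrepancy discussed above is what separates the directly discretized ADR matrices from this exactly scaled one.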

3 Multigrid solvers for the advection-diffusion-reaction equations

In this section we describe the multigrid approaches that we use for solving (2.10)-(2.11), and for solving (2.9) using the shifted Laplacian method. Generally, multigrid methods aim at solving a linear system

A x = b

iteratively by using two complementary processes: relaxation and coarse grid correction. The relaxation is obtained by a standard iterative method like Jacobi or Gauss-Seidel, which is only effective for reducing error that is spanned by the eigenvectors of A that correspond to relatively high eigenvalues (in magnitude). The remaining error, called “algebraically smooth”, is spanned by the eigenvectors of A corresponding to small eigenvalues (in magnitude), i.e., vectors e s.t.

A e ≈ 0.    (3.21)

To reduce this algebraically smooth error, multigrid methods use a coarse grid correction, where the error for some iterate x is estimated by solving a coarser system

A_c e_c = Pᵀ(b − A x).

The matrix A_c is an approximation of the matrix A on a coarser grid (the subscript c denotes coarse components). The matrix P is a transfer operator that is used for projecting the residual onto the coarser grid, and interpolating e_c, the solution of the coarse system, back onto the fine grid, that is:

x ← x + P e_c.    (3.22)

This process is effective if any algebraically smooth error satisfying (3.21) can be represented in the range of the interpolation P. The coarse operator A_c can be obtained by either rediscretizing the problem on a coarser grid or by the Galerkin operator

A_c = Pᵀ A P.    (3.23)

Algorithm 1 summarizes the process using two grids. By treating the coarse problem recursively, we obtain the multigrid V-cycle, and by treating the coarse problem recursively twice (by two recursive calls to the V-cycle) we obtain a W-cycle. For more information see [53, 54, 55] and references therein.

  1. Apply pre-relaxations: x ← relax(A, x, b).

  2. Compute and restrict the residual: r_c = Pᵀ(b − A x).

  3. Define e_c as the solution of the coarse-grid problem A_c e_c = r_c.

  4. Prolong e_c and apply the coarse grid correction: x ← x + P e_c.

  5. Apply post-relaxations: x ← relax(A, x, b).

Algorithm 1: Two-grid cycle.
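The following is a minimal 1D Julia sketch of the two-grid cycle in Algorithm 1 for a generic sparse system, using damped Jacobi relaxation, linear interpolation, and the Galerkin coarse operator (3.23); it is our own illustration and does not reproduce the Helmholtz-specific choices described below (GMRES relaxation, K-cycles, inexact coarsest solve):

```julia
using SparseArrays, LinearAlgebra

# A minimal 1D two-grid cycle (illustration of Algorithm 1, not the paper's solver):
# damped Jacobi relaxation, linear-interpolation prolongation, Galerkin coarse
# operator (3.23), and a direct coarse solve. Assumes the fine grid has 2*nc-1 nodes.
function prolongation1d(nc::Int)
    nf = 2 * nc - 1
    P = spzeros(nf, nc)
    for j in 1:nc
        P[2 * j - 1, j] = 1.0
        if j < nc
            P[2 * j, j] = 0.5; P[2 * j, j + 1] = 0.5
        end
    end
    return P
end

function jacobi(A, x, b, nsweeps; w = 0.8)
    Dinv = 1.0 ./ Vector(diag(A))
    for _ in 1:nsweeps
        x = x .+ w .* (Dinv .* (b .- A * x))
    end
    return x
end

function twogrid(A, x, b; nu1 = 2, nu2 = 2)
    nc = (size(A, 1) + 1) ÷ 2
    P  = prolongation1d(nc)
    Ac = P' * A * P                    # Galerkin coarse operator, Eq. (3.23)
    x  = jacobi(A, x, b, nu1)          # pre-relaxation
    rc = P' * (b .- A * x)             # restrict the residual
    ec = Ac \ rc                       # coarse-grid solve (direct, for simplicity)
    x  = x .+ P * ec                   # coarse grid correction, Eq. (3.22)
    return jacobi(A, x, b, nu2)        # post-relaxation
end

# usage: x = twogrid(A, zeros(size(A, 1)), b), repeated until the residual is small
```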

3.1 The Shifted Laplacian multigrid method

The shifted Laplacian multigrid method is one of the most common approaches to solve the Helmholtz equation. This method is implemented in efficient software packages [56, 20]. To solve the linear system (2.9) using the shifted Laplacian approach, one introduces a shifted system by adding a complex negative mass matrix to (2.9)

A_s = A_h − iα ω² M_h,    (3.24)

where α > 0 is a shifting parameter and M_h is the (κ²-weighted) mass matrix. From a physical point of view this term is equivalent to adding attenuation to the equation, which means that the waves in the shifted problem decay rapidly if α is large enough, resulting in a local approximation of the waveform. In the shifted Laplacian approach, the shifted matrix is used as a preconditioner for the Helmholtz linear system (2.9) inside a Krylov method, which is usually chosen to be (flexible) GMRES [57]. The preconditioning is obtained by applying a multigrid cycle for approximately inverting the shifted matrix (3.24). The larger α is, the more efficient the solution of the shifted system using multigrid becomes, but the quality of (3.24) as a preconditioner deteriorates. The compromise suggested in [11] is to use α = 0.5, together with rather modest cycles, while [12] and [7] suggest applying more elaborate cycles.
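For illustration, here is a 1D Julia sketch of ours of the shifted operator, assuming the shift takes the form of a complex multiple of the ω²κ² mass term as in (3.24); the function name, stencil, and boundary treatment are simplifications of ours:

```julia
using SparseArrays

# A 1D illustration of the shifted operator, assuming the shift form in (3.24):
# the Helmholtz matrix plus a negative imaginary multiple of the omega^2*kappa^2
# mass term. The stencil and boundary treatment are simplified.
function helmholtz1d(kappa::Vector{Float64}, omega::Float64, h::Float64; alpha::Float64 = 0.0)
    n = length(kappa)
    L = spdiagm(-1 => fill(1.0, n - 1), 0 => fill(-2.0, n), 1 => fill(1.0, n - 1)) / h^2
    M = spdiagm(0 => omega^2 .* kappa .^ 2)     # lumped mass term
    return L + (1.0 - im * alpha) * M           # alpha = 0 gives the (unshifted) matrix
end

kappa = fill(1.0, 257); h = 1 / 256; omega = 2pi * 20.0
A  = helmholtz1d(kappa, omega, h)               # original, indefinite operator
As = helmholtz1d(kappa, omega, h; alpha = 0.5)  # shifted operator used in the preconditioner
```

In the actual method, a multigrid cycle that approximately inverts the shifted matrix is applied as the preconditioner inside flexible GMRES for the unshifted system, as described above.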

We define the prolongation P to be a bilinear interpolation operator, because the Laplacian operator in (1.1) is homogeneous (it does not have varying coefficients). As the relaxation, the damped Jacobi method is often chosen, and its damping parameter needs to be chosen differently for each level, as on coarse grids the wave number becomes larger compared to the mesh size. Consequently, the matrix becomes more indefinite, and even negative definite at some level [58, 12]. Another choice of relaxation is the GMRES method, which automatically adapts to the matrix at each level. This relaxation method was originally suggested in [58] for the Helmholtz equation, and was recently used in [18] and [12].

Besides the type of relaxation and prolongation, one has to choose the number of levels used in the multigrid hierarchy. Unlike many other multigrid scenarios, the algebraically smooth error modes of the Helmholtz operator have a sign-changing behavior at high wave numbers, and therefore cannot be represented well on very coarse grids. Hence, the performance of the solver deteriorates when using more levels. For example, the results in [12] show that the best performance is achieved using three levels only, which is also what we observe in our experience. The works in [12] and [17] invest more work on the second grid, as oscillatory errors are represented significantly better on this grid than on the other, coarser grids. However, when using only a few levels we get a rather large coarsest grid problem, which requires a relatively accurate solution. Factorizing the coarsest grid matrices for large scale 3D problems is memory consuming and limiting. The work in [12] suggests an inexact solution of the coarsest grid using GMRES, which is the approach that we adopt in this work.

Another acceleration technique, suggested in [17], uses a recursive multilevel Krylov solver for the coarse grid problems. Such a cycle is called a Krylov-cycle, and has a rather elaborate recursive structure, depending on the number of Krylov iterations at each level. If this number is 2 (which is the common choice), we get the structure of a W-cycle; see Algorithm 2.

  1. Apply pre-relaxations: x ← relax(A, x, b).

  2. Compute and restrict the residual: r_c = Pᵀ(b − A x).

  3. If the coarsest level is reached, solve A_c e_c = r_c, possibly inexactly.
     Otherwise, apply FGMRES(2) to A_c e_c = r_c starting from e_c = 0,
     with this cycle, applied recursively, as a preconditioner.

  4. Prolong e_c and apply the coarse grid correction: x ← x + P e_c.

  5. Apply post-relaxations: x ← relax(A, x, b).

Algorithm 2: Krylov multigrid cycle.

3.2 The solution of the ADR linear system

We have two ADR linear systems in (2.10)-(2.11). The work of [14] suggests using one of the ADR matrices as a preconditioner for (2.9). From the results and local Fourier analysis in [14] we learn the following:

1. Even at high frequency, the central difference ADR operator in (2.10) is a good preconditioner for (2.9), and the two matrices are spectrally similar. However, the linear system (2.10) is hard to solve using multigrid, similarly to the standard (2.9).

2. The upwind ADR operator in (2.11) is not a good preconditioner for (2.9), suggesting that the operators are indeed different. However, the ADR linear system (2.11) can be solved efficiently using multigrid.

Even though we use a significantly different τ than [14], we observed the same properties in our case as well. In addition, the ADR system with a first order upwind advection operator is solved very efficiently by multigrid. This is not surprising, because the first order advection operator can be obtained as a sum of a second order advection operator and a Laplacian operator, which is well represented on coarser grids (for both upwind and central schemes). This is similar to having a “Laplacian shift” term of magnitude ωκh, as shown below. Unlike the standard shift in (3.24), the “Laplacian shift” strongly damps oscillatory modes in v, but hardly influences spatially smooth modes. Because we solve the Helmholtz equation for a point source and use an accurate travel time, most of the amplitude is smooth (up to reflections, which are oscillatory). This results in a global approximation which has the global behavior of the solution corresponding to the first arrival of the wave, but has almost no reflections. In Fig. 2 we demonstrate such a global approximation compared with a local approximation obtained by the shifted Laplacian preconditioner.
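To make the “Laplacian shift” statement concrete, the 1D first order backward difference satisfies the exact identity

    (v_i − v_{i−1})/h = (v_{i+1} − v_{i−1})/(2h) − (h/2) · (v_{i+1} − 2v_i + v_{i−1})/h²,

so replacing the second order advection scheme by the first order one adds a discrete diffusion term; with an advection coefficient of size roughly 2ωκ, the added term scales like ωκh, which is the magnitude of the “Laplacian shift” referred to above.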

Figure 2: A demonstration of the local and global approximations of the Helmholtz solution. On the left is a solution to (1.1) for a point source with a reflective model (the Wedge model, which appears later). Indeed, reflections are evident in the middle and upper parts of the domain. In the middle figure we show the solution obtained by solving the shifted problem (3.24) for the same point source. It is clear that the waves decay rapidly, and only 2-3 wavelengths are approximated well. This is the reason why the shifted Laplacian method is so sensitive to the number of wavelengths in the domain. On the right we see the solution of (3.25) for the point source, which can be obtained almost as easily as the solution of (3.24). This time we see the global main wave function on the entire domain, but the discretization eliminates all the reflections.

    We solve the system (2.10) in two stages. In the first stage we compute the global approximation of the solution which corresponds to the first arrival of the wave without treating the reflections well. This is efficiently obtained by approximately solving

B_h^(1) v_h = q̃_h  (the ADR system with a first order upwind advection operator)    (3.25)

    up to a quite low accuracy. Once (3.25) is approximately solved, we finalize the solution and add the missing reflections. To this end, we use the approximate solution of (3.25) as an initial guess, and apply the shifted Laplacian approach to the ADR system rescaled as Helmholtz

D^{−1} B_h^c D u_h = q_h,    (3.26)

which is the opposite of the scaling in (2.16). Solving (3.25) inaccurately is achieved by Algorithm 2 in very few iterations, so most of the work in obtaining the solution is invested in the second stage. We note that, in principle, we can use the shifted Laplacian approach alone to solve (2.10) without the first stage, and the opposite is also true—we can use the two-stage approach to treat (2.9) using the opposite rescaling.

Preprocessing: Solve the factored eikonal equation and obtain the travel time τ.

# Stage 1: define a global approximate solution
1. Compute v_h as a low-accuracy solution of the first order upwind system (3.25).

2. Define the rescaled system (3.26) and the initial guess u_h = D^{−1} v_h.

# Stage 2: Complete the solution using the shifted Laplacian method
1. Define the shifted operator by adding the complex shift of (3.24) to the matrix of (3.26).

2. Solve (3.26) using the shifted operator as a preconditioner with multigrid, starting from u_h.

Algorithm 3: The two-stage solution of the central difference ADR equation.
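A high-level Julia sketch of the two-stage structure of Algorithm 3 is given below; it is our own illustration, and the two solver arguments, solve_low_accuracy and solve_shifted_laplacian, are hypothetical placeholders for a few Krylov-cycles on (3.25) and for the shifted-Laplacian FGMRES solve of (3.26), respectively:

```julia
# A high-level sketch of the two-stage structure of Algorithm 3. The two solver
# arguments are hypothetical placeholders: `solve_low_accuracy` stands for a few
# Krylov-cycles applied to the first order upwind system (3.25), and
# `solve_shifted_laplacian` for the preconditioned FGMRES solve of the rescaled
# system (3.26). Actual multigrid machinery is not reproduced here.
function two_stage_solve(solve_low_accuracy, solve_shifted_laplacian,
                         tau::Vector{Float64}, omega::Float64, qt::Vector)
    # Stage 1: a global, reflection-free approximation of the amplitude
    v0 = solve_low_accuracy(qt)            # low-accuracy solve of (3.25)
    D  = exp.(im .* omega .* tau)          # D_jj = exp(i*omega*tau_j)
    u0 = v0 ./ D                           # u = v * exp(-i*omega*tau), Eq. (1.3)
    # Stage 2: complete the solution (add the reflections), starting from u0
    u  = solve_shifted_laplacian(u0)       # shifted-Laplacian solve of (3.26)
    return u, D .* u                       # the waveform and the corresponding amplitude
end
```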

    To solve the ADR system (2.11) using second order upwind discretization we use a preconditioner matrix

    (3.27)

where B_h^(1) is the ADR matrix with first order upwind advection from Eq. (3.25). We treat this preconditioner using Algorithm 2.

    4 Numerical results

In this section we compare two aspects of the ADR approach for solving the Helmholtz problem. First, we compare the accuracy of the numerical solution of the Helmholtz equation (1.1) when it is discretized using the standard and ADR second order discretizations, where u is composed of the solutions of (1.4) and (1.6) through (1.3). To obtain the travel time we use the Fast Marching algorithm in [44] with a second order upwind discretization. Second, we compare the computational effort required to obtain the different solutions using multigrid methods.

    4.1 Accuracy comparison

Figure 3: The four test cases for the accuracy comparison. In the upper row we show the model for each test. From left to right: the linear squared slowness model, the Gaussian model, the wave-guide model and the wedge model. On the bottom row we show the phase according to the travel time corresponding to each model.

    In this section we empirically compare the accuracy of the different discretizations described in Section 2. We test the accuracy by solving a given problem with all discretizations, and comparing the solutions to a reference solution obtained on a four times finer grid using the standard second order discretization (2.9). Some of the test cases that we present involve caustics and reflections.

We consider four heterogeneous test cases, all on a 2D unit square, discretized on a nodal regular grid. Motivated by geophysical applications, we place the source point on the top row of the model (the surface), where we use Neumann boundary conditions. On the bottom and sides of the model we use an absorbing boundary condition to prevent reflection from the model boundaries, as the waves are supposed to continue spreading from these boundaries. We compare the obtained solutions to a reference solution calculated for the same problem on a four times finer grid, and then downsampled. The δ function at the source is discretized as a scaled unit vector at the source node. We consider the four models shown in the top row of Fig. 3, where on the bottom are the corresponding phases. For each discretization—the standard second order finite difference, the ADR with central difference advection, and the ADR with second order upwind advection—we show the obtained solution and the relative error.

    4.1.1 Linear squared slowness model

Figure 4: Accuracy comparison for the linear model. Note that the error introduced by the standard discretization is mostly a phase error, which is not observed in the ADR discretizations.

In this test case the squared slowness κ² is a linear model in the depth direction, starting from 0.4 at the top of the model and decreasing to 0.08 at the bottom. This model is the first model from the left in Fig. 3. For this test case we use a high frequency ω, corresponding to at least 11 grid points per wavelength. Fig. 4 shows the real part of u for the three discretizations, and the relative absolute error. The solution for this test does not contain reflections or caustics, and therefore the amplitude and travel time are smooth. Visually, the three solutions are similar, but the error plot shows a very high error for the standard discretization. This is a phase error that is caused by the dispersion phenomenon mentioned earlier. The ADR discretizations do not include the dispersion because the phase is obtained accurately by the relatively smooth τ. This example illustrates that when there are no reflections or caustics, and the travel time is smooth (except at the point source), the solution for the amplitude obtained by the ADR equation is more accurate than the standard approach.

    4.1.2 Gaussian model

In this test case the squared slowness κ² is a Gaussian function.

The model appears second from the left in Fig. 3, and the corresponding travel time has a discontinuity at the bottom of the model. For this test case we choose ω so that we have at least 12 grid points per wavelength.

    Figure 5: Accuracy comparison for the Gaussian model.

    Fig. 5 shows the obtained solutions, which are visually similar. Because of the caustics, this time all approximations have significant errors, but the error is more dominant when using the standard discretization than in the ADR discretizations. The upwind ADR discretization yields the most accurate solution in general, but is comparable to the central difference ADR solution. The standard discretization again yields a solution with dispersion errors because the frequency is high.

    4.1.3 The wave-guide model

    In this test case the squared slowness is the waveguide model given by

The model is shown second from the right in Fig. 3. Even though the model is very smooth, it generates severe caustics, which lead to interferences in the solution. Since this test case is very complicated, we use a modest frequency, with at least 20 grid points per wavelength.

    Figure 6: Accuracy comparison for the wave-guide model.

Fig. 6 shows the obtained solutions, and in particular it shows the interferences at the bottom part. All solutions are again visually similar. Because of the caustics there are errors, and this time it is clear that the ADR discretization with central difference advection has the lowest error of the three, and the ADR with upwind advection has the highest error compared with the reference solution. Because the frequency is not so high, the standard discretization does not introduce dispersion errors.

    4.1.4 The wedge model

    This model, which is the rightmost model in Fig. 3, is usually given with a sharp step. Here, we smooth the step to be able to have a somewhat accurate fine approximation of the solution as a reference. The top half of the model is given by the function

and the bottom half is generated by mirroring the top half. In our experiment, the step is smoothed over approximately 20 grid points for the coarser grid. The frequency chosen for the experiment corresponds to a wavelength of at least 25 grid points. This model generates reflections from the wedge and is hard to model accurately because of the sharp change in the model.

    Figure 7: Accuracy comparison for the wedge model.

Fig. 7 shows the obtained solutions, and in particular it shows the reflections at the top and middle parts of the model. Visually, the two left approximations are similar, but the right one includes somewhat weaker reflections at the middle part of the model. Besides that, all solutions show comparable errors, with the lowest error obtained by the standard discretization. We note that, particularly in this test case, we are not certain that the reference solution accurately models the reflections.

    4.2 Numerical solution performance

In this section we compare the computational effort required to solve the Helmholtz problem with the three discretizations described earlier. We present examples that appear in geophysical applications, where typically the lateral extent of the domain is long and the depth of the domain is rather short. We consider two and three dimensional examples, for which we test the performance of the solver at high frequency (10-12 grid points per wavelength) and intermediate frequency (about 15-17 grid points per wavelength). For each case we present three grid sizes to demonstrate the scalability of the solvers as the mesh size and frequency grow together. In all cases, we use flexible GMRES(5) with multigrid as a preconditioner and seek a solution up to a fixed relative residual accuracy, starting from a zero initial guess. For solving the linear system (2.9) arising from the standard discretization we use the shifted Laplacian preconditioner with the shift discussed in Section 3.1. We use a geometric multigrid configuration that includes the Krylov multigrid cycles in Algorithm 2, with Jacobi-preconditioned GMRES as the relaxation. We define our coarse grid problems by the Galerkin product (3.23). As the problem becomes more indefinite on coarser grids, we found that it is worthwhile to apply more relaxations on the coarser grids; therefore, on level l we apply l+1 pre- and post-relaxations, that is, 2 for the first level, 3 for the second, and so on. Having large 3D problems in mind, we use 5 levels and an approximate coarsest grid solution. As in [12], we apply 10 Jacobi-preconditioned GMRES iterations instead of a direct solve using a factorization. Because we use relatively elaborate and expensive cycles, we are able to solve the Helmholtz problem using far fewer cycles compared to more standard, cheaper cycles.

We use the same multigrid cycles for the ADR discretizations. For solving (2.11) we use (3.27), treated by multigrid, as a preconditioner. This seems to be the most effective option together with our elaborate cycles for test cases at rather large scales. For solving (2.10) we apply the two-stage scheme described in Algorithm 3. For the first stage, we apply at most 5 Krylov-cycles, or stop earlier once a modest relative residual drop with respect to (3.25) is reached. Once the intermediate residual drop is reached, we switch to solving (3.26) using the same shifted Laplacian configuration mentioned before until convergence, to capture the reflections in the solution.

We show the total number of Krylov-cycles required for convergence (#), the solution time, and the time required to compute the travel time by Fast Marching (the rightmost column in the tables). Our code is written in the Julia language [60], and is available as part of the jInv software [59] (see https://github.com/JuliaInv/ForwardHelmholtz.jl). Our two dimensional tests were computed on a laptop machine using the Windows 10 64-bit OS, with an Intel Core i7 2.8 GHz CPU and 32 GB of RAM. The three dimensional tests were computed on a workstation with two Intel Xeon E5-2620 2 GHz CPUs (6 cores per socket, 2 threads per core, for a total of 24 threads) and 64 GB of RAM, running the CentOS 7 Linux distribution. For the 3D results we use single precision computations to save memory. The code for the solution phase is parallelized in the matrix-vector products, while the Fast Marching (FM) code is sequential, as the algorithm itself is sequential. The parallelism can be efficiently exploited for FM if many linear systems need to be solved and the travel times are calculated simultaneously [59].

    4.2.1 2D linear model

    Figure 8: The linear velocity model. Units are in , and .

Our first test case is a linear velocity model (the squared slowness κ² is the inverse squared velocity) that does not include reflections. The model is given in Figure 8, and the results are summarized in Table 1. It is clear that the shifted Laplacian method requires the most iterations to solve the problem. Moreover, it requires more iterations as the problem gets larger and the frequency gets higher. The solution of (2.10), denoted as “ADR central”, requires about 20%-30% fewer iterations and less time, thanks to the global approximation obtained in the first phase. Solving (2.11) is achieved with the fewest iterations and the least time, and, more importantly, it is fairly mesh and frequency independent. Since this model does not contain reflections or caustics, the “ADR upwind” option is the best one, because it provides both an accurate approximation and a fast solution. It is expected to remain so at even larger scales.

Grid size | Points per wavelength | Standard: # / time | ADR central: # / time | ADR upwind: # / time | Eikonal time
    3.5 17.5 43 5.9s 34 4.2s 39 5.8s 0.33s
    5.5 11.2 67 9.1s 54 6.5s 33 4.7s 0.37s
    5.5 14.9 74 17.9s 56 13.0s 41 10.7s 0.75s
    7.5 10.9 102 24.8s 74 17.0s 40 10.4s 0.82s
    7.5 16.3 93 42.3s 64 28.0s 46 22.6s 1.64s
    11.0 11.2 151 68.2s 93 42.9s 46 22.5s 1.65s
    Table 1: Linear model: solution performance.

    4.2.2 The 2D Marmousi 2 model

    Figure 9: The Marmousi2 P-wave velocity model. Units are in , and .

Our second test case is the P-wave velocity of the Marmousi 2 model [61], which is given in Figure 9. This model is mostly piecewise constant, and includes many reflectors. Table 2 summarizes the results for this test case. The performance of the shifted Laplacian approach is quite similar to the previous test case. The method is quite robust to the heterogeneity of the model and more sensitive to the frequency. For “ADR central”, we see less of an advantage compared to the previous smooth test case. That is because the “global” reflection-less approximation obtained from solving (3.25) is less effective. We note again that (2.10) can be solved using the shifted Laplacian method alone with the same efficiency as (2.9). The third option, “ADR upwind”, again converges in fewer iterations and less time, and is again more mesh independent than the other options.

Grid size | Points per wavelength | Standard: # / time | ADR central: # / time | ADR upwind: # / time | Eikonal time
    3.0 17.3 42 5.7s 35 4.6s 41 6.1s 0.46s
    4.5 11.5 64 8.8s 50 7.0s 42 6.2s 0.47s
    4.5 15.4 61 16.5s 52 13.1s 55 14.6s 0.93s
    6.5 10.8 115 28.1s 93 22.4s 52 13.9s 0.84s
    6.5 16.0 95 42.8s 79 36.1s 63 31.4s 1.8s
    9.0 11.5 164 76.8s 124 58.1s 54 26.7s 1.8s
    Table 2: Marmousi 2 model: solution performance.

    4.2.3 The 3D linear model

Grid size | Points per wavelength | Standard: # / time | ADR central: # / time | ADR upwind: # / time | Eikonal time
    1.5 13.6 18 115s 18 87s 17 107s 16.5s
    2.0 10.2 22 140s 23 110s 18 116s 17s
    2.0 15.3 23 212s 21 160s 19 170s 60.5s
    3.0 10.2 33 302s 32 242s 20 180s 62s
    3.0 13.7 35 534s 30 401s 21 311s 165s
    4.0 10.2 41 620s 36 490s 24 352s 159s
    Table 3: 3D Linear model: solution performance.

In this experiment we consider the three dimensional version of the linear model in Fig. 8, which is smooth and does not introduce reflections. Table 3 summarizes the results for this test case. Because of memory limitations, the resolution and frequencies are much lower here than in 2D, and therefore the iteration counts are much lower than in 2D. In terms of iteration counts, we again see that the standard discretization takes the most iterations, while the ADR systems are solved in fewer iterations (with upwind ADR taking the fewest). The time per iteration in the ADR central column is lower than in the standard column, probably because less work is done on coarser grids. That is, in Algorithm 2, some cycles do not include a second recursive call because a threshold is achieved in FGMRES (we use 0.1 as a threshold for the coarse grid, as suggested in [62]). In light of the 2D results, we do not expect this to be a significant advantage of the ADR discretization over the standard discretization in larger problems. The cost of the FM preprocessing is higher now compared to the 2D case, because in addition to the problem size, the algorithm is more complicated in this case. Still, the solution time of FM is not very significant, considering that the multigrid solution timings are obtained using a highly parallelized code (16 cores), while FM is completely serial. This can be exploited if many systems need to be solved. As the problem gets larger, and the multigrid solution takes more and more iterations, the FM cost will become less and less significant.
Remark: although the cost of each cycle in our solver is of linear complexity, the timings in Table 3 scale slightly better than linearly. That is a result of better performance in the parallelism of our code: as the problem grows, the efficiency of the matrix-vector multiplications improves. This behaviour is obviously independent of the method.

    4.2.4 The 3D Overthrust model

    Figure 10: The SEG Overthrust velocity model. Units are in , and . The model corresponds to a domain of km.
Grid size | Points per wavelength | Standard: # / time | ADR central: # / time | ADR upwind: # / time | Eikonal time
    2.0 14.4 14 89s 13 62s 19 118s 16.5s
    3.0 9.6 20 127s 19 91s 23 146s 17.2s
    3.0 14.4 20 185s 18 137s 26 236s 67s
    4.0 10.8 26 241s 24 180s 25 226s 64s
    4.0 14.4 27 409s 22 294s 34 505s 170s
    6.0 9.6 44 674s 37 504s 32 474s 177s
    Table 4: 3D Overthrust model: solution performance.

The last model that we present is the 3D SEG Overthrust model [63], which, similarly to the Marmousi 2 model, includes many reflecting layers. The model appears in Fig. 10, and Table 4 summarizes the performance results. At the smaller sizes, the upwind ADR discretization requires the most effort to solve, but it becomes more efficient compared with the other two discretizations when the problem gets larger. The shifted Laplacian approach used for the standard and central ADR discretizations yields comparable iteration counts for both. Again, there is a slight edge to ADR central thanks to Stage 1 in Algorithm 3. We expect that at larger scales we will see results similar to those for the 2D Marmousi 2 model.

    5 Conclusions and future work

In this paper we present a new approach for discretizing and solving the Helmholtz equation with a point source. We reformulate the problem based on the Rytov decomposition of the solution, yielding an eikonal equation for the phase and a complex-valued advection-diffusion-reaction equation for the amplitude. We choose the phase based on the travel time of the wave, and compute it based on the factored eikonal equation using the Fast Marching algorithm. The factored version yields a more accurate treatment of the point source for the phase, and a relatively smooth solution for the amplitude. The ADR equation is discretized using second order upwind and central difference discretizations, and the solution of the system is achieved using multigrid. Our approach has two main advantages. First, the majority of the solution of the Helmholtz equation is represented by smooth functions, and hence the reformulated problem is more suitable for multigrid computations. Secondly, the obtained solution is not a first-arrival ansatz only: it includes all the information of the wave propagation, including reflections and interferences.

Our accuracy results show that for models that do not introduce caustics and reflections, our approach yields more accurate solutions than the standard Helmholtz discretization, and in particular, does not introduce a phase error. When reflections and caustics are observed, the accuracy of the ADR discretizations is comparable to the standard one. Our performance results show that the standard and central ADR discretizations are both hard to solve. The ADR system using the upwind discretization is solved more efficiently, but is less accurate than the ADR discretization with central differences in the presence of reflections.

Our approach is intriguing for geophysical applications, especially full waveform inversion (FWI), where many solutions of the Helmholtz equation are required for point sources. There, the model is adapted iteratively, starting from a smooth model like the one in Fig. 8, until a detailed model like the one in Fig. 9 is estimated in the final stages. Smooth models are encountered frequently, and hence our approach can be beneficial computationally. More importantly, our ADR discretization does not introduce dispersion errors for the main wave, which may be very hard to overcome in the inversion process in FWI.

Our future research aims at exploring the advantages of the new discretizations in the context of FWI, and at further improving the numerical solver for the Helmholtz equation at larger scales and higher wave numbers. In addition, we aim to extend the presented approach to a general right-hand side, which will allow the use of the whole method as a preconditioner.

    6 Acknowledgement

The research leading to these results has received funding from the European Union's Seventh Framework Programme (FP7/2007-2013) under grant agreement number 623212 (MC Multiscale Inversion).

    References

    • [1] Pratt R. Seismic waveform inversion in the frequency domain, part 1: Theory, and verification in a physical scale model. Geophysics 1999; 64:888–901.
    • [2] Epanomeritakis I, Akcelik V, Ghattas O, Bielak J. A Newton-CG method for large-scale three-dimensional elastic full-waveform seismic inversion. Inverse Problems 2008; .
    • [3] Virieux J, Operto S. An overview of full-waveform inversion in exploration geophysics. Geophysics 2009; 74(6):WCC1–WCC26.
    • [4] Krebs JR, Anderson JE, Hinkley D, Neelamani R, Lee S, Baumstein A, Lacasse MD. Fast full-wavefield seismic inversion using encoded sources. Geophysics 2009; 74(6):WCC177–WCC188.
    • [5] Biondi B, Almomin A. Simultaneous inversion of full data bandwidth by tomographic full-waveform inversion. Geophysics 2014; 79(3):WA129–WA140.
    • [6] Métivier L, Brossier R, Virieux J, Operto S. Full waveform inversion and the truncated newton method. SIAM Journal on Scientific Computing 2013; 35(2):B401–B437.
    • [7] Treister E, Haber E. Full waveform inversion guided by travel time tomography. SIAM J. Sci. Comput. 2017; 39(5):S587––S609.
    • [8] Singer I, Turkel E. A perfectly matched layer for the helmholtz equation in a semi-infinite strip. Journal of Computational Physics 2004; 201(2):439–465.
    • [9] Engquist B, Majda A. Radiation boundary conditions for acoustic and elastic wave calculations. Communications on pure and applied mathematics 1979; 32(3):313–357.
    • [10] Liao Q, McMechan GA. Multifrequency viscoacoustic modeling and inversion. Geophysics 1996; 61(5):1371–1378.
    • [11] Erlangga YA, Oosterlee CW, Vuik C. A novel multigrid based preconditioner for heterogeneous Helmholtz problems. SIAM Journal on Scientific Computing 2006; 27(4):1471–1492.
    • [12] Calandra H, Gratton S, Pinel X, Vasseur X. An improved two-grid preconditioner for the solution of three-dimensional Helmholtz problems in heterogeneous media. Numerical Linear Algebra with Applications 2013; 20(4):663–688.
    • [13] Poulson J, Engquist B, Li S, Ying L. A parallel sweeping preconditioner for heterogeneous 3D Helmholtz equations. SIAM Journal on Scientific Computing 2013; 35(3):C194–C212.
    • [14] Haber E, MacLachlan S. A fast method for the solution of the Helmholtz equation. Journal of Computational Physics 2011; 230(12):4403–4418.
    • [15] Oosterlee C, Vuik C, Mulder W, Plessix RE. Shifted-laplacian preconditioners for heterogeneous Helmholtz problems. Advanced Computational Methods in Science and Engineering. Springer, 2010; 21–46.
    • [16] Airaksinen T, Heikkola E, Pennanen A, Toivanen J. An algebraic multigrid based shifted-laplacian preconditioner for the Helmholtz equation. Journal of Computational Physics 2007; 226(1):1196–1210.
    • [17] Erlangga YA, Nabben R. On a multilevel Krylov method for the Helmholtz equation preconditioned by shifted laplacian. Electronic Transactions on Numerical Analysis 2008; 31(403-424):3.
    • [18] Cools S, Reps B, Vanroose W. A new level-dependent coarse grid correction scheme for indefinite Helmholtz problems. Numerical Linear Algebra with Applications 2014; 21(4):513–533.
    • [19] Tsuji P, Tuminaro R. Augmented amg-shifted laplacian preconditioners for indefinite helmholtz problems. Numerical Linear Algebra with Applications 2015; 22(6):1077–1101.
    • [20] Reps B, Weinzierl T. Complex additive geometric multilevel solvers for helmholtz equations on spacetrees. ACM Transactions on Mathematical Software (TOMS) 2017; 44(1):2.
    • [21] Ganesh M, Morgenstern C. An efficient multigrid algorithm for heterogeneous acoustic media sign-indefinite high-order fem models. Numerical Linear Algebra with Applications 2017; 24(3).
    • [22] Engquist B, Ying L. Sweeping preconditioner for the Helmholtz equation: hierarchical matrix representation. Communications on pure and applied mathematics 2011; 64(5):697–735.
    • [23] van Leeuwen T, Gordon D, Gordon R, Herrmann FJ. Preconditioning the Helmholtz equation via row-projections. 74th EAGE Conference & Exhibition, 2012.
    • [24] Gordon D, Gordon R. Robust and highly scalable parallel solution of the Helmholtz equation with large wave numbers. Journal of Computational and Applied Mathematics 2013; 237(1):182–196.
    • [25] Brandt A, Livshits I. Wave-ray multigrid method for standing wave equations. Electron. Trans. Numer. Anal 1997; 6:162–181.
    • [26] Olson LN, Schroder JB. Smoothed aggregation for Helmholtz problems. Numerical Linear Algebra with Applications 2010; 17(2-3):361–386.
    • [27] Livshits I. A scalable multigrid method for solving indefinite Helmholtz equations with constant wave numbers. Numerical Linear Algebra with Applications 2014; 21(2):177–193.
    • [28] Leung S, Qian J, Burridge R. Eulerian Gaussian beams for high-frequency wave propagation. Geophysics 2007; 72(5):SM61–SM76.
    • [29] Luo S, Qian J. Factored singularities and high-order lax–friedrichs sweeping schemes for point-source traveltimes and amplitudes. Journal of Computational Physics 2011; 230(12):4742–4755.
    • [30] Luo S, Qian J, Zhao H. Higher-order schemes for 3D first-arrival traveltimes and amplitudes. Geophysics 2012; 77(2):T47–T56.
    • [31] Luo S, Qian J, Burridge R. Fast Huygens sweeping methods for Helmholtz equations in inhomogeneous media in the high frequency regime. Journal of Computational Physics 2014; 270:378–401.
    • [32] Crandall MG, Lions PL. Viscosity solutions of Hamilton-Jacobi equations. Transactions of the American Mathematical Society 1983; 277(1):1–42.
    • [33] Rouy E, Tourin A. A viscosity solutions approach to shape-from-shading. SIAM Journal on Numerical Analysis 1992; 29(3):867–884.
    • [34] Tsitsiklis JN. Efficient algorithms for globally optimal trajectories. Automatic Control, IEEE Transactions on 1995; 40(9):1528–1538.
    • [35] Sethian JA. A fast marching level set method for monotonically advancing fronts. Proceedings of the National Academy of Sciences 1996; 93(4):1591–1595.
    • [36] Sethian JA. Fast marching methods. SIAM review 1999; 41(2):199–235.
    • [37] Tsai YHR, Cheng LT, Osher S, Zhao HK. Fast sweeping algorithms for a class of Hamilton-Jacobi equations. SIAM journal on numerical analysis 2003; 41(2):673–694.
    • [38] Zhao H. A fast sweeping method for eikonal equations. Mathematics of computation 2005; 74(250):603–627.
    • [39] Zhang YT, Zhao HK, Qian J. High order fast sweeping methods for static Hamilton-Jacobi equations. Journal of Scientific Computing 2006; 29(1):25–56.
    • [40] Kao CY, Osher S, Qian J. Lax–friedrichs sweeping scheme for static Hamilton-Jacobi equations. Journal of Computational Physics 2004; 196(1):367–391.
    • [41] Qian J, Zhang YT, Zhao HK. A fast sweeping method for static convex Hamilton-Jacobi equations. Journal of Scientific Computing 2007; 31(1):237–271.
    • [42] Fomel S, Luo S, Zhao H. Fast sweeping method for the factored eikonal equation. Journal of Computational Physics 2009; 228(17):6440–6455.
    • [43] Pica A, et al.. Fast and accurate finite-difference solutions of the 3D eikonal equation parametrized in celerity. 67th Ann. Internat. Mtg, Soc. of Expl. Geophys 1997; :1774–1777.
    • [44] Treister E, Haber E. A fast marching algorithm for the factored eikonal equation. Journal of Computational Physics 2016; 324:210–225.
    • [45] Luo S, Qian J. Fast sweeping methods for factored anisotropic eikonal equations: multiplicative and additive factors. Journal of Scientific Computing 2012; 52(2):360–382.
    • [46] Luo S, Qian J, Burridge R. High-order factorization based high-order hybrid fast sweeping methods for point-source eikonal equations. SIAM Journal on Numerical Analysis 2014; 52(1):23–44, doi:10.1137/120901696. URL http://dx.doi.org/10.1137/120901696.
    • [47] Noble M, Gesret A, Belayouni N. Accurate 3-d finite difference computation of traveltimes in strongly heterogeneous media. Geophysical Journal International 2014; 199(3):1572–1585.
    • [48] Singer I, Turkel E. High-order finite difference methods for the helmholtz equation. Computer Methods in Applied Mechanics and Engineering 1998; 163(1-4):343–358.
    • [49] Singer I, Turkel E. Sixth-order accurate finite difference schemes for the helmholtz equation. Journal of Computational Acoustics 2006; 14(03):339–351.
    • [50] Operto S, Virieux J, Amestoy P, L’Excellent JY, Giraud L, Ali HBH. 3d finite-difference frequency-domain modeling of visco-acoustic wave propagation using a massively parallel direct solver: A feasibility study. Geophysics 2007; 72(5):SM195–SM211.
    • [51] Turkel E, Gordon D, Gordon R, Tsynkov S. Compact 2d and 3d sixth order schemes for the helmholtz equation with variable wave number. Journal of Computational Physics 2013; 232(1):272–287.
    • [52] Bayliss A, Goldstein CI, Turkel E. On accuracy conditions for the numerical computation of waves. Journal of Computational Physics 1985; 59(3):396–404.
    • [53] Briggs WL, Henson VE, McCormick SF. A multigrid tutorial. Second edn., SIAM, 2000.
    • [54] Trottenberg U, Oosterlee C, Schüller A. Multigrid. Academic Press: London and San Diego, 2001.
    • [55] Yavneh I. Why multigrid methods are so efficient. IEEE: Computing in Science and Engineering 2006; 8(6):12–22.
    • [56] Knibbe H, Oosterlee CW, Vuik C. GPU implementation of a Helmholtz Krylov solver preconditioned by a shifted laplace multigrid method. Journal of Computational and Applied Mathematics 2011; 236(3):281–293.
    • [57] Saad Y. A flexible inner-outer preconditioned GMRES algorithm. SIAM Journal on Scientific Computing 1993; 14(2):461–469.
    • [58] Elman HC, Ernst OG, O’leary DP. A multigrid method enhanced by krylov subspace iteration for discrete helmholtz equations. SIAM Journal on scientific computing 2001; 23(4):1291–1315.
    • [59] Ruthotto L, Treister E, Haber E. jInv – a flexible Julia package for PDE parameter estimation. SIAM J. Sci. Comput. 2017; 39(5):S702––S722.
    • [60] Bezanson J, Edelman A, Karpinski S, Shah VB. Julia: A fresh approach to numerical computing. SIAM Review 2017; 59(1):65–98, doi:10.1137/141000671. URL http://julialang.org/publications/julia-fresh-approach-BEKS.pdf.
    • [61] Martin GS, Wiley R, Marfurt KJ. Marmousi2: An elastic upgrade for marmousi. The Leading Edge 2006; 25(2):156–166.
    • [62] Notay Y, Vassilevski PS. Recursive Krylov-based multigrid cycles. Numerical Linear Algebra with Applications 2008; 15(5):473–487.
    • [63] Aminzadeh F, Jean B, Kunz T. 3-D salt and overthrust models. Society of Exploration Geophysicists, 1997.