Performance prediction of finite-difference solvers for different computer architectures

The life-cycle of a partial differential equation (PDE) solver is often characterized by three development phases: the development of a stable numerical discretization; development of a correct (verified) implementation; and the optimization of the implementation for different computer architectures. Often it is only after significant time and effort have been invested that the performance bottlenecks of a PDE solver are fully understood, and the precise details vary between different computer architectures. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method will perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort, as it both informs the developer of which kinds of optimisation are beneficial and indicates when the maximum expected performance has been reached and optimisation work should stop. We show how the discretization of a wave equation can be studied theoretically to understand the performance limitations of the method on modern computer architectures. We focus on the roofline model, now broadly used in the high-performance computing community, which considers the achievable performance in terms of the peak memory bandwidth and peak floating-point performance of a computer with respect to algorithmic choices. A first-principles analysis of operational intensity for key time-stepping finite-difference algorithms is presented. With this information available at the time of algorithm design, the expected performance on target computer systems can be used as a driver for algorithm design.


1 Introduction

The increasing complexity of modern computer architectures means that developers have to work much harder at implementing and optimising scientific modelling codes for the software performance to keep pace with the increase in performance of the hardware. This trend is driving further specialisation in skills, such that the geophysicist, numerical analyst and software developer are increasingly unlikely to be the same person. One problem this creates is that the numerical analyst makes algorithmic choices at the mathematical level that define the scope of possible software implementations and optimizations available to the software developer. Additionally, even for an expert software developer it can be difficult to know which kinds of optimisation should be considered, or even when an implementation is "good enough" and optimisation work should stop. It is common for performance results to be presented relative to a previously existing implementation, but such a relative measure of performance is inadequate, as the reference implementation may itself perform far from optimally. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method will perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort, as it both informs the developer of which kinds of optimisation are beneficial and indicates when the maximum expected performance has been reached and optimisation work should stop.

Performance models such as the roofline model of Williams et al. (2009) help establish statistics for best-case performance: they evaluate the performance of a code in terms of hardware utilization (e.g. percentage of peak floating-point performance) instead of a relative speed-up. Performance models that establish algorithmic optimality and provide a measure of hardware utilization are increasingly used to determine effective algorithmic changes that reliably increase performance across a wide variety of algorithms (Asanovic et al., 2006). However, for many scientific codes used in practice, wholesale algorithmic changes, such as changing the spatial discretization or the governing equations themselves, are highly invasive and require a costly software re-write. Establishing a detailed and predictive performance model for the various algorithmic choices is therefore imperative when designing the next generation of industry-scale codes.

We establish a theoretical performance model for explicit wave-equation solvers as used in full-waveform inversion (FWI) and reverse-time migration (RTM). We focus on a set of widely used equations and establish lower bounds on the degree of the spatial discretization required to achieve optimal hardware utilization on a set of well-known modern computer architectures. Our theoretical prediction of performance limitations may then be used to inform the algorithmic choices of future implementations, and provides an absolute measure of realizable performance against which implementations may be compared to demonstrate their computational efficiency.

For the purpose of this paper we only consider explicit time-stepping algorithms based on a second-order time discretization; extensions to higher-order time-stepping schemes are briefly discussed at the end. The reason we restrict ourselves to explicit time stepping is that it involves no matrix inversion, only scalar products and additions, which makes the theoretical computation of the performance bounds tractable. The performance of other classical algorithms, such as matrix-vector products or the FFT as described by (Patterson and Hennessy, 2007), is included for illustrative purposes.

2 Introduction to stencil computation

A stencil algorithm updates or computes the value of a field at one spatial location from the values at neighbouring ones. In the context of wave-equation solvers, the stencil is defined by the support (grid locations) and the coefficients of the finite-difference scheme. We illustrate the stencil for the Laplacian, which defines the stencil of the acoustic wave-equation (Eq. 8), and for the rotated Laplacian used in the anisotropic wave-equation (Eq. 10, 11) in Figs. 1 and 2. The points coloured in blue are the values loaded, while the point coloured in red corresponds to the value written.

Figure 1: Stencil for the acoustic and anisotropic wave-equation for different orders of discretization. A new value for the centre point (red) is obtained by a weighted sum of the values at all the neighbouring points (blue). a) 2nd-order Laplacian, b) 2nd-order rotated Laplacian, c) 16th-order Laplacian, d) 16th-order rotated Laplacian.
Figure 2: Stencil for the 16th-order acoustic and anisotropic wave-equation with the distance to the centre highlighted: a) Laplacian, b) rotated Laplacian.

The implementation of a time-stepping algorithm for a wavefield u, the solution of the acoustic wave-equation (Eq. 8), follows straightforwardly from the representation of the stencil. We do not include the absorbing boundary conditions (ABC), as, depending on the implementation choice, they are either part of the stencil or decoupled and treated separately.

  for t = 1 to n_t do
     for (x, y, z) ∈ Ω do
        u(t, x, y, z) ← stencil(u(t−1), u(t−2))
     end for
     Add source: u(t, x_s, y_s, z_s) ← u(t, x_s, y_s, z_s) + q(t)
  end for
Algorithm 1 Time-stepping

In Algorithm 1, Ω is the set of all grid positions in the computational domain, (x, y, z) are the local indices, the stencil positions are indexed relative to the centre position, n_t is the number of time steps and q is the source term decoupled from the stencil. In the following we concentrate on the stencil itself, as the loops in space and time do not impact the theoretical performance model we introduce. The roofline model is based solely on the amount of input/output (blue/red in the stencils) and the arithmetic operations (number of sums and multiplications) required to update one grid point, and we will show that the optimal reference performance is independent of the size of the domain (number of grid points) and of the number of time steps.
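This time-stepping loop can be sketched in NumPy for the simplest case, the acoustic wave-equation with the 2nd-order 7-point Laplacian stencil of Fig. 1a. The grid size, the stencil coefficient and the source position below are illustrative assumptions, not values from the text:

```python
import numpy as np

def time_step(u_prev, u_curr, c2dt2, src_pos, src_val):
    """One explicit step: u(t+1) = 2*u(t) - u(t-1) + (v*dt/h)**2 * Lap(u(t)),
    using the 7-point 2nd-order Laplacian stencil of Fig. 1a."""
    u_next = np.zeros_like(u_curr)
    c = u_curr[1:-1, 1:-1, 1:-1]  # centre points (red in Fig. 1)
    lap = (u_curr[2:, 1:-1, 1:-1] + u_curr[:-2, 1:-1, 1:-1] +
           u_curr[1:-1, 2:, 1:-1] + u_curr[1:-1, :-2, 1:-1] +
           u_curr[1:-1, 1:-1, 2:] + u_curr[1:-1, 1:-1, :-2] - 6.0 * c)
    u_next[1:-1, 1:-1, 1:-1] = 2.0 * c - u_prev[1:-1, 1:-1, 1:-1] + c2dt2 * lap
    u_next[src_pos] += src_val  # source term, decoupled from the stencil
    return u_next

# Illustrative driver: n_t time steps on a small cube, source at the centre;
# c2dt2 = (v*dt/h)**2 = 0.1 is an arbitrary stable choice.
n, n_t = 32, 10
u_prev = np.zeros((n, n, n), dtype=np.float32)
u_curr = np.zeros((n, n, n), dtype=np.float32)
for t in range(n_t):
    u_prev, u_curr = u_curr, time_step(u_prev, u_curr, 0.1, (n // 2,) * 3, 1.0)
```

Only the inner stencil expression matters for the analysis that follows; the outer loops in space and time do not change the per-point operation count.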

Notes on parallelization:

Using a parallel framework to improve an existing code is one of the most common approaches in the current stencil-computation community. It is crucial to understand, however, that this is not an algorithmic improvement from the operational-intensity point of view. We will show that the algorithmic efficiency of a stencil code is independent of the size of the model, and is therefore not affected by a domain-decomposition-style parallelization via OpenMP or MPI. The results shown in the following are purely dedicated to helping design a code from an algorithmic point of view; parallelization only impacts the performance of the implemented code by improving hardware usage.

3 Roofline Performance Analysis

The roofline model is a performance analysis framework designed to evaluate the floating point performance of an algorithm by relating it to memory bandwidth usage (Williams et al., 2009). It has proved to be very popular because it provides a readily comprehensible performance metric to interpret runtime performance of a particular implementation according to the achievable optimal hardware utilization for a given architecture (Williams and Patterson, 2008). This model has been applied to real-life codes in the past to analyze and report performance including oceanic climate models Epicoco et al. (2014), combustion modeling Chan et al. (2013) and even seismic imaging Andreolli et al. (2014). It has also been used to evaluate the effectiveness of implementation-time optimizations like autotuning Datta and Yelick (2009), or cache-blocking on specific hardware platforms like vector processors Sato et al. (2009) and GPUs Kim et al. (2011). Tools are available to plot the machine-specific parameters of the roofline model automatically Lo et al. (2014). When more information about the target hardware is available, it is possible to refine the roofline model into the cache-aware roofline model which gives more accurate predictions of performance Ilic et al. (2014). The analysis presented here can be extended to the cache-aware roofline model but in order to keep it general, we restrict it to the general roofline model.

The roofline model has also been used to compare different types of basic numerical operations to predict their performance and feasibility on future systems Barba and Yokota (2013), quite similar to this paper. However, in this paper, instead of comparing stencil computation to other numerical methods, we carry out a similar comparison between numerical implementations using different stencil sizes. This provides an upper-bound of performance on any hardware platform at a purely conceptual stage, long before the implementation of the algorithm.

Other theoretical models that predict the upper-bound performance of generic code on hypothetical hardware have been built (Lai and Seznec, 2013; Wahib and Maruyama, 2014; Hofmann et al., 2015a; Duplyakin et al., 2016), but being too broad in scope they cannot be used to drive algorithmic choices such as choosing the right discretization order. Some of these models have also been applied to stencil codes (Stengel et al., 2015; Datta et al., 2009); however, the analysis was of a specific implementation and cannot be applied in general. There are many tools to perform performance prediction at the code level (Hammer et al., 2015; Narayanan et al., 2010; Unat et al., 2015; Rahman et al., 2011). However, any tool that predicts performance based on a code is analyzing the implementation, not the algorithm in general. Although performance modeling is a deep and mature field, most work is restricted to modeling the performance of specific implementations in code. Hofmann et al. make a comparison quite similar to the one made here, in which two algorithmic choices for the same problem are compared with a performance model.

In this section we demonstrate how one creates a roofline model for a given computer architecture, and derives the operational intensity for a given numerical algorithm. This establishes the theoretical upper-bound for the performance of a specific algorithm on that architecture. A general roofline performance analysis consists of three steps:

  • The memory bandwidth, in bytes per second, and the peak number of floating-point operations per second (FLOPS) of the computer architecture are established either from the manufacturer's specification or through measurement using standard benchmarks.

  • The operational intensity (OI) of the algorithm is established by calculating the ratio of the number of floating-point operations performed to the memory traffic, in FLOPs per byte. This number characterizes the algorithmic choices that affect performance on a computer system. In combination with the measured memory bandwidth and peak performance of a computer architecture, it provides a reliable estimate of the maximum achievable performance.

  • The solver is benchmarked in order to establish the achieved performance. A roofline plot can be created to illustrate how the achieved performance compares to the maximum performance predicted by the roofline for the algorithm's OI. This establishes a measure of optimality of the implementation, or alternatively the maximum possible gain from further optimization of the software.

3.1 Establishing the Roofline

The roofline model characterises a computer architecture using two parameters: the maximum memory bandwidth B_peak, in bytes per second; and the peak floating-point performance F_peak, in FLOPS, achievable by the hardware. The maximally achievable performance P is then modelled as:

P(I) = min(F_peak, B_peak × I), (1)

where I is the OI.

As illustrated in Fig. 3, this limitation defines two distinct regions:

  • Memory-bound: The region left of the ridge point constitutes algorithms that are limited by the amount of data coming into the CPU from memory. Memory-bound codes typically prioritise caching optimizations, such as data reordering and cache blocking.

  • Compute-bound: The region right of the ridge point contains algorithms that are limited by the maximum performance of the arithmetic units in the CPU and thus defines the maximum achievable performance of the given architecture. Compute-bound codes typically prioritise vectorization to increase throughput.
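Eq. 1 and the ridge point separating the two regions are simple enough to encode directly. The peak numbers below are hypothetical, chosen only to illustrate the two regimes:

```python
def attainable_gflops(oi, peak_gflops, peak_bw_gbs):
    """Roofline bound of Eq. 1: P(I) = min(F_peak, B_peak * I)."""
    return min(peak_gflops, peak_bw_gbs * oi)

def ridge_point(peak_gflops, peak_bw_gbs):
    """OI separating the memory-bound from the compute-bound region."""
    return peak_gflops / peak_bw_gbs

# Hypothetical machine: 1000 GFLOPS peak compute, 100 GB/s peak bandwidth.
F, B = 1000.0, 100.0
print(ridge_point(F, B))             # 10.0 FLOPs/byte
print(attainable_gflops(5.0, F, B))  # 500.0  -> memory-bound
print(attainable_gflops(20.0, F, B)) # 1000.0 -> compute-bound
```

An algorithm whose OI falls left of the ridge point (here 10 FLOPs/byte) is bandwidth-limited regardless of how well it is implemented.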

Figure 3: Roofline diagram showing the operational intensity of three well-known algorithms as reported by Williams et al. (2009): sparse matrix-vector multiplication (SpMV), stencil computation and the 3D Fast Fourier Transform (3DFFT). The hardware limits are taken from Andreolli et al. (2015) and the compute-limited area is highlighted through shading.

It is worth noting that changing from single- to double-precision arithmetic halves the OI, because the volume of data that must be transferred between main memory and the CPU is doubled. The peak performance is affected as well, since the volume of data and the number of concurrently usable floating-point units (FPUs) change. As is common in industry, we assume single-precision arithmetic for the examples presented here, but the analysis is straightforward to extend to double precision.

Andreolli et al. (2015) illustrate the derivation of the theoretical peak performance for a system that consists of two Intel Xeon E5-2697 v2 (2S-E5) CPUs with 12 cores each, running at 2.7 GHz without turbo mode. Since these processors support 256-bit SIMD instructions, they can process eight single-precision floating-point (SP FP) operations per clock cycle. Further, taking into account the two floating-point operations issued per cycle, this yields

2 CPUs × 12 cores × 2.7 GHz × 8 SP FP × 2 = 1036.8 GFLOPS.

Clearly, this assumes full utilization of the two parallel pipelines for Add and Multiply operations.

A similar estimate for the peak memory bandwidth can be made from the memory frequency f_mem, the number of channels n_ch, the number of bytes per channel b_ch and the number of CPUs n_cpu, giving B_peak = f_mem × n_ch × b_ch × n_cpu.

It is important to note here that there is an instruction-execution overhead that the above calculations do not take into account, so these theoretical peak numbers are not achievable in practice; only a fraction of them is attainable (Andreolli et al., 2015). For this reason, two benchmark algorithms, STREAM TRIAD for memory bandwidth (McCalpin, 1995, 2007) and LINPACK for floating-point performance (Dongarra, 1988), are often used to measure the practical limits of a particular hardware platform. These algorithms are known to achieve a very high percentage of the peak values and are thus indicative of the practical hardware limitations.
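The two peak estimates can be reproduced in a few lines; the compute figure follows from the processor parameters quoted above, while the memory configuration in the example is a hypothetical one, not a datasheet value from the text:

```python
def peak_sp_gflops(n_cpus, cores_per_cpu, clock_ghz, sp_per_cycle, pipelines=2):
    """Theoretical peak, as in the text: sockets x cores x clock x SIMD
    width x the two parallel Add/Multiply pipelines."""
    return n_cpus * cores_per_cpu * clock_ghz * sp_per_cycle * pipelines

def peak_bandwidth_gbs(mem_ghz, channels, bytes_per_channel, n_cpus):
    """Theoretical peak bandwidth from the memory-subsystem parameters."""
    return mem_ghz * channels * bytes_per_channel * n_cpus

# Dual-socket E5-2697 v2 example from the text: 2 x 12 x 2.7 GHz x 8 SP x 2.
print(peak_sp_gflops(2, 12, 2.7, 8))       # ~1036.8 GFLOPS
# Hypothetical memory subsystem: 1.866 GHz, 4 channels, 8 bytes/channel, 2 CPUs.
print(peak_bandwidth_gbs(1.866, 4, 8, 2))  # ~119.4 GB/s
```

In practice one would replace both numbers with STREAM TRIAD and LINPACK measurements, as noted above.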

3.2 Performance Model

The key to using the roofline analysis as a guiding tool for algorithmic design decisions and implementation optimization is the operational intensity, as it relates the number of FLOPs computed to the number of bytes moved to and from RAM. The OI clearly does not capture many important details of an implementation, such as numerical accuracy or time-to-solution. It is therefore imperative to look at the OI in combination with these measures when making algorithmic choices.

Here we analyze the algorithmic bounds of a set of finite-difference discretizations of the wave-equation using different stencils and spatial orders. We define the algorithmic operational intensity in terms of the total number of FLOPs required to compute a solution, and we assume that our hypothetical system has a cache of infinite size and no latency, inducing zero redundancy in memory traffic (Williams and Patterson, 2008). This acts as a theoretical upper bound for the performance of any conceivable implementation.

We furthermore limit our theoretical investigation to analysing a single time step as an indicator of overall achievable performance. This assumption allows us to generalize the total number of bytes moved per time step, in terms of the number of spatially dependent variables (e.g. wavefields, physical properties) used in the discretized equation, as 4 (L + 2S) N, where L is the number of variables whose values are loaded, S is the number of variables whose values are stored, N is the number of grid points and 4 is the number of bytes per single-precision floating-point value. The factor of 2 on S arises from the fact that most computer architectures load a cache line before it is overwritten completely. However, some computer architectures, such as the Intel Xeon Phi, support streaming stores, so that values can be written directly to memory without first loading the associated cache line, in which case the expression for the total data movement becomes 4 (L + S) N. It is important to note that limiting the analysis to a single time step limits the scope of the infinite-caching assumption above.

Since we have assumed a constant grid size across all spatially dependent variables, we can parametrize the number of FLOPs computed per time step as N f(k), where f(k) defines the number of FLOPs performed to update one grid point as a function of the stencil size k used to discretize the spatial derivatives. Additional terms could be added for source terms and boundary conditions, but these represent a small proportion of a time step in general and are neglected here for simplicity. This gives the following expression for the OI as a function of k:

I(k) = f(k) / (4 (L + 2S)). (2)
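As a sketch, Eq. 2 can be evaluated for any combination of load/store counts and FLOP function; here the traffic term is written as 4(L + 2S) bytes per grid point, following the cache-line argument above, and the 100-FLOP figure passed in is a placeholder rather than a derived f(k):

```python
def operational_intensity(flops_per_point, loads, stores,
                          bytes_per_value=4, stream_stores=False):
    """Eq. 2: I(k) = f(k) / (bytes moved per grid point). Without streaming
    stores a cache line is loaded before being fully overwritten, so stores
    count twice in the traffic term."""
    store_factor = 1 if stream_stores else 2
    return flops_per_point / (bytes_per_value * (loads + store_factor * stores))

# Acoustic traffic pattern from Section 4 (L = 3 loads, S = 1 store);
# flops_per_point = 100 is a placeholder value.
print(operational_intensity(100.0, 3, 1))                      # 5.0 FLOPs/byte
print(operational_intensity(100.0, 3, 1, stream_stores=True))  # 6.25 FLOPs/byte
```

The comparison also shows how streaming stores raise the OI of a stencil without changing a single FLOP of the algorithm.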

4 Operational intensity for finite-differences

We derive a general model for the operational intensity of wave-equation solvers with finite-difference discretizations using explicit time stepping, and apply it to three different wave-equation formulations commonly used in the oil and gas exploration community: acoustic anisotropic; vertical transverse isotropic (VTI); and tilted transversely isotropic (TTI) (Liu et al., 2009). The theoretical operational intensity for the 3D discretized equations is calculated as a function of the finite-difference stencil size k, which allows us to make predictions about the minimum discretization order required for each algorithm to reach the compute-bound regime on a target computer architecture. For completeness we describe the equations in Appendix A.

4.1 Stencil operators

As a baseline for the finite-difference discretization, we consider a 1D symmetric stencil of size k, which uses k values of the discretized variable to compute any spatial derivative, enforcing a fixed support for all derivatives. Other choices of discretization are possible, such as choosing a stencil for the first derivative and applying it iteratively to obtain higher-order derivatives. Our analysis would still be valid, but would require a rewrite of the following atomic operation counts. The number of FLOPs used for the three types of derivatives involved in our equations is calculated as:

  • first-order derivative with respect to x;

  • second-order derivative with respect to x;

  • second-order cross derivative with respect to x and y;

where, in 3D, x, y, z correspond to the three dimensions and u denotes the discretized field.
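Assuming each derivative is evaluated as a single weighted sum over its support (k points for the 1D derivatives, k × k for the cross derivative), the counts can be sketched as follows; these closed forms are our reading of the scheme, not values quoted from the text:

```python
def first_derivative_flops(k):
    """Weighted sum over k points: k multiplications and k - 1 additions."""
    return 2 * k - 1

def second_derivative_flops(k):
    """Same k-point support in one dimension."""
    return 2 * k - 1

def cross_derivative_flops(k):
    """Tensor-product support of k*k points: k^2 mult and k^2 - 1 add."""
    return 2 * k * k - 1

def laplacian_flops(k):
    """3D Laplacian: three 1D second derivatives plus two additions."""
    return 3 * second_derivative_flops(k) + 2

print(first_derivative_flops(3), cross_derivative_flops(3), laplacian_flops(3))
# -> 5 17 17
```

The quadratic growth of the cross-derivative count with k is what ultimately gives the TTI equation its much steeper OI slope in Fig. 10.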

Equation mult add duplicates
Acoustic: 0 0 3 5
VTI: 0 0 5 5
TTI: 0 44 17
Table 1: Derivation of the per-grid-point FLOP count f(k) per stencil invocation for each equation.

Computing the total wavefield memory volume for each equation, we have L = 3, S = 1 for Acoustic (load the velocity and the two previous time steps, write the new time step); L = 7, S = 2 for VTI (load the velocity, two anisotropy parameters and two previous time steps for each of two wavefields, write the new time step of the two wavefields); and L = 13, S = 2 for TTI (VTI plus six precomputed cos/sin of the tilt and dip angles). Eq. 2 then predicts the increase of the operational intensity in terms of k once f(k) is substituted. The OI for the three wave-equations is given by:

  • Acoustic anisotropic: ,

  • VTI: ,

  • TTI: ,

and plotted as a function of k in Fig. 10. Using the derived formula for the algorithmic operational intensity in terms of stencil size, we can now analyze the optimal performance of each equation with respect to a specific computer architecture. We use the theoretical and measured hardware limitations reported by Andreolli et al. (2015) to demonstrate how the main algorithmic limitation shifts from bandwidth-bound at low k to compute-bound at high k on a dual-socket Intel Xeon in Figs. 4-6 and on an Intel Xeon Phi in Figs. 7-9.

It is of particular interest to note from Fig. 4 that one particular stencil order provides just enough arithmetic load for the 3D acoustic equation solver to become compute-bound, while the same order falls just short of the compute-bound region for the VTI algorithm. On the other hand, a much lower-order stencil suffices for the TTI algorithm to become compute-bound, because its OI grows quadratically with k (Fig. 10) instead of linearly.

At this point we can define I_min, the minimum OI required for an algorithm to become compute-bound on a particular architecture, as the x-axis coordinate of the ridge point in Figs. 4-6 and 7-9. Note that the x-axis position of the ridge point changes between the two architectures. This difference in the compute-bound limit shows that a different spatial discretization order should be used on the two architectures to optimize hardware usage. As reported by Andreolli et al. (2015), the I_min derived from achievable peak rates differs between the Intel Xeon and the Intel Xeon Phi. This entails that while the acoustic anisotropic and VTI wave-equations remain memory-bound up to high discretization orders, the TTI equation reaches the compute-bound region already at a low-order discretization.

Figure 4: Increase in algorithmic OI with increasing stencil size on a dual-socket Intel Xeon E5-2697 v2 (Andreolli et al., 2015) for a 3D acoustic kernel. One stencil order is coincident with the ridge point, the transition from memory-bound to compute-bound computation.
Figure 5: Increase in algorithmic OI with increasing stencil size on a dual-socket Intel Xeon E5-2697 v2 (Andreolli et al., 2015) for a 3D VTI kernel. Similarly to the acoustic model, one stencil order is coincident with the ridge point.
Figure 6: Increase in algorithmic OI with increasing stencil size on a dual-socket Intel Xeon E5-2697 v2 (Andreolli et al., 2015) for a 3D TTI kernel. A low-order stencil is already compute-bound.
Figure 7: Increase in algorithmic OI with increasing stencil size on an Intel Xeon Phi 7120A co-processor (Andreolli et al., 2015) for a 3D acoustic kernel. Unlike on the Xeon E5-2697, a higher-order stencil is the smallest one to be compute-bound.
Figure 8: Increase in algorithmic OI with increasing stencil size on an Intel Xeon Phi 7120A co-processor (Andreolli et al., 2015) for a 3D VTI kernel. The minimum compute-bound stencil order is not the same as for the acoustic kernel on this architecture.
Figure 9: Increase in algorithmic OI with increasing stencil size on an Intel Xeon Phi 7120A co-processor (Andreolli et al., 2015) for a 3D TTI kernel. A low-order stencil is already compute-bound, similarly to the Xeon E5-2697.

From the analytical expressions derived, we can now generalize the derivation of minimum OI values by plotting the simplified expressions for I(k) against known hardware OI limitations, as shown in Fig. 10. We obtain a theoretical prediction of the minimum spatial order required for each algorithm to provide enough arithmetic load for implementations to become compute-bound. Most importantly, Fig. 10 shows that the TTI wave-equation has a significantly steeper OI slope, which indicates that it will saturate a given piece of hardware at a much smaller spatial discretization order than the acoustic or VTI algorithms.

Moreover, we can predict the minimum spatial discretization order required on each architecture for the acoustic, VTI and TTI solvers. The Nvidia GPU, with a slightly lower hardware I_min than the Intel Xeon CPU, requires slightly lower orders, while even larger stencils are required for the Intel Xeon Phi accelerator. This derivation demonstrates that overall very large stencils are required for the acoustic anisotropic and VTI solvers to fully utilize modern HPC hardware, and that even TTI requires a high order to computationally saturate HPC architectures with a very high arithmetic throughput, like the Intel Xeon Phi.

Figure 10: Increase in OI with stencil size and machine-specific minimum OI values for all three hardware architectures considered in this paper.

5 Example: MADAGASCAR modelling kernel

We demonstrate our proposed performance model and its flexibility by applying it to a broadly used and benchmarked modelling kernel contained in Madagascar (Sergey et al., 2013), illustrating how easily our method extends to a different wave-equation and, by extension, to any PDE solver. The code implements the 3D anisotropic elastic wave-equation and is described in (Robin M. Weiss, 2013). We perform our analysis based on the space order, hardware and runtime described in (Robin M. Weiss, 2013). The governing equation considered is:

ρ ∂²u_i/∂t² = ∂σ_ij/∂x_j + F_i, (3)

where ρ is the density, u_i is the i-th component of the three-dimensional wavefield displacement (i = 1, 2, 3), F_i is the source term, ε_ij = (∂u_i/∂x_j + ∂u_j/∂x_i)/2 is the strain tensor, σ_ij is the stress tensor and c_ijkl is the stiffness tensor. The equation is discretized with a star stencil for the first-order derivatives and a second-order scheme in time, and solves for all three components of u. Eq. 3 uses Einstein notation, meaning that repeated indices represent summation:

σ_ij = c_ijkl ε_kl = Σ_k Σ_l c_ijkl ε_kl. (4)

From this equation, and knowing the finite-difference scheme used, we can already compute the minimum required memory traffic and the operational intensity. We must solve this equation for all three components of the wavefield at once, as the equations in u_i and σ_ij are coupled. For a global estimate of the overall memory traffic, we need to account for loading and storing the values of the displacement vector and loading the values of c_ijkl. In case the stiffness tensor is constant in space, its contribution to the traffic is independent of the grid size N, which yields the smallest overall data volume. In the realistic physical configuration of a spatially varying stiffness tensor, we must additionally load every stiffness component at every grid point, giving the largest data volume. Finally, if the symmetries of the stiffness tensor are taken into account, the number of stiffness values to load per grid point is reduced, leading to an intermediate data volume.
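The three traffic regimes can be compared with a small sketch. The displacement load/store counts here are our assumption (two time levels of three components loaded, three components stored), while 36 and 21 are the standard elasticity counts for a stiffness tensor with minor symmetries only and with full symmetry, respectively; treat the exact byte totals as illustrative:

```python
def data_volume_bytes(loads, stores, n_points, bytes_per_value=4):
    """Traffic per time step, 4*(L + 2*S)*N, as in Section 3.2."""
    return bytes_per_value * (loads + 2 * stores) * n_points

N = 300 ** 3                  # illustrative grid size only
u_loads, u_stores = 6, 3      # assumed: 2 time levels x 3 components in, 3 out
vol_const = data_volume_bytes(u_loads, u_stores, N)       # stiffness cached
vol_sym   = data_volume_bytes(u_loads + 21, u_stores, N)  # 21 independent c_ijkl
vol_full  = data_volume_bytes(u_loads + 36, u_stores, N)  # minor symmetries only
print(vol_const < vol_sym < vol_full)  # True
```

Exploiting the tensor symmetries thus directly raises the OI of the kernel, since the FLOP count is unchanged while the traffic shrinks.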

The number of valuable FLOPs performed to update one grid point can be estimated by:

  • 9 first derivatives () : mult add

  • 9 sums for ( adds) and mult for =

  • 9 first derivatives and 9 sums =

  • 3 times 3 sums to update = .

The summation of all four contributions gives a total of 441 operations per grid point, and dividing by the memory traffic we obtain the operational intensity for a spatially varying stiffness tensor and for a constant one:

(5)

Using the OI values derived above, we can now put the results presented by Robin M. Weiss (2013) in context by interpreting their runtime results with respect to our performance measure. The achieved GFLOPS were obtained on the basis of 1000 time steps with high-order spatial and second-order temporal finite-differences. We interpret Fig. 11a) of Robin M. Weiss (2013) to give the approximate run time and domain size, and with these parameters we obtain the following achieved performance:

P = 441 N n_t / T, (6)

where N is the number of grid points, n_t is the number of time steps and T is the run time.
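Eq. 6 reduces to a one-liner; the grid size and run time below are placeholders, since the exact values from Robin M. Weiss (2013) are not reproduced here:

```python
def achieved_gflops(flops_per_point, n_points, n_timesteps, runtime_s):
    """Eq. 6: sustained performance over a full run, in GFLOPS."""
    return flops_per_point * n_points * n_timesteps / runtime_s / 1e9

# Placeholder run: 441 FLOPs/point (Section 5), hypothetical 256^3 grid,
# 1000 time steps and 100 s of wall time.
perf = achieved_gflops(441, 256 ** 3, 1000, 100.0)
print(round(perf, 2))  # ~73.99 GFLOPS for these placeholder numbers
```

Comparing such a sustained figure against the roofline bound for the kernel's OI is what yields the hardware-utilization percentage discussed next.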

Fig. 11 shows the calculated performance in relation to our predicted algorithmic bounds for constant and variable stiffness. The use of a constant stiffness tensor puts the OI of the considered equation in the compute-bound region for the benchmarked GPU architecture (NVIDIA GTX480). Assuming a spatially varying stiffness tensor, we can calculate the achieved hardware utilization from the reported results, using the achievable peak memory bandwidth and maximum achievable performance reported by Konstantinidis and Cotronis (2015). Assuming the fraction of peak performance reported by Andreolli et al. (2015) is achievable, the roofline model suggests that there is still potential to double the performance of the code through software optimization. It is not possible to draw such a conclusion from traditional performance measures such as timings or scaling plots. This highlights the importance of a reliable performance model that can provide an absolute measure of performance in terms of the algorithm and the computer architecture.

Figure 11: Roofline model for the 3D elastic anisotropic kernel from (Robin M. Weiss, 2013) on a 480-core NVIDIA GTX480 GPU (with hardware specification from Konstantinidis and Cotronis (2015)).

6 Cost-benefit analysis

So far we have discussed the design of finite-difference algorithms purely from a performance point of view, without regard to numerical accuracy or cost-to-solution. We now discuss the impact of the discretization order on the achieved accuracy of the solution and how that, in turn, affects the wall-clock time required for the computation. To do so, we look at the numerical requirements of a time-stepping algorithm for the wave-equation. More specifically, we concentrate on two properties, dispersion and stability, in the acoustic case. This analysis is extendable to more advanced wave-equations such as VTI and TTI with additional numerical analysis. The dispersion criterion and stability condition for the acoustic wave-equation are given by (Wu et al., 1996; Lines et al., 1999):

Δt ≤ (2h / v_max) √(a_1 / a_2),    h ≤ v_min / (N_p f_max)    (7)

where:

  • a_1 is the sum of the absolute values of the weights of the finite-difference scheme for the second time derivative of the wavefield;

  • a_2 is the sum of the absolute values of the weights of the finite-difference approximation of the Laplacian;

  • v_max is the maximum velocity;

  • f_max is the maximum frequency of the source term, which defines the minimum wavelength λ_min = v_min / f_max for a given minimum velocity v_min;

  • N_p is the number of grid points per wavelength. The number of grid points per wavelength determines the amount of dispersion (different wavelengths propagating at different velocities) generated by the finite-difference scheme: the lower the number, the higher the dispersion for a fixed discretization order.

These two conditions define the computational setup for a given source and physical model size. Since a_2 increases with the spatial discretization order, Eq. 7 shows that higher discretization orders require a smaller time step, which increases the total number of time steps for a fixed final time and grid size. However, higher-order discretizations also allow the use of fewer grid points per wavelength (smaller N_p). Fewer grid points per wavelength lead to a smaller overall computational domain, as a fixed physical distance is represented by a coarser mesh; and because the grid spacing has been increased, the critical time step (the maximum stable value) also increases. Overall, high-order discretizations have better computational parameters for a predetermined physical problem. From these two considerations, we can derive an absolute cost-to-solution estimate for a given model as a function of the discretization order for a fixed maximum frequency and physical model size. The following results are not experimental runtimes but estimates of the minimum achievable runtime assuming a perfectly performing implementation. We use the following setup:

  • We fix the physical model size at 500 grid points in each of the three directions for a second-order discretization (minimum grid size).

  • The number of grid points per wavelength is N_p = 6 for a second-order spatial discretization and N_p = 2 for a 24th-order discretization, varying linearly for intermediate orders.

  • The number of time steps is 1000 for the second-order spatial discretization and is computed from the grid spacing and critical time step for the other spatial orders.

The hypothetical numerical setup (with a fixed maximum velocity and a second-order time discretization) is summarized in Table 2. We combine the estimate of a full experimental run with the estimated optimal performance to obtain an estimate of the optimal time-to-solution for a fixed physical problem. The estimated runtime is the ratio of the total number of GFLOPs (the FLOPs per grid point multiplied by the number of grid points and the number of time steps) to the maximum achievable performance for this OI. Table 3 shows the estimated runtime assuming peak performance on two systems: a dual-socket Intel Xeon E5-2697 v2 and an Intel Xeon Phi 7120A co-processor.

Order        a_2     N_p   h     Δt       Grid points   Time steps
2nd order    12      6     1     0.5774   1.25e+08      1000
6th order    18.13   5     1.2   0.5637   7.24e+07      1024
12th order   21.22   4     1.5   0.6513   3.70e+07      887
18th order   22.68   3     2     0.8399   1.56e+07      688
24th order   23.57   2     3     1.2359   4.63e+06      468
Table 2: Cost-to-solution computational setup summary.
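The setup of Table 2 can be reproduced from first principles. A sketch assuming the standard central second-derivative stencil weights, a_1 = 4 for the second-order time stencil, the stability bound Δt ≤ (2h / v_max)√(a_1 / a_2) (Lines et al., 1999), and v_max = 2, a value inferred here from the tabulated Δt entries:

```python
from math import factorial, sqrt

def second_derivative_weights(k):
    """Standard central weights for d2/dx2 with 2k-th order accuracy."""
    c0 = -2.0 * sum(1.0 / j**2 for j in range(1, k + 1))
    cj = [2.0 * (-1) ** (j + 1) * factorial(k) ** 2
          / (factorial(k - j) * factorial(k + j) * j**2)
          for j in range(1, k + 1)]
    return [c0] + cj

def a2_3d(k):
    """Sum of the absolute stencil weights for the 3D Laplacian."""
    w = second_derivative_weights(k)
    return 3.0 * (abs(w[0]) + 2.0 * sum(abs(c) for c in w[1:]))

def critical_dt(h, v_max, k, a1=4.0):
    """Maximum stable time step dt <= (2h / v_max) * sqrt(a1 / a2)."""
    return (2.0 * h / v_max) * sqrt(a1 / a2_3d(k))

# Reproduce the 2nd- and 12th-order rows of Table 2:
print(round(a2_3d(1), 2), round(critical_dt(1.0, 2.0, 1), 4))  # 12.0 0.5774
print(round(a2_3d(6), 2), round(critical_dt(1.5, 2.0, 6), 4))  # 21.22 0.6513
```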

Order   OI     Total GFLOPs   GFLOPS Xeon   GFLOPS Phi   Runtime Xeon   Runtime Phi
2nd     1.375  2.75e+03       137.5         275          20s            10s
6th     2.875  3.414e+03      287.5         575          12s            6s
12th    5.125  2.691e+03      512.5         1025         6s             3s
18th    7.375  1.266e+03      737.5         1475         2s             1s
24th    9.625  3.337e+02      962.5         1925         1s             1s

Table 3: Cost-to-solution estimation for several spatial discretizations on fixed physical problem.
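The runtime estimates in Table 3 follow from dividing the total work by the attainable roofline performance. A sketch where the memory bandwidths (100 GB/s for the Xeon system, 200 GB/s for the Phi) and the compute peak are assumptions chosen to be consistent with the tabulated GFLOPS ceilings, not vendor figures:

```python
def estimated_runtime(total_gflops, oi, peak_gflops, bw_gbs):
    """Best-case runtime: total work divided by the roofline ceiling,
    which is the lesser of the compute peak and OI x bandwidth."""
    return total_gflops / min(peak_gflops, oi * bw_gbs)

# 2nd-order row of Table 3: OI = 1.375, 2.75e3 GFLOPs of total work.
# 1.375 * 100 GB/s = 137.5 GFLOPS ceiling -> 2750 / 137.5 = 20 s.
print(estimated_runtime(2.75e3, 1.375, 1.0e3, 100.0))  # 20.0 (Xeon)
print(estimated_runtime(2.75e3, 1.375, 1.0e3, 200.0))  # 10.0 (Phi)
```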

We see that by combining the roofline results with the stability conditions, we obtain an estimate of the optimal cost-to-solution of an algorithm. Higher-order stencils lead to better hardware usage and a lower wall-clock time-to-solution. These results, however, rely on mathematical analysis that assumes a homogeneous velocity. In the case of a heterogeneous model, high-order discretizations may result in inaccurate, even though stable and non-dispersive, solutions to the wave-equation. The choice of discretization order should therefore be made with more than just performance in mind.

7 Conclusions

Implementing an optimised solver is generally a long and expensive process. It is therefore imperative to have a reliable estimate of the achievable peak performance, in FLOPS, of an algorithm at both the design and optimised-implementation stages of development.

The roofline model provides a readily understandable graphical tool, even for a non-specialist, to quickly assess and evaluate the computational effectiveness of a particular implementation of an algorithm. We have shown how the roofline model can be applied to finite-difference discretizations of the wave-equation commonly used in the geophysics community. Although the model is quite simple, it provides a reliable estimate of the peak performance achievable by a given finite-difference discretization regardless of the implementation. Not only does this help the algorithm designer decide between different discretization options, it also gives solver developers an absolute measure of the optimality of a given implementation. The roofline model has also proved extremely useful in guiding further optimization strategies, since it highlights the limitations of a particular version of the code and indicates whether memory bandwidth optimisations, such as loop blocking, or FLOPs optimisations, such as SIMD vectorisation, are likely to improve results.

However, one should always be mindful that the roofline model does not provide a complete measure of performance; it should be complemented with other metrics, such as time-to-solution or strong-scaling behaviour, to establish a full understanding of the achieved performance of a particular algorithmic choice and implementation.

8 Acknowledgements

This work was financially supported in part by the Natural Sciences and Engineering Research Council of Canada Collaborative Research and Development Grant DNOISE II (CDRP J 375142-08) and the Imperial College London Intel Parallel Computing Centre. This research was carried out as part of the SINBAD II project with the support of the member organizations of the SINBAD Consortium.

References

  • Williams et al. (2009) S. Williams, A. Waterman, D. Patterson, Roofline: an insightful visual performance model for multicore architectures, Communications of the ACM 52 (4) (2009) 65–76.
  • Asanovic et al. (2006) K. Asanovic, R. Bodik, B. C. Catanzaro, J. J. Gebis, P. Husbands, K. Keutzer, D. A. Patterson, W. L. Plishker, J. Shalf, S. W. Williams, et al., The landscape of parallel computing research: A view from berkeley, Tech. Rep., Technical Report UCB/EECS-2006-183, EECS Department, University of California, Berkeley, 2006.
  • Patterson and Hennessy (2007) D. A. Patterson, J. L. Hennessy, Computer Organization and Design: The Hardware/Software Interface, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 3rd edn., ISBN 0123706068, 9780123706065, 2007.
  • Williams and Patterson (2008) S. Williams, D. Patterson, Roofline Performance Model, http://crd.lbl.gov/assets/pubspresos/parlab08-roofline-talk.pdf, 2008.
  • Epicoco et al. (2014) I. Epicoco, S. Mocavero, F. Macchia, G. Aloisio, The roofline model for oceanic climate applications, in: High Performance Computing & Simulation (HPCS), 2014 International Conference on, IEEE, 732–737, 2014.
  • Chan et al. (2013) C. Chan, D. Unat, M. Lijewski, W. Zhang, J. Bell, J. Shalf, Software design space exploration for exascale combustion co-design, in: International Supercomputing Conference, Springer, 196–212, 2013.
  • Andreolli et al. (2014) C. Andreolli, P. Thierry, L. Borges, C. Yount, G. Skinner, Genetic algorithm based auto-tuning of seismic applications on multi and manycore computers, in: EAGE Workshop on High Performance Computing for Upstream, 2014.

  • Datta and Yelick (2009) K. Datta, K. A. Yelick, Auto-tuning stencil codes for cache-based multicore platforms, Ph.D. thesis, University of California, Berkeley, 2009.
  • Sato et al. (2009) Y. Sato, R. Nagaoka, A. Musa, R. Egawa, H. Takizawa, K. Okabe, H. Kobayashi, Performance tuning and analysis of future vector processors based on the roofline model, in: Proceedings of the 10th workshop on MEmory performance: DEaling with Applications, systems and architecture, ACM, 7–14, 2009.
  • Kim et al. (2011) K.-H. Kim, K. Kim, Q.-H. Park, Performance analysis and optimization of three-dimensional FDTD on GPU using roofline model, Computer Physics Communications 182 (6) (2011) 1201–1207.
  • Lo et al. (2014) Y. J. Lo, S. Williams, B. Van Straalen, T. J. Ligocki, M. J. Cordery, N. J. Wright, M. W. Hall, L. Oliker, Roofline model toolkit: A practical tool for architectural and program analysis, in: International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, Springer, 129–148, 2014.
  • Ilic et al. (2014) A. Ilic, F. Pratas, L. Sousa, Cache-aware roofline model: Upgrading the loft, IEEE Computer Architecture Letters 13 (1) (2014) 21–24.
  • Barba and Yokota (2013) L. A. Barba, R. Yokota, How will the fast multipole method fare in the exascale era, SIAM News 46 (6) (2013) 1–3.
  • Lai and Seznec (2013) J. Lai, A. Seznec, Performance upper bound analysis and optimization of SGEMM on Fermi and Kepler GPUs, in: Code Generation and Optimization (CGO), 2013 IEEE/ACM International Symposium on, IEEE, 1–10, 2013.
  • Wahib and Maruyama (2014) M. Wahib, N. Maruyama, Scalable kernel fusion for memory-bound GPU applications, in: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, IEEE Press, 191–202, 2014.
  • Hofmann et al. (2015a) J. Hofmann, J. Eitzinger, D. Fey, Execution-cache-memory performance model: introduction and validation, arXiv preprint arXiv:1509.03118, 2015a.
  • Duplyakin et al. (2016) D. Duplyakin, J. Brown, R. Ricci, Active Learning in Performance Analysis, in: Cluster Computing (CLUSTER), 2016 IEEE International Conference on, IEEE, 182–191, 2016.

  • Stengel et al. (2015) H. Stengel, J. Treibig, G. Hager, G. Wellein, Quantifying performance bottlenecks of stencil computations using the execution-cache-memory model, in: Proceedings of the 29th ACM on International Conference on Supercomputing, ACM, 207–216, 2015.
  • Datta et al. (2009) K. Datta, S. Kamil, S. Williams, L. Oliker, J. Shalf, K. Yelick, Optimization and performance modeling of stencil computations on modern microprocessors, SIAM review 51 (1) (2009) 129–159.
  • Hammer et al. (2015) J. Hammer, G. Hager, J. Eitzinger, G. Wellein, Automatic loop kernel analysis and performance modeling with KernCraft, in: Proceedings of the 6th International Workshop on Performance Modeling, Benchmarking, and Simulation of High Performance Computing Systems, ACM, 4, 2015.
  • Narayanan et al. (2010) S. H. K. Narayanan, B. Norris, P. D. Hovland, Generating performance bounds from source code, in: Parallel Processing Workshops (ICPPW), 2010 39th International Conference on, IEEE, 197–206, 2010.
  • Unat et al. (2015) D. Unat, C. Chan, W. Zhang, S. Williams, J. Bachan, J. Bell, J. Shalf, ExaSAT: An exascale co-design tool for performance modeling, The International Journal of High Performance Computing Applications 29 (2) (2015) 209–232.
  • Rahman et al. (2011) S. M. F. Rahman, Q. Yi, A. Qasem, Understanding stencil code performance on multicore architectures, in: Proceedings of the 8th ACM International Conference on Computing Frontiers, ACM, 30, 2011.
  • Hofmann et al. (2015b) J. Hofmann, D. Fey, M. Riedmann, J. Eitzinger, G. Hager, G. Wellein, Performance analysis of the Kahan-enhanced scalar product on current multicore processors, in: International Conference on Parallel Processing and Applied Mathematics, Springer, 63–73, 2015b.
  • Andreolli et al. (2015) C. Andreolli, P. Thierry, L. Borges, G. Skinner, C. Yount, Chapter 23 - Characterization and Optimization Methodology Applied to Stencil Computations, in: J. Reinders, J. Jeffers (Eds.), High Performance Parallelism Pearls, Morgan Kaufmann, Boston, ISBN 978-0-12-802118-7, 377 – 396, doi:http://dx.doi.org/10.1016/B978-0-12-802118-7.00023-6, URL http://www.sciencedirect.com/science/article/pii/B9780128021187000236, 2015.
  • McCalpin (1995) J. D. McCalpin, Memory Bandwidth and Machine Balance in Current High Performance Computers, IEEE Computer Society Technical Committee on Computer Architecture (TCCA) Newsletter (1995) 19–25.
  • McCalpin (2007) J. D. McCalpin, STREAM: Sustainable Memory Bandwidth in High Performance Computers, Tech. Rep., University of Virginia, Charlottesville, Virginia, URL http://www.cs.virginia.edu/stream/, a continually updated technical report, 1991–2007.
  • Dongarra (1988) J. Dongarra, The LINPACK Benchmark: An Explanation, in: Proceedings of the 1st International Conference on Supercomputing, Springer-Verlag, London, UK, UK, ISBN 3-540-18991-2, 456–474, URL http://dl.acm.org/citation.cfm?id=647970.742568, 1988.
  • Liu et al. (2009) W. Liu, K. Bube, L. Zhang, K. Nihei, Stable reverse-time migration in variable-tilt TI media, in: 71st EAGE Conference and Exhibition incorporating SPE EUROPEC 2009, 2009.
  • Fomel et al. (2013) S. Fomel, P. Sava, I. Vlad, Y. Liu, V. Bashkardin, Madagascar: open-source software project for multidimensional data analysis and reproducible computational experiments, Journal of Open Research Software 1 (1), 2013.
  • Robin M. Weiss (2013) R. M. Weiss, J. Shragge, Solving 3D anisotropic elastic wave equations on parallel GPU devices, Geophysics 78 (2) (2013), doi:http://dx.doi.org/10.1190/geo2012-0063.1.
  • Konstantinidis and Cotronis (2015) E. Konstantinidis, Y. Cotronis, A Practical Performance Model for Compute and Memory Bound GPU Kernels, in: 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, ISSN 1066-6192, 651–658, doi:10.1109/PDP.2015.51, 2015.
  • Wu et al. (1996) W. Wu, L. R. Lines, H. Lu, Analysis of higher‐order, finite‐difference schemes in 3-D reverse‐time migration, GEOPHYSICS 61 (3) (1996) 845–856, doi:10.1190/1.1444009, URL http://dx.doi.org/10.1190/1.1444009.
  • Lines et al. (1999) L. Lines, R. Slawinski, R. Bording, A recipe for stability of finite-difference wave-equation computations, GEOPHYSICS 64 (3) (1999) 967–969, doi:10.1190/1.1444605, http://library.seg.org/doi/pdf/10.1190/1.1444605.

Appendix A Wave-equations

In the following equations, u is the pressure field in the acoustic case, while p and r are the split wavefields in the anisotropic case. We denote by u[t] and u[t−1] the values of the wavefield at all grid points at the current and previous time steps, respectively. The physical parameters are the square slowness m, the Thomsen parameters ε and δ, and the tilt and azimuth angles θ and φ. The main difficulty in the TTI case is the presence of trigonometric functions (cos, sin), which are known to be extremely expensive to compute (typically about an order of magnitude more expensive than an add or a multiply). Here we assume these functions are precomputed and come from a look-up table, thus only involving memory traffic. In the acoustic case the governing equation is:

m ∂²u/∂t² − ∇²u = q    (8)
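As a minimal sketch of how this equation is advanced in time, the following second-order leapfrog update illustrates the stencil structure (periodic boundaries via np.roll and the omission of the source term are simplifying assumptions; this is not the optimised kernel considered in the text):

```python
import numpy as np

def acoustic_step(u_prev, u_curr, m, dt, h):
    """One leapfrog update of m * u_tt - laplacian(u) = 0 using a
    2nd-order stencil in time and space, periodic boundaries."""
    lap = (-6.0 * u_curr
           + np.roll(u_curr, 1, axis=0) + np.roll(u_curr, -1, axis=0)
           + np.roll(u_curr, 1, axis=1) + np.roll(u_curr, -1, axis=1)
           + np.roll(u_curr, 1, axis=2) + np.roll(u_curr, -1, axis=2)) / h**2
    return 2.0 * u_curr - u_prev + (dt**2 / m) * lap

# A spatially constant field has zero Laplacian, so it stays constant.
u0 = np.full((8, 8, 8), 1.5)
u1 = acoustic_step(u0, u0, m=0.25, dt=0.5, h=1.0)
```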

In the anisotropic case we consider the equations described in (Liu et al., 2009). More advanced formulations have been developed; however, this formulation allows an explicit expression of the operational intensity and simple stencil expressions, and it is the formulation used in our code base. In the VTI case the governing equations are:

m ∂²p/∂t² − (1 + 2ε)(∂²p/∂x² + ∂²p/∂y²) − √(1 + 2δ) ∂²r/∂z² = q
m ∂²r/∂t² − √(1 + 2δ)(∂²p/∂x² + ∂²p/∂y²) − ∂²r/∂z² = q    (9)

For TTI the governing equations are:

m ∂²p/∂t² − (1 + 2ε)(G_xx + G_yy)p − √(1 + 2δ) G_zz r = q
m ∂²r/∂t² − √(1 + 2δ)(G_xx + G_yy)p − G_zz r = q    (10)

where the rotated differential operators are defined as

G_zz = sin²θ cos²φ ∂²/∂x² + sin²θ sin²φ ∂²/∂y² + cos²θ ∂²/∂z²
       + sin²θ sin(2φ) ∂²/∂x∂y + sin(2θ) cosφ ∂²/∂x∂z + sin(2θ) sinφ ∂²/∂y∂z,
G_xx + G_yy = ∇² − G_zz    (11)
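The rotated operator can be sanity-checked numerically: G_zz is the second derivative along the rotated symmetry axis n = (sin θ cos φ, sin θ sin φ, cos θ), so for zero tilt it must reduce to ∂²/∂z². A sketch, where the function g_zz and the test point are illustrative constructions, not code from the paper:

```python
import numpy as np

def g_zz(f, p, theta, phi, h=1e-3):
    """Rotated second derivative (n . grad)^2 of f at point p,
    approximated by a centered difference along the rotated axis n."""
    n = np.array([np.sin(theta) * np.cos(phi),
                  np.sin(theta) * np.sin(phi),
                  np.cos(theta)])
    p = np.asarray(p, dtype=float)
    return (f(p + h * n) - 2.0 * f(p) + f(p - h * n)) / h**2

# For f = z**2 and zero tilt, G_zz f = d2f/dz2 = 2 everywhere.
f = lambda p: p[2] ** 2
print(round(g_zz(f, [0.3, 0.1, 0.7], theta=0.0, phi=0.0), 6))  # 2.0
```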