1 Introduction
The increasing complexity of modern computer architectures means that developers must work much harder at implementing and optimising scientific modelling codes for the software performance to keep pace with the increase in performance of the hardware. This trend is driving further specialisation in skills, such that the geophysicist, numerical analyst and software developer are increasingly unlikely to be the same person. One problem this creates is that the numerical analyst makes algorithmic choices at the mathematical level that define the scope of possible software implementations and optimizations available to the software developer. Additionally, even for an expert software developer it can be difficult to know which optimisations should be considered, or even when an implementation is "good enough" and optimisation work should stop. It is common for performance results to be presented relative to a previously existing implementation, but such a relative measure of performance is wholly inadequate, as the reference implementation may itself be far from optimal. One way to mitigate this issue is to establish a reliable performance model that allows a numerical analyst to make reliable predictions of how well a numerical method would perform on a given computer architecture, before embarking upon potentially long and expensive implementation and optimization phases. The availability of a reliable performance model also saves developer effort, as it both informs the developer which kinds of optimisation are beneficial and indicates when the maximum expected performance has been reached and optimisation work should stop.
Performance models such as the roofline model by (Williams et al., 2009) help establish estimates of best-case performance — to evaluate the performance of a code in terms of hardware utilization (e.g. percentage of peak floating-point performance) instead of a relative speedup. Performance models that establish algorithmic optimality and provide a measure of hardware utilization are increasingly used to determine effective algorithmic changes that reliably increase performance across a wide variety of algorithms (Asanovic et al., 2006). However, for many scientific codes used in practice, wholesale algorithmic changes, such as changing the spatial discretization or the governing equations themselves, are highly invasive and require a costly software rewrite. Establishing a detailed and predictive performance model for the various algorithmic choices is therefore imperative when designing the next generation of industry-scale codes.
We establish a theoretical performance model for explicit wave-equation solvers as used in full waveform inversion (FWI) and reverse time migration (RTM). We focus on a set of widely used equations and establish lower bounds on the degree of the spatial discretization required to achieve optimal hardware utilization on a set of well-known modern computer architectures. Our theoretical prediction of performance limitations may then be used to inform the algorithmic choices of future implementations, and provides an absolute measure of realizable performance against which implementations may be compared to demonstrate their computational efficiency.
For the purpose of this paper we only consider explicit time-stepping algorithms based on a second-order time discretization; the extension to higher-order time-stepping schemes is briefly discussed at the end. The reason we only consider explicit time stepping is that it does not involve any matrix inversion, but only scalar products and additions, making the theoretical computation of the performance bounds possible. The performance of other classical algorithms, such as matrix-vector products or the FFT, as described by (Patterson and Hennessy, 2007), is included for illustrative purposes.
2 Introduction to stencil computation
A stencil algorithm is designed to update or compute the value of a field at one spatial location from the values at neighbouring locations. In the context of wave-equation solvers, the stencil is defined by the support (grid locations) and the coefficients of the finite-difference scheme. We illustrate the stencil for the Laplacian, which defines the stencil of the acoustic wave-equation (Eq. 8), and for the rotated Laplacian used in the anisotropic wave-equations (Eqs. 10, 11) in Figs. 1 and 2. The points coloured in blue are the values loaded, while the point coloured in red corresponds to a written value.
The implementation of a time-stepping algorithm for a wavefield that solves the acoustic wave-equation (Eq. 8) follows straightforwardly from the representation of the stencil. We do not include the absorbing boundary conditions (ABC), as depending on the choice of implementation they are either part of the stencil or decoupled and treated separately.
In Algorithm 1, the outer loop runs over the time steps and the inner loop over the set of all grid positions in the computational domain; at each grid position, the stencil indices select the neighbouring values used to update the centre point, and the source term is decoupled from the stencil. In the following we concentrate on the stencil itself, as the loops in space and time do not impact the theoretical performance model we introduce. The roofline model is based solely on the amount of input/output (blue/red in the stencils) and arithmetic operations (number of sums and multiplications) required to update one grid point, and we will show that the optimal reference performance is independent of the size of the domain (number of grid points) and of the number of time steps.
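As an illustration, the inner update of such a time-stepping loop can be sketched for a 1D second-order acoustic stencil; the function name and signature are hypothetical, and the sketch ignores boundary conditions and the source term:

```python
def acoustic_step_1d(u_prev, u_curr, c, dt, dx):
    """One explicit leapfrog time step of the 1D acoustic wave equation.

    Each updated point depends only on loaded neighbouring values (the
    'blue' points) and overwrites a single value (the 'red' point).
    """
    n = len(u_curr)
    u_next = [0.0] * n
    for i in range(1, n - 1):
        # Second-order 1D Laplacian stencil: loads u_curr[i-1..i+1]
        lap = (u_curr[i - 1] - 2.0 * u_curr[i] + u_curr[i + 1]) / dx**2
        # Second-order explicit time update
        u_next[i] = 2.0 * u_curr[i] - u_prev[i] + (c * dt) ** 2 * lap
    return u_next
```

A full solver would wrap this in a loop over time steps and cycle the three time levels; neither loop changes the per-grid-point operation count on which the roofline analysis is based.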
Notes on parallelization:
Using a parallel framework to improve an existing code is one of the most widely used tools in the current stencil-computation community. It is, however, crucial to understand that this is not an algorithmic improvement from the operational-intensity point of view. We will show that the algorithmic efficiency of a stencil code is independent of the size of the model, and is therefore not affected by domain-decomposition parallelization via OpenMP or MPI. The results shown in the following are purely dedicated to helping the design of a code from an algorithmic point of view, while parallelization only impacts the performance of the implemented code by improving hardware usage.
3 Roofline Performance Analysis
The roofline model is a performance-analysis framework designed to evaluate the floating-point performance of an algorithm by relating it to memory-bandwidth usage (Williams et al., 2009). It has proved very popular because it provides a readily comprehensible performance metric that interprets the runtime performance of a particular implementation relative to the achievable optimal hardware utilization for a given architecture (Williams and Patterson, 2008). This model has been applied to real-life codes to analyze and report performance, including oceanic climate models (Epicoco et al., 2014), combustion modeling (Chan et al., 2013) and seismic imaging (Andreolli et al., 2014). It has also been used to evaluate the effectiveness of implementation-time optimizations like auto-tuning (Datta and Yelick, 2009), or cache blocking on specific hardware platforms like vector processors (Sato et al., 2009) and GPUs (Kim et al., 2011). Tools are available to plot the machine-specific parameters of the roofline model automatically (Lo et al., 2014). When more information about the target hardware is available, the roofline model can be refined into the cache-aware roofline model, which gives more accurate predictions of performance (Ilic et al., 2014). The analysis presented here can be extended to the cache-aware roofline model, but in order to keep it general we restrict it to the basic roofline model.
The roofline model has also been used to compare different types of basic numerical operations in order to predict their performance and feasibility on future systems (Barba and Yokota, 2013), quite similar in spirit to this paper. However, instead of comparing stencil computation to other numerical methods, we carry out a comparison between numerical implementations using different stencil sizes. This provides an upper bound on performance on any hardware platform at a purely conceptual stage, long before the implementation of the algorithm.
Other theoretical models to predict the upper-bound performance of generic code on hypothetical hardware have been built (Lai and Seznec, 2013; Wahib and Maruyama, 2014; Hofmann et al., 2015a; Duplyakin et al., 2016) but, being too broad in scope, they cannot be used to drive algorithmic choices such as choosing the right discretization order. Some of these models have also been applied to stencil codes (Stengel et al., 2015; Datta et al., 2009); however, the analysis was of a specific implementation and cannot be applied in general. There are many tools that perform performance prediction at the code level (Hammer et al., 2015; Narayanan et al., 2010; Unat et al., 2015; Rahman et al., 2011). However, any tool that predicts performance based on a code is analyzing the implementation and not the algorithm in general. Although performance modeling is a deep and mature field, most work is restricted to modeling the performance of specific implementations in code. Hofmann et al. (2015a) present a comparison quite similar to the one made here, where two algorithmic choices for the same problem are compared with a performance model.
In this section we demonstrate how to create a roofline model for a given computer architecture and how to derive the operational intensity of a given numerical algorithm. This establishes the theoretical upper bound for the performance of a specific algorithm on that architecture. A general roofline performance analysis consists of three steps:

The memory bandwidth, in bytes per second, and the peak number of floating-point operations per second (FLOPS) of the computer architecture are established either from the manufacturer's specification or through measurement using standard benchmarks.

The operational intensity (OI) of the algorithm is established by calculating the ratio of the number of floating point operations performed to memory traffic, FLOPs per byte. This number characterizes the algorithmic choices that affect performance on a computer system. In combination with the measured memory bandwidth and peak performance of a computer architecture, this provides a reliable estimate of the maximum achievable performance.

The solver is benchmarked in order to establish the achieved performance. A roofline plot can be created to illustrate how the achieved performance compares to the maximum performance predicted by the roofline for the algorithm's OI. This establishes a measure of the optimality of the implementation, or alternatively the maximum possible gain from further optimization of the software.
3.1 Establishing the Roofline
The roofline model characterises a computer architecture using two parameters: the maximum memory bandwidth, B_peak, in bytes/s; and the peak floating-point performance achievable by the hardware, F_peak, in FLOPS. The maximum achievable performance P is modelled as:

P = min(F_peak, B_peak × I),   (1)

where I is the OI in FLOPs per byte.
As illustrated in Fig. 3 this limitation defines two distinct regions:

Memory-bound: The region left of the ridge point contains algorithms that are limited by the rate at which data can be moved from memory to the CPU. Memory-bound codes typically prioritise caching optimizations, such as data reordering and cache blocking.

Compute-bound: The region right of the ridge point contains algorithms that are limited by the maximum performance of the arithmetic units in the CPU; the flat roof thus defines the maximum achievable performance of the given architecture. Compute-bound codes typically prioritise vectorization to increase throughput.
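The roofline bound of Eq. 1 can be written as a one-line function; the names below are illustrative:

```python
def attainable_gflops(oi, peak_gflops, peak_bw_gbs):
    """Roofline bound: attainable performance (GFLOPS) for a given
    operational intensity oi (FLOPs/byte), machine peak compute
    (GFLOPS) and peak memory bandwidth (GB/s)."""
    return min(peak_gflops, peak_bw_gbs * oi)

def ridge_point(peak_gflops, peak_bw_gbs):
    """OI at which an algorithm transitions from memory- to compute-bound."""
    return peak_gflops / peak_bw_gbs
```

For a hypothetical machine with 1000 GFLOPS peak compute and 100 GB/s bandwidth, the ridge point sits at 10 FLOPs/byte; any algorithm with a lower OI is memory-bound on that machine.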
It is worth noting that changing from single- to double-precision arithmetic halves the OI, because the volume of data that must be transferred between main memory and the CPU is doubled. The peak performance is affected as well, since the volume of data and the number of concurrently used floating-point units (FPUs) change. As is common in industry, we assume single-precision arithmetic for the examples presented here, but the analysis extends straightforwardly to double precision.
Andreolli et al. (2015) illustrate an example of deriving the theoretical peak performance for a system that consists of two Intel Xeon E5-2697 v2 CPUs with 12 cores per CPU, each running at 2.7 GHz without turbo mode. Since these processors support 256-bit SIMD instructions, they can process eight single-precision floating-point operations per clock cycle. Further, taking into account the use of Fused Multiply-Add (FMA) operations (two per cycle), this yields a theoretical peak of 2 CPUs × 12 cores × 2.7 GHz × 8 SP × 2 = 1036.8 GFLOPS.
Clearly, this assumes full utilization of two parallel pipelines for Add and Multiply operations.
A similar estimate for the peak memory bandwidth can be made as the product of the memory frequency, the number of channels, the number of bytes per channel and the number of CPUs.
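The two estimates above can be reproduced as follows; the helper names are illustrative, the default of 16 single-precision FLOPs per cycle (8 SIMD lanes × 2 operations) follows the derivation in the text, and the bandwidth inputs are placeholders to be filled with the actual memory specification:

```python
def peak_sp_gflops(n_cpus, cores_per_cpu, ghz, simd_lanes=8, ops_per_cycle=2):
    """Theoretical single-precision peak: CPUs x cores x GHz x lanes x ops."""
    return n_cpus * cores_per_cpu * ghz * simd_lanes * ops_per_cycle

def peak_bw_gbs(mem_ghz, channels, bytes_per_channel, n_cpus):
    """Theoretical peak memory bandwidth in GB/s."""
    return mem_ghz * channels * bytes_per_channel * n_cpus
```

For the dual-socket 12-core 2.7 GHz system described above, peak_sp_gflops(2, 12, 2.7) evaluates to 1036.8 GFLOPS.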
It is important to note that there is an instruction-execution overhead that the above calculations do not take into account, and these theoretical peak numbers are therefore not fully achievable in practice (Andreolli et al., 2015). For this reason, two benchmark algorithms, STREAM TRIAD for memory bandwidth (McCalpin, 1995, 2007) and LINPACK for floating-point performance (Dongarra, 1988), are often used to measure the practical limits of a particular hardware platform. These algorithms are known to achieve a very high percentage of the peak values and are thus indicative of practical hardware limitations.
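For reference, the STREAM TRIAD kernel is simple enough to sketch directly; its low operational intensity (2 FLOPs against 24 bytes of traffic per iteration in double precision, ignoring write-allocate) is what makes it a pure bandwidth benchmark:

```python
def stream_triad(a, b, c, q):
    """STREAM TRIAD: a[i] = b[i] + q * c[i] for all i (in place in a)."""
    for i in range(len(a)):
        # 1 multiply + 1 add per iteration; 2 loads + 1 store per element
        a[i] = b[i] + q * c[i]
    return a
```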
3.2 Performance Model
The key to using the roofline analysis as a guiding tool for algorithmic design decisions and implementation optimization is the operational intensity, OI, as it relates the number of FLOPs performed to the number of bytes moved to and from RAM. The OI clearly does not capture many important details about the implementation, such as numerical accuracy or time to solution. It is therefore imperative to consider the OI in combination with these measures when making algorithmic choices.
Here we analyze the algorithmic bounds of a set of finite-difference discretizations of the wave-equation using different stencils and spatial orders. We therefore define the algorithmic operational intensity in terms of the total number of FLOPs required to compute a solution, and we assume that our hypothetical system has a cache of infinite size with no latency, inducing zero redundancy in memory traffic (Williams and Patterson, 2008). This acts as a theoretical upper bound for the performance of any conceivable implementation.
We furthermore limit our theoretical investigation to analysing a single time step as an indicator of overall achievable performance. This assumption allows us to express the total number of bytes moved in terms of the number of spatially dependent variables (e.g. wavefields, physical properties) used in the discretized equation as 4(L + 2S)N, where L is the number of variables whose value is loaded, S is the number of variables whose value is stored, N is the number of grid points and 4 is the number of bytes per single-precision floating-point value. The factor of two on S arises from the fact that most computer architectures will load a cache line before it gets overwritten completely. However, some computer architectures, such as the Intel Xeon Phi, support streaming stores, so that values can be written directly to memory without first loading the associated cache line, in which case the expression for the total data movement becomes 4(L + S)N. It is important to note that limiting the analysis to a single time step limits the scope of the infinite-caching assumption above.
Since we have assumed a constant grid size across all spatially dependent variables, we can now parametrize the number of FLOPs computed per time step as N f(k), where f(k) is a function that defines the number of FLOPs performed to update one grid point in terms of the stencil size k used to discretize the spatial derivatives, and N is the number of grid points. Additional terms could be added for source terms and boundary conditions, but these represent a small proportion of the time step in general and are neglected here for simplicity. This gives the following expression for the OI as a function of k:

OI(k) = N f(k) / (4(L + 2S)N) = f(k) / (4(L + 2S)),   (2)

where L and S are the numbers of loaded and stored variables per grid point.
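Eq. 2 translates directly into code; the load/store counts and the FLOP-per-point count are inputs, the stream-store variant drops the cache-line load on writes, and the names are illustrative:

```python
BYTES_PER_FLOAT = 4  # single precision

def operational_intensity(flops_per_point, loads, stores, stream_stores=False):
    """OI in FLOPs/byte for one grid point per time step (Eq. 2).

    Without streaming stores, each stored value costs a load plus a
    store, hence the factor of two on `stores`.
    """
    store_factor = 1 if stream_stores else 2
    traffic = BYTES_PER_FLOAT * (loads + store_factor * stores)
    return flops_per_point / traffic
```

Note that the grid-point count N cancels, so the OI is independent of the domain size, as claimed above.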
4 Operational intensity for finitedifferences
We derive a general model for the operational intensity of wave-equation solvers with finite-difference discretizations using explicit time stepping, and apply it to three different wave-equation formulations commonly used in the oil-and-gas exploration community: an acoustic anisotropic wave-equation; vertical transverse isotropic (VTI); and tilted transversely isotropic (TTI) (Liu et al., 2009). The theoretical operational intensity for the 3D discretized equations is calculated as a function of the finite-difference stencil size k, which allows us to make predictions about the minimum discretization order required for each algorithm to reach the compute-bound regime on a target computer architecture. For completeness we describe the equations in Appendix A.
4.1 Stencil operators
As a baseline for the finite-difference discretization, we consider the use of a 1D symmetric stencil of size k, which uses k values of the discretized variable to compute any spatial derivative, enforcing a fixed support for all derivatives. Other choices of discretization are possible, such as choosing the stencil for the first derivative and applying it iteratively to obtain higher-order derivatives. Our analysis would still be valid but would require a rewrite of the following atomic operation count. The number of FLOPs used is calculated for the three types of derivatives involved in our equations:

the first-order derivative with respect to x;

the second-order derivative with respect to x;

the second-order cross derivative with respect to x and y;

where, in 3D, x, y and z correspond to the three dimensions of the discretized field.
Equation  mult  add  duplicates  

Acoustic:  0  0  3  5  
VTI:  0  0  5  5  
TTI:  0  44  17 
Computing the total wavefield memory volume for each equation (with stores counted twice to account for the cache-line loads, as in Eq. 2), we have 4(3 + 2·1)N = 20N bytes for acoustic (load the velocity and the two previous time steps, write the new time step), 4(7 + 2·2)N = 44N bytes for VTI (load the velocity, two anisotropy parameters and two previous time steps for two wavefields, write the new time step for the two wavefields) and 4(13 + 2·2)N = 68N bytes for TTI (VTI plus six precomputed cos/sin of the tilt and dip angles). Eq. 2 then allows us to predict the growth of the operational intensity in terms of k by inserting the corresponding FLOP count per grid point. The OI for the three wave-equations is given by:
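Under the load/store counts stated above (and counting each store twice to account for the cache-line load), the per-grid-point memory volumes can be checked numerically; the counts themselves come from the text:

```python
def bytes_per_point(loads, stores, bytes_per_value=4):
    """Memory traffic per grid point and time step, stores counted twice."""
    return bytes_per_value * (loads + 2 * stores)

# Acoustic: velocity + two previous time steps loaded, one field stored.
acoustic = bytes_per_point(loads=3, stores=1)   # 20 bytes per point
# VTI: velocity + 2 anisotropy parameters + 2x2 wavefields, 2 stores.
vti = bytes_per_point(loads=7, stores=2)        # 44 bytes per point
# TTI: VTI loads + 6 precomputed cos/sin terms.
tti = bytes_per_point(loads=13, stores=2)       # 68 bytes per point
```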

Acoustic anisotropic: ,

VTI: ,

TTI: ,
and are plotted as functions of k in Fig. 10. Using the derived formulae for the algorithmic operational intensity in terms of stencil size, we can now analyze the optimal performance for each equation with respect to a specific computer architecture. We use the theoretical and measured hardware limitations reported by Andreolli et al. (2015) to demonstrate how the main algorithmic limitation shifts from bandwidth-bound at low k to compute-bound at high k on a dual-socket Intel Xeon in Figs. 4 to 6 and on an Intel Xeon Phi in Figs. 7 to 9.
It is of particular interest to note from Fig. 4 that only a high-order stencil provides enough arithmetic load for the 3D acoustic equation solver to become compute-bound, while the same stencil falls just short of the compute-bound region for the VTI algorithm. On the other hand, a much lower-order stencil suffices for the TTI algorithm to become compute-bound, because its OI has a quadratic slope with respect to k (Fig. 10) instead of a linear one.
At this point we can define the minimum OI required for an algorithm to become compute-bound on a particular architecture as the x-axis coordinate of the ridge point in Figs. 4 to 6 and 7 to 9. Note that the x-axis position of the ridge point changes between the two architectures. This difference in the compute-bound limit shows that different spatial discretization orders should be used on the two architectures to optimize hardware usage. As reported by Andreolli et al. (2015), the minimum OI derived from achievable peak rates differs between the Intel Xeon and the Intel Xeon Phi. This entails that while the acoustic anisotropic and VTI wave-equations remain memory-bound up to high discretization orders, the TTI equation reaches the compute-bound region already at a low discretization order.
From the analytical expressions derived above we can now generalize the derivation of minimum OI values by plotting the simplified expressions for OI(k) against known hardware OI limitations, as shown in Fig. 10. We obtain a theoretical prediction of the minimum spatial order required for each algorithm to provide enough arithmetic load for implementations to become compute-bound. Most importantly, Fig. 10 shows that the OI of the TTI wave-equation has a significantly steeper slope, which indicates that it will saturate a given piece of hardware at a much smaller spatial discretization than the acoustic or VTI algorithms.
Moreover, for each algorithm we can predict the minimum spatial discretization order required on a given platform. The Intel Xeon CPU requires a given minimum order for each of the acoustic, VTI and TTI solvers; the Nvidia GPU, with a slightly lower hardware minimum OI, requires slightly smaller minimum orders; while even larger stencils are required on the Intel Xeon Phi accelerator. This derivation demonstrates that overall very large stencils are required for the acoustic anisotropic and VTI solvers to fully utilize modern HPC hardware, and that even TTI requires a high minimum order to computationally saturate HPC architectures with very high arithmetic throughput, such as the Intel Xeon Phi.
5 Example: MADAGASCAR modelling kernel
We demonstrate our proposed performance model and its flexibility by applying it to a broadly used and benchmarked modelling kernel contained in Madagascar (Fomel et al., 2013). This illustrates the ease of extending our method to a different wave-equation and, by extension, to any PDE solver. The code implements the 3D anisotropic elastic wave-equation and is described in Weiss and Shragge (2013). We perform our analysis based on the spatial order, hardware and runtime described therein. The governing equation considered is:
ρ ∂²u_i/∂t² = ∂σ_ij/∂x_j + F_i,   (3)

where ρ is the density, u_i is the i-th component of the three-dimensional wavefield displacement (i ∈ {x, y, z}), F_i is the source term,

ε_ij = ½ (∂u_i/∂x_j + ∂u_j/∂x_i)

is the strain tensor,

σ_ij = c_ijkl ε_kl

is the stress tensor and c_ijkl is the stiffness tensor. The equation is discretized with a star stencil for the first-order derivatives and a second-order scheme in time, and is solved for all three components of u. Eq. 3 uses Einstein notation, meaning repeated indices represent summation:

σ_ij = c_ijkl ε_kl = Σ_k Σ_l c_ijkl ε_kl.   (4)
From this equation and the finite-difference scheme used, we can already compute the minimum required bandwidth and operational intensity. We need to solve this equation for all three components of the wavefield at once, as the equations are coupled in ε and σ. For a global estimate of the overall memory traffic, we need to account for loading and storing the values of the displacement vector and loading the values of the stiffness tensor. If the stiffness tensor is constant in space, the contribution of c_ijkl to the traffic is independent of N, which leaves an overall data volume dominated by the displacement traffic. In the realistic physical configuration of a spatially varying stiffness tensor, the stiffness values must be loaded at every grid point, increasing the data volume accordingly. Finally, if the symmetries of the stiffness tensor are taken into account, the number of stiffness values to load per grid point is reduced to the 21 independent components, leading to a correspondingly smaller data volume.
The number of valuable FLOPs performed to update one grid point can be estimated from four contributions:

the 9 first derivatives ∂u_i/∂x_j needed to form the strain tensor (multiplications and additions per derivative);

the sums for ε_ij and the multiplications for σ_ij = c_ijkl ε_kl;

the 9 first derivatives and 9 sums for the divergence of the stress tensor;

the 3 × 3 sums to update the three components of u.

The summation of all four contributions gives a total of 441 operations per grid point, and dividing by the memory traffic yields the operational intensity for variable stiffness and for constant stiffness:
(5)  
Using the OI values derived above, we can now quantify the results presented by Weiss and Shragge (2013) by interpreting their runtime results with respect to our performance measure. The achieved GFLOPS figures were obtained on the basis of 1000 time steps with high-order spatial and second-order temporal finite-differences. We interpret Fig. 11a) of Weiss and Shragge (2013) as giving the run time and domain size of the benchmark, and with these parameters we obtain the following achieved performance:
P = 441 N n_t / T,   (6)

where n_t is the number of time steps, N is the number of grid points and T is the run time.
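The achieved performance of Eq. 6 follows from the 441 FLOPs per grid point derived above; the grid size, time-step count and run time in the usage example below are hypothetical placeholders, not the benchmark values:

```python
def achieved_gflops(flops_per_point, n_points, n_timesteps, runtime_s):
    """Sustained performance in GFLOPS from a measured run time (Eq. 6)."""
    return flops_per_point * n_points * n_timesteps / runtime_s / 1e9
```

For example, achieved_gflops(441, 300**3, 1000, 200.0) would give the sustained GFLOPS of a hypothetical 300³-point run taking 200 s for 1000 steps.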
Fig. 11 shows the calculated performance in relation to our predicted algorithmic bounds. The use of a constant stiffness tensor puts the OI of the considered equation in the compute-bound region for the benchmarked GPU architecture (NVIDIA GTX480). Assuming a spatially varying stiffness tensor, we can calculate the achieved hardware utilization from the reported results, using the achievable peak memory bandwidth reported by Konstantinidis and Cotronis (2015) and the corresponding maximum achievable performance. Assuming that only the fraction of peak performance reported by Andreolli et al. (2015) is achievable, the roofline model suggests that there is still potential to double the performance of the code through software optimization. It is not possible to draw such a conclusion from traditional performance measures such as timings or scaling plots. This highlights the importance of a reliable performance model that can provide an absolute measure of performance in terms of the algorithm and the computer architecture.
6 Costbenefit analysis
So far we have discussed the design of finite-difference algorithms purely from a performance point of view, without regard to numerical accuracy or cost-to-solution. We now discuss the impact of the discretization order on the achieved accuracy of the solution and how that, in turn, affects the wall-clock time required for computation. To do so, we look at the numerical requirements of a time-stepping algorithm for the wave-equation. More specifically, we concentrate on two properties, namely dispersion and stability, in the acoustic case. This analysis is extendable to more advanced wave-equations such as VTI and TTI with additional numerical analysis. The dispersion criterion and stability condition for the acoustic wave-equation are given by (Wu et al., 1996; Lines et al., 1999):
Δt ≤ (h / v_max) √(a1 / a2),    h ≤ v_min / (p f_max),   (7)

where:

a1 is the sum of the absolute values of the weights of the finite-difference scheme for the second time derivative of the wavefield;

a2 is the sum of the absolute values of the weights of the finite-difference approximation of the spatial derivatives;

v_max is the maximum velocity;

f_max is the maximum frequency of the source term, which defines the minimum wavelength λ_min = v_min / f_max for a given minimum velocity v_min;

p is the number of grid points per wavelength, so that the grid spacing is h = λ_min / p. The number of grid points per wavelength governs the amount of dispersion (different wavelengths propagating at different velocities) generated by the finite-difference scheme: the lower the number, the higher the dispersion for a fixed discretization order.
These two conditions define the computational setup for a given source and physical model size. Knowing that the sum of the spatial weights increases with the spatial discretization order, Eq. 7 shows that higher discretization orders require a smaller time step, hence increasing the total number of time steps for a fixed final time and grid size. However, higher-order discretizations also allow the use of fewer grid points per wavelength. A smaller number of grid points per wavelength leads to a smaller overall computational domain, as a fixed physical distance is represented by a coarser mesh; and as the grid spacing has been increased, the critical time step (the maximum stable value) also increases. Overall, high-order discretizations have better computational parameters for a predetermined physical problem. From these two considerations, we can derive an absolute cost-to-solution estimate for a given model as a function of the discretization order, for a fixed maximum frequency and physical model size. The following results are not experimental runtimes but estimates of the minimum achievable runtime assuming a perfectly performing implementation. We use the following setup:

We fix the physical model size as 500 grid points in each of the three directions for a second-order discretization (the minimum grid size).

The number of grid points per wavelength is 6 for a second-order spatial discretization and 2 for a 24th-order discretization, and varies linearly for intermediate orders.

The number of time steps is 1000 for the second-order spatial discretization and is computed from the grid spacing and time step for the other spatial orders.
The hypothetical numerical setup (with v_max = 1 and a second-order time discretization) is summarized in Table 2. We combine the estimate of a full experimental run with the estimated optimal performance to obtain an estimate of the optimal time-to-solution for a fixed physical problem. The estimated runtime is the ratio of the total number of FLOPs (the FLOPs per grid point multiplied by the number of grid points and time steps) to the maximum achievable performance for the corresponding OI. Table 3 shows the estimated runtime assuming peak performance on two systems: a dual-socket Intel Xeon E5-2697 v2 and an Intel Xeon Phi 7120A coprocessor.
Order | sum of spatial weights | points per wavelength | grid spacing | time step | grid points | time steps
2nd order | 12 | 6 | 1 | 0.5774 | 1.25e+08 | 1000
6th order | 18.13 | 5 | 1.2 | 0.5637 | 7.24e+07 | 1024
12th order | 21.22 | 4 | 1.5 | 0.6513 | 3.70e+07 | 887
18th order | 22.68 | 3 | 2 | 0.8399 | 1.56e+07 | 688
24th order | 23.57 | 2 | 3 | 1.2359 | 4.63e+06 | 468
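The entries of Table 2 can be reproduced from Eq. 7 under the stated assumptions: v_max = 1, a second-order time stencil with weight sum a1 = |1| + |−2| + |1| = 4, a 500³ baseline grid with h = 1 at p = 6, and 1000 baseline time steps. Deviations of one time step stem from rounding of the tabulated weight sums:

```python
import math

def table_row(a2, p):
    """Stable time step, grid-point count and step count for one row.

    a2: sum of absolute spatial finite-difference weights (3D),
    p:  number of grid points per wavelength.
    """
    a1 = 4.0                                  # 2nd-order time weights
    h = 6.0 / p                               # h = 1 at the p = 6 baseline
    dt = h * math.sqrt(a1 / a2)               # stability bound of Eq. 7
    n_points = (500.0 / h) ** 3               # same extent, coarser mesh
    final_time = 1000 * math.sqrt(a1 / 12.0)  # fixed by the 2nd-order row
    n_steps = round(final_time / dt)
    return dt, n_points, n_steps
```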
We see that by combining the roofline results with the stability and dispersion conditions, we obtain an estimate of the optimal cost-to-solution of an algorithm. Higher-order stencils lead to better hardware usage and a lower wall-time-to-solution. These results, however, rely on mathematical results derived for a homogeneous velocity. In the case of a heterogeneous model, high-order discretizations may result in inaccurate, even though stable and non-dispersive, solutions to the wave-equation. The choice of discretization order should therefore be made with more than just performance in mind.
7 Conclusions
Implementing and optimising a solver is generally a long and expensive process. It is therefore imperative to have a reliable estimate of the achievable peak performance, in FLOPS, of an algorithm at both the design and implementation stages of development.
The roofline model provides a readily understandable graphical tool, even for a non-specialist, to quickly assess and evaluate the computational effectiveness of a particular implementation of an algorithm. We have shown how the roofline model can be applied to finite-difference discretizations of the wave-equation commonly used in the geophysics community. Although the model is quite simple, it provides a reliable estimate of the peak performance achievable by a given finite-difference discretization, regardless of the implementation. Not only does this help the algorithm designer to decide between different discretization options, but it also gives solver developers an absolute measure of the optimality of a given implementation. The roofline model has also proved extremely useful in guiding further optimization strategies, since it highlights the limitations of a particular version of the code and gives an indication of whether memory-bandwidth optimisations, such as loop blocking techniques, or FLOPs optimisations, such as SIMD vectorisation, are likely to improve results.
However, one should always be mindful of the fact that it does not provide a complete measure of performance and should be complemented with other metrics, such as time to solution or strong scaling metrics, to establish a full understanding of the achieved performance of a particular algorithmic choice and implementation.
8 Acknowledgements
This work was financially supported in part by the Natural Sciences and Engineering Research Council of Canada Collaborative Research and Development Grant DNOISE II (CDRP J 37514208) and the Imperial College London Intel Parallel Computing Centre. This research was carried out as part of the SINBAD II project with the support of the member organizations of the SINBAD Consortium.
References
 Williams et al. (2009) S. Williams, A. Waterman, D. Patterson, Roofline: an insightful visual performance model for multicore architectures, Communications of the ACM 52 (4) (2009) 65–76.
 Asanovic et al. (2006) K. Asanovic, R. Bodik, B. C. Catanzaro, J. J. Gebis, P. Husbands, K. Keutzer, D. A. Patterson, W. L. Plishker, J. Shalf, S. W. Williams, et al., The landscape of parallel computing research: A view from Berkeley, Tech. Rep. UCB/EECS-2006-183, EECS Department, University of California, Berkeley, 2006.
 Patterson and Hennessy (2007) D. A. Patterson, J. L. Hennessy, Computer Organization and Design: The Hardware/Software Interface, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 3rd edn., ISBN 0123706068, 9780123706065, 2007.
 Williams and Patterson (2008) S. Williams, D. Patterson, Roofline Performance Model, http://crd.lbl.gov/assets/pubspresos/parlab08rooflinetalk.pdf, 2008.
 Epicoco et al. (2014) I. Epicoco, S. Mocavero, F. Macchia, G. Aloisio, The roofline model for oceanic climate applications, in: High Performance Computing & Simulation (HPCS), 2014 International Conference on, IEEE, 732–737, 2014.
 Chan et al. (2013) C. Chan, D. Unat, M. Lijewski, W. Zhang, J. Bell, J. Shalf, Software design space exploration for exascale combustion co-design, in: International Supercomputing Conference, Springer, 196–212, 2013.
 Andreolli et al. (2014) C. Andreolli, P. Thierry, L. Borges, C. Yount, G. Skinner, Genetic algorithm based auto-tuning of seismic applications on multi- and many-core computers, in: EAGE Workshop on High Performance Computing for Upstream, 2014.
 Datta and Yelick (2009) K. Datta, K. A. Yelick, Auto-tuning stencil codes for cache-based multicore platforms, Ph.D. thesis, University of California, Berkeley, 2009.
 Sato et al. (2009) Y. Sato, R. Nagaoka, A. Musa, R. Egawa, H. Takizawa, K. Okabe, H. Kobayashi, Performance tuning and analysis of future vector processors based on the roofline model, in: Proceedings of the 10th workshop on MEmory performance: DEaling with Applications, systems and architecture, ACM, 7–14, 2009.
 Kim et al. (2011) K.-H. Kim, K. Kim, Q.-H. Park, Performance analysis and optimization of three-dimensional FDTD on GPU using roofline model, Computer Physics Communications 182 (6) (2011) 1201–1207.
 Lo et al. (2014) Y. J. Lo, S. Williams, B. Van Straalen, T. J. Ligocki, M. J. Cordery, N. J. Wright, M. W. Hall, L. Oliker, Roofline model toolkit: A practical tool for architectural and program analysis, in: International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems, Springer, 129–148, 2014.
 Ilic et al. (2014) A. Ilic, F. Pratas, L. Sousa, Cacheaware roofline model: Upgrading the loft, IEEE Computer Architecture Letters 13 (1) (2014) 21–24.
 Barba and Yokota (2013) L. A. Barba, R. Yokota, How will the fast multipole method fare in the exascale era, SIAM News 46 (6) (2013) 1–3.
 Lai and Seznec (2013) J. Lai, A. Seznec, Performance upper bound analysis and optimization of SGEMM on Fermi and Kepler GPUs, in: Code Generation and Optimization (CGO), 2013 IEEE/ACM International Symposium on, IEEE, 1–10, 2013.
 Wahib and Maruyama (2014) M. Wahib, N. Maruyama, Scalable kernel fusion for memory-bound GPU applications, in: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, IEEE Press, 191–202, 2014.
 Hofmann et al. (2015a) J. Hofmann, J. Eitzinger, D. Fey, Execution-cache-memory performance model: Introduction and validation, arXiv preprint arXiv:1509.03118, 2015.
 Duplyakin et al. (2016) D. Duplyakin, J. Brown, R. Ricci, Active Learning in Performance Analysis, in: Cluster Computing (CLUSTER), 2016 IEEE International Conference on, IEEE, 182–191, 2016.
 Stengel et al. (2015) H. Stengel, J. Treibig, G. Hager, G. Wellein, Quantifying performance bottlenecks of stencil computations using the execution-cache-memory model, in: Proceedings of the 29th ACM on International Conference on Supercomputing, ACM, 207–216, 2015.
 Datta et al. (2009) K. Datta, S. Kamil, S. Williams, L. Oliker, J. Shalf, K. Yelick, Optimization and performance modeling of stencil computations on modern microprocessors, SIAM review 51 (1) (2009) 129–159.
 Hammer et al. (2015) J. Hammer, G. Hager, J. Eitzinger, G. Wellein, Automatic loop kernel analysis and performance modeling with KernCraft, in: Proceedings of the 6th International Workshop on Performance Modeling, Benchmarking, and Simulation of High Performance Computing Systems, ACM, 4, 2015.
 Narayanan et al. (2010) S. H. K. Narayanan, B. Norris, P. D. Hovland, Generating performance bounds from source code, in: Parallel Processing Workshops (ICPPW), 2010 39th International Conference on, IEEE, 197–206, 2010.
 Unat et al. (2015) D. Unat, C. Chan, W. Zhang, S. Williams, J. Bachan, J. Bell, J. Shalf, ExaSAT: An exascale codesign tool for performance modeling, The International Journal of High Performance Computing Applications 29 (2) (2015) 209–232.
 Rahman et al. (2011) S. M. F. Rahman, Q. Yi, A. Qasem, Understanding stencil code performance on multicore architectures, in: Proceedings of the 8th ACM International Conference on Computing Frontiers, ACM, 30, 2011.
 Hofmann et al. (2015b) J. Hofmann, D. Fey, M. Riedmann, J. Eitzinger, G. Hager, G. Wellein, Performance analysis of the Kahan-enhanced scalar product on current multicore processors, in: International Conference on Parallel Processing and Applied Mathematics, Springer, 63–73, 2015.
 Andreolli et al. (2015) C. Andreolli, P. Thierry, L. Borges, G. Skinner, C. Yount, Chapter 23 - Characterization and Optimization Methodology Applied to Stencil Computations, in: J. Reinders, J. Jeffers (Eds.), High Performance Parallelism Pearls, Morgan Kaufmann, Boston, ISBN 9780128021187, 377–396, doi:10.1016/B978-0-12-802118-7.00023-6, URL http://www.sciencedirect.com/science/article/pii/B9780128021187000236, 2015.
 McCalpin (1995) J. D. McCalpin, Memory Bandwidth and Machine Balance in Current High Performance Computers, IEEE Computer Society Technical Committee on Computer Architecture (TCCA) Newsletter (1995) 19–25.
 McCalpin (2007) J. D. McCalpin, STREAM: Sustainable Memory Bandwidth in High Performance Computers, Tech. Rep., University of Virginia, Charlottesville, Virginia, URL http://www.cs.virginia.edu/stream/, a continually updated technical report, 1991–2007.
 Dongarra (1988) J. Dongarra, The LINPACK Benchmark: An Explanation, in: Proceedings of the 1st International Conference on Supercomputing, Springer-Verlag, London, UK, ISBN 3540189912, 456–474, URL http://dl.acm.org/citation.cfm?id=647970.742568, 1988.
 Liu et al. (2009) W. Liu, K. Bube, L. Zhang, K. Nihei, Stable reverse-time migration in variable-tilt TI media, in: 71st EAGE Conference and Exhibition incorporating SPE EUROPEC 2009, 2009.
 Fomel et al. (2013) S. Fomel, P. Sava, I. Vlad, Y. Liu, V. Bashkardin, Madagascar: open-source software project for multidimensional data analysis and reproducible computational experiments, 2013.
 Weiss and Shragge (2013) R. M. Weiss, J. Shragge, Solving 3D anisotropic elastic wave equations on parallel GPU devices, Geophysics 78 (2) (2013), doi:10.1190/geo2012-0063.1.
 Konstantinidis and Cotronis (2015) E. Konstantinidis, Y. Cotronis, A Practical Performance Model for Compute and Memory Bound GPU Kernels, in: 2015 23rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, ISSN 10666192, 651–658, doi:10.1109/PDP.2015.51, 2015.
 Wu et al. (1996) W. Wu, L. R. Lines, H. Lu, Analysis of higher-order, finite-difference schemes in 3D reverse-time migration, Geophysics 61 (3) (1996) 845–856, doi:10.1190/1.1444009, URL http://dx.doi.org/10.1190/1.1444009.
 Lines et al. (1999) L. Lines, R. Slawinski, R. Bording, A recipe for stability of finite-difference wave-equation computations, Geophysics 64 (3) (1999) 967–969, doi:10.1190/1.1444605, http://library.seg.org/doi/pdf/10.1190/1.1444605.
Appendix A Wave-equations
In the following equations, $u$ denotes the pressure field in the acoustic anisotropic case, while $p$ and $r$ denote the split wavefields in the TTI case. We write $u(t)$, $p(t)$ and $r(t)$ for the value of each field at all grid points at time $t$. The physical parameters are the square slowness $m$, the Thomsen parameters $\epsilon$ and $\delta$, and the tilt and azimuth angles $\theta$ and $\phi$. The main problem with the TTI case is the presence of transcendental functions ($\cos$, $\sin$), known to be extremely expensive to compute (typically about an order of magnitude more expensive than an add or a multiply). Here we will assume these functions are precomputed and come from a lookup table, thus only involving memory traffic. In the acoustic anisotropic case the governing equation is:
$$ m \frac{\partial^2 u}{\partial t^2} - \nabla^2 u = q \qquad (8) $$
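For concreteness, a minimal explicit time-stepping sketch for equation (8) on a 2D grid follows. This is an illustrative second-order leapfrog scheme, not the discretization benchmarked in this work, and all names are hypothetical.

```python
import numpy as np

def acoustic_step(u_prev, u_curr, m, q, dt, dx):
    """One explicit leapfrog step of m*u_tt - laplacian(u) = q.

    Second-order accurate in time and space; boundary points are kept
    fixed at zero for simplicity.
    """
    lap = np.zeros_like(u_curr)
    # Five-point Laplacian on the interior of the grid.
    lap[1:-1, 1:-1] = (
        u_curr[2:, 1:-1] + u_curr[:-2, 1:-1] +
        u_curr[1:-1, 2:] + u_curr[1:-1, :-2] -
        4.0 * u_curr[1:-1, 1:-1]
    ) / dx**2
    # u[n+1] = 2*u[n] - u[n-1] + (dt^2 / m) * (laplacian + source)
    return 2.0 * u_curr - u_prev + dt**2 / m * (lap + q)
```

Counting the loads, stores and floating-point operations of such an update is exactly the exercise that yields the operational intensity used in the roofline analysis.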
In the anisotropic case we consider the equations described in (Liu et al., 2009). More advanced formulations have been developed; however, this formulation allows an explicit expression for the operational intensity and leads to simple stencil expressions, and it is the formulation we also use in our code base. In the VTI case the governing equations are:
$$ m \frac{\partial^2 p}{\partial t^2} - (1 + 2\epsilon)\left(\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2}\right) - \sqrt{1 + 2\delta}\,\frac{\partial^2 r}{\partial z^2} = q, \qquad m \frac{\partial^2 r}{\partial t^2} - \sqrt{1 + 2\delta}\left(\frac{\partial^2 p}{\partial x^2} + \frac{\partial^2 p}{\partial y^2}\right) - \frac{\partial^2 r}{\partial z^2} = q \qquad (9) $$
For TTI the governing equations are:
$$ m \frac{\partial^2 p}{\partial t^2} - (1 + 2\epsilon)\left(G_{\bar{x}\bar{x}} + G_{\bar{y}\bar{y}}\right)p - \sqrt{1 + 2\delta}\, G_{\bar{z}\bar{z}}\, r = q, \qquad m \frac{\partial^2 r}{\partial t^2} - \sqrt{1 + 2\delta}\left(G_{\bar{x}\bar{x}} + G_{\bar{y}\bar{y}}\right)p - G_{\bar{z}\bar{z}}\, r = q \qquad (10) $$
where the rotated differential operators are defined as
$$ G_{\bar{x}\bar{x}} + G_{\bar{y}\bar{y}} = \nabla^2 - G_{\bar{z}\bar{z}}, \qquad G_{\bar{z}\bar{z}} = \cos^2\phi \sin^2\theta\, \frac{\partial^2}{\partial x^2} + \sin^2\phi \sin^2\theta\, \frac{\partial^2}{\partial y^2} + \cos^2\theta\, \frac{\partial^2}{\partial z^2} + \sin^2\theta \sin 2\phi\, \frac{\partial^2}{\partial x \partial y} + \sin 2\theta \cos\phi\, \frac{\partial^2}{\partial x \partial z} + \sin 2\theta \sin\phi\, \frac{\partial^2}{\partial y \partial z} \qquad (11) $$
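The assumption above, that the transcendental factors in the rotated operators are precomputed and fetched from memory rather than evaluated per stencil application, can be sketched as follows. The function name and dictionary keys are illustrative, not taken from our code base.

```python
import numpy as np

def precompute_trig(theta, phi):
    """Precompute, once per grid, the trigonometric factors appearing in
    the rotated operators, trading transcendental evaluations inside the
    time loop for additional memory traffic (lookup-table arrays)."""
    return {
        "costheta": np.cos(theta),
        "sintheta": np.sin(theta),
        "cosphi": np.cos(phi),
        "sinphi": np.sin(phi),
        "sin2theta": np.sin(2.0 * theta),
        "sin2phi": np.sin(2.0 * phi),
    }
```

Inside the time loop the stencil then reads these arrays like any other coefficient field, which is why the precomputation shifts the TTI kernel's balance from FLOPs towards memory bandwidth in the roofline analysis.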