A new GPU implementation for lattice-Boltzmann simulations on sparse geometries

We describe a high-performance implementation of the lattice Boltzmann method (LBM) for sparse 3D geometries on graphics processors (GPU). The main contribution of this work is a data layout that minimises the number of redundant memory transactions during the propagation step of LBM. We show that, by using a uniform mesh of small three-dimensional tiles and careful data placement, it is possible to utilise more than 70% of GPU memory bandwidth for the D3Q19 lattice and double precision numbers. The performance of our implementation is thoroughly examined and compared with other GPU implementations of LBM. The proposed method performs best for sparse geometries with good spatial locality.

1 Introduction

The lattice Boltzmann method (LBM) is a versatile and highly parallel approach in which the discrete Boltzmann transport equation is solved in the velocity or moment space to obtain the time-dependent fluid velocity distributions. Because LBM parallelises in a straightforward way, many researchers have explored its implementation on graphics processors (graphics processing units, GPU). A thorough review is presented in Mawson:Mem2014. A few typical optimisation techniques may be mentioned: combining all computations into a single kernel to minimise memory bandwidth usage, use of structure of arrays (SoA) instead of array of structures (AoS) data types to avoid uncoalesced memory transactions, adaptation of all computations to the single precision floating point format to minimise register pressure and maximise occupancy Valero-Lara:Acc2014; Valero-Lara:Acc2015, use of two copies of data to avoid race conditions, methods decreasing memory usage Valero-Lara:Het2017; Valero-Lara:Lev2016; Valero-Lara:ANo2015; Latt:How2007, and many others.

However, the above optimisations concern dense geometries. Much fewer GPU implementations target sparse geometries (with many solid nodes), and they offer significantly worse performance. This can hinder the use of GPU LBM implementations in many areas where dense geometries are not applicable. Typical examples are biomedical or porous media simulations Huang:Mul2014.

Handling sparse geometries requires some way of storing information about the placement of non-solid nodes. This information not only increases memory usage, but can also generate irregular memory access patterns that significantly decrease performance.

In general, two indirect addressing solutions are used in GPU implementations of LBM for sparse geometries: the Connectivity Matrix (CM), shown in Bernaschi:AFl2010 with a highly optimised version in Huang:Mul2014, and the Fluid Index Array (FIA), presented in Nita:GPU2013. The connectivity matrix contains pointers to the neighbour nodes for each lattice direction. The fluid index array is a kind of "bitmap" that contains, for every node in the geometry, either a solid-node marker or a pointer to the non-solid node data. Both solutions bring an overhead in memory usage; additionally, the FIA implementation results in low utilisation of memory bandwidth due to the irregular data access pattern.

LBM implementations for dense geometries often use data structures that place values for neighbour nodes in consecutive memory locations. This results in linear memory traversal during collision and propagation and an absence of uncoalesced memory transfers, and thus in high performance. For sparse implementations with indirect addressing, there is no guarantee that neighbour node values are placed in neighbour memory locations. This may result in additional memory bandwidth usage caused by uncoalesced memory transactions containing values for unused nodes. These uncoalesced memory transactions, together with transfers of additional pointers to non-solid nodes, result in a significant performance penalty for implementations based on indirect addressing.

The solution proposed in this work copes with this disadvantage of implementations for sparse geometries. We present a GPU implementation of LBM in which the information about geometry sparsity is stored using a uniform grid of three-dimensional cubic tiles. By partitioning the domain into small tiles, three main goals are achieved. First, we almost completely remove the additional data that indirect addressing uses as pointers to non-solid nodes. Second, due to the fixed tile size, the values for nodes from a tile can be placed in memory in a way that guarantees a minimum amount of additional memory traffic and, as a consequence, high performance of single tile processing. At the same time, low memory usage and computational complexity for sparse geometries can be achieved by simply skipping tiles filled only with solid nodes. Thus, the proposed solution allows simulating flows through irregular, sparse geometries with performance close to efficient LBM implementations for dense geometries.

For the proposed data layout, we show both a detailed theoretical analysis and results of performance measurements for dense and sparse geometries. Additionally, we analyse how the fixed tile size affects the overall performance. Provided that the tiles contain a low number of solid nodes, the performance of our implementation remains high regardless of the geometry sparsity.

The structure of the paper is as follows. In Section 2, we briefly introduce the LBM method and define basic concepts in GPU programming. Section 3 contains the description of our implementation with the detailed analysis of introduced overheads. The performance comparison with existing implementations and a verification procedure are presented in Section 4. Section 5 contains conclusions.

2 Background

2.1 Graphic Processing Unit

Today’s GPUs are in fact a mass-produced, easily programmable and versatile variant of vector coprocessors. In this work, we use a GPU compatible with the CUDA technology cuda7.5:2015. Selected parameters of CUDA GPU architectures are shown in Table 1. It can be seen that successive generations offer not only a higher memory bandwidth, but also that their computational power for double precision increases faster than the memory bandwidth.

Architecture   Launch year   Bandwidth [GB/s]   DP FLOP/byte
Tesla          2008
Fermi          2009
Kepler         2012
Pascal         2016
Broadwell      2016
Table 1: Approximate parameters of CUDA capable GPUs. We consider only models with maximum double precision performance dedicated to high performance computing. The last row contains parameters of a high-end general purpose CPU.

Since this paper investigates memory bandwidth optimisations, we briefly recall the characteristics of the memory system in CUDA GPUs. Detailed information can be found in cuda7.5:2015 and in the architecture-specific tuning guides distributed with the CUDA platform.

The GPU memory hierarchy contains the off-chip DRAM global memory (limited to 1–32 GB depending on the GPU), up to two levels of on-chip cache memory (there may be separate caches for read-only access), shared memory and registers. A single transaction with the global memory takes a few hundred GPU clock cycles, but memory transactions for threads from the same warp can be coalesced into one if certain restrictions are met. Additionally, limited software control over cache usage is available.

Coalescing rules depend on the GPU architecture generation. For the Fermi class, a single global memory transaction can contain 128 or 32 bytes, depending on whether the L1 cache memory with 128-byte wide lines is used. For the first generation Kepler devices (up to the GK110 processor), the size of a global memory transaction is fixed at 32 bytes (L1 cache is used only for local memory). The second generation Kepler processors (GK110B and GK2xx devices) can optionally use 128-byte global memory transactions. For the Maxwell architecture, the transaction size also depends on the version. Current Pascal devices use 32-byte global memory transactions regardless of whether the L1 cache is used or not. Thus, in further considerations, we assume that a single memory transaction contains 32 bytes, because only this size is supported by our machine - a GTX Titan card with the Kepler architecture GK110 processor. Moreover, for irregular memory access patterns, smaller transactions result in higher bandwidth utilisation due to lower overhead, as presented in cuda7.5:2015.

2.2 Lattice-Boltzmann method

In the LBM, as in conventional CFD methods, the geometry, initial and boundary conditions must be specified to solve the initial value problem. The values of macroscopic quantities can not be directly imposed on boundaries, and the initial and boundary values of distribution functions have to be calculated from the known values of the macroscopic quantities after conversion to the lattice unit system using dimensionless numbers (e.g. the Reynolds number) Kruger:The2016. For the inlet of the domain we used the Zou-He velocity boundary condition and a constant pressure condition for the outlet Zou:OnP1997. At the walls we applied mixed boundary conditions: zero velocity and bounce-back Chen:OnB1996. The computational domain is uniformly partitioned and computational nodes are placed in the corners of adjacent cells (voxels), giving the lattice. In this study, the D3Q19 lattice structure is used, where D represents the space dimension and Q the number of lattice links. A detailed description of the LB method can be found in recently published books Kruger:The2016, Guo:Lat2013 or Mohamad:Lat2011. In this work, we use the lattice direction naming scheme shown in Fig. 1.

Figure 1: A grid composed of 8 unit volumes (voxels). D3Q19 lattice nodes: blue - central node 0, red - streaming nodes (1-18), gray - not directly linked with the central node. The convention of lattice direction naming: the west (W) direction vector of length 1 lu (lattice unit) points out from the central node in direction 3.

The transport equation of the lattice-Boltzmann method (LBE) can be written for each direction in the momentum space as follows:

$$\frac{\partial f_i}{\partial t} + \mathbf{e}_i \cdot \nabla f_i = \Omega_i, \tag{1}$$

where $f_i$ represents the probability distribution function (PDF) along the $i$-th lattice direction, $\delta x$ and $\delta t$ are the lattice spacing and the lattice time step (usually assumed to be equal 1), $c = \delta x / \delta t$ is the lattice speed (also usually assumed to be equal 1 lu), $\mathbf{c}_i$ is the unit vector along the $i$-th lattice direction, $\mathbf{e}_i = c\,\mathbf{c}_i$ is the lattice velocity vector in the velocity space along the $i$-th lattice direction, and $\Omega_i$ is the collision operator.

The collision operator can be approximated with the most popular LBGK Bhatnagar-Gross-Krook model Bhatnagar:AMo1954

$$\Omega_i = -\frac{1}{\tau}\left(f_i - f_i^{\mathrm{eq}}\right), \tag{2}$$

where $\tau$ is the relaxation time in LB units related to the lattice fluid viscosity and $f_i^{\mathrm{eq}}$ is the momentum equilibrium distribution function (EDF) along the $i$-th lattice direction, given by the following formula for the quasi-compressible model of fluid flow Qian:Lat1992; Succi:The2001:

$$f_i^{\mathrm{eq}} = w_i\,\rho\left[1 + \frac{\mathbf{e}_i \cdot \mathbf{u}}{c_s^2} + \frac{(\mathbf{e}_i \cdot \mathbf{u})^2}{2 c_s^4} - \frac{\mathbf{u}^2}{2 c_s^2}\right] \tag{3}$$

and for the incompressible model Zou:Aim1995:

$$f_i^{\mathrm{eq}} = w_i\left[\rho + \rho_0\left(\frac{\mathbf{e}_i \cdot \mathbf{u}}{c_s^2} + \frac{(\mathbf{e}_i \cdot \mathbf{u})^2}{2 c_s^4} - \frac{\mathbf{u}^2}{2 c_s^2}\right)\right], \tag{4}$$

where $c_s$ is the lattice speed of sound that is a lattice constant, $c_s = c/\sqrt{3}$; $\mathbf{u}$ is the macroscopic fluid velocity vector expressed in LB units; $w_i$ is a weighting scalar for the $i$-th lattice direction; $\rho$ is the fluid density expressed in LB units, $\rho = \sum_i f_i$, which is related to the pressure expressed in LB units, $p = c_s^2 \rho$. The weighting factors of the D3Q19 lattice are: $1/3$ for the zero direction vector; $1/18$ for direction vectors of length 1; $1/36$ for direction vectors of length $\sqrt{2}$. It can be seen that both EDFs (Eqn. (3) and (4)) are very similar except that the density is decoupled from the momentum in Eqn. (4).

The macroscopic velocity for the quasi-compressible model can be determined from Succi:The2001

$$\mathbf{u} = \frac{1}{\rho}\sum_i f_i\,\mathbf{e}_i \tag{5}$$

and for the incompressible model from He:Lat1997

$$\mathbf{u} = \frac{1}{\rho_0}\sum_i f_i\,\mathbf{e}_i. \tag{6}$$

Integrating Eqn. (1) from $t$ to $t + \delta t$ along the $i$-th lattice direction and assuming that the collision term is constant during the interval, we obtain the time-discretized form of the LBGK equation

$$f_i(\mathbf{x} + \mathbf{e}_i\,\delta t,\, t + \delta t) - f_i(\mathbf{x}, t) = -\frac{1}{\tau}\left[f_i(\mathbf{x}, t) - f_i^{\mathrm{eq}}(\mathbf{x}, t)\right], \tag{7}$$

where $\mathbf{x}$ is a position vector in the velocity space.

The term on the left hand side is known as the streaming step, while the right hand side represents the collision step. These two steps are repeated sequentially during the simulation, giving the velocity, density and pressure distributions at each time step.

The LBGK uses a single relaxation time to characterise the collision effects. However, physically, these rates should be different during collision processes. To overcome this limitation, a collision matrix with different eigenvalues or multiple relaxation times can be used. The LBM with an LBMRT collision operator can be expressed as Lallemand:The2000:

$$f_i(\mathbf{x} + \mathbf{e}_i\,\delta t,\, t + \delta t) - f_i(\mathbf{x}, t) = -\sum_j \Lambda_{ij}\left[f_j(\mathbf{x}, t) - f_j^{\mathrm{eq}}(\mathbf{x}, t)\right], \tag{8}$$

where $\Lambda$ is the collision matrix. The LBMRT model has been attracting more attention recently due to its higher stability compared with the LBGK model. In our study, we use both above mentioned collision models in incompressible and quasi-compressible variants to compare their efficiency.

The hydrodynamic equations derived from the LBE (Eqn. (1)) are the compressible Navier-Stokes equations, but the intrinsic low-Mach-number limit of the classical LBE indicates that it is limited to weakly compressible flows only. Therefore, the LBE can be viewed as a special artificial compressibility method for incompressible flows, the so-called quasi-compressible model of fluid flow He:Com2002. The compressible effects may lead to significant errors in the case of incompressible fluid flow simulations (for example liquids) or for high Mach number flows of gases. In our simulations, to minimise the compressibility error when the quasi-compressible model is used to simulate liquid flows, the values of the relaxation time and the grid density were chosen to keep the maximal macroscopic velocity observed in the lattice well below the lattice speed of sound Kruger:The2016. In order to reduce compressible effects in the LBE, some modifications of the standard models have been developed for low-Mach-number flows. Zou and coworkers Zou:Aim1995 proposed the incompressible version of the standard LBGK model, Eqn. (4). In this model the compressible effects are removed totally at steady states; however, for unsteady flows, some errors still exist Guo:Lat2013.

2.3 LBM performance limits

To compare different implementations, we employ a widely used, simple computational complexity model based on Wellein:OnT2006 that allows computing the minimum memory usage, memory bandwidth and computational performance. The computed complexity is used to show that GPU implementations of LBM are often bandwidth-bound on current hardware. We also use the model to calculate the idealised GPU memory bandwidth utilisation, equal to the ratio of real performance to the light-speed performance defined in Hager:Int2010 for bandwidth-bound implementations. The computed bandwidth utilisation allows us to estimate how close our implementation is to the "ideal" and to detect redundant operations that reduce performance. Also, according to Hager:Int2010, it may be treated as a factor used to compare different LBM implementations (provided that they are bandwidth-bound).
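As an illustration of this metric (a sketch only: the 304 bytes per node follow from Eqn. (10) below for D3Q19 in double precision, and the peak bandwidth and MFLUPS figures are the GTX Titan values used in Section 4), the utilisation of a kernel running at $P$ node updates per second on a GPU with peak bandwidth $B_{\max}$ is:

$$U = \frac{P \cdot 304\ \mathrm{B}}{B_{\max}}, \qquad \text{e.g.} \quad \frac{680 \cdot 10^{6}\ \mathrm{s^{-1}} \cdot 304\ \mathrm{B}}{288.4\ \mathrm{GB/s}} \approx 0.72 .$$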

Since we are interested in peak performance, in the considerations below we ignore boundary nodes. Our implementation is based on the pull approach presented in Valero-Lara:Acc2015; Rinaldi:ALa2012, thus we also assume that the computational unit (CPU or GPU) performs the computations for a single node in a single LBM iteration (streaming and collision) in three stages: data read from the main memory, computations, and data write to the main memory (the data for a single lattice node is buffered in registers and in internal memory, e.g. shared/cache memory). In our implementation the kernel is responsible for the full update of all geometry data (all $f_i$ values) for a single LBM time step iteration (collision, propagation and boundary computations are fused into a single kernel). Thus, the simplest version of the simulation can contain only subsequent launches of the same kernel, after transferring the geometry data to GPU memory at the beginning.

According to Wellein:OnT2006, we assume that for each lattice node the only values stored in the main memory are the $f_i$ functions. Since previous work has shown that LBM implementations are usually bandwidth-bound, the other values (e.g. velocity or density) can be computed internally and dropped after use, without transferring them from/to the main memory.

Let $q$ denote the number of $f_i$ functions and $b$ denote the number of bytes for storing a single value (e.g. 4 bytes for single precision floating point, 8 bytes for double precision). Thus, the minimum number of memory bytes needed for a single node datum is

$$M_{\min} = q \cdot b. \tag{9}$$

Many LBM implementations also use some additional data, but in this model these data are treated as an overhead. Fortunately, this overhead is almost negligible - usually, the largest part is an additional field per node used to store information about the type of node (solid, fluid, kind of boundary condition). This information can be efficiently encoded in a single byte (even a few bits for simple cases), which occupies less than 1% of the value defined by Eqn. (9) for the D3Q19 lattice arrangement. Moreover, the node type field can be skipped if nodes are partitioned into sets containing nodes of the same type.

For a single LBM iteration, the new $f_i$ values are computed based on the node type and the previous $f_i$ values. The newly computed values are stored in the memory. Since each value is transferred twice (once read and once stored), the minimum number of bytes transferred per one LBM iteration for a single node is

$$B_{\min} = 2\, q\, b. \tag{10}$$
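As a quick check of Eqns. (9) and (10) for the lattice used in this paper, the values can be computed directly; the minimal sketch below (plain C++, illustrative names) yields 152 and 304 bytes per node for D3Q19 in double precision:

#include <cstddef>
#include <cstdio>

// Minimal per-node storage (Eqn. 9) and per-iteration traffic (Eqn. 10).
// q - number of f_i functions, b - bytes per stored value.
constexpr std::size_t minBytesPerNode(std::size_t q, std::size_t b) { return q * b; }
constexpr std::size_t minBytesPerIteration(std::size_t q, std::size_t b) { return 2 * q * b; }

int main() {
    // D3Q19 lattice, double precision values
    std::printf("M_min = %zu B, B_min = %zu B\n",
                minBytesPerNode(19, sizeof(double)),        // 152 B
                minBytesPerIteration(19, sizeof(double)));  // 304 B
    return 0;
}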

In theory, the value from Eqn. (10) can be further decreased. It was shown in Pohl:Opt2003 that it is possible to perform even a few time steps per one data read/write from the main memory by using the loop blocking technique. The general idea is to divide the domain into blocks that can be copied to the processor internal memory (cache and registers) and to do as many computations as possible for the cached nodes. After updating some fragment of the cached block to time step $t$, the computations for the next time step ($t + 1$) can be started for all nodes whose neighbours are valid for time $t$. By carefully arranging the order in which nodes are processed, a few time steps can be calculated without communication with the main memory. However, according to Wellein:OnT2006, this method is difficult to apply to more complex boundary condition models.

Operation                   FADD  FMUL   FMA  FSETP  FREC  # instr.  FLOP  FLOP/byte
LBGK incompressible           65    21   109                    195   304       1.00
LBGK quasi-compressible       65    39   166     21     6       297   463       1.52
LBMRT incompressible         324    40   329                    693  1022       3.36
LBMRT quasi-compressible     323    43   386     21     6       780  1165       3.83
ρ, u incompressible           49                                  49    49
ρ, u quasi-compressible       49    15    57     21     6       148   205
FPU division                         5    19      7     2        33    52
Table 2: Computational complexity of LBM operations for a single fluid node, D3Q19 lattice and double precision values. The first four rows show the complexity of the complete collision and ρ, u computations. The last three rows show the complexity of the separate computations - for the quasi-compressible fluid model the computation of u requires 3 additional divisions. The separate collision complexity can be calculated by subtracting the complexity of the ρ, u computations from the values in the first four rows. FMA (fused multiply-add) counts as two floating point operations. FSETP and FREC denote GPU instructions for floating point condition testing and reciprocal computing. The FLOP/byte ratio is calculated assuming 304 bytes transferred per node (see Eqn. 10).

The number of floating point operations (FLOP, not to be confused with floating point operations per second, FLOPS) is much more complex to determine. A simple count of the operations resulting from a naive implementation of equations (7) and (8) gives values significantly larger than in real implementations. Actually, many operations described by equations (7) and (8) can be skipped (e.g. multiplications by 0 or 1) or their results reused a few times (e.g. partial sums of the $f_i$). Some of these optimisations can be detected very early (i.e. multiplications by simple constants), but others are unpredictable since they are applied during compilation and their use depends on many factors (optimisation level, register usage constraints during compilation, machine capabilities - number of registers etc.). Thus, for the numbers of computational operations we show only specific values obtained by disassembling the GPU binary code using the nvdisasm utility (we count only arithmetic operations). The results are shown in Table 2.

The number of FLOP for our LBM implementation consists of two parts: computation of $\rho$ and $\mathbf{u}$, and collision. The complexity of the $\rho$, $\mathbf{u}$ computations depends only on the fluid model. These computations are implemented as a simple series of additions and, for the quasi-compressible model, additional divisions. The computational complexity of the collision depends practically only on the collision model. The full cost of the quasi-compressible model is thus built into the computation of $\mathbf{u}$, where an additional division is needed for each velocity component, and into the code responsible for boundary node handling (not shown in Table 2). Due to the different ratio of division to collision complexity, the computational complexity of the quasi-compressible model implementation is higher than that of the incompressible model by about 50% for LBGK and 14% for LBMRT.

The comparison of the FLOP/byte ratios from Table 2 with the specifications of high-performance GPUs (see Table 1) shows that LBM implementations are usually bandwidth-bound. This is especially true for the newest CUDA capable architectures, for which the FLOP/byte ratio gradually increases. High performance LBM implementations for GPU should therefore focus on memory bandwidth optimisations. Notice that this is also true for general purpose CPUs, provided that their computational power is fully utilised using vector instructions.

3 Implementation

3.1 Tiling overview

In our implementation, the whole geometry is covered by a uniform mesh of cubic tiles, where each tile contains $n \times n \times n$ nodes. If the geometry size is not divisible by $n$, then the geometry is extended with solid nodes. The tiling is implemented in the host code (on the CPU side) and done once, at geometry load. We use a very simple tiling algorithm (shown in Algorithm 1): first, the geometry is covered by a uniform mesh of tiles starting at node (0,0,0), and next, the tiles containing only solid nodes are removed. The data structures used are shown in Fig. 2. The algorithm requires information on whether each node in the geometry is solid or not, thus we use the allNodeType array containing integers encoding the type of node (solid, fluid, boundary).

Figure 2: Data structures used for the tiling of a 2D geometry containing LBM nodes grouped into tiles. Black/gray squares denote fluid/solid nodes.

For each non-empty tile we store the coordinates of its corner in the one-dimensional nonEmptyTiles array. The coordinates of tile corners are placed in the order in which the non-empty tiles are found, thus we lose the connection between the geometric position of a tile and its index in the nonEmptyTiles array. Since during LBM propagation the neighbour tiles must be located, we use an additional dense three-dimensional tileMap matrix with indices into the nonEmptyTiles array (or -1 when the corresponding tile contains only solid nodes). After tiling, the arrays nonEmptyTiles and tileMap are transferred to GPU memory.

The time and space complexities of the presented algorithm are linear in the number of nodes (the tile size can be treated as a constant). For all geometries used in this work, the construction of the presented data structures took less than a second on a single CPU core.

1:for tz = 0 to NZ/n - 1 do
2:     for ty = 0 to NY/n - 1 do
3:         for tx = 0 to NX/n - 1 do
4:
5:              for z = 0 to n - 1 do ▹ check nodes in tile
6:                  for y = 0 to n - 1 do
7:                       for x = 0 to n - 1 do
8:                           gx ← tx · n + x
9:                           gy ← ty · n + y
10:                           gz ← tz · n + z
11:                           if allNodeType[gx, gy, gz] is not solid then
12:                                addNonEmptyTile(tx, ty, tz)
13:                                goto nextTile
14:                           end if
15:                       end for
16:                  end for
17:              end for
18:nextTile:
19:         end for
20:     end for
21:end for
Algorithm 1 General structure of the tiling algorithm for a geometry containing NX × NY × NZ nodes and tiles containing n × n × n nodes. The algorithm requires the allNodeType input array containing node types for each node in the geometry.
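A minimal host-side sketch of this tiling pass in C++ is shown below. The array and function names (allNodeType, nonEmptyTiles, tileMap) follow the description above; the node type encoding and the exact storage layout are assumptions of this sketch, not the actual code of the implementation.

#include <cstddef>
#include <cstdint>
#include <vector>

constexpr int n = 4;                      // nodes per tile edge
constexpr std::uint8_t NODE_SOLID = 0;    // node type encoding assumed for this sketch

// Returns true if the tile with tile coordinates (tx, ty, tz) contains any non-solid node.
static bool tileHasNonSolidNode(const std::vector<std::uint8_t>& allNodeType,
                                int NX, int NY, int tx, int ty, int tz) {
    for (int z = 0; z < n; ++z)
        for (int y = 0; y < n; ++y)
            for (int x = 0; x < n; ++x) {
                const std::size_t gx = tx * n + x, gy = ty * n + y, gz = tz * n + z;
                if (allNodeType[(gz * NY + gy) * NX + gx] != NODE_SOLID)
                    return true;
            }
    return false;
}

// Builds nonEmptyTiles (tile corner coordinates) and tileMap (index of non-empty tile or -1).
void buildTiles(const std::vector<std::uint8_t>& allNodeType, int NX, int NY, int NZ,
                std::vector<int>& nonEmptyTiles, std::vector<int>& tileMap) {
    const int TX = NX / n, TY = NY / n, TZ = NZ / n;   // geometry assumed padded to multiples of n
    tileMap.assign(static_cast<std::size_t>(TX) * TY * TZ, -1);
    nonEmptyTiles.clear();
    for (int tz = 0; tz < TZ; ++tz)
        for (int ty = 0; ty < TY; ++ty)
            for (int tx = 0; tx < TX; ++tx)
                if (tileHasNonSolidNode(allNodeType, NX, NY, tx, ty, tz)) {
                    tileMap[(static_cast<std::size_t>(tz) * TY + ty) * TX + tx] =
                        static_cast<int>(nonEmptyTiles.size() / 3);
                    nonEmptyTiles.push_back(tx * n);   // corner coordinates of the tile
                    nonEmptyTiles.push_back(ty * n);
                    nonEmptyTiles.push_back(tz * n);
                }
}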

In general, the tile size could be arbitrary but, as we show below, it can have a significant impact on performance due to low tile utilisation if a tile is too large. Additionally, the number of nodes within a tile should be a multiple of the GPU warp size to avoid low hardware utilisation. In our implementation we have chosen $n = 4$, i.e. 64 nodes per tile.

Fig. 3 shows the thread assignment to nodes within a tile. We use thread blocks with dimensions $4 \times 4 \times 4$ (or a multiple of 4 in the $z$ dimension when more than one tile is assigned to a block, see Sec. 4.2), so the coordinates of nodes in the tile are simple to calculate: the $x$ and $y$ node coordinates are the same as the thread coordinates, and the $z$ coordinate of the node is equal to the two least significant bits of the thread $z$ coordinate. This mapping gives two warps per tile: the first warp operates on all nodes with $z \in \{0, 1\}$, the second warp on nodes with $z \in \{2, 3\}$.

Figure 3: Thread assignment to nodes inside a tile for a single threadIdx.z plane. Numbers denote the linear thread index calculated as threadIdx.x + 4 threadIdx.y + 16 threadIdx.z. The full thread block consists of 4 such planes along the $z$ axis, making 64 threads numbered from 0 to 63.
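A sketch of this thread-to-node mapping in CUDA C (the function name is illustrative; the bit operations assume the 4 × 4 × 4 tile described above):

__device__ __forceinline__ void nodeCoordinatesInTile(int& x, int& y, int& z, int& tileInBlock)
{
    x = threadIdx.x;                 // node x coordinate, 0..3
    y = threadIdx.y;                 // node y coordinate, 0..3
    z = threadIdx.z & 3;             // node z coordinate: two least significant bits
    tileInBlock = threadIdx.z >> 2;  // which tile of the block this thread processes
    // Linear thread index inside the tile (as in Fig. 3): x + 4*y + 16*z.
    // Threads 0..31 (z in {0,1}) form the first warp, threads 32..63 (z in {2,3}) the second.
}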

As in other GPU LBM implementations for sparse geometries Bernaschi:AFl2010; Nita:GPU2013; Huang:Mul2014, we use two copies of the $f_i$ values to avoid race conditions. Thus, the tiles can be processed independently and in any order during a single LBM iteration. Additional research is required for the potential adaptation of techniques from Valero-Lara:Lev2016; Latt:How2007 that, for dense geometries, allow using only a single copy of the $f_i$ functions.

Though tiling is not a novel technique (it is described even in NVIDIA tutorials as a method improving the performance of dense matrix multiplication, and was used in the LBM implementation for the IBM Cell processor presented in Sturmer:Flu2009), to the best of our knowledge this is the first attempt to use it for an efficient GPU implementation of LBM for sparse geometries. Existing GPU LBM implementations for sparse geometries Bernaschi:AFl2010; Nita:GPU2013; Huang:Mul2014 focus on indirect addressing, which does not guarantee a proper memory layout and requires additional indices that increase memory and bandwidth usage. Below we show that, due to the regular memory layout, tiling offers higher performance provided that the tiles contain a low number of solid nodes.

3.2 Tiling memory layout

Since LBM implementations are usually bandwidth-bound on high-performance GPUs, a proper memory layout resulting in good memory bandwidth utilisation is crucial for achieving high performance. Thanks to the fixed tile size, in our implementation we are able to fully control the data placement for nodes processed by threads from the same warp. This gives us the possibility to maximise the number of coalesced memory transactions, which results in a very high utilisation of GPU memory bandwidth (comparable to values reported for dense geometries). The presented memory layout is optimised for double precision values.

The general memory layout used in our implementation is shown in Fig. 4. All floating point values for all nodes are stored in a single, continuous allValues array, which allows us to control data placement in GPU memory. In addition to the allValues array, we also use additional data arrays, e.g. the nodeType array, where information about LB node types is stored, and the tileMap and nonEmptyTiles arrays. The data layout in the nodeType array is similar to allValues, but for each tile only one block of data is stored. The tileMap array is a simple three-dimensional array stored in row order.

Figure 4: Memory layout of the array with all $f_i$ values (allValues) for nodes within tiles. We use two copies of the $f_i$ values. For different $f_i$ we use different assignments of node coordinates to positions in the 64-element data block.

A single value (e.g. one chosen $f_i$) from all nodes within the same tile forms a data block, shown in Fig. 4. The data layout in the allValues array guarantees that each data block (64 double precision values) is properly aligned and can be transferred using sixteen 32-byte wide memory transactions. However, since during the propagation step some values must be read from nodes at the edges of neighbour tiles, the standard row ordered data layout results in uncoalesced/redundant transactions (see for example Fig. 6). To address this issue, for each data block we applied an optimised data arrangement that allows reducing the number of memory transactions which are not fully utilised (the orange/red transactions in Fig. 6). Each data arrangement is optimised for a different access pattern during propagation (see Fig. 5, 6 and 7).

Figure 5: Propagation in the north (N) direction for the XYZ memory layout and double precision values (8 bytes per value). Numbers denote the linear memory offset of the value for the corresponding node (for example, the value for the node at offset 4 is placed 4 elements from the block start). Four consecutive double precision values (one row) form a single 32-byte transaction. During propagation only four 32-byte memory transactions (reads) are required: one from the neighbour tile (read of nodes 12 to 15) and three from the current tile (read of nodes 0 to 11).
Figure 6: Propagation in the east (E) direction for two memory layouts and double precision values. For the XYZ layout, four 32-byte wide memory transactions (marked with red rectangles) are needed to update nodes from the neighbour tile (nodes with indices 3, 7, 11, 15). The YXZ layout results in only one 32-byte transaction for nodes from the neighbour tile, because now these nodes have close indices (12, 13, 14, 15). Additionally, during the propagation inside the tile only three (instead of four) 32-byte read transactions are done for nodes with indices 0 to 11. The fourth transaction is done from the neighbour tile.
Figure 7: Propagation in the north-east (NE) direction for the zigzagNE memory layout and double precision values. In this memory layout, two consecutive memory locations store values for nodes with the same $x$ and $y$ coordinates - only the $z$ coordinate differs. Thus, each square in the picture of the tile denotes two 8-byte values placed in neighbouring memory locations. Partially utilised (uncoalesced) memory transactions (offsets 16 to 19 and 24 to 27) are marked orange. Each orange transaction is done twice and only half of the data is used.

Let $(x, y, z)$ denote node coordinates inside a tile. In all considerations below we assume that $x$, $y$, $z$ are represented in the natural binary number system (unsigned types in the CUDA/C language). The transformation from the node coordinates to the offset inside a data block is defined by a linear mapping function $o(x, y, z)$. We use three linear mapping functions (all equations are valid for 4 nodes per tile edge):

$$o_{XYZ}(x, y, z) = x + 4y + 16z, \tag{11}$$

$$o_{YXZ}(x, y, z) = y + 4x + 16z \tag{12}$$

and the zigzagNE mapping

(13)

where $\wedge$ denotes bitwise AND.

All mapping functions minimise the number of uncoalesced memory transactions for our thread mapping and tile size. In our implementation we started from the XYZ layout for all data blocks and, after code profiling, we modified the layouts of those data blocks for which redundant memory traffic was observed: the XYZ layout remained for directions it handles well (e.g. N in Fig. 5), the YXZ layout was used for directions shifted along the $x$ axis (e.g. E in Fig. 6), and the zigzagNE layout for two of the diagonal directions (e.g. NE in Fig. 7). For this assignment of mapping functions, only for four functions is the number of memory transactions not minimal (16 transactions per $f_i$ per tile). Two of them require four additional memory transactions per tile each (two per warp - marked orange in Fig. 7). Propagation of the remaining two requires 32 transactions per tile each, which gives 100% overhead. For these last two we also tried to use a mapping function similar to zigzagNE, but it resulted in slightly decreased performance despite the lowered number of memory transactions.
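A sketch of the two mappings from Eqns. (11) and (12) as CUDA device helpers (function names are illustrative; the zigzagNE variant is omitted because its exact formula is not reproduced here):

// Offset of a node (x, y, z) inside a 64-element data block, 4 nodes per tile edge.
__device__ __forceinline__ unsigned offsetXYZ(unsigned x, unsigned y, unsigned z)
{
    return x + 4u * y + 16u * z;   // Eqn. (11)
}

__device__ __forceinline__ unsigned offsetYXZ(unsigned x, unsigned y, unsigned z)
{
    return y + 4u * x + 16u * z;   // Eqn. (12)
}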

The total number of memory transactions required for the propagation of a tile is then 344. This gives a 13% overhead compared with the minimal number of transactions (304).

3.2.1 Single precision overheads

For single precision values, the presented data layout also allows reducing the number of memory transactions, but a significantly larger overhead is observed. The main reason is that only two 32-byte transactions are needed for one layer of values (16 4-byte numbers), thus it is impossible to completely utilise a full memory transaction while reading only one row of values from a neighbour tile.

The minimum number of transactions per single $f_i$ function is 8, which gives 152 transactions per complete tile. For the XYZ layout this minimal value can be preserved only for three functions. Six functions require 12 transactions per $f_i$, because for each layer of values two transactions from the current tile and one transaction from the neighbour tile are needed (as in Fig. 5, but with transactions containing two rows instead of one). The next six functions require 16 transactions per $f_i$, similar to Fig. 6 (left). The last four functions use 24 transactions per $f_i$. The total number of memory transactions for single precision values is then 288, which gives almost 90% overhead compared to the minimal value.

After applying the presented memory layout, the number of transactions for eight functions reduces to 12 transactions per $f_i$. This allows gathering all values during the propagation step using 240 memory transactions. Though this value is 17% smaller than for the XYZ layout, it still results in a 58% overhead compared to the minimal 152 transactions per tile.
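For reference, the XYZ-layout total and the overhead figures quoted above follow directly from the per-function counts (a simple check, assuming the 19 functions of D3Q19 split into groups of 3, 6, 6 and 4 as listed):

$$3 \cdot 8 + 6 \cdot 12 + 6 \cdot 16 + 4 \cdot 24 = 288 \approx 1.89 \cdot 152, \qquad \frac{240}{152} \approx 1.58 .$$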

3.3 Tiling overhead

Since the tiles have a cubic shape and a fixed size, we can minimise the number of uncoalesced memory transactions per tile, as shown in Sec. 3.2. Unfortunately, we must always use full tiles. For this reason, the tile based implementation may need to store data and do computations not only for fluid nodes, but also for some neighbouring solid nodes that fall into tiles with non-solid nodes (see Fig. 9). Compared with the minimal requirements defined in Sec. 2.3, the tiling brings some memory, bandwidth and computational overhead. The three main reasons for the tiling overhead are additional solid nodes inside tiles, two copies of the $f_i$ values, and some other data (e.g. the tileMap array). Furthermore, the data placement can cause an additional bandwidth overhead, as shown in Sec. 3.2.

Let the overhead be defined as the ratio of additional memory/operations ($o_M$ - memory overhead, $o_B$ - bandwidth overhead, $o_C$ - computational overhead) to the minimum numbers defined in Sec. 2.3. Only one LBM time iteration is analysed, since all iterations are processed in the same way.

Let $n$ denote the number of nodes per tile edge, $n^3$ the number of nodes per tile, $T$ the number of non-empty tiles (tiles with at least one non-solid node), and $N_F$ the number of non-solid nodes in the whole geometry. Since we process only non-empty tiles, the overheads depend on the ratio of $N_F$ to the number of all nodes in all non-empty tiles. For brevity, we define the average tile utilisation factor

$$\eta = \frac{N_F}{T\, n^3}, \tag{14}$$

the average number of non-solid nodes per tile $k_F = \eta\, n^3$, and the average number of solid nodes per tile $k_S = (1 - \eta)\, n^3$. Notice that $k_F + k_S = n^3$.

When we assume that for each node the number of operations and the memory used are minimal (i.e. that the tiling does not have an impact on the computational complexity for single nodes), then the overhead caused by a low $\eta$ results only from additional operations/memory, which must be done/allocated for all solid nodes from a tile. In this case, the generic overhead value is

$$o = \frac{k_S}{k_F} = \frac{1 - \eta}{\eta}. \tag{15}$$

In our implementation, the real values of the bandwidth and computational overheads are even lower, because we conditionally skip operations for solid nodes. For the computational overhead, some operations are not done when a full warp of threads processes only solid nodes. This situation is rather rare (but possible), as half of a tile must then contain solid nodes. For the bandwidth overhead, transfers are omitted when values for solid nodes form a single 32-byte memory transaction. This happens quite often due to the spatial locality of the geometry. Also, some transfers can be skipped when, during the propagation step, the neighbour tile contains solid nodes.

For the memory overhead, Eqn. (15) only allows finding the overhead compared to an implementation where the amount of data stored per node is the same as in the tile-based implementation. However, if we use as a reference the value defined by Eqn. (9), then the overhead must take into account the second copy of the $f_i$ values and the additional $s$ bytes used in our implementation to encode the type of node (solid, fluid, boundary, thus usually $s = 1$)

$$o_M = \frac{n^3\,(2\, q\, b + s)}{k_F\, q\, b} - 1 = \frac{2}{\eta} + \frac{s}{\eta\, q\, b} - 1 \approx \frac{2}{\eta} - 1. \tag{16}$$

The final approximation is true for $s \ll q\, b$. Hence, the memory overhead compared with the minimal requirements defined by Eqn. (9) grows about twice as fast as the overhead defined by Eqn. (15).
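As a worked illustration of Eqns. (15) and (16) (a sketch, assuming D3Q19 with $q = 19$, double precision $b = 8$ and $s = 1$), a tile utilisation of $\eta = 5/6 \approx 0.83$ gives:

$$o = \frac{1 - 5/6}{5/6} = 0.2, \qquad o_M = \frac{2}{5/6} + \frac{1}{(5/6) \cdot 152} - 1 \approx 1.41 .$$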

In addition to Eqns. (15) and (16), the memory and bandwidth overheads are also affected by the tileMap and nonEmptyTiles arrays, but they usually may be neglected. For each non-empty tile, the indices of the neighbour tiles are read from the tileMap array and the coordinates of the current tile from nonEmptyTiles, and for each tile (both empty and non-empty) a single value must be stored in tileMap. Each value in tileMap can be at most 4 bytes long, because $2^{32}$ tiles would require much more memory than is available in current GPUs (for the D3Q19 lattice, double precision data and a tile containing 64 nodes, the data for all nodes would require tens of TiB of memory). The coordinates of a tile can also be packed into a 4-byte number, for example by using functions similar to the one defined in Eqn. (11). A single tile requires almost 2500 times more memory than its 4-byte index in tileMap. Thus, except for exceptionally sparse geometries (consisting of significantly less than 1% non-empty tiles), the overhead caused by the tileMap and nonEmptyTiles arrays is negligible.

It can be seen from Eqns. (15) and (16) that the overhead grows quickly when the tile utilisation factor is lower than 1 (the overhead is already 0.2 for $\eta \approx 0.83$). Therefore, it is critical to achieve a tile utilisation as high as possible.

In Fig. 8 and 10 we show how $\eta$ changes for tilings of infinitely long channels running along one of the axes. Notice that the tile positions are discrete, thus only a few values of $\eta$ can occur. It can be seen that a high tile utilisation can always be achieved for channels with a diameter/side of at least about 40 nodes. To guarantee an even higher tile utilisation, the channels must have about 100 nodes across. Though these channel dimensions can be too large for some applications, it should be noted that these values are pessimistic. It is possible to find a tile placement that gives good tile utilisation even for very small channels (for example, $\eta$ can be equal to 1 for a square channel only a few nodes across).

Additionally, since channels are often not parallel to the axes, the values of $\eta$ for real cases can be closer to the average value over different tilings of the same channel size (see Fig. 9). This average (the green line in Fig. 8 and 10) already exceeds the lower of the above levels for square channels and for circular channels with a diameter of 30 nodes, and exceeds the higher level for channel dimensions of about 55 and 65 nodes for square and circular channels, respectively.

Figure 8: Average tile utilisation for all available tilings of an infinitely long square channel along one of the axes. For each channel size sixteen tilings are possible (see Fig. 9). The green line denotes the average value of the average tile utilisation for a given channel dimension (see Fig. 9). Red crosses denote the real tile utilisation for an example channel - there are only 3 available values.
Figure 9: Cross section of 3 example tilings (marked in Fig. 8) of a square channel. Black/grey squares denote fluid/solid nodes. The channel is parallel to one of the axes. The average tile utilisation differs between the green, yellow and red tilings. For this channel and tile size there are 9 tilings similar to the red tiling, 6 tilings similar to the yellow tiling and only one green tiling with tile utilisation equal to 1. The average value of the average tile utilisation is then computed over all sixteen tilings.

Some observations can also be extracted from Fig. 8 and 10. It can be seen that the dispersion of the tile utilisation between different tilings of a channel with a given dimension is higher for square channels than for circular channels. It is possible to achieve a higher tile utilisation for square channels, but it is also possible that the tile utilisation will be much lower than for circular ones.

The average value over different tilings is higher for circular channels. The difference between the tile utilisation for circular and square channels depends greatly on the channel dimension: starting from a factor of 3 for the smallest channels, through about 10% for channels with 20 nodes at diameter/side, down to less than 2% for channels with at least 100 nodes.

For square channels, two additional phenomena occur. First, if the channel dimension is larger by one node than the tile edge, then all tilings have the same tile utilisation. In this case, all tilings require the same number of tiles. The one node outside tile borders can always be placed on either the "left" or the "right" side of the tile, never on both sides. When the channel dimension is larger by two or more nodes than the tile edge, the number of additional tiles may differ depending on the tile placement. For example, two additional nodes can be put either in one additional tile or in two additional tiles on both sides. This behaviour can be used during geometry preparation to guarantee constant tile utilisation.

Second, the tile utilisation fluctuates with a period equal to the tile edge (4 nodes), i.e. for a channel edge equal to $4k + r$, where $r \in \{1, 2, 3, 4\}$ and $k$ is an integer. This results from the fact that within a single period (for the same $k$) the same numbers of tiles are needed: $k + 1$ or $k + 2$ tiles along each axis, depending on the placement. Since the number of non-solid nodes grows with $r$, then for the same number of tiles the tile utilisation becomes larger. As the lowest tile utilisation always occurs for $r = 1$, the square channel side should be different from $4k + 1$.

Figure 10: Average tile utilisation for all available tilings of an infinitely long circular channel along one of the axes (16 different tilings are possible for each diameter). The green line denotes the average value of the average tile utilisation for a given channel size (see Fig. 9). Red crosses denote the real tile utilisation for a channel with a diameter of 8 nodes - in this case only four possible values can occur.

3.4 Implementation details

Our code is written in the CUDA C language, version 7.5, the first version with official support for many C++11 features. Thanks to this we could write highly templated code, which can be shared between the CPU and GPU compilers. The code sharing enabled us to simplify testing, because the results of many parts of the code could be thoroughly checked on the CPU side alone. Additionally, extensive use of C++ templates allowed us to write a generic kernel code, which can later be specialised for fluid and collision models, data layouts, data precision, enabled optimisations etc. We were then able to test many combinations of the above parameters with minimal changes to the source code and without code duplication. The disadvantages of this solution are a long compilation time and a large executable file containing many versions of the same kernel.

In our implementation, the computations for a single LBM time step include collision, propagation and boundary computations for all nodes within the tiles. To minimise memory transfers, we combine collision, propagation and boundary computations for a single node into a single GPU kernel. The kernel requires only one read of data from memory at the beginning, and one store of data at the kernel end. Local copies of data are stored in registers and in shared memory. Separate tiles are processed by separate GPU thread blocks (see Fig. 3), which allows for effortless synchronisation of computations within a tile. The general structure of our GPU kernel is shown in Algorithm 2.

1:load local copy of tileMap values
2:load node types from current tile
3:load node types from neighbour tiles ▹ WLP
4:BARRIER
5:if node not solid then
6:     load f_0 of the current node
7:     for each lattice direction i = 1..18 do
8:         compute address of neighbour node in direction i
9:         if neighbour node not solid then
10:              gather f_i from neighbour node
11:         end if
12:     end for ▹ all f_i copied to registers
13:     process boundary ▹ calculate boundary f_i
14:     collide
15:     store all f_i ▹ all stores coalesced
16:end if
Algorithm 2 Structure of the GPU kernel implementing a single LBM time iteration for a single node.

The propagation is implemented as gathering data from neighbour nodes. The propagation between two nodes is done only when both the target and source nodes are not solid. To minimise the cost of checking node types, we use shared memory to store copies of node types from the current and neighbour tiles (lines 2–3 in Algorithm 2). In the code responsible for loading node types from neighbour tiles (line 3) we use warp level programming to balance the load of the warps assigned to the tile. We also make a copy of the tile indices from the tileMap array, forming a cube surrounding the currently processed tile (line 1). All later operations requiring information about node and/or tile types use these copies.

Figure 11: Steps required for making a local copy of the tileMap array for 2D geometries. The geometry is identical to that in Fig. 2.

The operations required to make a local copy of tileMap are shown in Fig. 11. For simplicity, we reduced the geometry to two dimensions. The local copy is stored in shared memory and used by all threads processing the tile. First, the tile index is computed based on the index of the current CUDA block (red rectangle in Fig. 11). When the tile index is known, the coordinates of the tile corner can be loaded from the nonEmptyTiles array. Then the corner coordinates are transformed into indices of an element in the global tileMap array (the reverse of the operation presented in Fig. 2). In the last step, the local copy is filled with the fragment of the tileMap array centred at the element with the tile corner coordinates. The resulting local copy of tileMap contains indices of all tiles surrounding the currently processed one.
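A sketch of this step in CUDA C for the 3D case (a 3 × 3 × 3 neighbourhood; array and variable names follow the description above, while the indexing details and the assumption of one tile per thread block are illustrative, not the exact code of the implementation):

// localTileMap is expected to point to a __shared__ int [3][3][3] array declared in the kernel;
// a barrier (line 4 of Algorithm 2) follows later in the kernel.
__device__ void loadLocalTileMap(int localTileMap[3][3][3],
                                 const int* tileMap, const int* nonEmptyTiles,
                                 int tilesX, int tilesY, int tilesZ)
{
    const int tileIdx = blockIdx.x;                      // one tile per thread block
    // Corner coordinates of the current tile (in nodes), converted to tile coordinates.
    const int tx = nonEmptyTiles[3 * tileIdx + 0] / 4;
    const int ty = nonEmptyTiles[3 * tileIdx + 1] / 4;
    const int tz = nonEmptyTiles[3 * tileIdx + 2] / 4;

    // The 27 entries are copied by the first 27 threads of the block.
    const int t = threadIdx.x + 4 * threadIdx.y + 16 * threadIdx.z;
    if (t < 27) {
        const int dx = t % 3 - 1, dy = (t / 3) % 3 - 1, dz = t / 9 - 1;
        const int nx = tx + dx, ny = ty + dy, nz = tz + dz;
        const bool inside = nx >= 0 && nx < tilesX && ny >= 0 && ny < tilesY
                         && nz >= 0 && nz < tilesZ;
        localTileMap[dz + 1][dy + 1][dx + 1] =
            inside ? tileMap[(nz * tilesY + ny) * tilesX + nx] : -1;   // -1 marks an empty tile
    }
}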

Figure 12: Steps required for making a local copy of the node types array for the D2Q5 lattice arrangement. The geometry is identical to that in Fig. 2.

The process of making a local copy of the node types is presented in Fig. 12. Since the indices of all neighbour tiles are already cached in the local copy of tileMap, the only operations on global memory use the nodeType array containing the node type fields for nodes from non-empty tiles. Two steps are needed for each neighbour tile: first, the address of the data block containing node types for the tile is computed, then the required node type values are copied to shared memory. Notice that the local copy is always 2 nodes per side larger than the tile, to allow storing node types from neighbour tiles. Also, only those neighbour tiles that result from the lattice linkage are visited.

After loading the node and tile types, a barrier is needed (line 4 in Algorithm 2), because in a later part of the kernel the threads from different warps require the data gathered by other warps. This is the only synchronisation point in the kernel.

All later operations are done only when the node assigned to the current thread is not solid (line 5). In this way, we can skip unnecessary operations, as mentioned in Sec. 3.3.

The computations for a non-solid node are divided into three parts: lines 6–12 are responsible for gathering the $f_i$ values, lines 13 and 14 perform all floating point calculations, and line 15 stores the computed values to memory. All operations are implemented in a standard way except for line 8, where we compute indices for a neighbour node. First, the index of the tile containing the neighbour node is computed - thanks to the local copy of the tileMap array, we use only values from shared memory. When the neighbour tile is not empty, a few indices for the neighbour node are computed, since we need both the information about the neighbour node type (which is gathered from the local copy of the node types stored in shared memory) and the value of the proper $f_i$ (which is stored in global memory in the allValues array). Notice that for different $f_i$ functions we need to use different data layouts (see Sec. 3.2).

In the GPU kernel, we have also applied some optimisations not shown in Algorithm 2. First, the loop starting at line 7 is partially unrolled to allow computations for two $f_i$ functions in a single iteration, which increases the instruction level parallelism due to the interleaving of two independent streams of instructions. In addition, the order of iterations over the lattice directions is chosen in a way that allows sharing some parts of the neighbour node address computations (line 8). Next, some values are stored in shared memory to minimise register pressure. Shared memory is also used in the implementation of the bounce back boundary condition to avoid a significant increase in the number of required registers. To minimise conflicts in shared memory, the 64-bit mode is used. We have also tuned the usage of registers for each specialisation of the kernel, since different collision models require different amounts of temporary data. The cost of divergence in the code responsible for computing the address of the neighbour node is minimised - only integer indices are computed in divergent branches, while time-consuming operations like memory accesses are done in all threads simultaneously. In places where it is possible and useful, the number of costly divisions is reduced by replacing them with multiplications by the inverse. Finally, we have intensively used inline methods and compiler pragmas to force loop unrolling, which allowed us to combine clean, structured code with high performance.
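The partial unrolling mentioned above could look roughly as follows (a sketch only; the direction pairing and the helper routines neighbourOffset, neighbourIsSolid and loadF are illustrative, not the actual functions of the implementation):

__device__ int    neighbourOffset(int dir);    // illustrative helpers, see Sec. 3.4
__device__ bool   neighbourIsSolid(int dir);
__device__ double loadF(int dir, int offset);

// Two lattice directions are processed per loop iteration so that two independent
// address computations and loads can be interleaved by the compiler/scheduler.
__device__ void gatherTwoDirectionsPerIteration(double f[19])
{
    #pragma unroll
    for (int i = 1; i < 19; i += 2) {
        const int offA = neighbourOffset(i);
        const int offB = neighbourOffset(i + 1);
        if (!neighbourIsSolid(i))     f[i]     = loadF(i, offA);
        if (!neighbourIsSolid(i + 1)) f[i + 1] = loadF(i + 1, offB);
    }
}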

4 Results

4.1 Measurements methodology

All tests have been run on a computer with an Intel i7-3930K CPU with 64 GB of quad-channel DDR3 DRAM and a GTX Titan GPU (Kepler architecture) clocked at 823 MHz with 6 GB of GDDR5 memory clocked at 3.004 GHz. The peak theoretical GPU memory bandwidth is 288.4 GB/s (GB/s denotes 10^9 bytes/s); the NVIDIA bandwidthTest utility reported 228.4 GB/s (79.2% of peak). We have been using the Linux operating system with NVIDIA CUDA Toolkit 7.5 installed.

To identify the performance limits we have been using three versions of the kernels: kernels implementing all LBM operations, a kernel with full propagation but without computations, and a kernel without propagation, which only reads and writes node data for the same node (without communication between neighbour nodes). The kernels implementing all LBM operations are described by their collision and fluid models (e.g. "LBGK incompressible"), the kernel with propagation is marked as "propagation only" and the last kernel without full propagation is denoted as "read/write only". All kernels were prepared in two versions, operating on double and single precision floating point numbers.

As a measure of kernel performance, we have been using the number of non-solid (fluid and boundary) nodes processed per second, in millions (MFLUPS) Valero-Lara:Lev2016. The kernel performance was measured on the final version of the code, with the main computational loop containing not only the launches of the optimised kernel, but also conditional stores of results to disk and launches of the modified kernels used to estimate convergence. Since the impact of the initial data preparation on the simulation performance strongly depends on the case configuration (for example, on the number of required iterations), we focused only on the performance of the computational kernel repeated iteratively. Moreover, since for the cases used in the paper the full simulation requires a very large number of LBM iterations, the initial data preparation can be safely neglected. The same applies to additional operations inside the computational loop (data writes to disk or diagnostic messages) that are costly compared to a single kernel run and should be done only when necessary. Thus, to take into account only the time spent in the profiled kernel, we measured the wall clock time for each kernel launch separately, with the necessary synchronisation at the kernel end. For time measurement we have been using the C++ std::chrono library with microsecond resolution.
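A minimal sketch of this per-launch timing scheme (CUDA C++ with std::chrono; the helper names are illustrative and the kernel itself is a placeholder):

#include <chrono>
#include <cstddef>
#include <cuda_runtime.h>

// Wall clock time of a single kernel launch in milliseconds, with synchronisation
// at the end so that only the time spent in the profiled kernel is measured.
template <typename Kernel, typename... Args>
double timeKernelLaunchMs(Kernel kernel, dim3 grid, dim3 block, Args... args)
{
    const auto start = std::chrono::steady_clock::now();
    kernel<<<grid, block>>>(args...);
    cudaDeviceSynchronize();
    const auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

// Performance in MFLUPS: non-solid nodes updated per second, in millions.
inline double mflups(std::size_t nonSolidNodes, double timeMs)
{
    return static_cast<double>(nonSolidNodes) / (timeMs * 1e-3) / 1e6;
}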

The overhead of this method was negligible in our system - the difference between the sum of the measured separate stages of the computational loop and the total wall clock time spent in the whole computational loop was less than one microsecond per single kernel launch. A single kernel run took from about 30 microseconds (for the smallest geometries) up to tens of milliseconds (for the largest geometries). Each version of the optimised kernel was run 100 times in succession. Sample results are shown in Fig. 13. The observed difference between the shortest and the longest measured time spent in separate kernel launches was 78 µs for the data from Fig. 13. For the smallest geometries, the difference was less than 15 µs.

Figure 13: Performance of subsequent kernel launches (top) and histogram of kernel performance (bottom) for a double precision LBGK incompressible flow simulation in the cavity3D geometry. The blue dashed line denotes the average kernel performance.

4.2 Thread mapping

The thread mapping described in Sec. 3.1 allows assigning an arbitrary number of tiles per thread block. This makes it possible to increase the GPU occupancy on machines where the maximum number of simultaneously residing blocks is small. For example, the GK110 GPU allows up to 16 blocks residing on a multiprocessor, which for 64 threads per block (a single tile contains 64 nodes) results in 50% occupancy. However, increasing the occupancy by assigning more threads per block is possible only when threads use a small number of registers. Since our double precision kernels require at least 64 registers per thread, the maximum occupancy can not exceed 50% for the GK110 GPU. Thus, we used larger blocks (containing 128 threads that process two tiles) only for single precision kernels, which achieved the highest performance for 40 or 48 registers per thread. The disadvantage of the larger blocks is more complex computations in the kernel, because the calculations of indices for internal data structures are slightly more complex than for a single tile per block.

4.3 Performance for cavity3D

Figure 14: Kernel performance for double precision kernels and cavity3D geometries of varying size. All measured on a GTX Titan.

To identify performance limits not affected by the geometry sparsity, we first measured the performance of our implementation for the cavity3D test case, a standard example of a dense geometry. The performance of all kernels as a function of the geometry size is shown in Fig. 14.

The performance dependency on the geometry size is similar to many other GPU implementations. For small geometries the kernels achieve about 85% of their maximum performance, and larger geometries are needed to achieve more than 95% of the maximum performance. For other GPUs, the required numbers of nodes can be different, but for today’s machines, the maximum performance should always be achievable for sufficiently large geometries.

The highest performance was observed for the read/write only kernel: 771 MFLUPS, which, according to Eqn. (10), corresponds to a memory bandwidth of 234.4 GB/s (81.3% of the maximal theoretical bandwidth of the GTX Titan). Because in this kernel all memory transactions are coalesced, we achieved a bandwidth utilisation even higher than that reported by the bandwidthTest utility.

For the kernel with propagation only, the performance dropped by 9.7% to 696 MFLUPS, resulting in a bandwidth of 211.6 GB/s (73.4% of the maximum). It should be noted that, though the propagation only kernel has more than seven times more instructions than the read/write only kernel (711 vs 99 instructions), the performance penalty caused by neighbour tile data access during propagation is smaller than 10%. More than 60% of all instructions in the propagation only kernel are integer arithmetic operations used for address computations.

Finally, after enabling all computations the kernel performance dropped by an additional 2.3% for LBGK incompressible (680 MFLUPS, 71.7% of GPU memory bandwidth) and by up to 34.8% for LBMRT quasi-compressible (454 MFLUPS, 47.9% of GPU memory bandwidth). The large performance drop for LBMRT results from computational overhead: due to the large register usage per thread and the resulting low GPU occupancy, the complex double precision computations are not hidden behind memory transfers.

The performance of our implementation for sparse geometries is close even to the highly optimised dense-geometry implementation from Mawson:Mem2014 , which utilises % of the maximum memory bandwidth on a Tesla K20c for the LBGK quasi-compressible model. Moreover, the implementation from Mawson:Mem2014 uses single precision numbers, which increases GPU utilisation owing to the much lower register pressure. This result shows that sparse geometries can be processed with performance similar to that for dense ones.

As can be seen in Fig. 14, the performance of all kernel versions fluctuates for geometry edge , where . This results from the low utilisation of tiles placed at the geometry edges (see Section 3.3). Notice also that the performance spread for geometry edges computed for the same and different decreases for larger , e.g. for the performance of the propagation only kernel is between 677 and 696 MFLUPS, whereas for it is between 641 and 651 MFLUPS. This results primarily from differences in the average tile utilisation caused by partially filled tiles at the geometry boundaries for . For the cubic cavity3D geometry the number of fully utilised tiles inside the geometry grows proportionally to , while the number of partially utilised tiles (at the boundary walls) grows proportionally to . This causes the average tile utilisation to increase for larger .
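
The boundary effect can be illustrated with a short calculation; the sketch below is our own illustration and assumes 4 x 4 x 4 tiles (64 nodes) and a fully fluid cubic cavity of edge n:

    #include <cstdio>

    // Average tile utilisation of an n x n x n fluid cavity covered with 4x4x4 tiles.
    double tileUtilisation(long long n) {
        const long long tilesPerEdge = (n + 3) / 4;                    // ceil(n / 4)
        const long long fluidNodes   = n * n * n;
        const long long tileSlots    = tilesPerEdge * tilesPerEdge * tilesPerEdge * 64;
        return double(fluidNodes) / double(tileSlots);
    }

    int main() {
        for (long long n : {124, 125, 126, 127, 128})
            std::printf("n = %lld  utilisation = %.3f\n", n, tileUtilisation(n));
        // n = 124 and n = 128 give 1.000, while n = 125 gives only about 0.931,
        // which illustrates the fluctuations visible in Fig. 14.
    }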

An interesting behaviour can also be seen in Fig. 14: after reaching a peak at about nodes, the performance slowly decreases as the geometry size grows. The intensity of this phenomenon depends on the computational complexity of the operation; it is most visible for the propagation only kernel (see Table 3) and does not occur for the kernel without propagation (read/write only). This is a result of the data propagation between neighbouring tiles, which increases the time needed to process a tile; for the read/write only kernel there is no propagation between tiles, thus no performance decrease is observed. For smaller geometries the ratio of tile faces and edges shared by two tiles to the number of tiles is smaller than for large geometries, thus the performance for smaller geometries is higher.

Operation Max perf. Max case
read/write only 771 220 771
propagation only 696 112 650
LBGK incompr. 680 104 632
LBGK q-compr. 613 100 545
LBMRT incompr. 468 148 462
LBMRT q-compr. 454 152 451
Table 3: Kernel performance in MFLUPS for dense geometry cavity3D and double precision kernels. Max perf. denotes the performance for a case with the highest performance, is the dimension of the case with the highest performance (number of nodes is ), Max case denotes the performance for the largest case ( nodes).

4.3.1 Performance for single precision

Although the proposed data layout is optimised for double precision representations, we have also analysed its performance in a single precision version. We measured the performance of the six available operations: four LBM kernels and two simplified ones (propagation only and read/write only). To show the impact of the proposed data layout on performance, we compared the achieved performance with that of the standard XYZ layout. Additionally, for each kernel we analysed versions with two different numbers of threads per CUDA block: 64 and 128 threads. Finally, for each combination of the parameters mentioned above, we checked two different limits of registers per thread: 40 and 48 registers, resulting in theoretical occupancies of 0.75 and 0.63, respectively. The results for the dense cavity3D geometry containing nodes are presented in Table 4.
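
The quoted theoretical occupancies can be reproduced from the GK110 per-multiprocessor limits (65536 registers, 2048 resident threads, 16 resident blocks); the sketch below is our own cross-check and assumes that no other resource is limiting:

    #include <algorithm>
    #include <cstdio>

    // Theoretical occupancy on GK110 assuming registers are the limiting resource.
    double occupancy(int threadsPerBlock, int regsPerThread) {
        const int regsPerSM = 65536, maxThreadsPerSM = 2048, maxBlocksPerSM = 16;
        const int blocksByRegs = regsPerSM / (threadsPerBlock * regsPerThread);
        const int blocks = std::min({blocksByRegs, maxBlocksPerSM,
                                     maxThreadsPerSM / threadsPerBlock});
        return double(blocks * threadsPerBlock) / maxThreadsPerSM;
    }

    int main() {
        std::printf("%.3f\n", occupancy(128, 40));  // 0.750
        std::printf("%.3f\n", occupancy(128, 48));  // 0.625 (quoted as 0.63 above)
        std::printf("%.3f\n", occupancy(64, 64));   // 0.500, the double precision limit of Sec. 4.2
    }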

Figure 15: Kernel performance for single precision kernels and cavity3D with geometry size nodes, where . All measurements for the XYZ data layout and two tiles per thread block.
Data layout optimised: XYZ + YXZ + zigzagNE XYZ
Tiles per block 2 tiles 1 tile 2 tiles 1 tile
Registers per thread 48 40 48 40 48 40 48 40
read/write only 1501 1502 1497 1498 1503 1504 1497 1498
propagation only 1133 1128 1078 1050 1067 1066 1037 1024
LBGK incompressible 974 1003 958 920 963 1006 962 948
LBGK quasi-compressible 968 998 958 916 962 999 959 945
LBMRT incompressible 785 793 792 777 810 823 805 799
LBMRT quasi-compressible 783 777 794 779 821 803 806 800
Table 4: Performance in MFLUPS for cavity3D geometry containing nodes and different versions of single precision kernels.

The presented data show that the performance of the read/write only kernels is practically the same in all cases and corresponds to a bandwidth utilisation of . For the propagation only kernel, the proposed data layout gives about 6% higher performance than XYZ due to the reduced number of memory transactions, as shown in Sec. 3.2.1. However, when the kernels become more complex, the XYZ layout results in higher performance. For the relatively simple LBGK kernels the performance of both data layouts is almost the same, but for the more complex LBMRT versions the simpler data layout (XYZ) gives up to 5% better performance. These results suggest that the computational cost of index calculation for the proposed data layout is not negligible for complex computational models, especially since in the Kepler architecture the same hardware is shared between fixed point and single precision floating point computations Nvidia:Kep2012 . Thus, the single precision results later in this article were obtained for kernels with the XYZ data layout and 2 tiles per thread block, because these parameters give the highest performance.

The performance of all single precision kernels as a function of geometry size is shown in Fig. 15. Two main differences may be observed compared with the double precision versions shown in Fig. 14: a much lower bandwidth utilisation for all but the read/write only kernel, and a smaller decrease of performance with increasing geometry size. Notice also that the performance of both LBGK kernels is similar to the performance of the propagation only kernel. This shows the small impact of the additional computations in the single precision LBGK versions; in particular, the additional divisions in the quasi-compressible version are much less visible than in double precision (see Fig. 14).

4.4 Propagation performance

To estimate how the communication between tiles affects the performance, we have also measured the performance of propagation for a set of rectangular channels containing about nodes. The channels differ in dimensions: we used channels starting from nodes up to nodes (1873 combinations in total). This allowed us to generate geometries with different numbers of faces and edges shared between neighbouring tiles.

Let denote the number of common faces per tile and denote the number of common edges per tile. The values of and are computed from the geometric layout, e.g. for four tiles arranged in a mesh there are 4 common faces and 1 common edge, thus and . For the channel dimensions used in our measurements the values of and are as follows: the channel had about one common face per tile and zero common edges, the channel had about common face per tile and about common edge per tile, and so forth. We do not count the numbers of transferred faces/edges because for rectangular channels each face is always shared between two tiles and each edge between four, thus the number of data transfers is proportional to and .
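
For a rectangular channel covered by an X x Y x Z arrangement of tiles, these per-tile counts follow directly from counting internal faces and edges. The sketch below is our own illustration (the names and formulas are ours, not the authors' code); it reproduces the example above, where four tiles share 4 faces and 1 edge:

    #include <cstdio>

    // Shared faces and edges per tile for an X x Y x Z arrangement of tiles.
    void facesEdgesPerTile(long long X, long long Y, long long Z) {
        const long long tiles = X * Y * Z;
        const long long faces = (X - 1) * Y * Z + X * (Y - 1) * Z + X * Y * (Z - 1);
        const long long edges = (X - 1) * (Y - 1) * Z + (X - 1) * Y * (Z - 1)
                              + X * (Y - 1) * (Z - 1);
        std::printf("%lld x %lld x %lld tiles: F = %.2f, E = %.2f\n",
                    X, Y, Z, double(faces) / tiles, double(edges) / tiles);
    }

    int main() {
        facesEdgesPerTile(2, 2, 1);      // 4 faces, 1 edge in total: F = 1.00, E = 0.25 per tile
        facesEdgesPerTile(1, 1, 1024);   // a very narrow channel: F close to 1.00, E = 0.00
    }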

Figure 16: Performance of propagation as a function of the number of common faces and edges per tile. The propagation performance is shown as a fraction of the full GPU memory bandwidth. The measurements were done for rectangular channels containing nodes but with different dimensions.

The propagation performance as a function of the common faces per tile and common edges per tile ratios is shown in Fig. 16. The highest performance (851 MFLUPS, 90% GPU memory bandwidth utilisation) was observed for the channel of nodes.

Data layout Performance BW utilisation L2 reads Memory reads Cache hit rate Instructions
XYZ 586 673
XYZ + zigzagNE 606 701
XYZ + YXZ 666 681
all three 698 711
rw only 768
Table 5: Propagation performance in MFLUPS for different data layouts inside tile. All values for cavity3D with nodes ( tiles) and double precision values.

Most of the points shown in Fig. 16 are placed close to a plane parallel to the axis and defined by the line

(17)

which can be treated as a coarse estimate of the ratio of common edges to faces. Only for small (), which corresponds to very narrow channels of dimension tiles, does the line defining the plane become

(18)

The bandwidth utilisation shown in Fig. 16 can be estimated as

(19)

Two main regions can be identified in the dependence shown in Eqn. (19). For and the propagation performance falls proportionally to and ; moreover, is almost twice as important as . For and the propagation performance increases in inverse proportion to and . In this range is more important than , which means that more square channels (with width similar to depth) achieve higher performance. The highest bandwidth utilisation in this range was observed for the channel of nodes (73%, 696 MFLUPS).

4.5 Data layout impact

To measure the performance gains brought by the tile memory layouts presented in Sec. 3.2, we used the nvprof utility to profile the four propagation kernels with memory layouts XYZ, XYZ + zigzagNE, XYZ + YXZ, and all three (XYZ + YXZ + zigzagNE). The assignment of memory layouts to data blocks is defined in Sec. 3.2. For the XYZ + zigzagNE and XYZ + YXZ cases the default memory layout is XYZ. The results are shown in Table 5.

As a test case, we used the geometry containing nodes because its regular structure allows the number of transactions to be computed exactly. The minimal number of write transactions is 32-byte writes. The minimal number of 32-byte reads is plus transactions for node-type reads: 32-byte transactions. The total number of memory transactions is then .
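
A sketch of how such minimal transaction counts can be estimated is given below; it is our own illustration, the 4-byte size of the node-type field is an assumption, and the exact accounting used in the paper may differ:

    #include <cstdio>

    // Minimal numbers of 32-byte device memory transactions per time step, assuming
    // perfect coalescing (every byte transferred exactly once).
    void minimalTransactions(long long nodes) {
        const long long Q = 19, VALUE_BYTES = 8, TYPE_BYTES = 4;   // TYPE_BYTES is assumed
        const long long writes = nodes * Q * VALUE_BYTES / 32;     // 19 distributions written
        const long long reads  = nodes * Q * VALUE_BYTES / 32      // 19 distributions read
                               + nodes * TYPE_BYTES / 32;          // plus node-type reads
        std::printf("writes: %lld  reads: %lld  total: %lld\n",
                    writes, reads, writes + reads);
    }

    int main() {
        minimalTransactions(2097152);   // an arbitrary example: a 128^3 geometry
    }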

The measured numbers of 32-byte memory writes were independent of the memory layout used. For all layouts we observed 32-byte writes, which is the minimal value. This is a result of the gather pattern, in which irregular memory accesses to neighbouring nodes/tiles occur only during the read stage.

The data from Table 5 show that the proposed memory layout ("all three" row) results in a number of memory transactions (both read and write) only larger than the minimum. This is a result of both a reduction of uncoalesced memory reads (L2 reads column) and an increase of the cache hit rate. Notice also that for the "rw only" kernel the numbers of cache and device memory transactions are minimal, which confirms the correctness of our theoretical estimates of the numbers of memory transactions.

Since the YXZ layout is applied to the largest number of data blocks (eight), its use results in the largest performance increase (% compared with 586 MFLUPS for the XYZ layout). The zigzagNE layout is used for only two data blocks and gives a performance increase exactly four times smaller (%). Notice that the performance increase does not depend on the address computation complexity. Although the zigzagNE layout requires times more additional instructions per data block than the YXZ layout (14 compared with 1 instruction per data block), the performance increase scales linearly with the number of data blocks used, regardless of the address computation complexity.

The reductions of the numbers of transactions (for both cache and device memories) for the zigzagNE and YXZ layouts add up: and . This means that, despite the parallel execution of kernels for and the shared L2 cache and device memories, there is no visible dependence between accesses to different data blocks.

4.6 Performance for sparse geometries

Geometry porosity 0.9 0.8 0.7 0.6 0.5 0.4 0.3 0.2 0.1
Tile utilisation 0.970 0.936 0.899 0.857 0.813 0.762 0.699 0.626 0.512
Kernel double precision
read/write only 761 751 740 726 711 693 668 634 569
propagation only 672 667 659 650 638 621 597 560 491
LBGK incompressible 649 645 638 628 614 594 563 520 446
LBGK quasi-compressible 558 558 565 564 555 532 499 459 393
LBMRT incompressible 450 436 422 404 387 366 343 317 275
LBMRT quasi-compressible 436 422 407 390 373 354 332 308 270
Huang:Mul2014 337 330 334
single precision
read/write only 1503 1474 1443 1406 1364 1312 1245 1151 959
propagation only 1053 1034 1013 989 961 927 880 815 706
LBGK incompressible 1008 982 954 919 880 833 775 706 589
LBGK quasi-compressible 1005 976 948 912 873 826 767 700 585
LBMRT incompressible 815 790 766 734 703 667 624 574 496
LBMRT quasi-compressible 819 794 766 733 701 663 620 567 484
Huang:Mul2014 559 533 554
Table 6: Kernel performance in MFLUPS for arrays of randomly arranged spheres (similar to Huang:Mul2014 ) with porosities between 0.1 and 0.9. Geometry size is nodes. Results from Huang:Mul2014 are for a Tesla K20 GPU and a kernel similar to our LBGK incompressible.

To verify the performance of our implementation for sparse geometries, we have run simulations for cases similar to those presented in Huang:Mul2014 and Nita:GPU2013 . Table 6 shows results for nine arrays of randomly arranged spheres with a diameter of 40 lattice units. The porosity is defined as the ratio of the number of non-solid nodes to the total number of nodes in the bounding box volume (equal to for the cases from Table 6). For the presented cases the number of non-solid nodes varies from 701757 (for porosity ) up to 6381555 (for porosity ). The comparison of bandwidth utilisation is shown in Table 7. Our solution achieves higher bandwidth utilisation for all geometries with the porosities reported in Huang:Mul2014 . Notice also that the double precision version of the tile based implementation keeps bandwidth utilisation high even for very low porosities. For porosity 0.2 the performance of 520 MFLUPS corresponds to a bandwidth utilisation of 0.548, which is higher than the results from Huang:Mul2014 for much higher porosities (0.7 to 0.9).

Porosity 0.9 0.8 0.7
double precision this 0.684 0.680 0.673
Huang:Mul2014 0.493 0.482 0.488
single precision this 0.531 0.518 0.503
Huang:Mul2014 0.409 0.390 0.405
Table 7: Comparison of bandwidth utilisation with the implementation from Huang:Mul2014 . Results from Huang:Mul2014 are for a Tesla K20 GPU, a kernel similar to our LBGK incompressible, and the randomly arranged spheres geometry.
Figure 17: Flow pattern within the cerebral aneurysm model.

The second test was the blood flow in a cerebral aneurysm (see Fig. 17). We used a model similar to that presented in Huang:Mul2014 , but since our implementation does not support multi-GPU configurations, we reduced the resolution to nodes ( nodes in total, as the geometry was extended due to the tile size; non-solid nodes, geometry porosity , average tile utilisation ). The results are shown in Table 8.

For the cerebral aneurysm case, our implementation achieves significantly higher GPU memory bandwidth utilisation than Huang:Mul2014 . This is due to the high tile utilisation resulting from the good spatial locality of the geometry. Notice also that the numbers in Table 8 are close to the values achieved for dense geometries (the right column of Table 3). This shows that when the tile utilisation is high, the performance of our implementation is almost independent of geometry sparsity.

Kernel MFLUPS BW utilisation
double precision
read/write only 753
propagation only 654
LBGK incompr. 637 0.671
LBGK q-compr. 553
LBMRT incompr. 439
LBMRT q-compr. 422
single precision
read/write only 1480
propagation only 1021
LBGK incompr. 971 0.512
LBGK q-compr. 968
LBMRT incompr. 789
LBMRT q-compr. 790
Huang:Mul2014 1090 0.404
Table 8: Kernel performance for the cerebral aneurysm model similar to Huang:Mul2014 . Results from Huang:Mul2014 are for single precision floating point numbers, four Tesla C1060 GPUs, and a kernel similar to our LBGK incompressible.
Figure 18: Flow pattern within the model of aorta with coarctation.

Finally, we measured the performance for the aorta with coarctation case (see Fig. 18), similar to that presented in Nita:GPU2013 . The performance comparison is shown in Table 9. Since this geometry is rather small (resolution , about non-solid nodes, porosity , average tile utilisation ), the performance of our implementation is up to 14% lower than for the cerebral aneurysm case shown in Table 8. The small geometry size degrades performance not only due to the low number of nodes (as shown in Fig. 14) but primarily due to the reduced tile utilisation (equal to in this case). Despite this performance degradation, the tile based solution achieves significantly higher memory bandwidth utilisation than the implementation presented in Nita:GPU2013 .

Kernel MFLUPS BW utilisation
double precision
read/write only
propagation only
LBGK incompr.
LBGK q-compr.
LBMRT incompr.
LBMRT q-compr.
Nita:GPU2013
single precision
read/write only
propagation only
LBGK incompr.
LBGK q-compr.
LBMRT incompr.
LBMRT q-compr.
Table 9: Kernel performance for the aorta model with coarctation (similar to Nita:GPU2013 ). Results from Nita:GPU2013 are for a GTX 680 GPU, the D3Q15 lattice, and a kernel similar to our LBGK quasi-compressible.

4.6.1 Tile utilisation impact

To show that the performance for sparse geometries depends only on the tile utilisation, not on the geometry porosity, we have analysed how the performance scales with . The results are shown in Fig. 19. The performance of all kernels was scaled by the maximum performance observed for the dense cavity3D geometry: the double precision kernels were divided by the maximum values from Table 3, while the single precision performance was scaled by the bold values from the "XYZ 2 tiles" column of Table 4. For example, if the double (single) precision performance of the LBGK incompressible kernel for the cerebral aneurysm geometry is 637 (971) MFLUPS (Table 8), then the scaled performance shown in Fig. 19 equals and .
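
For the double precision case this can be written out explicitly (637 MFLUPS from Table 8 divided by the 680 MFLUPS maximum from Table 3; the single precision value follows analogously, assuming the bold entry of the "XYZ 2 tiles" column of Table 4 as the denominator):

    \frac{637\ \text{MFLUPS}}{680\ \text{MFLUPS}} \approx 0.94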

Figure 19: Normalised performance of all single (top) and double (bottom) precision kernels as a function of the average tile utilisation for all sparse geometries. Markers are the same as in Figs. 14 and 15.

The graphs presented in Fig. 19 show that the performance for sparse geometries depends almost linearly on the average tile utilisation (for kernels with the smallest amount of computation the dependence differs slightly from linear). Only a few double precision LBGK simulations with high tile utilisation drop below the lines . The normalised performance drops proportionally to with a ratio . For kernels with a high computational cost (the LBMRT kernels in double precision and all LBM kernels in single precision, as shown in Sec. 4.3.1) the proportionality constant equals . When kernels contain a small number of computations, the performance decreases more slowly (). The slowest performance drop with was observed for the read/write only kernel.

Figure 20: Average tile utilisation as a function of the geometry porosity for all sparse geometries.

We have also analysed how the average tile utilisation behaves for the analysed sparse geometries. Fig. 20 shows and the geometry porosity for the randomly arranged spheres, cerebral aneurysm, and aorta with coarctation geometries. Although for the randomly arranged spheres the value of decreases strongly together with the geometry porosity, this cannot be treated as a general rule. Notice that for both blood flow geometries, despite their very low porosity, the values of stay high. In practice we are not able to give a simplified method, and the tile utilisation must be calculated for each geometry separately. However, since the tiling of any geometry can be done very efficiently (see Sec. 3.1), this is not a serious disadvantage.

5 Conclusions

In this paper, a GPU based LBM implementation for fluid flows in sparse geometries has been presented. In contrast to previous solutions for sparse geometries, which use indirect node addressing, our implementation covers the geometry with a uniform mesh of cubic tiles. This method allows almost all additional data to be removed (only a very small amount of data per tile must be kept). Due to the fixed tile size, we were able to carefully arrange the data inside a tile, which additionally decreased the GPU memory traffic. An important advantage of the proposed solution is that the performance depends only on the average tile utilisation, not on the real geometry porosity. Since the average tile utilisation can be kept high even for very sparse geometries, especially those with good spatial locality (for the two presented blood flow geometries with porosity 0.175 and 0.094 the average tile utilisation was 0.931 and 0.807), the proposed data layout allows flow through sparse geometries to be simulated with performance close to recent implementations for dense geometries.

Our implementation allowed us to achieve up to 680 MFLUPS (% utilisation of the maximum theoretical memory bandwidth) on the GTX Titan for the D3Q19 lattice, the LBGK incompressible model and double precision data. We have also examined the performance of both the LBGK and LBMRT collision models in incompressible and quasi-compressible versions. The performance comparison with existing implementations shows that in all real cases our solution achieves significantly higher performance. Especially promising results were obtained for the large geometry with good spatial locality (the cerebral aneurysm model with non-solid nodes and porosity 0.175), where our double precision kernel reached almost 94% of the peak performance achieved for the dense geometry. This corresponds to a higher memory bandwidth utilisation than that of the highly optimised, single precision implementation based on indirect node addressing. Even for very sparse geometries with weak spatial locality (the array of randomly arranged spheres with porosity ), our double precision implementation has a higher bandwidth utilisation than the indirect node addressing version for an almost dense geometry with porosity 0.9.

For single precision computations, the presented tile based solution offers lower performance gains than for double precision, due to the lower bandwidth utilisation caused by uncoalesced memory transactions. Also, the presented data layout, optimised for double precision values, offers no advantage for single precision over the standard, row ordered storage of data inside tiles. Nonetheless, for all compared geometries the single precision performance of the tile based implementation was also higher than that of the indirect addressing based ones.

Future work includes an extension to a multi-GPU version to increase the size of supported geometries, and a search for methods to increase the tile utilisation (e.g. by a non-uniform tile placement and/or a decrease of the tile size). We also plan to look for memory layouts better suited to single precision values.

Acknowledgments

The authors are very grateful for the reviewer's valuable comments, which improved the manuscript. This work was supported by Wrocław University of Science and Technology, Faculty of Electronics, Chair of Computer Engineering statutory funds.

References

  • (1) M. J. Mawson, A. J. Revell, Memory transfer optimization for a lattice Boltzmann solver on Kepler architecture nVidia GPUs, Computer Physics Communications 185 (10) (2014) 2566–2574.
  • (2) P. Valero-Lara, A. Pinelli, M. Prieto-Matias, Accelerating solid-fluid interaction using lattice-Boltzmann and immersed boundary coupled simulations on heterogeneous platforms, Procedia Computer Science 29 (2014) 50–61.
  • (3) P. Valero-Lara, F. D. Igual, M. Prieto-Matías, A. Pinelli, J. Favier, Accelerating fluid–solid simulations (lattice-Boltzmann & immersed-boundary) on heterogeneous architectures, Journal of Computational Science 10 (2015) 249–261.
  • (4) P. Valero-Lara, J. Jansson, Heterogeneous CPU+GPU approaches for mesh refinement over lattice-Boltzmann simulations, Concurrency and Computation: Practice and Experience 29 (7) (2017) e3919.
  • (5) P. Valero-Lara, Leveraging the performance of LBMHPC for large sizes on GPUs using ghost cells, Springer International Publishing, Cham, 2016, pp. 417–430.
  • (6) P. Valero-Lara, J. Jansson, A non-uniform staggered cartesian grid approach for lattice-Boltzmann method, Procedia Computer Science 51 (2015) 296–305.
  • (7) J. Latt, Technical report: How to implement your DdQq dynamics with only q variables per node (instead of 2q), Tech. rep., Tufts University, Medford, USA (2007).
  • (8) C. Huang, B. Shi, Z. Guo, Z. Chai, Multi-GPU based lattice Boltzmann method for hemodynamic simulation in patient-specific cerebral aneurysm, Communications in Computational Physics 17 (2015) 960–974.
  • (9) M. Bernaschi, M. Fatica, S. Melchionna, S. Succi, E. Kaxiras, A flexible high-performance lattice Boltzmann GPU code for the simulations of fluid flows in complex geometries, Concurr. Comput. : Pract. Exper. 22 (1) (2010) 1–14.
  • (10) C. Nita, L. Itu, C. Suciu, GPU accelerated blood flow computation using the lattice Boltzmann method, in: High Performance Extreme Computing Conference (HPEC), 2013 IEEE, 2013, pp. 1–6.
  • (11) NVIDIA Corporation, CUDA C Programming Guide PG-02829-001_v7.5, 2015.
  • (12) T. Krüger, H. Kusumaatmaja, A. Kuzmin, O. Shardt, G. Silva, E. Viggen, The Lattice Boltzmann Method: Principles and Practice, Graduate Texts in Physics, Springer, 2016.
  • (13) Q. Zou, X. He, On pressure and velocity boundary conditions for the lattice Boltzmann BGK model, Physics of Fluids 9 (6) (1997) 1591–1598.
  • (14) S. Chen, D. Martínez, R. Mei, On boundary conditions in lattice Boltzmann methods, Physics of Fluids 8 (9) (1996) 2527–2536.
  • (15) Z. Guo, C. Shu, Lattice Boltzmann method and its applications in engineering, Advances in computational fluid dynamics, World Scientific, Singapore, 2013.
  • (16) A. Mohamad, Lattice Boltzmann Method: Fundamentals and Engineering Applications with Computer Codes, SpringerLink : Bücher, Springer London, 2011.
  • (17) P. L. Bhatnagar, E. P. Gross, M. Krook, A model for collision processes in gases. I. Small amplitude processes in charged and neutral one-component systems, Physical Review 94 (3) (1954) 511–525.
  • (18) Y. H. Qian, D. D’Humières, P. Lallemand, Lattice BGK models for Navier-Stokes equation, EPL (Europhysics Letters) 17 (6) (1992) 479.
  • (19) S. Succi, The Lattice Boltzmann Equation for Fluid Dynamics and Beyond (Numerical Mathematics and Scientific Computation), 1st Edition, Numerical mathematics and scientific computation, Oxford University Press, 2001.
  • (20) Q. Zou, S. Hou, S. Chen, G. D. Doolen, A improved incompressible lattice Boltzmann model for time-independent flows, Journal of Statistical Physics 81 (1-2) (1995) 35–48.
  • (21) X. He, L.-S. Luo, Lattice Boltzmann model for the incompressible Navier–Stokes equation, Journal of Statistical Physics 88 (3-4) (1997) 927–944.
  • (22) P. Lallemand, L.-S. Luo, Theory of the lattice Boltzmann method: Dispersion, dissipation, isotropy, Galilean invariance, and stability, Physical Review E 61 (6) (2000) 6546–6562.
  • (23) X. He, G. D. Doolen, T. Clark, Comparison of the lattice Boltzmann method and the artificial compressibility method for Navier-Stokes equations, Journal of Computational Physics 179 (2) (2002) 439–451.
  • (24) G. Wellein, T. Zeiser, G. Hager, S. Donath, On the single processor performance of simple lattice Boltzmann kernels, Comput. Fluids 35 (8-9) (2006) 910–919.
  • (25) G. Hager, G. Wellein, Introduction to High Performance Computing for Scientists and Engineers, 1st Edition, CRC Press, Inc., Boca Raton, FL, USA, 2010.
  • (26) P. Rinaldi, E. Dari, M. Vénere, A. Clausse, A lattice-Boltzmann solver for 3D fluid simulation on GPU, Simulation Modelling Practice and Theory 25 (0) (2012) 163–171.
  • (27) T. Pohl, M. Kowarschik, J. Wilke, K. Iglberger, U. Rüde, Optimization and profiling of the cache performance of parallel lattice Boltzmann codes, Parallel Processing Letters 13 (04) (2003) 549–560.
  • (28) M. Stürmer, J. Götz, G. Richter, A. Dörfler, U. Rüde, Fluid flow simulation on the Cell broadband engine using the lattice Boltzmann method, Computers & Mathematics with Applications 58 (5) (2009) 1062–1070, mesoscopic Methods in Engineering and Science.
  • (29) NVIDIA Corporation, Whitepaper NVIDIA’s next generation CUDA compute architecture: Kepler GK110 the fastest, most efficient HPC architecture ever built, Tech. rep. (2012).