SVE-enabling Lattice QCD Codes

01/22/2019, by Nils Meyer et al., Forschungszentrum Jülich

Optimization of applications for supercomputers of the highest performance class requires parallelization at multiple levels using different techniques. In this contribution we focus on the parallelization of particle physics simulations through vector instructions. With the advent of the Scalable Vector Extension (SVE) ISA, future ARM-based processors are expected to provide significant parallelism at this level.


I Introduction

Understanding the nature of the strong interactions, one of the four fundamental interactions in physics, is still an important challenge. Quantum Chromodynamics (QCD), the theory believed to describe these interactions, was already established in the 1970s. In many cases large-scale numerical simulations are needed to study QCD. To facilitate such simulations the theory is formulated in a discretized and computer-friendly version called Lattice QCD (LQCD) [1]. For state-of-the-art research in LQCD, supercomputers with a throughput of O(10) PFlop/s are used.

With the architectures for such high-end supercomputers becoming more parallel and complex, the design of software exploiting these machines has become a challenge for researchers in the field of LQCD. To share the burden of designing highly scalable applications, several efforts on creating community codes have been started to provide libraries that can be used for applications of different research groups. Examples are the Chroma software system [2] or the QUDA library [3]. In this contribution, we focus on a recent effort called Grid [4].

Grid is a framework for LQCD simulations that was designed for processor architectures featuring very wide SIMD instructions, such as AVX-512, a 512-bit wide ISA for x86 architectures. Future ARM-based processor architectures will support a vector ISA called Scalable Vector Extension (SVE) [5], which would allow for vectors of length up to 2048 bits.

In this paper we make the following contributions:

  1. We propose a strategy for porting the LQCD framework Grid to the SVE ISA and report on first experiences using the current development toolchain and available emulators.

  2. We discuss and analyze different ways of implementing complex arithmetics exploiting the SVE ISA.

  3. Based on different porting strategies we analyze and demonstrate that the SVE ISA allows for an efficient implementation of key computational patterns used in LQCD applications.

This paper is organized as follows: In Section II we provide a brief introduction to LQCD, its main numerical kernel and the domain-specific software framework considered in this paper, namely Grid. In Section III we highlight the most important features of SVE, and in Section IV we provide various SVE code examples. In Section V we document our strategy for porting Grid and report in Section V-D on how we verified our SVE-enabled version thereof. In Section VI we provide a brief overview of related work before concluding in Section VII.

II LQCD Simulations and Grid

II-A Overview

A significant fraction of the time-to-solution of LQCD applications is spent in solving a linear set of equations, for which iterative solvers like Conjugate Gradient are used. The most compute-intensive task typically is the product of the lattice Dirac operator D and a quark field ψ. (In the following we focus on a particular formulation of LQCD using so-called Wilson fermions.) A quark field ψ(x) is defined at lattice site x and carries so-called color indices a = 1, 2, 3 and spinor indices α = 1, ..., 4. Thus, ψ is a vector with 12·V complex entries, where V denotes the size (number of sites) of the 4-dimensional lattice. Today's state-of-the-art simulations use lattices with O(10^7) sites and more. The so-called hopping term D_hop of the Dirac operator acts on ψ as follows:

  D_hop ψ(x) = Σ_{μ=1}^{4} [ U_μ(x) (1 - γ_μ) ψ(x + μ̂) + U_μ†(x - μ̂) (1 + γ_μ) ψ(x - μ̂) ]    (1)

Here, μ labels the four space-time directions, μ̂ denotes the unit vector in direction μ, and the U_μ(x) are the SU(3) gauge matrices associated with the links between nearest-neighbor lattice sites. The gauge matrices carry color indices and are represented by 3×3 matrices with complex entries. The γ_μ are the (constant) Dirac matrices, carrying spinor indices.

Parallelization of the matrix-vector product is achieved by a domain decomposition in 1 to 4 dimensions. Therefore, the larger the lattice size the higher the intrinsic level of parallelism. In particular, due to the regular structure of the problem, parallelization can be performed at multiple levels. For the coarsest level a set of sub-lattices is distributed over (a very large number of) different processes, e.g., different MPI ranks. Further parallelization within a process is achieved through thread-level parallelization, e.g., using OpenMP, as well as through vectorization at the instruction level.

II-B Data Layout

For implementing parallelization the choice of a suitable data layout is crucial. For instance, in the case of vectorization, if data for neighboring lattice sites are packed into a single vector, then a single application of the hopping term defined in Eq. (1) requires combining different elements of the same vector. To address this problem, Grid implements the concept of “virtual nodes.” Within a single thread the sub-lattice is distributed over a set of such virtual nodes as shown in Fig. 1, where the number of virtual nodes per thread is typically equal to the number of vector elements. By keeping the size of the sub-lattice processed by a single virtual node sufficiently large, neighboring lattice sites will be assigned to different vectors.

Fig. 1: Decomposing a sub-lattice over multiple virtual nodes [4].

II-C Architecture-Specific Implementations

Grid is designed to maximize the flexibility in choosing the data layout optimal for parallelizing on a given architecture without compromising on portability. By implementing a suitable abstraction layer based on C++ template expressions, the complexity is hidden from the user.

The software is organized such that the machine-specific aspects are confined to a small number of lines of code in a small number of files. Inline assembly is used for the implementation of the Dirac operator in Eq. (1) for some instruction-set architectures, e.g., AVX-512. The assembly is optimized for register and cache reuse. All the other machine-specific code is implemented using intrinsics, including:

  • arithmetics of real and complex numbers,

  • permutations of vector elements,

  • load, store, memory prefetch, streaming memory access,

  • conversion of floating-point precision.

Machine-specific implementations exist for a variety of Intel architectures, ARM NEONv8 and also IBM BlueGene/Q. Table I shows the architectures supported by Grid at the time of writing this contribution.

SIMD family          Vector length
Intel SSE4           128 bit
Intel AVX/AVX2       256 bit
Intel IMCI, AVX-512  512 bit
IBM QPX              256 bit
ARM NEONv8           128 bit
generic C/C++        architecture independent, user-defined array size

TABLE I: Architectures supported by Grid.

III ARM Scalable Vector Extension

The ARM Scalable Vector Extension (SVE) is a novel vector extension for ARMv8 architectures. The SVE ISA enables significantly higher single-core performance and thus targets applications with a high demand for computational power, such as high-performance computing but also machine learning [5].

III-A Features of SVE

For a comprehensive list of features of SVE we refer to [5]. Here we list the features of the SVE ISA we believe to be beneficial for LQCD applications:

  • wide vector units,

  • structure load/store instructions supporting load/store of an array of n-element structures into n vectors, with one vector per structure element,

  • vectorized 16-, 32-, 64-bit floating-point operations, including arithmetic operations and conversion of precision,

  • vectorized arithmetic of complex numbers.

Convenient access to features of SIMD extensions is typically provided by intrinsics, i.e., built-in functions handled specially by the compiler. The ARM C Language Extensions (ACLE) for SVE intrinsics provide access to features of the SVE hardware in C/C++ [6].

III-B Vector-Length Agnosticism

SVE does not define the size of the vector registers, but constrains it to a range of possible values, from a minimum of 128 bits up to a maximum of 2048 bits, in multiples of 128 bits. The silicon provider chooses the vector-register length and defines the performance characteristics of the hardware. SVE pursues a so-called Vector-Length Agnostic (VLA) programming model that allows code execution to dynamically adapt to the available vector length at runtime. To achieve this, SVE implements predication registers for masking vector lanes for operations on partial vectors.
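As an illustration of the VLA model, consider the following minimal sketch (our own example, not taken from Grid), which scales an array of doubles using ACLE intrinsics. The loop makes no assumption about the hardware vector length: svcntd() advances the counter by the number of doubles per vector register, and the svwhilelt predicate masks off the surplus lanes in the final iteration.

  #include <arm_sve.h>
  #include <stdint.h>

  // Multiply n doubles in x by the scalar a, in a vector-length agnostic way.
  void scale(double *x, double a, int64_t n) {
      for (int64_t i = 0; i < n; i += svcntd()) {
          svbool_t pg = svwhilelt_b64(i, n);       // active lanes for this iteration
          svfloat64_t vx = svld1(pg, x + i);       // predicated load
          vx = svmul_x(pg, vx, svdup_f64(a));      // predicated multiply
          svst1(pg, x + i, vx);                    // predicated store
      }
  }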

III-C Limitations on Usage of SVE ACLE Data Types

The SVE vector length is unknown at compile time. (This statement holds for the ARM clang SVE compiler; other, commercial or future, SVE compilers might be aware of the SVE vector length.) Therefore SVE ACLE data types do not have a defined size. A comprehensive list of the rules for the usage of these data types, also referred to as “sizeless structs,” is provided in [6]. In the following we restrict ourselves to the limitations on the usage of ACLE data types relevant for enabling SVE in Grid. SVE ACLE data types may not be used:

  • as data members of unions, structures, and classes,

  • to declare or define a static or thread-local storage variable,

  • as the argument to sizeof.

Implications of and solutions for these limitations are discussed in Section V-A.
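To make these restrictions concrete, the following minimal sketch (our own illustration, not Grid code) contrasts a disallowed use of an ACLE type as class member data with the workaround adopted in Section V: keeping an ordinary array as member data and converting to an SVE vector only inside a function. For illustration the array is sized for a 512-bit implementation; in general its size has to match the hardware vector length.

  #include <arm_sve.h>

  struct Vector {
      // svfloat64_t v;              // not allowed: sizeless type as data member
      alignas(16) double v[8];       // ordinary array instead (512-bit example)

      void scale(double a) {
          svbool_t pg = svptrue_b64();
          svfloat64_t x = svld1(pg, v);   // sizeless type as a local variable is allowed
          svst1(pg, v, svmul_x(pg, x, svdup_f64(a)));
      }
  };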

III-D Complex Arithmetic

The SVE ISA supports vectorized arithmetic of complex numbers for 16-bit, 32-bit, and 64-bit floating-point data types. This feature is of interest since complex multiplications, depending on the data layout chosen, may require combining different elements of the same vector. Without specific support for complex arithmetics additional instructions may be required to re-order the vector elements.

Let a, b, c denote vectors of complex numbers and i the imaginary unit. SVE complex arithmetic includes:

  • vectorized add/sub of complex numbers,

  • vectorized fused multiply-add/sub of complex numbers.

In this contribution we focus on complex multiply-add/sub. The FCMLA instruction takes three vector registers as parameters, with the real components in even elements and the imaginary components in odd elements. A fourth (immediate) parameter specifies a discrete rotation of the second input vector in the complex plane. We refer to the SVE ACLE specification for details on the usage of the FCMLA instruction [6]. For example, the following complex calculations are enabled by concatenating two FCMLA instructions each (the asterisk denotes complex conjugation):

  c = a·b + c ,     c = a*·b + c     (2)

Complex multiplication is achieved by setting c = 0 in Eq. (2).

IV SVE Code Examples

In this section we present simple C++ code examples and illustrate the binary code generated by the ARM clang SVE compiler. We also show how to exploit the SVE ISA for complex arithmetics using ACLE. The examples are relevant for Grid.

We used the ARM armclang 18.3 SVE compiler to generate the binaries. The compiler is based on clang/LLVM 5.0.1. For the compilation process we used optimization level -Ofast and defined the target architecture as -march=armv8-a+sve. These settings enable auto-vectorization for SVE. The armclang 18 compiler is not aware of the vector length implemented in hardware and optimizes following the VLA paradigm.

For verification of the SVE binary code we used the ARM instruction emulator (ArmIE) 18.1. The emulator allows for functional code verification by emulating SVE instructions on AArch64 platforms. The SVE vector length is supplied to ArmIE as a command-line parameter. We tested our examples emulating multiple vector lengths.

IV-A Real Arithmetics

As the first example of SVE we consider pairwise multiplication of array elements, with the result being stored in a third array. The following code shows the C++ implementation of the operation for arrays of the data type double with N elements each:
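The original listing is not reproduced here; the following is a minimal sketch of the kernel under discussion (function and variable names are our own):

  // Pairwise multiplication c[i] = a[i] * b[i] for N doubles.
  void mul(long N, const double *a, const double *b, double *c) {
      for (long i = 0; i < N; i++)
          c[i] = a[i] * b[i];
  }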

The following listing shows the assembly generated by the compiler:
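The exact compiler output is not reproduced here; the following hand-written sketch illustrates the structure of the VLA loop that armclang emits for this kernel (register allocation, instruction selection, and line numbering may differ slightly from the original listing):

   1  mov     x8, xzr                         // loop counter = 0
   2  whilelo p1.d, xzr, x0                   // predicate for the first iteration
   3  ptrue   p0.d                            // all-true predicate
   4  .loop:
   5  ld1d    {z0.d}, p1/z, [x1, x8, lsl #3]  // load slice of a
   6  ld1d    {z1.d}, p1/z, [x2, x8, lsl #3]  // load slice of b
   7  fmul    z0.d, z0.d, z1.d                // pairwise multiply
   8  st1d    {z0.d}, p1, [x3, x8, lsl #3]    // store slice of the result
   9  incd    x8                              // advance by vector length (in double)
  10  whilelo p1.d, x8, x0                    // predicate for the next iteration
  11  b.first .loop                           // loop while elements remain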

We briefly discuss the assembly. First, the zero register xzr is copied into the loop counter register x8 (line 1). The whilelo instruction compares the counter status and the length of the arrays, which is stored in register x0. Relevant bits of the predication register p1 are set to true (line 2). The ptrue instruction sets all bits in the predication register p0 to true (line 3).

In the loop body, the predicated ld1d instructions load slices of the arrays a and b into the vector registers z0 and z1, respectively (lines 5–6). Inactive vector elements are set to zero in the target registers, as indicated by p1/z. All vector elements are multiplied pairwise using the unpredicated instruction fmul (line 7). The predicated store instruction st1d writes the active elements of the result vector to memory (line 8). The loop counter register x8 is incremented by the SVE vector length (in double) (line 9). The predication for the next loop iteration is assembled (lines 10–12). The loop is iterated until all array elements are processed.

It is important to note that the SVE vector length does not appear explicitly. The number of loop iterations is determined by the vector length implemented in the hardware. Predicated operations eliminate the need for a separate tail loop, which is required on some other SIMD architectures if the last remaining fraction of the data does not fit exactly into the vector registers.

IV-B Complex Arithmetics

As an example of complex arithmetics we consider pairwise multiplication of complex array elements, with the result being stored in a third array. The following code shows the C++ implementation of the operation for arrays of the data type std::complex<double> with N elements each:
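Again, the original listing is not reproduced here; a minimal sketch of the kernel (names our own) is:

  #include <complex>

  // Pairwise complex multiplication c[i] = a[i] * b[i] for N complex doubles.
  void mul(long N, const std::complex<double> *a,
           const std::complex<double> *b, std::complex<double> *c) {
      for (long i = 0; i < N; i++)
          c[i] = a[i] * b[i];
  }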

The following listing shows the assembly generated by the compiler:

We briefly discuss the assembly, focusing on the differences to multiplication of real array elements shown in Section IV-A.

First, the loop counter and the predication registers are initialized (lines 1–3). In the loop body, the predicated structure load instructions ld2d load slices of the arrays a and b, each an array of two-element structures, into four vector registers. The real parts of the arrays a and b are loaded into the vector registers z0 and z2, respectively (line 6). The imaginary parts of the arrays a and b are loaded into the vector registers z1 and z3, respectively (line 7). Processing continues with real arithmetics, including multiplication, multiply-add, and multiply-subtract (lines 10–15). The real parts of the result are stored in vector register z6. The imaginary parts of the result are stored in vector register z7. The result vectors are written back to memory using the predicated structure store instruction st2d. This instruction reassembles two-element structures from two vector registers and writes them into contiguous memory (line 16). The loop body is iterated until all array elements are processed.

We note that the ARM SVE compiler generates assembly using structure load/store. Real arithmetics is used for data processing. The compiler does not exploit the full SVE ISA, which comprises dedicated instructions for complex arithmetics. The reason is the lack of support for complex arithmetics in the LLVM 5 backend of the compiler.

IV-C Complex Arithmetics using SVE ACLE (I)

As an example of complex arithmetics using SVE ACLE we consider pairwise complex multiplication of arrays of complex numbers, with the result being stored in a third array. In this example we implement complex numbers in arrays of double with 2N elements each. Real and imaginary parts of the array are interleaved (re, im, re, im, ...). We note that this implementation is equivalent to using arrays with N elements of std::complex<double> each. The following code shows the C++ implementation of the operation using SVE ACLE:
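The original listing is not reproduced here; the following sketch (names our own, line numbering may differ slightly from the discussion below) illustrates the implementation, assuming for simplicity that 2N is a multiple of the vector length:

  #include <arm_sve.h>

  // Interleaved re,im arrays of 2N doubles each: c = a * b (complex, pairwise).
  void mul_acle(long N, const double *a, const double *b, double *c) {
      svbool_t pg = svptrue_b64();                 // predication
      svfloat64_t x, y, z;                         // SVE ACLE vector types
      svfloat64_t zero = svdup_f64(0.);
      for (long i = 0; i < 2 * N; i += svcntd()) { // advance by vector length (in double)
          x = svld1(pg, a + i);                    // load slice of a (no de-interleaving)
          y = svld1(pg, b + i);                    // load slice of b
          z = svcmla_x(pg, zero, x, y, 0);         // FCMLA, rotation 0, accumulate onto zero
          z = svcmla_x(pg, z, x, y, 90);           // FCMLA, rotation 90
          svst1(pg, c + i, z);                     // store result
      }
  }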

We briefly discuss the details of this implementation. At first the predication pg and the SVE ACLE data types are declared (lines 2–4). We use a for-loop for our implementation of complex multiplication. The loop counter is incremented after each loop iteration by calling svcntd() (line 6). This intrinsic function returns the SVE vector register length (in double). In the loop body, we use the intrinsic function svld1() to load slices of the arrays a and b without decomposing the array elements (lines 8–9). Computation proceeds with multiply-add of complex numbers using two calls to svcmla() (the intrinsic function for the FCMLA instruction introduced in Section III-D) (lines 10–11). The first vector operand of the first FCMLA instruction consists of zeros, resulting in complex multiplication adding zero. The result vector is stored back into contiguous memory using the predicated svst1() function (line 12). The loop body is iterated until all data are processed.

The following listing shows the assembly generated by the compiler:

We briefly discuss the assembly. All calls to SVE ACLE intrinsic functions in the C++ code are directly translated into assembly. No additional SVE instructions are generated. We conclude that hardware support for complex arithmetic is accessible by using SVE ACLE.

IV-D Complex Arithmetics using SVE ACLE (II)

As the last example of complex arithmetics using SVE ACLE we again consider pairwise complex multiplication of arrays of complex numbers, with the result being stored in a third array. This example is almost identical to Section IV-C. However, here we omit the for-loop and use the full SVE vector length implemented in the hardware for computation. This implementation mimics programming for fixed-size SIMD registers and is well suited for small arrays matching the size of the vector registers. The following code shows the C++ implementation of the operation using SVE ACLE:
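The original listing is not reproduced here; a minimal sketch (names our own) of the fixed-size variant, which processes exactly one vector register worth of interleaved re,im doubles, is:

  #include <arm_sve.h>

  // Complex multiply of one vector register of interleaved re,im doubles;
  // the array size must match the hardware vector length.
  void mul_acle_fixed(const double *a, const double *b, double *c) {
      svbool_t pg = svptrue_b64();
      svfloat64_t x = svld1(pg, a);
      svfloat64_t y = svld1(pg, b);
      svfloat64_t z = svcmla_x(pg, svdup_f64(0.), x, y, 0);   // FCMLA, rotation 0
      z = svcmla_x(pg, z, x, y, 90);                          // FCMLA, rotation 90
      svst1(pg, c, z);
  }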

The following listing shows the assembly generated by the compiler:
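The compiler output is not reproduced here; a sketch of the expected instruction sequence (register names are illustrative) is:

  ptrue  p0.d
  ld1d   {z0.d}, p0/z, [x0]             // load a
  ld1d   {z1.d}, p0/z, [x1]             // load b
  mov    z2.d, #0                       // zero the accumulator
  fcmla  z2.d, p0/m, z0.d, z1.d, #0     // FCMLA, rotation 0
  fcmla  z2.d, p0/m, z0.d, z1.d, #90    // FCMLA, rotation 90
  st1d   {z2.d}, p0, [x2]               // store c
  ret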

We conclude that for small arrays of the size of the SVE vector length it is possible to omit the loop overhead implied by the VLA programming model. We note that the resulting binaries will only operate correctly on SVE hardware with a matching vector length.

V SVE-enabling Grid

V-A Strategies for enabling SVE in Grid

As described in Section II-B, Grid adapts the data layout to the available vector length. Hence we have to set a vector length at compile time, despite SVE being vector-length agnostic.

To enable SVE optimizations within Grid we have two options. First, we can use Grid's generic implementation without any architecture-specific optimizations, relying on the auto-vectorization capabilities of the armclang compiler. Second, we can add an SVE-specific implementation to Grid's lower-level abstraction layer described in Section II-C. Current compiler heuristics are not good enough to generate SVE instructions for complex arithmetic, as shown in Section IV-B. Therefore we decided to use ACLE to enable hardware support for complex arithmetics.

The core of Grid’s abstraction layer is a template class that enables direct access to vector registers using intrinsic data types. These data types are declared as member data. An example is __m512d, which defines a vector of 8 double-precision floating-point numbers on AVX-512 architectures. This implementation scheme is feasible due to the compiler’s capability of auto-generating loads (stores) from (to) the intrinsic data types.

For SVE this kind of implementation is not feasible because sizeless data types cannot be used as class member data. Therefore we use ordinary arrays as class member data and implement SVE ACLE only for data processing within functions. Data processing was optimized for arrays of the size of the vector registers. An example of how this is implemented was shown in Section IV-D.

V-B Implementation Details

As proposed in the previous section, we do not follow the VLA programming model. Instead, our implementation is bound to the vector length of the target hardware. Therefore, the Grid binaries are not necessarily portable across different platforms. However, our SVE implementation of Grid remains portable at the cost of a full recompilation of the code base. This is not a problem since the compilation time of Grid is insignificant compared to the time needed to perform LQCD simulations.

To fix the SVE implementation of Grid to the target hardware we introduce the compile-time constant SVE_VECTOR_LENGTH, which represents the SVE vector length in bytes. At the time of writing this contribution SVE is enabled in Grid for 128-bit, 256-bit, and 512-bit vector implementations. We note that at present Grid only supports up to 512-bit architectures. Wider vectors (e.g., 1024 bit) are possible, but specialization of some of the lower-level functionality is necessary.

We introduce a templated C++ structure vec<T> with an (aligned) ordinary array v as member data. By definition the array is always of the size of the SVE vector length, irrespective of the data type in use. Specializations of the template parameter T support 64-bit, 32-bit, and 16-bit floating-point numbers as well as 32-bit integers. Grid does not support calculations using 16-bit floating-point numbers; this data type is used only for data compression upon data exchange over the communications network.

We exploit various features of the SVE ACLE [6], which we augment with a utility C++ templated structure acle<T>. It is used to simplify the mapping of C++ data types in Grid to data types supported by SVE ACLE. It is also used to provide various definitions for predication.
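A minimal sketch of how these two structures could look is shown below (member and typedef names are our own assumptions; the actual Grid implementation differs in detail):

  #include <arm_sve.h>

  // SVE vector length in bytes, fixed at compile time (example: 512-bit implementation).
  #define SVE_VECTOR_LENGTH 64

  template <typename T>
  struct vec {
      alignas(SVE_VECTOR_LENGTH) T v[SVE_VECTOR_LENGTH / sizeof(T)];  // one full vector register
  };

  // Utility mapping C++ data types to ACLE types and predicates (shown for double only).
  template <typename T> struct acle {};

  template <> struct acle<double> {
      typedef svfloat64_t vt;                                // ACLE vector type
      static inline svbool_t pg() { return svptrue_b64(); }  // whole-register predicate
  };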

V-C Code Example

Complex multiplication is implemented as a templated C++ function of the MultComplex structure. We use the vector-length specific implementation introduced in Section IV-D. The following listing shows how complex multiplication is implemented in the SVE-enabled version of Grid:
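The original listing is not reproduced here; the following sketch, building on the hypothetical vec<T> and acle<T> definitions above and specialized to double for brevity, illustrates the vector-length specific scheme of Section IV-D:

  struct MultComplex {
      // z = x * y for interleaved re,im data, one full vector register at a time
      // (double-precision variant; the actual Grid code is templated over T).
      inline void operator()(vec<double> &z, const vec<double> &x, const vec<double> &y) {
          svbool_t pg = acle<double>::pg();
          svfloat64_t a = svld1(pg, x.v);
          svfloat64_t b = svld1(pg, y.v);
          svfloat64_t r = svcmla_x(pg, svdup_f64(0.), a, b, 0);   // FCMLA, rotation 0
          r = svcmla_x(pg, r, a, b, 90);                          // FCMLA, rotation 90
          svst1(pg, z.v, r);
      }
  };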

V-D Implementation Verification

Grid implements about 100 ready-made tests and benchmarks. We have selected 40 representative tests and benchmarks for verification of the SVE-enabled version of Grid for different SVE vector lengths using the ARM clang 18.3 compiler and the ARM SVE instruction emulator ArmIE 18.1. The SVE emulator allows for functional verification of the SVE code generated by the compiler. It also allows for defining the SVE vector length as a command line parameter.

The majority of tests and benchmarks complete with success. However, some tests fail due to incorrect results for some choices of the SVE vector length and implementations of the predication. We attribute the failing tests to minor issues of the ARM SVE toolchain, which is still under development.

V-E Alternative Implementation of Complex Arithmetics

The silicon provider determines the SVE vector length and also the performance characteristics of the hardware. The performance signatures of the instructions might differ across different SVE platforms, and it is not guaranteed that the FCMLA instruction outperforms alternative implementations of complex arithmetics. Therefore, we have also implemented complex arithmetics based on instructions for real arithmetics, at the cost of a higher instruction count and less effective use of the SVE vector registers.

VI Related Work

Significant efforts have been made in the past by the LQCD community to provide domain-specific libraries optimized for specific architectures. One example is the QPhiX library [7] that was specifically designed for Intel’s Xeon Phi architecture. A more general approach targeting various x86 SIMD ISAs, but in particular AVX-512, is Grid [4]. Meanwhile exploratory studies have been performed to extend the portability of Grid to other types of architectures, including GPU-accelerated ones [8]. A much earlier effort targeting architectures comprising NVIDIA GPUs supporting CUDA resulted in the QUDA library [3], which has meanwhile been used for several generations of GPU-accelerated supercomputers.

Little work has been published so far on the evaluation of SVE for scientific computing applications, including LQCD. Some earlier work in this context was published at the 2017 edition of CLUSTER: in [9] the authors report on performance results for selected numerical kernels that were generated using a gem5 simulation setup.

VII Conclusion and Future Work

In this contribution we provided a brief introduction to the pertinent features of applications for LQCD simulations. Focusing on the main computational task, we showed how these applications could benefit from the ARM ISA extension SVE. By enabling the LQCD community code Grid for SVE, we could explore how well this new ISA can be exploited. The results are very promising. The source code is available [10].

At the time of writing this contribution, it is not yet possible to perform a reliable assessment of the performance of the SVE-enabled version of Grid due to the lack of processors supporting SVE and of performance simulators for such architectures.

Acknowledgment

We acknowledge the funding of the QPACE4 project provided by the Deutsche Forschungsgemeinschaft (DFG) in the framework of SFB/TRR-55. Furthermore, we acknowledge the support from the HPC tools team at ARM, in particular Ashok Bhat, Juan Gao, Assad Hashmi, and Will Lovett.

References

  • [1] K. G. Wilson, “Confinement of quarks,” Phys. Rev. D, vol. 10, pp. 2445–2459, 1974.
  • [2] R. G. Edwards and B. Joo, “The Chroma software system for lattice QCD,” Nucl. Phys. Proc. Suppl., vol. 140, p. 832, 2005.
  • [3] M. A. Clark, R. Babich, K. Barros, R. C. Brower, and C. Rebbi, “Solving Lattice QCD systems of equations using mixed precision solvers on GPUs,” Comput. Phys. Commun., vol. 181, pp. 1517–1528, 2010.
  • [4] P. Boyle, A. Yamaguchi, G. Cossu, and A. Portelli, “Grid: A next generation data parallel C++ QCD library,” arXiv:1512.03487, 2015.
  • [5] N. Stephens, S. Biles, M. Boettcher, J. Eapen, M. Eyole, G. Gabrielli, M. Horsnell, G. Magklis, A. Martinez, N. Premillieu, A. Reid, A. Rico, and P. Walker, “The ARM Scalable Vector Extension,” IEEE Micro, vol. 37, no. 2, pp. 26–39, 2017.
  • [6] ARM, “ARM C Language Extensions for SVE,” Tech. Rep., 2017. [Online]. Available: https://developer.arm.com/docs/100987/latest/arm-c-language-extensions-for-sve
  • [7] QPhiX library. [Online]. Available: https://github.com/JeffersonLab/qphix
  • [8] P. A. Boyle, M. A. Clark, C. DeTar, M. Lin, V. Rana, and A. V. Avilés-Casco, “Performance Portability Strategies for Grid C++ Expression Templates,” EPJ Web Conf., vol. 175, p. 09006, 2018.
  • [9] Y. Kodama, T. Odajima, M. Matsuda, M. Tsuji, J. Lee, and M. Sato, “Preliminary Performance Evaluation of Application Kernels Using ARM SVE with Multiple Vector Lengths,” in 2017 IEEE International Conference on Cluster Computing (CLUSTER), 2017, pp. 677–684.
  • [10] SVE-enabled Grid. [Online]. Available: https://github.com/nmeyer-ur/Grid/tree/feature/arm-sve