Efficient Realization of Givens Rotation through Algorithm-Architecture Co-design for Acceleration of QR Factorization

03/14/2018, by Farhad Merchant et al., Nanyang Technological University and RWTH Aachen University

We present an efficient realization of Generalized Givens Rotation (GGR) based QR factorization that achieves 3-100x better performance in terms of Gflops/watt over state-of-the-art realizations on multicores and General Purpose Graphics Processing Units (GPGPUs). GGR is an improvement over the classical Givens Rotation (GR) operation that can annihilate multiple elements of rows and columns of an input matrix simultaneously. GGR takes 33% fewer multiplications compared to GR. For custom implementation of GGR, we identify macro operations in GGR and realize them on a Reconfigurable Data-path (RDP) tightly coupled to the pipeline of a Processing Element (PE). In the PE, GGR attains a speed-up of 1.1x over the Modified Householder Transform (MHT) presented in the literature. For parallel realization of GGR, we use REDEFINE, a scalable massively parallel Coarse-grained Reconfigurable Architecture, and show that the speed-up attained is commensurate with the hardware resources in REDEFINE. GGR also outperforms General Matrix Multiplication (gemm) by 10 Gflops/watt, which is counter-intuitive.


1 Introduction

QR factorization/decomposition (QRF/QRD) is a prevalent operation encountered in several engineering and scientific applications ranging from Kalman Filter (KF) to computational finance [1][2]. QR factorization of a non-singular matrix A of size n x n is given by

A = QR    (1)

where Q is an orthogonal matrix and R is an upper triangular matrix. There are mainly three methods to compute QR factorization: 1) Householder Transform (HT), 2) Givens Rotation (GR), and 3) Modified Gram-Schmidt (MGS). MGS is used in embedded systems where numerical accuracy of the final solution is not critical, while HT is employed in High Performance Computing (HPC) applications since it is a numerically stable operation. GR is applied in application domains pertaining to embedded systems where numerical stability of the end solution is critical [3]. We sense here an opportunity to apply GR in domains beyond embedded systems. Specifically, we foresee an opportunity to generalize GR for simultaneous annihilation of multiple elements of an input matrix, where the annihilation regime spans columns as well as rows. The intent is to expose higher parallelism in classical GR through generalization and also to reduce the total number of computations in GR. Such a generalization is possible by combining several Givens sequences and performing the common computations required for updating the trailing matrix beforehand. Surprisingly, such an approach is not presented anywhere in the literature, and that has reduced the relevance of GR in several application domains. It has been emphasized in the literature that GR is more suitable for orthogonal decomposition of sparse matrices, while for orthogonal decomposition of dense matrices HT is preferred over GR [4][5]. For implementations on multicores and General Purpose Graphics Processing Units (GPGPUs), a library based approach is followed for Dense Linear Algebra (DLA) computations, where highly tuned packages based on specifications given in Basic Linear Algebra Subprograms (BLAS) and Linear Algebra Package (LAPACK) are developed [6]. Several realizations of QR factorization are discussed in section 2.3. Typically, the block QR factorization routine (xgeqrf, where x indicates double/single precision) that is part of LAPACK is realized as a series of BLAS calls. dgeqr2 and dgeqrf operations are shown in figure 1.

Fig. 1: dgeqr2 and dgeqrf Operations

In dgeqr2, the Double Precision Matrix-vector (dgemv) operation is dominant, while in dgeqrf, Double Precision Matrix Multiplication (dgemm) is dominant, as depicted in figure 1. dgemv can attain up to 10% of the theoretical peak in multicore platforms and 15-20% in GPGPUs, while dgemm can attain up to 40-45% in multicore platforms and 55-60% in GPGPUs [7]. Considering the low performance of multicores and GPGPUs for critical DLA computations, it could be a prudent approach to move away from the traditional BLAS/LAPACK based strategy in software and accelerate these computations on a customizable platform that can attain an order of magnitude higher performance than state-of-the-art realizations of these software packages. Special care has to be taken in designing an accelerator that is capable of attaining the desired performance while maintaining enough generality to also support other operations in the domain of DLA computations. Coarse-grained Reconfigurable Architectures (CGRAs) are a good candidate for the domain of DLA computations since they are capable of attaining the performance of Application Specific Integrated Circuits (ASICs) while retaining the flexibility of Field Programmable Gate Arrays (FPGAs) [8][9][10]. Recently, there have been several proposals in the literature for developing BLAS and LAPACK on customizable CGRA platforms through algorithm-architecture co-design, where macro operations in the routines pertaining to DLA computations are identified and realized on a Reconfigurable Data-path (RDP) that is tightly coupled to the processor pipeline [11][12]. In this paper, we focus on acceleration of GR based QR factorization, where classical GR is generalized to obtain Generalized Givens Rotation (GGR); GGR has 33% fewer multiplications compared to GR. Several macro operations in GGR are identified and realized on the RDP to achieve superior performance compared to the Modified Householder Transform (MHT) presented in [7]. Major contributions in this paper to achieve efficient realization of GR based QR factorization are as follows:

  • We improve upon the Column-wise Givens Rotation (CGR) presented in [13] and present GGR. While CGR is capable of simultaneous annihilation of multiple elements of a column of the input matrix, GGR can annihilate multiple elements of rows and columns simultaneously

  • Several macro operations in GGR are identified and implemented on an RDP that is tightly coupled to the pipeline of a Processing Element (PE), resulting in 81% of the theoretical peak performance of the PE. This implementation outperforms the Modified Householder Transform (MHT) based QR factorization (dgeqr2ht) implementation presented in [7] by 10% in the PE. GGR based QR factorization also outperforms dgemm in the PE by 10%, which is counter-intuitive

  • Arguably, moving away from BLAS for the realization of GGR based QR factorization attains 10% higher performance than the classical way of implementation, where Level-3 BLAS is used as the dominant operation in state-of-the-art software packages for multicores and GPGPUs. This claim is validated by several case studies on multicores and GPGPUs, where it is also shown that moving away from BLAS and LAPACK on these platforms does not yield performance improvement

  • For parallel realization in REDEFINE, we attach the PE in the REDEFINE framework, where 10% higher performance is attained over the dgeqr2ht implementation presented in [7]. We show that the sequential realization in the PE and the parallel realization of GGR based QR factorization in REDEFINE are scalable. Furthermore, it is shown that the speed-up of the parallel realization in REDEFINE over the sequential realization in the PE is commensurate with the hardware resources employed in REDEFINE, and that the speed-up asymptotically approaches the theoretical peak of the REDEFINE CGRA

For our implementations in the PE and REDEFINE, we have used the double precision Floating Point Unit (FPU) presented in [14] with the recommendations presented in [15]. The organization of the paper is as follows: in section 2, we discuss CGR, REDEFINE, and some of the FPGA, multicore, and GPGPU based realizations of QR factorization. Case studies on dgemm, dgeqr2, dgeqrf, dgeqr2ht, and dgeqrfht are presented in section 3. GGR and the implementation of GGR on multicores, GPGPUs, and the PE are discussed in section 4. Parallel realization of GGR in the REDEFINE CGRA is discussed in section 5. We conclude our work in section 6.

Abbreviations/Nomenclature:

Abbreviation/Name Expansion/Meaning
AVX Advanced Vector Extension
BLAS Basic Linear Algebra Subprograms
CE Compute Element
CFU Custom Function Unit
CGRA Coarse-grained Reconfigurable Architecture
CPI Cycles-per-Instruction
CUDA Compute Unified Device Architecture
EREW-PRAM Exclusive-read Exclusive-write Parallel Random Access Machine
DAG Directed Acyclic Graph
FMA Fused Multiply-Add
FPGA Field Programmable Gate Array
FPS Floating Point Sequencer
FPU Floating Point Unit
GPGPU General Purpose Graphics Processing Unit
GR Givens Rotation
GGR Generalized Givens Rotation
HT Householder Transform
ICC Intel C Compiler
IFORT Intel Fortran Compiler
ILP Instruction Level Parallelism
KF Kalman Filter
LAPACK Linear Algebra Package
MAGMA Matrix Algebra on GPU and Multicore Architectures
MGS Modified Gram-Schmidt
MHT Modified Householder Transform
NoC Network-on-Chip
PE Processing Element
PLASMA Parallel Linear Algebra Software for Multicore Architectures
QUARK Queuing and Runtime for Kernels
RDP Reconfigurable Data-path
XGEMV/xgemv Single/Double Precision General Matrix-vector Multiplication
XGEMM/xgemm Single/Double Precision General Matrix Multiplication
XGEQR2/xgeqr2 Single/Double Precision QR Factorization based on Householder Transform (with XGEMV)
XGEQRF/xgeqrf Blocked Single/Double Precision QR Factorization based on Householder Transform (with XGEMM)
XGEQR2HT/xgeqr2ht Single/Double Precision QR Factorization based on Modified Householder Transform
XGEQR2GGR/ xgeqr2ggr Single/Double Precision QR Factorization based on Generalized Givens Rotation
XGEQRFHT/xgeqrfht Blocked Single/Double Precision QR Factorization based on Modified Householder Transform (with XGEMM)
XGEQRFGGR/ xgeqrfggr Blocked Single/Double Precision QR Factorization based on Generalized Givens Rotation (with XGEMM)
PACKAGE_ROUTINE/ package_routine Naming convention followed for different routines pertaining to different packages. E.g., BLAS_DGEMM/ blas_dgemm is a Double Precision General Matrix Multiplication Routine in BLAS

2 Background and Related Work

CGR presented in [13] is discussed in section 2.1 and the REDEFINE CGRA is discussed in section 2.2. A detailed review of earlier realizations of QR factorization is presented in section 2.3.

2.1 Givens Rotation based QR Factorization

For an input matrix A, applying several Givens sequences simultaneously yields the matrix shown in equation 2.

Fig. 2: One Iteration of Column-wise Givens Rotation
(2)

The corresponding Directed Acyclic Graphs (DAGs) for CGR, covering annihilation of the sub-diagonal elements of the first column and the update of the second column of the matrix, are shown in figure 2. For an input matrix of size n x n, classical GR takes n(n-1)/2 Givens sequences while CGR takes n-1 sequences. Furthermore, if the number of multiplications in GR is M_GR and the number of multiplications in CGR is M_CGR, then

(3)
(4)

Taking the ratio of equations 3 and 4,

(5)

From equation 5, as n grows, M_CGR/M_GR approaches 2/3. As we increase the size of the matrix, the number of multiplications in CGR asymptotically approaches 2/3 times the number of multiplications in GR. Implementation details of CGR on a systolic array and in REDEFINE can be found in [13].
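For reference, the sketch below (in C, with illustrative names) annihilates the sub-diagonal elements of the first column of a small matrix using the classical bottom-up sequence of two-row Givens rotations; CGR and GGR fuse such a sequence and perform the common computations once, which is where the reduction in multiplications comes from.

#include <math.h>
#include <stdio.h>

#define M 4
#define N 4

/* Classical GR: annihilate A[1..M-1][0] with a bottom-up sequence of
 * Givens rotations, each acting on a pair of adjacent rows. */
static void givens_annihilate_col0(double A[M][N]) {
    for (int i = M - 1; i > 0; i--) {
        double a = A[i - 1][0], b = A[i][0];
        double r = sqrt(a * a + b * b);
        if (r == 0.0) continue;
        double c = a / r, s = b / r;      /* rotation coefficients       */
        for (int j = 0; j < N; j++) {     /* update the two touched rows */
            double u = A[i - 1][j], v = A[i][j];
            A[i - 1][j] =  c * u + s * v;
            A[i][j]     = -s * u + c * v;
        }
    }
}

int main(void) {
    double A[M][N] = {{4, 1, 2, 3}, {3, 2, 1, 0}, {2, 3, 0, 1}, {1, 0, 3, 2}};
    givens_annihilate_col0(A);          /* column 0 becomes (||a||, 0, 0, 0)^T */
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < N; j++) printf("%8.4f ", A[i][j]);
        printf("\n");
    }
    return 0;
}

Each rotation here touches two full rows, and successive rotations repeat work on the same rows; fusing the sequence is what removes the redundant multiplications counted in equations 3-5.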

Fig. 3: REDEFINE Framework

2.2 REDEFINE CGRA

REDEFINE CGRA is a customizable massively parallel Multiprocessor System-on-Chip (MPSoC) where several Tiles are connected through a Network-on-Chip (NoC) [16]. Each Tile consists of a Compute Element (CE) and a Router. CEs in REDEFINE can be enhanced to support several application domains, like signal processing and Dense Linear Algebra (DLA) computations, by attaching domain specific Custom Function Units (CFUs) to the CEs [17]. The REDEFINE framework is shown in figure 3.

(a) Performance Comparison of PE with Other Platforms for dgemm and dgeqr2ht
(b) Performance of PE In Terms of Theoretical Peak Performance in PE for dgemm
Fig. 4: Performance and Performance Comparison of PE where dgeqr2ht is Modified Householder Transform based QR Factorization

A Reconfigurable Data-path (RDP) is tightly coupled to a Processing Element (PE) that serves as a CFU for REDEFINE, as shown in figure 3. Performance of the PE in dgemm and MHT based QR factorization (dgeqr2ht) is shown in figure 4(a) [7]. Performance of the PE over several Architectural Enhancements (AEs) is shown in figure 4(b). It can be observed in figure 4(a) that the PE attains 3-100x better performance in dgemm and dgeqr2ht than other platforms, while the PE with tightly coupled RDP is capable of attaining 74% of the theoretical peak performance of the PE in dgemm, as shown in figure 4(b). Performance in dgemm and dgeqr2ht is attained through algorithm-architecture co-design, where macro operations in dgemm and dgeqr2ht are identified and realized on the RDP. We apply a similar technique in this exposition to GR: we first present GGR and then identify macro operations in GGR that are realized on the RDP. The GGR implementation (dgeqr2ggr) outperforms dgeqr2ht and dgemm. Further details of the dgemm and dgeqr2ht realizations can be found in [17], [18], [19], and [7].

2.3 Related Work

Due to its wide range of applications in the embedded systems domain, GR has been studied extensively in the literature, specifically for implementation purposes, since it was first proposed in [20]. An alternative ordering of Givens sequences was presented in [21]. The alternative ordering in [21] is amenable to parallel implementation of GR, but it does not focus on fusing several Givens sequences to annihilate multiple elements. For an n x n input matrix, the alternative ordering presented in [21] can annihilate at most floor(n/2) elements in parallel by executing disjoint Givens sequences simultaneously. Pipelined Givens sequences for computing the QR decomposition are presented in [22], where it is proposed to execute Givens sequences in a pipelined fashion to update the pair of rows partially updated by the previous Givens sequences. An analysis of this strategy is presented for the Exclusive-read Exclusive-write Parallel Random Access Machine (EREW-PRAM), showing that the pipelined strategy is twice as fast as classical GR. A greedy Givens algorithm is presented in [23] that executes compound disjoint Givens sequences in parallel, assuming unlimited parallelism. A high speed tournament GR and its VLSI implementation are presented in [3], where a significant improvement is reported in ASIC over a triangular systolic array. The scheduling scheme presented in [3] is similar to the one presented in [21], where disjoint Givens sequences are applied to compute the QR decomposition. An FPGA implementation of GR is presented in [24], while an ASIC implementation of square-root-free GR is presented in [25]. A novel technique to avoid underflow/overflow in the computation of QR decomposition using classical GR is presented in [26], resulting in a numerically stable realization of GR in LAPACK. A two-dimensional systolic array implementation of GR is presented in [27], where classical GR is implemented on a two-dimensional systolic array in which the diagonal elements of the array perform complex operations like square root and division while the rest of the array performs the matrix update. Restructuring of the tridiagonal and bidiagonal algorithms for QR decomposition is presented in [28]. The restructuring strategy presented in [28] has several advantages: it is capable of exploiting vector instructions in modern architectures, it reduces memory operations, and the matrix updates are in the form of Level-3 BLAS and are thus capable of exploiting the cache hierarchy through data reuse, unlike classical GR where the updates are Level-2 BLAS. Although the scheme presented in [28] re-arranges memory bound operations (Level-2 BLAS) in classical GR into compute bound operations (Level-3 BLAS), it does not reduce computations, unlike GGR. In general, works related to GR in the literature focus on different scheduling schemes for parallelism and on exploitation of architectural features of the targeted platform. In GGR, the total work is reduced compared to classical GR while the formulation also remains architecture and platform friendly. For implementation of GR, there has been no work in the literature where macro operations in the routine are identified and carefully realized for performance.

3 Case Studies

For our experiments on multicores and GPGPUs, we use the highly tuned software packages PLASMA and MAGMA. The software stacks of these packages are shown in figure 5.

(a) PLASMA Software Stack
(b) MAGMA Software Stack
Fig. 5: PLASMA and MAGMA Software Stacks

Performance of PLASMA and MAGMA depends on the efficiency of the underlying BLAS realization as well as the realization of the scheduling schemes for multicores and GPGPUs. For implementation of GGR in PLASMA and MAGMA for multicores and GPGPUs, we add routines to BLAS and LAPACK. Performance in terms of theoretical peak performance in dgemm, dgeqr2, and dgeqrf is depicted in figure 6(a), and performance in terms of Gflops/watt for these routines is depicted in figure 6(b).

(a) Performance In Terms of Theoretical Peak Performance of Underlying Platform in dgemm, dgeqr2, and dgeqrf in LAPACK, PLASMA, and MAGMA
(b) Performance In Terms of Gflops/watt in dgemm, dgeqr2, and dgeqrf in LAPACK, PLASMA, and MAGMA
Fig. 6: Performance in dgemm, dgeqr2, and dgeqrf in LAPACK, PLASMA, and MAGMA

3.1 dgemm

dgemm is a Level-3 BLAS routine that has three nested loops and a time complexity of O(n^3) for matrices of size n x n. Performance of dgemm in LAPACK in terms of the theoretical peak of the underlying platform is shown in figure 6(a), and in terms of Gflops/watt in MAGMA in figure 6(b). It can be observed in figure 6(a) that the performance attained by dgemm on the Intel Core i7 and the Nvidia Tesla C2050 is hardly 25% and 57% of the theoretical peak, respectively. In terms of Gflops/watt, it is 1.22 on the Nvidia Tesla C2050. Due to the trivial nature of the dgemm algorithm, we do not reproduce it here; the standard dgemm algorithm can be found in [29].
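For reference, the textbook triple-loop form of dgemm (not the tuned kernel shipped with BLAS) is sketched below in C; the i-k-j loop ordering keeps the innermost accesses to B and C contiguous in a row-major layout.

/* Naive triple-loop dgemm: C = C + A*B for n x n row-major matrices.
 * Illustrative only; tuned BLAS kernels block these loops for the cache
 * hierarchy and use vector/FMA instructions to approach peak. */
void dgemm_naive(int n, const double *A, const double *B, double *C) {
    for (int i = 0; i < n; i++)
        for (int k = 0; k < n; k++) {
            double aik = A[i * n + k];
            for (int j = 0; j < n; j++)
                C[i * n + j] += aik * B[k * n + j];
        }
}

Tuned libraries close most of the gap between this naive form and the hardware peak through blocking, vectorization, and prefetching, yet still reach only the 25% and 57% quoted above on the platforms considered.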

3.2 dgeqr2

Allocate memory for input/output matrices and vectors
for i = 1 to n do
     Compute Householder vector v_i for column i
     Compute P_i = I - 2*v_i*v_i^T, where v_i^T*v_i = 1
     Update trailing matrix using dgemv
end for
Algorithm 1 dgeqr2 (Pseudo code)

The pseudo code of dgeqr2 is described in algorithm 1. It can be observed that the pseudo code contains three steps: 1) computation of a Householder vector v_i for each column, 2) computation of the Householder matrix P_i = I - 2*v_i*v_i^T, and 3) update of the trailing matrix using P_i, where I is an identity matrix. For our experiments, we use the Intel C Compiler (ICC) and the Intel Fortran Compiler (IFORT). We also use different compiler switches to improve the performance of dgeqr2 in LAPACK on Intel micro-architectures. On an Intel Core i7 4th Gen machine, which is a Haswell micro-architecture, the attained CPI saturates [7]. When a compiler switch that enables Advanced Vector Extensions (AVX) instructions is used, the attained Cycles-per-Instruction (CPI) increases. This behavior is due to AVX instructions that use Fused Multiply-Add (FMA). Due to this fact, the CPI reported by VTune™ cannot be considered as a measure of performance for the algorithms, and hence we accordingly double the instruction count reported by Intel VTune™.
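As a concrete illustration of the three steps above, a minimal C sketch of one unblocked Householder step is given below. It is not the LAPACK dgeqr2/dlarfg code; v and w are caller-provided workspaces of length m and n, and the loop forming w is the dgemv-like, memory bound part that dominates dgeqr2.

#include <math.h>

/* Apply one Householder step to column k of an m x n row-major matrix A
 * (leading dimension lda): form v so that (I - 2*v*v^T/(v^T*v)) maps
 * column k to a multiple of e_k, then update the trailing matrix. */
void householder_step(int m, int n, double *A, int lda, int k,
                      double *v, double *w) {
    double norm = 0.0;
    for (int i = k; i < m; i++) norm += A[i * lda + k] * A[i * lda + k];
    norm = sqrt(norm);
    double alpha = (A[k * lda + k] >= 0.0) ? -norm : norm;  /* sign choice for stability */
    for (int i = k; i < m; i++) v[i] = A[i * lda + k];
    v[k] -= alpha;                                          /* v = x - alpha*e_k      */
    double vtv = 0.0;
    for (int i = k; i < m; i++) vtv += v[i] * v[i];
    if (vtv == 0.0) return;                                 /* column already reduced */
    for (int j = k; j < n; j++) {                           /* w = A^T*v (dgemv-like) */
        double wj = 0.0;
        for (int i = k; i < m; i++) wj += v[i] * A[i * lda + j];
        w[j] = wj;
    }
    for (int i = k; i < m; i++)                             /* rank-1 trailing update */
        for (int j = k; j < n; j++)
            A[i * lda + j] -= (2.0 / vtv) * v[i] * w[j];
}

Calling this routine for k = 0, 1, ..., n-1 reduces A to upper triangular form, which is exactly the loop structure of algorithm 1.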

In case of GPGPUs, dgeqr2 in MAGMA is able to achieve up to 16 Gflops on the Tesla C2050, which is 3.1% of the theoretical peak performance of the Tesla C2050, while performance in terms of Gflops/watt is as low as 0.04 Gflops/watt.

3.3 dgeqrf

1: Allocate memory for input/output matrices
2: for i = 1 to number of block columns do
3:      Compute Householder vectors for block column i
4:      Compute the block reflector matrix from the Householder vectors
5:      Update trailing matrix using dgemm
6: end for
Algorithm 2 dgeqrf (Pseudo Code)

Pseudo code for the dgeqrf routine is shown in algorithm 2. In terms of computations, there is no difference between algorithms 1 and 2. In a single core implementation, dgeqrf is observed to be 2-3x faster than dgeqr2. The major source of efficiency in dgeqrf is efficient exploitation of the processor memory hierarchy and of the dgemm routine, which is a compute bound operation [30][31]. On the Nvidia Tesla C2050, dgeqrf in MAGMA is able to achieve up to 265 Gflops, which is 51.4% of the theoretical peak performance of the Nvidia Tesla C2050 as shown in figure 6(a), and which is 90.5% of the performance attained by dgemm. For dgeqr2 in MAGMA, the performance attained in terms of Gflops/watt is as low as 0.05 Gflops/watt, while for dgemm and dgeqrf it is 1.23 Gflops/watt and 1.09 Gflops/watt respectively on the Nvidia Tesla C2050, as shown in figure 6(b). In case of dgeqrf in PLASMA, the performance attained is 0.39 Gflops/watt while running dgeqrf on four cores.

3.4 dgeqr2ht

1: Allocate memory for input/output matrices
2: for i = 1 to number of block columns do
3:      Compute Householder vectors for block column i
4:      Compute the fused update of the trailing matrix directly from the Householder vectors
5: end for
Algorithm 3 dgeqr2ht (Pseudo Code)

Pseudo code for dgeqr2ht is shown in algorithm 3. A clear difference between dgeqr2 in algorithm 1, dgeqrf in algorithm 2, and dgeqr2ht in algorithm 3 lies in the update of the trailing matrix. dgeqr2 uses the dgemv operation, which is a memory bound operation, and dgeqrf uses the dgemm operation, which is a compute bound operation, while dgeqr2ht uses an operation that is denser in terms of computations, resulting in a lower value of a parallelism metric obtained through DAG based analysis of the routines dgeqr2, dgeqrf, and dgeqr2ht. In case of no change in the computations of the improved routine after re-arrangement of the computations (in this case, fusing of the loops), this metric translates into the ratio of the number of levels in the DAG of the improved routine to the number of levels in the DAG of the classical routine.
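Written out explicitly (the symbol below is ours; the original notation is not reproduced here), the metric reads

W = L_improved / L_classical

where L denotes the number of levels in the DAG of the respective routine. A lower W means a shorter critical path for the same amount of computation, which is why the fused update of dgeqr2ht exposes more parallelism than the dgemv and dgemm based updates of dgeqr2 and dgeqrf.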

(a) Performance of dgeqr2ht and dgeqrf in PE In Terms of Theoretical Peak Performance of PE, also Normalized to the Performance Attained by dgemm in PE
(b) Performance of dgeqr2ht and dgeqrf in PE In Terms of Gflops/watt
Fig. 7: Performance of dgeqr2ht in PE

The implementation of dgeqr2ht clearly outperforms the implementation of dgeqrf in the PE, as shown in figure 7(a). In figure 7(a), the performance is shown in terms of the percentage of the theoretical peak performance of the PE and also in terms of the percentage of the theoretical peak performance normalized to the performance attained by dgemm in the PE. It can be observed in figure 7(a) that the performance attained by dgeqr2ht is 99.3% of the performance attained by dgemm in the PE. Furthermore, it can be observed in figure 7(b) that the performance achieved in terms of Gflops/watt in the PE is 35 Gflops/watt, compared to 25 Gflops/watt attained by dgeqrf. In multicores and GPGPUs, such fusing does not translate into improvement due to limitations of the platforms. Further details of the dgeqr2ht implementation can be found in [7]. Based on the case studies on dgemm, dgeqr2, dgeqrf, and dgeqr2ht, we make the following observations:

  • dgeqrf in highly tuned software packages like PLASMA and MAGMA attains 16% and 51% of the theoretical peak respectively. This leaves further scope for improvement in these routines through careful analysis

  • Typically, the dgeqrf routine, which is built around a compute bound operation like dgemm, achieves 80-85% of the theoretical peak performance attained by dgemm in multicores and GPGPUs

  • dgeqr2ht attains performance similar to dgeqrf in multicores and GPGPUs, while it clearly outperforms dgeqrf in the PE

We further propose a generalization of the CGR presented in [13] and derive GGR, which can outperform dgeqr2ht in the PE and in REDEFINE.

4 Generalized Givens Rotation and Implementation

From equation 1, for the matrix A, annihilation of the element in the last row and first column, i.e., the element at position (n,1), would require the application of one Givens sequence

(6)

where A^(k) denotes a matrix that has zero elements introduced in the lower triangle and has undergone k updates. In general, the Givens matrix is given by equation 7:

G(i, j) = I + (c - 1)(e_i e_i^T + e_j e_j^T) + s (e_i e_j^T - e_j e_i^T)    (7)

where c = cos(theta) and s = sin(theta) with c^2 + s^2 = 1, e_i denotes the i-th standard basis vector, and c and s are chosen such that the target element is annihilated. It takes n(n-1)/2 Givens sequences to annihilate the n(n-1)/2 elements in the lower triangle of the matrix A. There is a possibility to apply multiple Givens sequences together to annihilate multiple elements in a column of the matrix. Thus, extending equation 6 to annihilate 2 elements in the first column of the matrix,

(8)

Extending equation 8 further to annihilate all the elements below the diagonal in the first column of the input matrix,

(9)

The formulation in equation 9 can annihilate all the elements below the diagonal in the first column of the input matrix A. Extending the annihilation further to the second column results in a matrix in which the sub-diagonal elements of both the first and the second column are zero, as given by equation 10.

(10)

To annihilate all the elements in the lower triangle of the input matrix A, it takes n-1 such sequences, and the Givens matrix shrinks by one row and one column with each column annihilation. Further generalizing equation 10,

Fig. 8: CGR and GGR
(11)

Equation 11 represents GGR in equation form.

Allocate memory for input/output matrices and vectors
for i = 1 to n do
     Compute the 2-norm of the column vector A(i:m, i)
     Update row i
     Update rows i+1 to m
end for
Algorithm 4 Generalized Givens Rotation (Pseudo code)

A pictorial view of a matrix comparing CGR with GGR is shown in figure 8, and the GGR pseudo code is given in algorithm 4. It can be observed in figure 8 that CGR operates column-wise and takes n-1 iterations in total to upper triangularize a matrix of size n x n, while GGR operates column-wise as well as row-wise and can upper triangularize the matrix in a single iteration. It can be observed in algorithm 4 that the update of the first row and the update of the remaining rows can be executed in parallel. Furthermore, there also exists parallelism across the outer loop iterations in GGR. Theoretically, classical GR takes n(n-1)/2 iterations to upper triangularize an input matrix of size n x n, and CGR takes n-1 iterations, while GGR can upper triangularize the matrix in a single iteration.

However, in a practical scenario, it is not possible to accommodate large matrices in the registers or the Level 1 (L1) cache memory. Hence, sophisticated matrix partitioning schemes are required to efficiently exploit the memory hierarchy of multicores and GPGPUs.
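As a small numerical sketch (names illustrative, not the paper's code), the fragment below computes only the Givens Generation and first-row update of algorithm 4 for the first column. It relies on the fact that any orthogonal transform mapping the first column a to ||a||*e_1 has a^T/||a|| as its first row, so the updated first row is a scaled inner product of the first column with every column; the determinant-style updates of the remaining rows are omitted.

#include <math.h>
#include <stdio.h>

#define M 4
#define N 4

/* GG and first-row update for column 0: the new (0,0) entry is the
 * 2-norm of the first column (square root of an inner product) and the
 * remaining entries of the new first row are inner products of the
 * first column with the other columns, divided by that norm. */
static void ggr_first_row(const double A[M][N], double row0[N]) {
    double nrm = 0.0;
    for (int i = 0; i < M; i++) nrm += A[i][0] * A[i][0];
    nrm = sqrt(nrm);
    row0[0] = nrm;
    for (int j = 1; j < N; j++) {
        double dot = 0.0;
        for (int i = 0; i < M; i++) dot += A[i][0] * A[i][j];
        row0[j] = dot / nrm;
    }
}

int main(void) {
    double A[M][N] = {{4, 1, 2, 3}, {3, 2, 1, 0}, {2, 3, 0, 1}, {1, 0, 3, 2}};
    double row0[N];
    ggr_first_row(A, row0);
    for (int j = 0; j < N; j++) printf("%8.4f ", row0[j]);
    printf("\n");
    return 0;
}

For the same example matrix, this reproduces the first row obtained by the explicit Givens sweep sketched in section 2.1, while replacing the sequence of rotations by one norm computation and a handful of inner products.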

4.1 GGR in Multicore and GPGPU

For multicore realization of GGR, we use the PLASMA framework, and for GPGPU realization, we use the MAGMA framework, as depicted in figure 5.

4.1.1 GGR in PLASMA

To implement GGR on multicore architectures, we first implement the dgeqr2ggr routine in LAPACK and then use that routine in PLASMA. The GGR implementation in LAPACK is shown as pseudo code in algorithm 5. It can be observed in algorithm 5 that the most computationally intensive part of GGR is the update function. In our implementation, the update function becomes part of BLAS, while in LAPACK we implement the lapack_dgeqr2ggr function, a wrapper that calls the update function once per column of the input matrix, as shown in algorithm 5.

Allocate memory for input/output matrices and vectors
for i = 1 to N do
     update(L, A(i,i), A, LDA, I, N, M, Tau(i), Beta)
end for
update()
Initialize the k, l, and dot vectors to zero
Calculate the 2-norm of the column vector
Calculate the k, l, and dot vectors
Update row i of the matrix
Update rows i+1 to M using the k, l, and dot vectors
Algorithm 5 lapack_dgeqr2ggr (GGR in LAPACK) (Pseudo code)

Performance comparison of dgeqr2ggr, dgeqrfggr, dgeqr2, dgeqrf, dgeqr2ht, and dgeqrfht in LAPACK and PLASMA is shown in figure 9. It can be observed in figure 9 that despite more computations in dgeqr2ggr, the performance of dgeqr2ggr is comparable to dgeqr2, and the performance of dgeqrfggr is comparable to dgeqrf in LAPACK and PLASMA. We compare the run-time of the different routines normalized to dgemm performance in the respective software packages, since the total number of computations in HT, MHT, and GGR differ. Furthermore, in our implementation of GGR in PLASMA, we use dgemm for updating the trailing matrix.

4.1.2 GGR in MAGMA

For the implementation of magma_dgeqr2ggr, we insert routines into MAGMA_BLAS. Pseudo code for the inserted routines is shown in algorithm 6.

dnrm2<<<grid, threads, 0, queue->cuda_stream()>>>(n, m, dC, lddc, dv, ddot)
klvec<<<grid, threads, 0, queue->cuda_stream()>>>(n, m, ddot, lddc, dv, dk, dl)
dtmup<<<grid, threads, 0, queue->cuda_stream()>>>(n, m, dC, lddc, ddot, dv, dk, dl)
dnrm2()
for each thread block do
     for each assigned row of the column do
          accumulate the partial sums of squares for the 2-norm
          if (condition on the thread/row index) then
               (reduction step)
          else if (condition on the thread/row index) then
               (reduction step)
          end if
     end for
end for
klvec()
if (boundary condition on row) then
     dl[row] = 0
     dk[row] = 1/dot[0]
else if (condition on row) then
     (compute dk[row] from the partial norms)
     (compute dl[row] from the partial norms)
else if (condition on row) then
     (compute dk[row] from the partial norms)
     (compute dl[row] from the partial norms)
end if
dtmup()
tx = threadIdx.x
for each trailing matrix element assigned to thread tx do
     (update the element using dk, dl, dv, and ddot)
     (store the updated element back to dC)
end for
Algorithm 6 magma_dgeqr2ggr (GGR in MAGMA) (Pseudo code)

The implementation consists of three functions: dnrm2, which computes the 2-norm of the column vector; klvec, which computes the k and l vectors; and dtmup, which updates the trailing matrix. It can be observed in algorithm 6 that the most computationally intensive part of the routine is the dtmup function. Two routines are implemented: 1) dgeqr2ggr, where the trailing matrix is updated using the method shown in equation 2, and 2) dgeqrfggr, where the trailing matrix is updated using dgemm. Performance of dgeqr2ggr and dgeqrfggr is shown in figure 9. It can be observed in figure 9 that the performance of dgeqr2ggr is similar to the performance of dgeqr2 in MAGMA, while the performance of dgeqrfggr is similar to the performance of dgeqrf in MAGMA. Despite the abundant parallelism available in dgeqr2ggr, GPGPUs are not capable of exploiting this parallelism due to serialization of the routine, while in dgeqrfggr, GPGPUs perform similar to dgeqrf due to the dominance of dgemm in the routine.

Fig. 9: Performance of GGR in Different Packages and Platforms

4.2 GGR in PE

For the implementation of dgeqr2ggr and dgeqrfggr in the PE, we use a method similar to the one presented in [7]. We perform DAG based analysis of GGR and identify macro operations in the DAGs. These macro operations are then realized on the RDP that is tightly coupled to the PE, as shown in figure 10. The PE consists of two modules: 1) a Floating Point Sequencer (FPS), and 2) a Load-Store CFU.

Fig. 10: Processing Element with RDP
Fig. 11: Implementation of dgeqr2, dgeqrf, dgeqrfht, dgeqr2ht, dgeqr2ggr, and dgeqrfggr in PE
Fig. 12: Different Configurations in RDP to Support Implementation of dgeqr2ggr

The FPS performs double precision floating point computations, while the Load-Store CFU is responsible for loading and storing data in the registers and Local Memory (LM) to/from the Global Memory (GM). Operation of the PE can be described by the following steps:

  • Send a load request to GM for input matrix and store input matrix elements in LM

  • Move input matrix elements to registers in the FPS

  • Perform computations and store the results back to LM

  • Write results back to GM if the computed elements are the final output or if they are not to be used immediately

Up to 90% overlap of computation and communication is attained in the PE for dgemm, as presented in [17]. To identify macro operations, consider the example matrix shown in figure 2 and equation 2. It can be observed in figure 2 that computing the Givens Generation (GG) is a square root of an inner product, while computing the update of the first row is similar to an inner product. Furthermore, computing the updates of rows 2 to m involves determinants of 2x2 matrices. We map these row updates onto the RDP as shown in figure 12. It can be observed in figure 12 that the RDP can be re-morphed to perform scalar multiplication (MUL/DOT1), inner product of 2-element vectors (DOT2), inner product of 3-element vectors (DOT3), determinant of a 2x2 matrix (DET2), and inner product of 4-element vectors (DOT4). In our implementation, we ensure that the reconfiguration of the RDP is minimal, to improve the energy efficiency of the PE. We introduce custom instructions DOT4, DOT3, DOT2, DOT1, and DET2 in the PE to implement these macro operations on the RDP, along with an instruction that can reconfigure the RDP. Different routines are realized using these instructions as shown in figure 11. In figure 11, it can be observed that, irrespective of which QR factorization routine is implemented in the PE, the communication pattern remains consistent across the routines. In our implementation of dgeqr2ggr, we ensure that the RDP is configured to perform two DET2 instructions in parallel, which maximizes resource utilization of the RDP. In our implementation of dgeqr2ggr, the instructions in the two functions UPDATE_ROW1 and UPDATE are merged such that the pipeline stalls in the RDP are minimized. Such an approach is not possible in the implementation of dgeqrfggr or dgeqrfht, since the trailing matrix is updated using the dgemm routine, and until the matrix required for the trailing matrix update is computed, the trailing matrix update cannot proceed.
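For illustration, scalar reference semantics of these macro operations are sketched below in C (names follow the custom instructions above; on the RDP they are realized as data-path configurations rather than function calls, and the operand grouping here is only indicative).

#include <math.h>

static inline double dot1(double a, double b) { return a * b; }              /* MUL/DOT1 */

static inline double dot2(const double x[2], const double y[2]) {            /* DOT2 */
    return x[0] * y[0] + x[1] * y[1];
}

static inline double dot3(const double x[3], const double y[3]) {            /* DOT3 */
    return x[0] * y[0] + x[1] * y[1] + x[2] * y[2];
}

static inline double dot4(const double x[4], const double y[4]) {            /* DOT4 */
    return x[0] * y[0] + x[1] * y[1] + x[2] * y[2] + x[3] * y[3];
}

static inline double det2(double a, double b, double c, double d) {          /* DET2 */
    return a * d - b * c;   /* determinant of the 2x2 matrix [[a, b], [c, d]] */
}

/* Example: Givens Generation (GG) as the square root of an inner product. */
static inline double givens_gen(const double x[2]) { return sqrt(dot2(x, x)); }

Configuring the RDP once for a pattern such as two parallel DET2 operations and streaming operands through it is what keeps reconfiguration overhead, and hence energy, low in the dgeqr2ggr implementation described above.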

(a) Speed-up in dgeqr2ggr over dgeqr2, dgeqrf, dgeqrfht, and dgeqr2ht in PE
(b) Performance of dgeqr2ggr, dgeqrfggr, dgeqr2, dgeqrf, dgeqrfht, and dgeqr2ht in PE In Terms of Theoretical Peak Performance Normalized to Performance of dgemm in the PE
(c) Performance of dgeqr2ggr, dgeqr2, dgeqrf, dgeqrfht, and dgeqr2ht in PE In Terms of Gflops/watt
(d) Performance Improvement of dgeqr2ggr, dgeqr2, dgeqrf, dgeqrfht, and dgeqr2ht in PE over Different Platforms
Fig. 13: Performance of dgeqr2ggr in PE

The speed-up of dgeqr2ggr over different routines is shown in figure 13(a). It can be observed in figure 13(a) that the speed-up attained by dgeqr2ggr over the other routines ranges between 1.1x and 2.25x. Performance in terms of the percentage of peak performance normalized to the performance attained by dgemm for the different QR factorization routines is shown in figure 13(b). A counter-intuitive observation that can be made here is that dgeqr2ggr can achieve performance higher than the performance attained by dgemm in the PE, while the dgeqr2ht performance reported in [7] is the same as the performance attained by dgemm. dgeqr2ggr can achieve up to 82% of the theoretical peak of the PE. dgeqr2ggr attains 10% higher Gflops/watt over dgeqr2ht, which is the best performing routine reported in [7] and [19]. Furthermore, the improvement of dgeqr2ggr over other platforms is shown in figure 13(d). It can be observed in figure 13(d) that the performance improvement in the PE for dgeqr2ggr over dgeqr2ggr on off-the-shelf platforms ranges from 3-100x.

5 Parallel Implementation of GGR in REDEFINE

The experimental setup for implementation of dgeqrfggr and dgeqr2ggr is shown in figure 14, where the PE that outperforms other off-the-shelf platforms for DLA computations is attached as a CFU to the Routers in the REDEFINE CEs.

Fig. 14: Experimental Setup in REDEFINE with PE as a CFU
Fig. 15: Matrix Partitioning and Scheduling on REDEFINE Tile Array
(a) Speed-up in dgeqr2ggr, dgeqrfggr, dgeqr2, dgeqrf, dgeqr2ht, dgeqrfht Routines in REDEFINE over Their Sequential Realization in PE
(b) Performance of dgeqr2ggr, dgeqrfggr, dgeqr2, dgeqrf, dgeqrfht, and dgeqr2ht in REDEFINE In Terms of Theoretical Peak Performance of REDEFINE
Fig. 16: Performance of dgeqr2ggr in REDEFINE

The implementation of dgeqrfggr and dgeqr2ggr in REDEFINE requires an efficient partitioning and mapping scheme that can sustain a computation to communication ratio commensurate with the hardware resources of the platform, and that also ensures scalability. We follow a strategy similar to the one presented in [7] for realization of dgeqrfggr and dgeqr2ggr in REDEFINE and propose a general scheme that is applicable to input matrices of any size. For our experiments, we consider different configurations of REDEFINE consisting of Tile arrays of different sizes. Assuming that the input matrix is of size N x N and the REDEFINE Tile array is of size T x T, the input matrix is partitioned into blocks of sub-matrices that are distributed across the Tiles. Since the objective of our experiment is to show the scalability of our solution, we choose the matrix and Tile array sizes accordingly. Matrix partitioning and mapping onto REDEFINE is depicted in figure 15. As shown in figure 15, the same partitioning scheme is applied for each Tile array size, with the sub-matrix size chosen to sustain the computation to communication ratio as the Tile array and input matrix grow. In the implementation of dgeqrfggr, we update the trailing matrix using dgemm, while in the implementation of dgeqr2ggr, the trailing matrix is updated using DET2 instructions. The attained speed-up over the sequential realization for different Tile array sizes and matrix sizes is shown in figure 16, which presents the speed-up of dgeqr2ggr, dgeqr2, dgeqrf, dgeqrfht, and dgeqr2ht over the sequential realizations of these routines. It can be observed in figure 16(a) that the speed-up attained by dgeqr2ggr is commensurate with the Tile array size in REDEFINE, and that the speed-up asymptotically approaches the Tile array size, as depicted in figure 16(a). Performance of dgeqr2ggr, dgeqrfggr, dgeqr2, dgeqrf, dgeqrfht, and dgeqr2ht in terms of the theoretical peak performance of the Tile array is shown in figure 16(b). It can be observed in figure 16(b) that dgeqr2ggr can attain up to 78% of the theoretical peak performance in REDEFINE for different Tile array sizes.

6 Conclusion

A generalization of Givens Rotation was presented that results in a lower multiplication count compared to the classical Givens Rotation operation. Generalized Givens Rotation was implemented on multicores and General Purpose Graphics Processing Units, where the performance was limited by the inability of these platforms to exploit the parallelism available in the routine. It was proposed to move away from the traditional software package based approach towards architectural customization for Dense Linear Algebra computations. Several macro operations were identified in Generalized Givens Rotation and realized on a Reconfigurable Data-path that is tightly coupled to the pipeline of a Processing Element. Generalized Givens Rotation outperformed the Modified Householder Transform presented in the literature by 10% in the Processing Element, where the Modified Householder Transform is implemented with a similar algorithm-architecture co-design approach. For parallel realization, the Processing Element was attached to the REDEFINE Coarse-grained Reconfigurable Architecture as a Custom Function Unit, and the scalability of the solution was shown, where the speed-up of the parallel realization asymptotically approaches the Tile array size in REDEFINE.

References

  • [1] G. Cerati, P. Elmer, S. Lantz, I. MacNeill, K. McDermott, D. Riley, M. Tadel, P. Wittich, F. Würthwein, and A. Yagil, “Traditional tracking with kalman filter on parallel architectures,” Journal of Physics: Conference Series, vol. 608, no. 1, p. 012057, 2015. [Online]. Available: http://stacks.iop.org/1742-6596/608/i=1/a=012057
  • [2] F. Merchant, T. Vatwani, A. Chattopadhyay, S. Raha, S. K. Nandy, and R. Narayan, “Achieving efficient realization of kalman filter on CGRA through algorithm-architecture co-design,” CoRR, vol. abs/1802.03650, 2018. [Online]. Available: http://arxiv.org/abs/1802.03650
  • [3] M.-W. Lee, J.-H. Yoon, and J. Park, "High-speed tournament givens rotation-based qr decomposition architecture for mimo receiver," in Circuits and Systems (ISCAS), 2012 IEEE International Symposium on, May 2012, pp. 21–24.
  • [4] A. George and J. W. Liu, “Householder reflections versus givens rotations in sparse orthogonal decomposition,” Linear Algebra and its Applications, vol. 88-89, pp. 223 – 238, 1987. [Online]. Available: http://www.sciencedirect.com/science/article/pii/002437958790111X
  • [5] A. Edelman, “Large dense numerical linear algebra in 1993: the parallel computing influence,” IJHPCA, vol. 7, no. 2, pp. 113–128, 1993. [Online]. Available: http://dx.doi.org/10.1177/109434209300700203
  • [6] E. Anderson, Z. Bai, J. Dongarra, A. Greenbaum, A. McKenney, J. Du Croz, S. Hammarling, J. Demmel, C. Bischof, and D. Sorensen, "Lapack: A portable linear algebra library for high-performance computers," in Proceedings of the 1990 ACM/IEEE Conference on Supercomputing, ser. Supercomputing '90.   Los Alamitos, CA, USA: IEEE Computer Society Press, 1990, pp. 2–11. [Online]. Available: http://dl.acm.org/citation.cfm?id=110382.110385
  • [7] F. A. Merchant, T. Vatwani, A. Chattopadhyay, S. Raha, S. K. Nandy, and R. Narayan, “Efficient realization of householder transform through algorithm-architecture co-design for acceleration of qr factorization,” IEEE Transactions on Parallel and Distributed Systems, vol. PP, no. 99, pp. 1–1, 2018.
  • [8] J. York and D. Chiou, “On the asymptotic costs of multiplexer-based reconfigurability,” in The 49th Annual Design Automation Conference 2012, DAC ’12, San Francisco, CA, USA, June 3-7, 2012, 2012, pp. 790–795. [Online]. Available: http://doi.acm.org/10.1145/2228360.2228503
  • [9] Z. E. Rákossy, F. Merchant, A. A. Aponte, S. K. Nandy, and A. Chattopadhyay, “Efficient and scalable cgra-based implementation of column-wise givens rotation,” in ASAP, 2014, pp. 188–189.
  • [10] Z. E. Rákossy, F. Merchant, A. A. Aponte, S. K. Nandy, and A. Chattopadhyay, “Scalable and energy-efficient reconfigurable accelerator for column-wise givens rotation,” in 22nd International Conference on Very Large Scale Integration, VLSI-SoC, Playa del Carmen, Mexico, October 6-8, 2014, 2014, pp. 1–6. [Online]. Available: http://dx.doi.org/10.1109/VLSI-SoC.2014.7004166
  • [11] S. Das, K. T. Madhu, M. Krishna, N. Sivanandan, F. Merchant, S. Natarajan, I. Biswas, A. Pulli, S. K. Nandy, and R. Narayan, “A framework for post-silicon realization of arbitrary instruction extensions on reconfigurable data-paths,” Journal of Systems Architecture - Embedded Systems Design, vol. 60, no. 7, pp. 592–614, 2014. [Online]. Available: http://dx.doi.org/10.1016/j.sysarc.2014.06.002
  • [12] G. Ansaloni, P. Bonzini, and L. Pozzi, “Egra: A coarse grained reconfigurable architectural template,” Very Large Scale Integration (VLSI) Systems, IEEE Transactions on, vol. 19, no. 6, pp. 1062–1074, June 2011.
  • [13] F. Merchant, A. Chattopadhyay, G. Garga, S. K. Nandy, R. Narayan, and N. Gopalan, “Efficient QR decomposition using low complexity column-wise givens rotation (CGR),” in 2014 27th International Conference on VLSI Design and 2014 13th International Conference on Embedded Systems, Mumbai, India, January 5-9, 2014, 2014, pp. 258–263. [Online]. Available: https://doi.org/10.1109/VLSID.2014.51
  • [14] F. Merchant, N. Choudhary, S. K. Nandy, and R. Narayan, "Efficient realization of table look-up based double precision floating point arithmetic," in 29th International Conference on VLSI Design and 15th International Conference on Embedded Systems, VLSID 2016, Kolkata, India, January 4-8, 2016, 2016, pp. 415–420. [Online]. Available: http://dx.doi.org/10.1109/VLSID.2016.113
  • [15] F. Merchant, A. Chattopadhyay, S. Raha, S. K. Nandy, and R. Narayan, “Accelerating BLAS and LAPACK via efficient floating point architecture design,” Parallel Processing Letters, vol. 27, no. 3-4, pp. 1–17, 2017. [Online]. Available: https://doi.org/10.1142/S0129626417500062
  • [16] M. Alle, K. Varadarajan, A. Fell, R. R. C., N. Joseph, S. Das, P. Biswas, J. Chetia, A. Rao, S. K. Nandy, and R. Narayan, “REDEFINE: Runtime reconfigurable polymorphic asic,” ACM Trans. Embed. Comput. Syst., vol. 9, no. 2, pp. 11:1–11:48, Oct. 2009. [Online]. Available: http://doi.acm.org/10.1145/1596543.1596545
  • [17] F. Merchant, A. Maity, M. Mahadurkar, K. Vatwani, I. Munje, M. Krishna, S. Nalesh, N. Gopalan, S. Raha, S. Nandy, and R. Narayan, “Micro-architectural enhancements in distributed memory cgras for lu and qr factorizations,” in VLSI Design (VLSID), 2015 28th International Conference on, Jan 2015, pp. 153–158.
  • [18] M. Mahadurkar, F. Merchant, A. Maity, K. Vatwani, I. Munje, N. Gopalan, S. K. Nandy, and R. Narayan, “Co-exploration of NLA kernels and specification of compute elements in distributed memory cgras,” in XIVth International Conference on Embedded Computer Systems: Architectures, Modeling, and Simulation, SAMOS 2014, Agios Konstantinos, Samos, Greece, July 14-17, 2014, 2014, pp. 225–232. [Online]. Available: http://dx.doi.org/10.1109/SAMOS.2014.6893215
  • [19] F. Merchant, T. Vatwani, A. Chattopadhyay, S. Raha, S. K. Nandy, and R. Narayan, “Achieving efficient QR factorization by algorithm-architecture co-design of householder transformation,” in 29th International Conference on VLSI Design and 15th International Conference on Embedded Systems, VLSID 2016, Kolkata, India, January 4-8, 2016, 2016, pp. 98–103. [Online]. Available: http://dx.doi.org/10.1109/VLSID.2016.109
  • [20] W. Givens, "Computation of plane unitary rotations transforming a general matrix to triangular form," Journal of the Society for Industrial and Applied Mathematics, vol. 6, no. 1, pp. 26–50, 1958. [Online]. Available: http://www.jstor.org/stable/2098861
  • [21] J. Modi and M. Clarke, “An alternative givens ordering,” Numerische Mathematik, vol. 43, pp. 83–90, 1984. [Online]. Available: http://dx.doi.org/10.1007/BF01389639
  • [22] M. Hofmann and E. J. Kontoghiorghes, “Pipeline givens sequences for computing the qr decomposition on a erew pram,” Parallel Computing, vol. 32, no. 3, pp. 222 – 230, 2006. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167819105001638
  • [23] E. J. Kontoghiorghes, “Greedy givens algorithms for computing the rank-k updating of the qr decomposition,” Parallel Computing, vol. 28, no. 9, pp. 1257 – 1273, 2002. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0167819102001321
  • [24] S. Aslan, S. Niu, and J. Saniie, "Fpga implementation of fast qr decomposition based on givens rotation," in Circuits and Systems (MWSCAS), 2012 IEEE 55th International Midwest Symposium on, Aug. 2012, pp. 470–473.
  • [25] L. Ma, K. Dickson, J. McAllister, and J. McCanny, "Modified givens rotations and their application to matrix inversion," in Acoustics, Speech and Signal Processing, 2008. ICASSP 2008. IEEE International Conference on, March 31 – April 4, 2008, pp. 1437–1440.
  • [26] D. Bindel, J. Demmel, W. Kahan, and O. Marques, “On computing givens rotations reliably and efficiently,” ACM Trans. Math. Softw., vol. 28, no. 2, pp. 206–238, Jun. 2002. [Online]. Available: http://doi.acm.org/10.1145/567806.567809
  • [27] X. Wang and M. Leeser, “A truly two-dimensional systolic array fpga implementation of qr decomposition,” ACM Trans. Embed. Comput. Syst., vol. 9, no. 1, pp. 3:1–3:17, Oct. 2009. [Online]. Available: http://doi.acm.org/10.1145/1596532.1596535
  • [28] F. G. V. Zee, R. A. van de Geijn, and G. Quintana-Ortí, “Restructuring the tridiagonal and bidiagonal QR algorithms for performance,” ACM Trans. Math. Softw., vol. 40, no. 3, p. 18, 2014. [Online]. Available: http://doi.acm.org/10.1145/2535371
  • [29] G. H. Golub and C. F. Van Loan, Matrix computations (3rd ed.).   Baltimore, MD, USA: Johns Hopkins University Press, 1996.
  • [30] J. A. Gunnels, C. Lin, G. Morrow, and R. A. van de Geijn, “A flexible class of parallel matrix multiplication algorithms,” in IPPS/SPDP, 1998, pp. 110–116. [Online]. Available: http://dx.doi.org/10.1109/IPPS.1998.669898
  • [31] J. Wasniewski and J. Dongarra, “High performance linear algebra package for FORTRAN 90,” in Applied Parallel Computing, Large Scale Scientific and Industrial Problems, 4th International Workshop, PARA ’98, Umeå, Sweden, June 14-17, 1998, Proceedings, 1998, pp. 579–583. [Online]. Available: http://dx.doi.org/10.1007/BFb0095385