# Batched computation of the singular value decompositions of order two by the AVX-512 vectorization

In this paper a vectorized algorithm for simultaneously computing up to eight singular value decompositions (SVDs, each of the form A = UΣV^∗) of real or complex matrices of order two is proposed. The algorithm extends to a batch of matrices of an arbitrary length n, that arises, for example, in the annihilation part of the parallel Kogbetliantz algorithm for the SVD of a square matrix of order 2n. The SVD algorithm for a single matrix of order two is derived first. It scales, in most instances error-free, the input matrix A such that its singular values Σ_ii cannot overflow whenever its elements are finite, and then computes the URV factorization of the scaled matrix, followed by the SVD of a non-negative upper-triangular middle factor. A vector-friendly data layout for the batch is then introduced, where the same-indexed elements of each of the input and the output matrices form vectors, and the algorithm's steps over such vectors are described. The vectorized approach is then shown to be about three times faster than processing each matrix in isolation, while slightly improving accuracy over the straightforward method for the 2×2 SVD.


## Code Repositories

### VecKog

The vectorized (AVX-512) batched singular value decomposition algorithm for matrices of order two.


## 1 Introduction

Let a finite sequence A^[k], 1 ≤ k ≤ n, of complex matrices of order two be given, and let the corresponding sequences U^[k], V^[k] of unitary matrices be sought for, as well as a sequence Σ^[k] of diagonal matrices with the real and non-negative diagonal elements, such that A^[k] = U^[k] Σ^[k] (V^[k])^∗, i.e., for each k, the right hand side of the equation is the singular value decomposition (SVD) of the left hand side. This batch of SVD computational tasks arises naturally in, e.g., parallelization of the Kogbetliantz algorithm [1] for the SVD [2, 3, 4]. A parallel step of the algorithm, repeated until convergence, amounts to forming and processing such a batch, with each A^[k] assembled column by column from the elements of the iteration matrix at the suitably chosen pivot positions (p_k, p_k), (q_k, p_k), (p_k, q_k), and (q_k, q_k). The iteration matrix is then updated from the left by (U^[k])^∗ and from the right by V^[k], transforming the p_k-th and the q_k-th rows and columns, respectively, while annihilating the off-diagonal pivot positions.

For each k, the matrices A^[k], U^[k], V^[k], and Σ^[k] have the elements a^[k]_ij, u^[k]_ij, v^[k]_ij, and σ^[k]_ij, respectively, where 1 ≤ i, j ≤ 2. When its actual index is either implied or irrelevant, A^[k] is denoted by A. Similarly, U, V, and Σ denote U^[k], V^[k], and Σ^[k], respectively, in such a case, and the bracketed indices of the particular elements are also omitted.

When computing in the machine’s floating-point arithmetic, the real and the imaginary parts of the input elements are assumed to be rounded to finite (i.e., excluding ±∞ and NaN) double precision quantities, but the SVD computations can similarly be vectorized in single precision (the float datatype in the C language [5]).

Let C and W denote the CPU’s cache line size and the maximal SIMD width, respectively, both expressed in bytes. For an Intel CPU with the 512-bit Advanced Vector Extensions Foundation (AVX-512F) instruction set [6], C = W = 64. Let B be the size in bytes of the chosen underlying datatype T (here, T is double in the real and the complex case alike, so B = 8), and let S = W/B = 8. If S divides n, let ˆn = n, else let ˆn = (⌊n/S⌋ + 1)·S.

This paper aims to show how to compute, single-threadedly, as many SVDs at the same time as there are SIMD/vector lanes available (S), one SVD per lane. Furthermore, these vectorized computations can execute concurrently on the non-overlapping batch chunks assigned to the multiple CPU cores.

Techniques similar to the ones proposed in this paper have already been applied in [7] for vectorization of the Hari–Zimmermann joint diagonalizations of a complex positive definite pair of matrices [8] of order two, and could be applied, as future work, to the real variant of the Hari–Zimmermann algorithm for the generalized eigendecomposition [9] and the generalized SVD [10]. Those efforts do not use the C compiler intrinsics, but rely instead on the vectorization capabilities of the Intel Fortran compiler over data laid out in a vector-friendly fashion similar to the one described in section 3. Simple as it may seem, it is also a more fragile way of expressing the vector operations, should the compiler ever renege on the present behavior of its autovectorizer. The intrinsics approach has been tried in [11] with the 256-bit-wide vectors of the AVX2+FMA [6] instruction set, alongside AVX-512F, for vectorization of the eigendecompositions of symmetric matrices of order two by the Jacobi rotations computed similarly to [12]. This way the one-sided Jacobi SVD (and, similarly, the hyperbolic SVD) of real matrices can be significantly sped up when the matrices are small enough to make the eigendecompositions’ execution time comparable to that of the Grammian formations and the column updates, e.g., when the targeted matrices are the block pivots formed in a block-Jacobi algorithm [13, 14].

In numerical linear algebra the term “batched computation” is well-established, signifying a simultaneous processing of a large quantity of relatively small problems, e.g., the LU and the Cholesky factorizations [15] and the corresponding linear system solving [16] on the GPUs, with appropriate data layouts. It is therefore both justifiable and convenient to reuse the term in the present context.

This paper is organized as follows. A non-vectorized Kogbetliantz method for the SVD of a matrix of order two is presented in section 2. In section 3 a vector-friendly data layout is proposed, followed by a summary of the vectorized algorithm for the batched SVDs in section 4. The algorithm comprises the following phases:

1. scaling the input matrices A^[k], each by a suitable power of two, to avoid any overflows and most underflows, vectorized in section 5 and based on subsection 2.1,

2. the URV factorizations [17], vectorized in section 6 and based on subsection 2.2, of the matrices from the previous phase, into the real, non-negative, upper-triangular middle factors and the unitary left and right factors,

3. the singular value decompositions of the middle factors from the previous phase, vectorized in section 7 and based on subsection 2.3, yielding the scaled singular values, and

4. assembling of the left (U^[k]) and the right (V^[k]) singular vectors of the matrices A^[k].

The numerical testing results follow in section 8, and the conclusions in section 9.

## 2 The Kogbetliantz algorithm for the SVD of order two

The pointwise, non-vectorized Kogbetliantz algorithm for the SVD of a matrix of order two has been an active subject of research [18, 19, 20], and has been implemented for real matrices in LAPACK’s [21] xLASV2 (for the full SVD) and xLAS2 (for the singular values only) routines, where x ∈ {S, D}. Here a simplified version of the algorithm from [4, trigonometric case] is described, with an early reduction of a complex matrix to a real one that is partly influenced by, but improves on, [22].

It is assumed in the paper that the floating-point arithmetic [23] is nonstop, i.e., does not trap on exceptions, and has the gradual underflow enabled, i.e., the Flush-denormals-To-Zero (FTZ) and Denormals-Are-Zero (DAZ) processor flags [6] are disabled.

To compute |z| and e^{i arg z} for a complex z = ℜz + i·ℑz with both components finite, including ±0, while avoiding the complex arithmetic operations, use the hypot function [5]. With DBL_TRUE_MIN being the smallest positive non-zero (and thus subnormal, or denormal in the old parlance) double precision value, let

 |z| = hypot(ℜz, ℑz), e^{i arg z} = cos(arg z) + i·sin(arg z), (0)
 cos(arg z) = fmin(|ℜz|/|z|, 1)·sign ℜz, sin(arg z) = ℑz/max(|z|, DBL_TRUE_MIN).

Here and in the following, fmin and fmax are the functions similar to the ones in the C language [5], but with a bit relaxed semantics, that return the minimum (respectively, maximum) of their two non-NaN arguments, or the second argument if the first one is a NaN, as is the case with the vector minimum and maximum [6, VMINPD and VMAXPD]. See also [7, subsection 6.2] for a similar exploitation of the NaN handling of the min and max operations. It now follows that, when ℑz = ±0, and so |z| = |ℜz|,

 cos(arg z) = sign ℜz = ±1, sin(arg z) = 0·sign ℑz = ±0.

The signs of ℜz and ℑz are thus preserved in cos(arg z) and sin(arg z), respectively.

A mandatory property of hypot, for the computation of e^{i arg z} to be applicable to all inputs z where ℜz and ℑz are of a sufficiently small magnitude (see subsection 2.1), is to have

 hypot(a, b) = 0 ⟺ a = b = 0.

A vectorized hypot implementation is accessible from the Intel Short Vector Math Library (SVML) via a compiler intrinsic, as is a vectorized reciprocal square root (invsqrt) routine, helpful for the cosine calculations in (3), (11), and (12), though neither is always correctly rounded to at most half ulp (consult the reports on the High Accuracy functions at https://software.intel.com/content/www/us/en/develop/documentation/mkl-vmperfdata/top/real-functions/root.html).

### 2.1 Exact scalings of the input matrix

Even if both components of z are finite, |z| = hypot(ℜz, ℑz) can overflow, but the absolute values of a suitably scaled z cannot. Scaling a floating-point number by an integer power of two is exact, except when the significand of a subnormal result loses a trailing non-zero part due to the shifting of the original significand to the right, or when the result overflows. Therefore, such a scaling [6, VSCALEFPD] is the best remedy for the absolute value overflow problem.

Let the exponent of a floating-point value x (assuming the radix two) be defined as exp2(0) = −∞ and exp2(x) = ⌊log₂|x|⌋ for a finite non-zero x (see [6, VGETEXPPD]). Let h = 1021 be two less than the largest exponent of a finite double precision number. To find a scaling factor 2^s for A, take s as

 s = min{DBL_MAX, min_{i,j} E^R_ij, min_{i,j} E^I_ij}, (-1)

where the elements of E^R and E^I are computed as

 E^R_ij = h − exp2 ℜa_ij, E^I_ij = h − exp2 ℑa_ij.

Note that E^R_ij ≥ −2, due to the definition of h. If A is real, E^I is not used. The upper bound on s is required to be finite, since scaling a zero by 2^∞ would result in a NaN [6, VSCALEFPD].

If there is a value of a huge magnitude (i.e., with its exponent greater than h) in A, then s from (-1) will be negative and the huge values will decrease, either twofold or fourfold. Else, s will be the maximal non-negative amount by which the exponents of the values in A can jointly be increased, thus taking the very small values out of the subnormal range if possible, without any of the new exponents going over h.

Let ˆA = 2^s A, and let ˆa_j denote the jth column of ˆA. The Frobenius norm of ˆa_j,

 ∥ˆa_j∥_F = hypot(|ˆa_1j|, |ˆa_2j|), (-1)

cannot overflow (see (18) in the proof of Theorem 2.3 in subsection 2.3).

This scaling is both a simplification and an improvement of [4, subsection 2.3.2]. It guarantees that the computed scaled singular values are finite, while avoiding any branching, lane masking, or recomputing when vectorized, with the only adverse effect being a potential sacrifice of the tiniest subnormal values in the presence of a huge one (i.e., with its exponent strictly greater than h) in A.

### 2.2 The URV factorization of a well-scaled matrix

If ∥ˆa_1∥_F ≥ ∥ˆa_2∥_F, let P_c = I, else let P_c = [0  1;  1  0]. Denote the column-pivoted ˆA P_c by A′. If |a′_11| ≥ |a′_21|, let P_r = I, else let P_r = [0  1;  1  0]. Denote the row-sorted P^∗_r A′ by A″. To make the first column of A″ real and non-negative, let

 D^∗ = diag(e^{−i arg a″_11}, e^{−i arg a″_21}), A‴ = D^∗ A″. (0)

Complex multiplication, required in (2.2), (4), (5), and (7), is performed using the fused multiply-add (fma) operations with a single rounding [5], as in [24, cuComplex.h] and [25, subsection 3.2.1], i.e., for a complex c = a·b it holds

 ℜc = fma(ℜa, ℜb, −ℑa·ℑb), ℑc = fma(ℜa, ℑb, ℑa·ℜb). (1)

To annihilate a‴_21, compute the Givens rotation Q^∗_α, where R″ = Q^∗_α A‴, as

 Q^∗_α = cos α [1  −tan α;  tan α  1], tan α = −a‴_21/a‴_11. (2)

Since the column norms of a well-scaled ˆA are finite, its column-pivoted, row-sorted QR factorization in (2) cannot result in an infinite element in R″.

If a‴_11 = 0 then tan α = 0 as a special case. Handling special cases in a vectorized way is difficult, as it implies branching or using the instructions with a lane mask. However, the fmax function aids in avoiding both of these approaches, similarly to the max operation in the computation of sin(arg z) above, since tan α and cos α from (2) can be computed as

 tan α = −fmax(a‴_21/a‴_11, 0), cos α = invsqrt(fma(tan α, tan α, 1)). (3)

To make r′_12 from (1) real (see [22]) and non-negative, take ˜d_22 = e^{−i arg r′_12} and obtain R′ as

 ˜D = diag(1, e^{−i arg r′_12}), R′ = R″ ˜D = [r_11  r_12;  0  r′_22], r_12 ≥ 0. (4)

Similarly, to make r′_22 from (4) real and non-negative, take ˆd_22 = e^{−i arg r′_22} and obtain R as

 ˆD^∗ = diag(1, e^{−i arg r′_22}), R = ˆD^∗ R′ = [r_11  r_12;  0  r_22], r_11 ≥ max{r_12, r_22}, (5)

due to the column pivoting. Specifically, if A is already real, then

 D^∗ = diag(sign a″_11, sign a″_21), ˜D = diag(1, sign r′_12), ˆD^∗ = diag(1, sign r′_22). (6)

Note that, from (2.2), (2), (4), and (5),

 R = U^∗_+ ˆA V_+, U^∗_+ = ˆD^∗ Q^∗_α D^∗ P^∗_r, V_+ = P_c ˜D, (7)

where U_+ and V_+ are unitary, i.e., (7) is a specific URV factorization [17] of ˆA.

### 2.3 The SVD of a special upper-triangular non-negative matrix

Here the plane rotations U^∗_φ and V_ψ are computed, such that U^∗_φ R V_ψ = Σ′, where

 U^∗_φ = cos φ [1  −tan φ;  tan φ  1], V_ψ = cos ψ [1  tan ψ;  −tan ψ  1], Σ′ = diag(σ′_11, σ′_22), (8)

with R from (5) and σ′_11 ≥ σ′_22 ≥ 0.

Let, as in [4, subsection 2.2.1], where the following formulas have been derived,

 x = fmax(r_12/r_11, 0), y = fmax(r_22/r_11, 0). (9)

With x and y from (9), x, y ∈ [0, 1], compute

 tan(2φ) = −min{fmax((2 min(x, y))·max(x, y)/fma(x − y, x + y, 1), 0), √DBL_MAX}, (10)

as justified in the next paragraph.

Since the quotient in (10) is non-negative (when defined), tan(2φ) is non-positive, and thus so is tan φ. From tan(2φ) compute

 tan φ = tan(2φ)/(1 + √(fma(tan(2φ), tan(2φ), 1))), cos φ = invsqrt(sec²φ), (11)

with sec²φ = fma(tan φ, tan φ, 1) and |tan φ| ≤ 1. Assume that tan(2φ) was not bounded in magnitude. If tan²(2φ) = ∞ in floating-point (this occurs rarely), then tan φ = −0 would be obtained instead of the correct result, tan φ ≈ −1. Else, if |tan(2φ)| were huge but its square still finite, adding one to the square would have made little difference before the rounding (and the sum could have overflown after it), so the square root in (11) could be approximated by |tan(2φ)|. Again, with the denominator so obtained, adding one to it would have been irrelevant, and tan φ would have then equaled tan(2φ)/|tan(2φ)| = −1. Bounding |tan(2φ)| from above as in (10) therefore avoids the argument of the square root overflowing (so using hypot instead of the square root of the fma result is not required), and ensures tan φ ≈ −1 for all tan(2φ) that would otherwise be greater in magnitude than the bound.

Having thus computed tan φ, the right plane rotation is constructed from

 tan ψ = fma(y, tan φ, −x), cos ψ = invsqrt(sec²ψ), (12)

where sec²ψ = fma(tan ψ, tan ψ, 1) and tan ψ ≤ 0.

The following Theorem 2.3 shows that the special form of R contributes to an important property of the computed scaled singular values σ′_11 and σ′_22; namely, they are already sorted non-ascendingly, and thus never have to be swapped in a postprocessing step. Also, the scaled singular values are always finite in floating-point arithmetic. For R from (5) it holds U^∗_φ R V_ψ = Σ′, where

 σ′_11 = (cos φ cos ψ sec²ψ) r_11, σ′_22 = (cos φ cos ψ sec²φ) r_22, (13)

and σ′_11 ≥ σ′_22 ≥ 0. Assume r_12 > 0 in (5), i.e., x > 0 in (9). Else, from (9), (10), and (12), both tangents are zero and both cosines are unity, thus from (8) and (5) follows Σ′ = R, as claimed, and only the finiteness of σ′_11 remains to be proven.

From (11) cos φ > 0, and from (12) cos ψ > 0, so cos φ cos ψ > 0. Scaling U^∗_φ by 1/cos φ and V_ψ by 1/cos ψ in (8), and R by 1/r_11 in (5), from (9) follows

 [1  −tan φ;  tan φ  1][1  x;  0  y][1  tan ψ;  −tan ψ  1] = [σ″_11  0;  0  σ″_22]. (14)

Multiplying the matrices on the left hand side of (14) and equating the elements of the result with the corresponding elements of the right hand side, one obtains

 σ″_11 = tan²ψ + 1 = sec²ψ, σ″_22 = (tan²φ + 1)·y = (sec²φ)·y, (15)

after an algebraic simplification using the relation (12) for tan ψ. The equations for σ′_11 and σ′_22 from (13) then follow by multiplying the equations for σ″_11 and σ″_22 from (15), respectively, by (cos φ cos ψ) r_11. Specially, σ′_11 > 0, since σ′_11 = 0 would imply an obvious contradiction with the assumption.

If y = 0, from (15) and (13) it follows σ′_22 = 0, and, due to (10), (11), and (12), it holds σ′_11 ≥ σ′_22. If x = 0 then r_12 = 0 from (5) and (9), contrary to the assumption. Therefore, in the following, xy > 0, and let

 a = −2xy < 0, b = 1 + x² − y² > 0, c = b + √(a² + b²) > 0. (16)

Then, rewrite tan(2φ) from (10) using (16) as

 tan(2φ) = a/b, tan²(2φ) + 1 = (a² + b²)/b², (17)

as well as tan φ from (11) using (17) and (16) as

 tan φ = tan(2φ)/(1 + √(tan²(2φ) + 1)) = (a/b)/((b + √(a² + b²))/b) = a/(b + √(a² + b²)) = a/c. (18)

From (18), |tan φ| = |a|/c, what gives |tan ψ| = (|a|/c)·y + x with (9) and (12). Taking the ratio of these two absolute values, it has to be proven that

 |tan ψ|/|tan φ| = (|a|y + cx)/|a| ≥ 1. (19)

Expanding (19) using (16), it follows

 (|a|y + cx)/|a| = y + (1 + x² − y² + √(a² + b²))·x/(2xy) = (1 + x² + y² + √(a² + b²))/(2y), (20)

where the argument of the square root can be expressed as

 a² + b² = (1 + x²)² + 2(x² − 1)y² + y⁴, (21)

after the substitution of (16) for a and b and a subsequent algebraic simplification. For a fixed but arbitrary y, (21), and thus the numerator of (20), decrease monotonically as x decreases towards zero. Substituting zero for x in (20) and (21), the former becomes

 (1 + y² + √((1 − y²)²))/(2y) = 2/(2y) = 1/y > 1,

what proves the inequality between the tangents.

The inequality between the scaled singular values follows easily from (15) as

 σ′_22/σ′_11 = σ″_22/σ″_11 = ((tan²φ + 1)/(tan²ψ + 1))·y < 1,

since |tan ψ| ≥ |tan φ| and 0 < y < 1. It remains to be shown that σ′_11 is finite for all R from (5). If R has not been computed (what is never the case in the proposed method) from a well-scaled ˆA (see subsection 2.1), then even σ′_11 can overflow.

Observe that 0 < cos φ ≤ 1 from (11), and 0 < cos ψ ≤ 1 from (12). From (13),

 σ′_11/r_11 = cos φ/cos ψ ≤ 1/cos ψ. (20)

Since, from (11) and (12),

 |tan ψ| = y·|tan φ| + x ≤ y + x, cos ψ = 1/√(1 + tan²ψ),

|tan ψ| has to be bounded from above, to be able to bound cos ψ from below, and thus (20) from above. From (5) and the column pivoting goal, r_12² + r_22² ≤ r_11², what gives x² + y² ≤ 1 after dividing by r_11², i.e., x and y are contained in the intersection of the first quadrant and the unit disc. On this domain, x + y attains the maximal value of √2 for x = y = 1/√2, so |tan ψ| ≤ √2, as claimed, and thus cos ψ ≥ 1/√3. Substituting this lower bound for cos ψ in (20), it follows σ′_11 ≤ √3·r_11.

From subsection 2.1 and (2), since

 max_{1≤i,j≤2} {|ℜˆa_ij|, |ℑˆa_ij|} < 2^{h+1},

it can be concluded that

 max_{1≤i,j≤2} |ˆa_ij| < √((2^{h+1})² + (2^{h+1})²) = √2·2^{h+1},

and therefore

 r_11 = max_{1≤j≤2} ∥ˆa_j∥_F < √((√2·2^{h+1})² + (√2·2^{h+1})²) = 2·2^{h+1} = 2^{h+2}, (18)

so σ′_11 ≤ √3·r_11 < √3·2^{h+2} < 2^{h+3} = 2^{1024}, where the right hand side is the immediate successor (that represents ∞) of the largest finite floating-point number, as claimed.

Using Theorem 2.3, from (8) and (7) it then follows that

 Σ = 2^{−s} Σ′, U^∗ = U^∗_φ U^∗_+, V = V_+ V_ψ, U = (U^∗)^∗, (19)

where Σ′ has to be backscaled to obtain the singular values of the original input matrix. However, the backscaling should be skipped if it would cause the singular values to overflow or (a less catastrophic but still inaccurate outcome) underflow, while informing the user of such an event by preserving the value of s.
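A possible scalar sketch of such a guarded backscaling in C; the helper and its round-trip exactness test are this sketch's own devices (assumptions), not necessarily how the actual routine decides:

```c
#include <math.h>
#include <float.h>

/* Hypothetical scalar sketch: sigma = 2^{-s} sigma', skipped (and s
   preserved for the user) when the scaling would not be exact, i.e.,
   when it would overflow, or underflow with a loss of trailing bits. */
static double backscale(double sigma_p, double s, double *s_out)
{
  double sigma = scalbn(sigma_p, (int)-s);
  if (isfinite(sigma) && scalbn(sigma, (int)s) == sigma_p) {
    *s_out = 0.0;     /* fully backscaled */
    return sigma;
  }
  *s_out = s;         /* backscaling skipped; s reported to the user */
  return sigma_p;
}
```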

Specifically, by matrix multiplication, from (19) and (7) it follows

 U = cos α cos φ P_r [d_11(1 − ˆd_22 tan α tan φ)  d_11(tan φ + ˆd_22 tan α);  −d_22(tan α + ˆd_22 tan φ)  d_22(ˆd_22 − tan α tan φ)], (20)
 V = cos ψ P_c [1  tan ψ;  −˜d_22 tan ψ  ˜d_22].

Computing each element of a complex U requires only one complex multiplication.

Writing a complex number z as the ordered pair (ℜz, ℑz), noting that d_11, d_22, and ˆd_22 are the complex conjugates of the already computed elements of D^∗ and ˆD^∗, and precomputing c and t, (20) can be expanded as, e.g.,

 (P^∗_r U)_11 = −c·(d_11·(fma(ℜˆd_22, t, 1), ℑˆd_22·t)),
 (P^∗_r U)_21 = −c·(d_22·(fma(ℜˆd_22, tan φ, tan α), ℑˆd_22·tan φ)),
 (P^∗_r U)_12 = −c·(d_11·(fma(ℜˆd_22, tan α, tan φ), ℑˆd_22·tan α)),
 (P^∗_r U)_22 = −c·(d_22·(fma(−tan α, tan φ, ℜˆd_22), ℑˆd_22)),

but another mathematically equivalent computation that minimizes the number of roundings required for forming the elements of U as this one does is also valid.

## 3 Vector-friendly data layout

Vectors replace scalars in the SIMD arithmetic operations. A vector should hold S elements from the same matrix sequence, with the same row and column indices, and with the consecutive bracketed indices. When computing with complex numbers, however, it is more efficient to keep the real and the imaginary parts of the elements in separate vectors, since there are no hardware vector instructions for, e.g., the complex multiplication and division, which thus have to be implemented manually. Also, a vector should be aligned in memory to a multiple of W bytes, to employ the most efficient versions of the vector load/store operations. It is therefore essential to establish a vector-friendly layout for the matrix sequences A, U, V, and Σ′ in the linear memory space. One such layout, inspired by splitting the real and the imaginary parts of the matrix elements into separate vectors [7, subsection 6.2], is

 ℜa_ij = [ℜa^[1]_ij][ℜa^[2]_ij]⋯[ℜa^[ˆn]_ij], ℑa_ij = [ℑa^[1]_ij][ℑa^[2]_ij]⋯[ℑa^[ˆn]_ij],

where ℜa_ij and ℑa_ij (similarly, ℜu_ij and ℑu_ij, and ℜv_ij and ℑv_ij), for 1 ≤ i, j ≤ 2, are the sequences of the real and the imaginary components, respectively, of the elements in the ith row and the jth column of the matrices in A (similarly, in U and in V).

Each train of boxes represents a contiguous region of memory aligned to W bytes. In the real case, no ℑ-boxes exist, but the layout otherwise stays the same. The scaled singular values σ′_max = σ′_11 and σ′_min = σ′_22 from (8) are stored as

 σ′_max = [σ′^[1]_max][σ′^[2]_max]⋯[σ′^[ˆn]_max], σ′_min = [σ′^[1]_min][σ′^[2]_min]⋯[σ′^[ˆn]_min],

respectively, while the scaling parameters s^[k] from (-1) are laid out as

 s = [s^[1]][s^[2]]⋯[s^[ˆn]].

The virtual elements, with their bracketed indices ranging from n + 1 to ˆn, serve if present as a (e.g., zero) padding, which ensures that all vectors, including the last one, formed from the consecutive elements of a box train hold the same maximal number (S) of defined values and can thus be processed in a uniform manner.

The input sequence A may initially be in another layout and then has to be repacked before any further computation. Also, the output sequences U, V, Σ′, and s may have to be repacked for a further processing. Such reshufflings should be avoided, as they incur a substantial overhead in both time and memory requirements.

The layout of data, including the intermediate results, in the vector registers during the computation is the same as it is for the box trains, but with S elements instead of ˆn. The vth vector, for 1 ≤ v ≤ ˆn/S, encompasses the consecutive indices k,

 (v − 1)·S + 1 ≤ k ≤ v·S. (17)

A vector is loaded into, kept in, and stored from, a variable of the C type __m512d.

In the following, a bold lowercase letter stands for a vector, and a bold uppercase one for a (logical, not necessarily in-memory) matrix sequence. For example, A is a sequence of the matrices A^[k], of which A^[v] is a subsequence of length S, and a_11 is a vector containing the elements a^[k]_11 of A^[v], for some v and its corresponding indices k from (17). A bold constant denotes a vector with all its values being equal to the given constant. An arithmetic operation on vectors (or collections thereof) or matrix sequences is a shorthand for a sequence of the elementwise operations; e.g.,

 2^{−s} = (2^{−s^[k]})_k, BC = (B^[k] C^[k])_k, 1 ≤ k ≤ ˆn,

where B and C are any two matrix sequences, B^[k] C^[k] is a product of matrices of order two, and 2 is a collection of vectors with all their values equal to two. All bracketed indices are one-based, as is customary in linear algebra, but in the C code they are zero-based, being thus one less than they are in the paper’s text.
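For illustration, repacking a real batch from the usual array-of-matrices order into the box trains, with the zero padding up to ˆn, might be sketched in C as follows (the helper pack and its interface are assumptions, not the actual repacking code):

```c
#include <stddef.h>

enum { S = 8 };  /* vector lanes: 512 bits / (8 bytes per double) */

/* Hypothetical sketch: scatter n real 2x2 matrices a[k] into the four
   box trains a11, a12, a21, a22, each of n_hat = ceil(n/S)*S entries,
   zero-padding the virtual elements with indices n+1, ..., n_hat. */
static void pack(size_t n, double a[][2][2],
                 double a11[], double a12[], double a21[], double a22[])
{
  size_t n_hat = (n + S - 1) / S * S;
  for (size_t k = 0; k < n_hat; ++k) {
    a11[k] = k < n ? a[k][0][0] : 0.0;
    a12[k] = k < n ? a[k][0][1] : 0.0;
    a21[k] = k < n ? a[k][1][0] : 0.0;
    a22[k] = k < n ? a[k][1][1] : 0.0;
  }
}
```

Lane l of vector v then holds matrix k = v·S + l (zero-based), and each train can be read with the aligned W-byte vector loads whenever the trains themselves are W-byte aligned.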

## 4 Overview of the algorithm for the batched SVDs of order two

When there are two cases, the real and the complex one, all code-presenting figures cover the latter with a mixture of the actual statements and a mathematically oriented pseudocode. The real-case differences are described in them in the comments. A function name in uppercase, NAME, is a shorthand for the _mm512_name_pd C compiler intrinsic, if the operation is available in the machine’s instruction set, or for an equivalent sequence of the bit-pattern preserving casts to and from the integer vectors and an equivalent integer name operation. More precisely, for the bitwise operations, if the AVX-512DQ instruction set extensions are not supported, an exception to the naming rule holds:

 NAME(x, y) = _mm512_castsi512_pd(_mm512_name_epi64(^x, ^y)),
 ^x = _mm512_castpd_si512(x), ^y = _mm512_castpd_si512(y).

All other required operations have a _pd variant (with the double precision vector lanes) in the core AVX-512F instruction set, so that set suffices for implementing the entire algorithm. Additionally, let CMPLT_MASK stand for the _mm512_cmplt_pd_mask intrinsic, i.e., for the less-than lane-wise comparison of two vectors.

The four phases of the algorithm for the batched SVDs of order two, as listed in section 1, can be succinctly depicted by the following logical execution pipeline,

 A ⟶ ˆA ⟶ A′ ⟶ A″ ⟶ A‴ ⟶ R″ ⟶ R′ ⟶ R ⟶ Σ′ ⟶ Σ (not safe),
     ↓s    ↓P_c  ↓P^∗_r ↓D^∗  ↓Q^∗_α ↓˜D   ↓ˆD^∗ ↓U^∗_φ, V_ψ → U, V;
 U = P_r D Q_α ˆD U_φ, V = P_c ˜D V_ψ; Σ = 2^{−s} Σ′,

where the first row shows the transformations of A, the second row contains the various matrix sequences that are the “by-products” of the computation, described in section 2, ending with the sequences of the left and the right singular vectors, that are formed as indicated in the third row. As the singular values can overflow due to the backscaling (see subsection 2.3) of the scaled ones (Σ′), computing them unconditionally is unsafe, and such postprocessing is left to the user’s discretion. In certain use-cases it might be known in advance that the singular values cannot overflow/underflow, e.g., if the initial matrices have already been well-scaled at their formation. The backscaling, performed as in Fig. 1, is then unconditionally safe.

The pipeline is executed independently on each non-overlapping subsequence of S consecutive matrices. If there are more such subsequences than the active threads, at a point in time some subsequences might have already been processed, while the others are still waiting, either for the start or for the completion of the processing. A conceptual core of a driver routine implementing such a pass over the data is shown in Fig. 2, where the xSsvd2 routines (real or complex, as appropriate) are the main single-threaded routines that are responsible for all vectorized computations on each particular subsequence of size S. The OpenMP [26] parallel for directive in Fig. 2 assumes a user-defined maximal number and placement/affinity of threads.

The input arguments of xSsvd2 are (the pointers to) the arrays, each aligned to W bytes, of S double values, e.g., const double A12r[static S] for ℜa_12. The output arguments are similar, e.g., double U21i[static S] for ℑu_21. Note that the same interface, up to replacing S by 1, would be applicable to the pointwise x1svd2 routine for a single SVD, but without the implied alignment restriction.

No branching is involved explicitly in the xSsvd2 routines. They are therefore fully branch-free, if the used SVML routines are. All data, once loaded from memory or computed, is intended to be held in the zmm vector registers until the output has been formed and written back to RAM. This goal is almost achieved in the test setting, since there are two vector register spillages, with a total of only four extra memory accesses (two writes and two reads), as reported by the optimizer. A hand-tuned self-contained assembly might do away with these as well.

The first three phases of the algorithm are vectorized as described in sections 5, 6, and 7, respectively, since each of the phases can be viewed as an algorithm on its own. They are, however, chained by the dataflow, each having as its input the output of the previous one. Should the output of a phase be made available alongside the final results, it could be written to an additional memory buffer in the same layout as presented in section 3. Otherwise, the intermediate results are not preserved.

Vectorization of the last, fourth phase of the algorithm, from (20), is as tedious and uninformative as it is straightforward, and so it is omitted for brevity. It suffices to say that the real (and the imaginary) parts of the elements of U and of V are computed from (20), using the fma operation where possible, and (1) for the complex multiplications. The final row permutations by P_r or P_c are performed in the same way as the row swaps in the URV factorization phase, described in Fig. 6 in section 6. An interested reader is referred to the actual code in the supplementary material, available in the https://github.com/venovako/VecKog repository.

## 5 Vectorized exact scalings of the input matrices

Computation of the scaling parameters is remarkably simple, as shown in Fig. 3.