Communication-avoiding Cholesky-QR2 for rectangular matrices
The need for scalable algorithms to solve least squares and eigenvalue problems is becoming increasingly important given the rising complexity of modern machines. We address this concern by presenting a new scalable QR factorization algorithm intended to accelerate these problems for rectangular matrices. Our contribution is a communication-avoiding distributed-memory parallelization of an existing Cholesky-based QR factorization algorithm called CholeskyQR2. Our algorithm exploits a tunable processor grid able to interpolate between one and three dimensions, resulting in tradeoffs in the asymptotic costs of synchronization, horizontal bandwidth, flop count, and memory footprint. It improves the communication cost complexity with respect to state-of-the-art parallel QR implementations by a factor of Θ(P^{1/6}). Further, we provide implementation details and performance results on the Blue Waters supercomputer. We show that the costs attained are asymptotically equivalent to those of other communication-avoiding QR factorization algorithms and demonstrate that our algorithm is efficient in practice.
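To make the underlying idea concrete, the following is a minimal, sequential NumPy sketch of the CholeskyQR2 scheme on which the parallel algorithm is built: one CholeskyQR pass forms the Gram matrix A^T A, takes its Cholesky factor as R, and recovers Q by a triangular solve; a second pass restores orthogonality. This is only an illustrative sketch, not the authors' distributed-memory implementation, and the function names are our own.

```python
import numpy as np

def cholesky_qr(A):
    """One CholeskyQR pass: A = Q R, with R the upper Cholesky factor of A^T A."""
    G = A.T @ A                       # n x n Gram matrix
    R = np.linalg.cholesky(G).T       # upper-triangular factor, G = R^T R
    Q = np.linalg.solve(R.T, A.T).T   # Q = A R^{-1} via a (general) solve
    return Q, R

def cholesky_qr2(A):
    """CholeskyQR2: a second pass on Q1 improves orthogonality of the result."""
    Q1, R1 = cholesky_qr(A)
    Q, R2 = cholesky_qr(Q1)
    return Q, R2 @ R1                 # A = Q (R2 R1)

# Tall-and-skinny (rectangular) example
A = np.random.rand(10000, 50)
Q, R = cholesky_qr2(A)
print(np.linalg.norm(Q.T @ Q - np.eye(50)))            # orthogonality error
print(np.linalg.norm(A - Q @ R) / np.linalg.norm(A))   # relative residual
```

In the distributed setting described in the abstract, the Gram-matrix computation becomes the communication-critical step, which is where the tunable processor grid and communication-avoiding schedule come into play.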