High Accuracy Low Precision QR Factorization and Least Square Solver on GPU with TensorCore

12/11/2019
by   Shaoshuai Zhang, et al.

Driven by the insatiable need to process ever larger amounts of data with ever more complex models, modern processors and accelerators are beginning to offer half precision floating point arithmetic, along with highly optimized special units such as the NVIDIA TensorCore on GPUs and the Google Tensor Processing Unit (TPU) that perform half precision matrix-matrix multiplication exceptionally efficiently. In this paper we present a large scale mixed precision linear least squares solver that achieves high accuracy using the low precision TensorCore GPU. The mixed precision system consists of both innovative algorithms and implementations. It is shown to be up to 14x faster than single precision cuSOLVER QR matrix factorization at large scale, with slightly lower accuracy, and up to 10x faster than a double precision direct QR least squares solver, with comparable accuracy.
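To illustrate the general idea behind such mixed precision least squares solvers, the following is a minimal NumPy sketch of QR-based least squares with iterative refinement: the factorization and corrections are done in a low precision, while residuals are accumulated in double precision. It is not the authors' algorithm or implementation; float32 stands in for the TensorCore's half precision (NumPy's LAPACK routines do not accept float16), and the function name and iteration count are illustrative assumptions.

```python
import numpy as np

def mixed_precision_lstsq(A, b, iters=3):
    """Least squares via QR factored in low precision, refined in float64.

    Sketch only: float32 plays the role of the half precision arithmetic
    described in the abstract; the paper's actual algorithm differs.
    """
    # Factor a low-precision copy of A.
    A_lo = A.astype(np.float32)
    Q, R = np.linalg.qr(A_lo, mode='reduced')

    # Initial solve entirely in low precision.
    x = np.linalg.solve(R, Q.T @ b.astype(np.float32)).astype(np.float64)

    # Iterative refinement: residual in double precision,
    # correction computed with the low-precision factors.
    for _ in range(iters):
        r = b - A @ x                                  # high-precision residual
        d = np.linalg.solve(R, Q.T @ r.astype(np.float32))
        x = x + d.astype(np.float64)                   # high-precision update
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((2000, 200))
    b = rng.standard_normal(2000)
    x = mixed_precision_lstsq(A, b)
    x_ref = np.linalg.lstsq(A, b, rcond=None)[0]
    print("relative error vs. double-precision lstsq:",
          np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))
```

With a few refinement sweeps, the recovered solution is typically close to the fully double precision result even though the expensive factorization was done in the lower precision, which is the basic trade-off the paper exploits at scale on TensorCore hardware.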
