Performance Engineering for Real and Complex Tall Skinny Matrix Multiplication Kernels on GPUs

05/08/2019, by Dominik Ernst, et al.

General matrix-matrix multiplications with double-precision real and complex entries (DGEMM and ZGEMM) in vendor-supplied BLAS libraries are best optimized for square matrices but often show poor performance for tall skinny matrices, which are much taller than they are wide. In this case, NVIDIA's current CUBLAS implementation delivers only a fraction of the potential performance indicated by the roofline model. We describe the challenges and key characteristics of an implementation that can achieve close to optimal performance. We further evaluate different strategies of parallelization and thread distribution, and devise a flexible, configurable mapping scheme. To ensure flexibility and allow for highly tailored implementations, we use code generation combined with autotuning. For a large range of matrix sizes in the domain of interest, we achieve at least 2/3 of the roofline performance and often substantially outperform state-of-the-art CUBLAS results on an NVIDIA Volta GPGPU.
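
As a rough illustration of the roofline bound referred to in the abstract, the sketch below (not code from the paper) estimates the memory-bandwidth limit for a tall skinny product C = A^T * B, with A of size K x M and B of size K x N (M and N small, K large), and compares it against a timed cublasDgemm call. The matrix sizes, the assumed ~900 GB/s memory bandwidth, and the single-repetition timing are illustrative placeholders, not values or methodology from the paper.

// roofline_sketch.cu -- hedged sketch, not the authors' generated kernels.
// Compares measured cuBLAS DGEMM throughput for a tall skinny product
// C = A^T * B (A is K x M, B is K x N, M and N small, K large) against a
// simple memory-bandwidth roofline estimate. Sizes and the assumed
// bandwidth are illustrative placeholders.
#include <cstdio>
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main() {
    const long long K = 1 << 24;   // "tall" dimension (placeholder)
    const int M = 8, N = 8;        // "skinny" dimensions (placeholder)
    const double bw_GBs = 900.0;   // assumed V100-class memory bandwidth

    // Minimum data volume: stream A and B once, write the tiny C.
    const double bytes = 8.0 * (double)K * (M + N) + 8.0 * M * N;
    const double flops = 2.0 * (double)K * M * N;
    const double roofline_GFs = flops / (bytes / (bw_GBs * 1e9)) / 1e9;

    double *A, *B, *C;
    cudaMalloc(&A, sizeof(double) * K * M);
    cudaMalloc(&B, sizeof(double) * K * N);
    cudaMalloc(&C, sizeof(double) * M * N);
    cudaMemset(A, 0, sizeof(double) * K * M);
    cudaMemset(B, 0, sizeof(double) * K * N);

    cublasHandle_t h;
    cublasCreate(&h);
    const double alpha = 1.0, beta = 0.0;

    // Warm-up call of C = A^T * B via cublasDgemm (column-major layout).
    cublasDgemm(h, CUBLAS_OP_T, CUBLAS_OP_N, M, N, (int)K,
                &alpha, A, (int)K, B, (int)K, &beta, C, M);
    cudaDeviceSynchronize();

    // Timed run using CUDA events.
    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    cudaEventRecord(t0);
    cublasDgemm(h, CUBLAS_OP_T, CUBLAS_OP_N, M, N, (int)K,
                &alpha, A, (int)K, B, (int)K, &beta, C, M);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);
    float ms = 0.0f;
    cudaEventElapsedTime(&ms, t0, t1);

    printf("measured: %.1f GFlop/s, roofline estimate: %.1f GFlop/s\n",
           flops / (ms * 1e-3) / 1e9, roofline_GFs);

    cublasDestroy(h);
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}

Because M and N are small, the arithmetic intensity is roughly M*N / (4*(M+N)) flop/byte, so such a kernel is memory bound and its attainable performance scales with memory bandwidth rather than with the GPU's double-precision peak.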
