Generating Families of Practical Fast Matrix Multiplication Algorithms

11/03/2016
by Jianyu Huang, et al.

Matrix multiplication (GEMM) is a core operation in numerous scientific applications. Traditional implementations of Strassen-like fast matrix multiplication (FMM) algorithms often do not perform well except for very large matrix sizes, due to the increased cost of memory movement, which is particularly noticeable for non-square matrices. Such implementations also require considerable workspace and modifications to the standard BLAS interface. We propose a code generator framework to automatically implement a large family of FMM algorithms suitable for multiplications of arbitrary matrix sizes and shapes. By representing an FMM algorithm as a triple of matrices [U,V,W] that captures the linear combinations of submatrices to be formed, we can use the Kronecker product to define a multi-level representation of Strassen-like algorithms. Incorporating the matrix additions that must be performed for Strassen-like algorithms into the inherent packing and micro-kernel operations inside GEMM avoids extra workspace and reduces the cost of memory movement. Adopting the same loop structures as high-performance GEMM implementations allows parallelization of all FMM algorithms with simple but efficient data parallelism, without the overhead of task parallelism. We present a simple performance model for general FMM algorithms and compare the actual performance of 20+ FMM algorithms to the modeled predictions. Our implementations demonstrate a performance benefit over conventional GEMM on single-core and multi-core systems. This study shows that Strassen-like fast matrix multiplication can be incorporated into libraries for practical use.
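
As a concrete illustration of the [U,V,W] representation and the Kronecker-product construction of multi-level algorithms mentioned in the abstract, the following NumPy sketch encodes Strassen's 2x2 algorithm as a 4x7 triple and checks a one-level and a Kronecker-composed two-level algorithm against ordinary matrix multiplication. The row-major submatrix ordering, the nested_vec helper, and the function names are choices made for this sketch, not details taken from the paper or its code generator.

```python
import numpy as np

# Strassen's <2,2,2> algorithm as a [U, V, W] triple (well-known coefficients).
# Columns index the 7 multiplications M1..M7; rows index the submatrices in
# row-major order [X11, X12, X21, X22] (an assumption of this sketch).
U = np.array([[ 1,  0,  1,  0,  1, -1,  0],   # A11
              [ 0,  0,  0,  0,  1,  0,  1],   # A12
              [ 0,  1,  0,  0,  0,  1,  0],   # A21
              [ 1,  1,  0,  1,  0,  0, -1]])  # A22
V = np.array([[ 1,  1,  0, -1,  0,  1,  0],   # B11
              [ 0,  0,  1,  0,  0,  1,  0],   # B12
              [ 0,  0,  0,  1,  0,  0,  1],   # B21
              [ 1,  0, -1,  0,  1,  0,  1]])  # B22
W = np.array([[ 1,  0,  0,  1, -1,  0,  1],   # C11
              [ 0,  0,  1,  0,  1,  0,  0],   # C12
              [ 0,  1,  0,  1,  0,  0,  0],   # C21
              [ 1, -1,  1,  0,  0,  1,  0]])  # C22

def nested_vec(X, levels):
    """Vectorize a 2^levels x 2^levels matrix in nested block row-major order."""
    if levels == 0:
        return X.reshape(-1)
    h = X.shape[0] // 2
    blocks = [X[:h, :h], X[:h, h:], X[h:, :h], X[h:, h:]]
    return np.concatenate([nested_vec(Bk, levels - 1) for Bk in blocks])

def fmm_via_uvw(U, V, W, A, B, levels):
    """Compute the nested vectorization of A @ B by applying the (possibly
    Kronecker-composed) [U, V, W] triple directly -- a correctness check,
    not a high-performance implementation."""
    Uk, Vk, Wk = U, V, W
    for _ in range(levels - 1):          # multi-level triple via Kronecker products
        Uk, Vk, Wk = np.kron(Uk, U), np.kron(Vk, V), np.kron(Wk, W)
    m = (Uk.T @ nested_vec(A, levels)) * (Vk.T @ nested_vec(B, levels))
    return Wk @ m

rng = np.random.default_rng(0)
for levels in (1, 2):                    # 7 and 49 multiplications, respectively
    n = 2 ** levels
    A = rng.standard_normal((n, n))
    B = rng.standard_normal((n, n))
    assert np.allclose(fmm_via_uvw(U, V, W, A, B, levels),
                       nested_vec(A @ B, levels))
```

A practical implementation in the spirit of the paper would not form these vectorizations explicitly; instead, the linear combinations encoded in U and V are folded into the packing routines and those encoded in W into the micro-kernel of a BLIS-like GEMM, which is how the extra workspace and memory traffic are avoided.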
