AMULET: Adaptive Matrix-Multiplication-Like Tasks

05/12/2023
by Junyoung Kim, et al.

Many useful tasks in data science and machine learning applications can be written as simple variations of matrix multiplication. However, users have difficulty performing such tasks because existing matrix/vector libraries support only a limited class of computations, hand-tuned for each unique hardware platform. Users can alternatively write the task as a simple nested loop, but current compilers are not sophisticated enough to generate fast code for a task written this way. To address these issues, we extend an open-source compiler to recognize and optimize these matrix-multiplication-like tasks. Our framework, called Amulet, uses both database-style and compiler optimization techniques to generate fast code tailored to its execution environment. We show through experiments that Amulet achieves speedups on a variety of matrix-multiplication-like tasks compared to existing compilers. For large matrices, Amulet typically performs within 15% of hand-tuned matrix multiplication libraries, while handling a much broader class of computations.
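
To make the setting concrete, here is a minimal, hypothetical sketch (not code from the paper, and not Amulet's API) of a matrix-multiplication-like task written as a plain nested loop in C: a min-plus ("tropical") matrix product, which keeps the i/j/k loop structure of ordinary matrix multiplication but swaps the multiply and add operators. The array names and size below are illustrative assumptions.

    #include <stdio.h>
    #include <float.h>

    #define N 256   /* illustrative size, not from the paper */

    static double A[N][N], B[N][N], C[N][N];

    /* Min-plus ("tropical") matrix product: same triple-loop shape as GEMM,
     * but "+" takes the place of "*" and "min" takes the place of "+". */
    static void minplus_product(void)
    {
        for (int i = 0; i < N; i++) {
            for (int j = 0; j < N; j++) {
                double best = DBL_MAX;              /* identity element for min */
                for (int k = 0; k < N; k++) {
                    double cand = A[i][k] + B[k][j];
                    if (cand < best)
                        best = cand;
                }
                C[i][j] = best;
            }
        }
    }

    int main(void)
    {
        /* Fill inputs with arbitrary values so the example runs end to end. */
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) {
                A[i][j] = (double)(i + j);
                B[i][j] = (double)(i * j % 7);
            }

        minplus_product();
        printf("C[0][0] = %f\n", C[0][0]);
        return 0;
    }

A hand-tuned GEMM library cannot express this variant directly, yet the loop nest above is exactly the kind of code that, per the abstract, current compilers struggle to optimize.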


