Memory Safe Computations with XLA Compiler

06/28/2022
by Artem Artemev et al.

Software packages like TensorFlow and PyTorch are designed to support linear algebra operations, and their speed and usability determine their success. However, by prioritising speed, they often neglect memory requirements. As a consequence, implementations of memory-intensive algorithms that are convenient from a software-design perspective often cannot be run for large problems due to memory overflows. Memory-efficient solutions require complex programming approaches with significant logic outside the computational framework, which impairs the adoption and use of such algorithms. To address this, we developed an XLA compiler extension that adjusts the computational data-flow representation of an algorithm according to a user-specified memory limit. We show that k-nearest neighbour and sparse Gaussian process regression methods can be run at a much larger scale on a single device, where standard implementations would have failed. Our approach leads to better use of hardware resources. We believe that further focus on removing memory constraints at the compiler level will widen the range of machine learning methods that can be developed in the future.
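To make the memory problem concrete, the following sketch shows the kind of manual chunking that a memory-aware compiler pass could apply automatically for exact k-nearest neighbours. Rather than materialising the full queries-by-points distance matrix, it processes query rows in chunks sized to a byte budget. The function name, the `max_bytes` parameter, and the chunk-size heuristic are illustrative assumptions, not part of the paper; the point is only that the same algorithm runs under a user-specified memory limit.

```python
import numpy as np

def knn_chunked(queries, points, k, max_bytes=1 << 20):
    """Exact k-NN with a bounded peak for the partial distance matrix.

    Illustrative sketch: instead of one |queries| x |points| matrix,
    build chunk_rows x |points| slices, each kept under `max_bytes`
    (assuming float64 entries, 8 bytes each).
    """
    n_points = points.shape[0]
    # Rows per chunk so each partial distance matrix fits the byte budget.
    chunk_rows = max(1, max_bytes // (n_points * 8))
    # Precompute squared norms of the reference points once.
    p_sq = (points ** 2).sum(axis=1)[None, :]
    idx = np.empty((queries.shape[0], k), dtype=np.int64)
    for start in range(0, queries.shape[0], chunk_rows):
        q = queries[start:start + chunk_rows]
        # Squared Euclidean distances via ||q||^2 - 2 q.P^T + ||p||^2,
        # which avoids a chunk x points x dim intermediate.
        d2 = (q ** 2).sum(axis=1)[:, None] - 2.0 * q @ points.T + p_sq
        idx[start:start + chunk_rows] = np.argsort(d2, axis=1)[:, :k]
    return idx
```

Peak memory is now governed by `max_bytes` instead of the dataset size; writing this splitting logic by hand for every algorithm is exactly the burden the compiler extension aims to remove.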


