Buddy Compression: Enabling Larger Memory for Deep Learning and HPC Workloads on GPUs

03/06/2019
by Esha Choukse, et al.

GPUs offer orders-of-magnitude higher memory bandwidth than traditional CPU-only systems. However, GPU device memory tends to be relatively small, and its capacity cannot be increased by the user. This paper describes Buddy Compression, a scheme that increases both the effective GPU memory capacity and bandwidth while avoiding the downsides of conventional memory-expanding strategies. Buddy Compression compresses GPU memory, splitting each compressed memory entry between high-speed device memory and a slower-but-larger disaggregated memory pool (or system memory). Highly-compressible memory entries can thus be accessed entirely from device memory, while incompressible entries source their data using both on- and off-device accesses. Increasing the effective GPU memory capacity enables us to run HPC workloads with larger memory footprints, and larger batch sizes or models for DL workloads, than current memory capacities would allow. We show that our solution achieves an average compression ratio of 2.2x on HPC workloads and 1.5x on DL workloads, with a slowdown of just 1-2%.
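The core mechanism described above can be sketched as a simple placement rule. The snippet below is a hypothetical illustration, not the paper's implementation: the entry size, the per-entry device-memory budget, and all names are assumptions made for the sketch. Each memory entry gets a fixed slice of device memory; entries that compress below that budget are served entirely at GPU bandwidth, while the overflow bytes of less-compressible entries spill to the larger "buddy" pool reached over a slower link.

```python
# Hypothetical sketch of Buddy Compression's placement decision.
# ENTRY_SIZE and DEVICE_BUDGET are assumed values for illustration only.

ENTRY_SIZE = 128     # uncompressed memory-entry size in bytes (assumed)
DEVICE_BUDGET = 64   # per-entry device-memory allocation, here a 2x target ratio

def access_sources(compressed_size: int) -> list:
    """Return which memories must be touched to read one compressed entry."""
    if compressed_size <= DEVICE_BUDGET:
        # Highly-compressible entry: fits in its device-memory slice,
        # so the access is served entirely from fast device memory.
        return ["device"]
    # Incompressible entry: the first DEVICE_BUDGET bytes live on-device,
    # and the remaining bytes come from the buddy (system/disaggregated) pool.
    return ["device", "buddy"]

# Example: a 32-byte compressed entry stays on-device; a 100-byte one spills.
print(access_sources(32))   # only device memory is accessed
print(access_sources(100))  # both device and buddy memory are accessed
```

With a fixed per-entry budget like this, the effective capacity gain is the target ratio (here 2x); the cost of incompressible data is paid in extra off-device traffic rather than in allocation failures, which matches the trade-off the abstract describes.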


Related research

- 04/05/2021: GPU Domain Specialization via Composable On-Package Architecture. As GPUs scale their low precision matrix math throughput to boost deep l...
- 02/18/2019: Beyond the Memory Wall: A Case for Memory-centric HPC System for Deep Learning. As the models and the datasets to train deep learning (DL) models scale,...
- 08/08/2019: TensorDIMM: A Practical Near-Memory Processing Architecture for Embeddings and Tensor Operations in Deep Learning. Recent studies from several hyperscalars pinpoint to embedding layers as...
- 10/22/2019: The Bitlet Model: Defining a Litmus Test for the Bitwise Processing-in-Memory Paradigm. This paper describes an analytical modeling tool called Bitlet that can ...
- 12/08/2020: DeepNVM++: Cross-Layer Modeling and Optimization Framework of Non-Volatile Memories for Deep Learning. Non-volatile memory (NVM) technologies such as spin-transfer torque magn...
- 07/20/2018: CRAM: Efficient Hardware-Based Memory Compression for Bandwidth Enhancement. This paper investigates hardware-based memory compression designs to inc...
- 06/27/2022: Efficient Deep Learning Using Non-Volatile Memory Technology. Embedded machine learning (ML) systems have now become the dominant plat...
