Efficient Memory Management for GPU-based Deep Learning Systems

02/19/2019
by   Junzhe Zhang, et al.

GPUs (graphics processing units) have been used for many data-intensive applications, and deep learning systems are now among their most important consumers. As deep learning applications adopt deeper and larger models to achieve higher accuracy, memory management becomes an important research topic for deep learning systems, given that GPU memory is limited. Many approaches have been proposed for this issue, e.g., model compression and memory swapping; however, they either degrade the model accuracy or require substantial manual intervention. In this paper, we propose two orthogonal approaches that reduce the memory cost from the system perspective. Our approaches are transparent to the models and thus do not affect model accuracy. They exploit the iterative nature of the deep learning training algorithm to derive the lifetime and read/write order of all variables. With the lifetime semantics, we are able to implement a memory pool with minimal fragmentation. However, the underlying optimization problem is NP-complete, so we propose a heuristic algorithm that reduces memory usage by up to 13.3% compared with Nvidia's default memory pool, with equal time complexity. With the read/write semantics, variables that are not in use can be swapped out from GPU to CPU to reduce the memory footprint. We propose multiple swapping strategies that automatically decide which variable to swap and when to swap it out (in), reducing the memory cost by up to 34.2% without communication overhead.
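To make the two ideas concrete, here is a minimal Python sketch, not the authors' algorithm: it derives each variable's lifetime and access order from a recorded iteration trace (the trace and all variable names are invented for illustration), places variables in a shared pool with a simple first-fit heuristic, and flags long-idle variables as swap candidates.

```python
from collections import defaultdict

# One recorded iteration: (step, op, var, size_in_bytes). Because training
# is iterative, the same trace repeats every iteration, so the access steps
# of each variable give its lifetime and read/write order.
trace = [
    (0, "write", "act1", 8192),
    (1, "read",  "act1", 8192),
    (1, "write", "act2", 2048),
    (2, "read",  "act2", 2048),
    (5, "read",  "act1", 8192),  # reused much later, e.g. in the backward pass
]

def collect(trace):
    """Group access steps and sizes per variable."""
    steps, sizes = defaultdict(list), {}
    for step, _op, var, size in trace:
        steps[var].append(step)
        sizes[var] = size
    return steps, sizes

def greedy_offsets(steps, sizes):
    """First-fit placement: variables whose lifetimes overlap in time get
    disjoint [offset, offset + size) ranges; others may share memory.
    Larger variables are placed first, a common heuristic for this
    NP-complete allocation problem (not the paper's exact algorithm)."""
    placed, offsets = [], {}
    for var in sorted(sizes, key=sizes.get, reverse=True):
        first, last, size = min(steps[var]), max(steps[var]), sizes[var]
        offset = 0
        for f, l, o, s in sorted(placed, key=lambda p: p[2]):
            time_overlap = not (last < f or l < first)
            if time_overlap and offset < o + s and o < offset + size:
                offset = o + s  # conflict: slide past this block
        placed.append((first, last, offset, size))
        offsets[var] = offset
    return offsets

def swap_candidates(steps, min_idle=3):
    """Flag variables with a long idle gap between two accesses: they can be
    swapped out to host memory after the earlier access and prefetched
    before the later one. A crude proxy for the paper's strategies."""
    out = {}
    for var, s in steps.items():
        s = sorted(s)
        gaps = [(b - a, a, b) for a, b in zip(s, s[1:])]
        if gaps and max(gaps)[0] >= min_idle:
            _idle, swap_out_after, swap_in_before = max(gaps)
            out[var] = (swap_out_after, swap_in_before)
    return out

steps, sizes = collect(trace)
print(greedy_offsets(steps, sizes))  # {'act1': 0, 'act2': 8192}
print(swap_candidates(steps))        # {'act1': (1, 5)}
```

In this toy run, act2 is placed above act1 because their lifetimes overlap, while act1's four-step idle gap between steps 1 and 5 makes it a candidate for swapping out to the CPU and prefetching back before its next read.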

