EMOGI: Efficient Memory-access for Out-of-memory Graph-traversal In GPUs

06/12/2020
by Seung Won Min, et al.

Modern analytics and recommendation systems are increasingly based on graph data that capture the relations between entities being analyzed. Practical graphs come in huge sizes, offer massive parallelism, and are stored in sparse-matrix formats such as CSR. To exploit the massive parallelism, developers are increasingly interested in using GPUs for graph traversal. However, due to their sizes, graphs often do not fit into the GPU memory. Prior works have either used input data pre-processing/partitioning or UVM to migrate chunks of data from the host memory to the GPU memory. However, the large, multi-dimensional, and sparse nature of graph data presents a major challenge to these schemes and results in significant amplification of data movement and reduced effective data throughput. In this work, we propose EMOGI, an alternative approach to traverse graphs that do not fit in GPU memory using direct cacheline-sized accesses to data stored in host memory. This paper addresses the open question of whether a sufficiently large number of overlapping cacheline-sized accesses can be sustained to 1) tolerate the long latency to host memory, 2) fully utilize the available bandwidth, and 3) achieve favorable execution performance. We analyze the data access patterns of several graph traversal applications on GPUs over PCIe using an FPGA to understand the cause of poor external bandwidth utilization. By carefully coalescing and aligning external memory requests, we show that we can minimize the number of PCIe transactions and nearly fully utilize the PCIe bandwidth even with direct cacheline-sized accesses to the host memory. EMOGI achieves a 2.92× speedup on average compared to optimized UVM implementations across various graph traversal applications. We also show that EMOGI scales better than a UVM-based solution when the system uses higher-bandwidth interconnects such as PCIe 4.0.
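To make the coalescing-and-alignment idea concrete, below is a minimal CUDA sketch of a warp-per-vertex CSR neighbor scan in which the large edge array stays in pinned ("zero-copy") host memory and is read directly over PCIe, while consecutive lanes of a warp read consecutive edges so each request maps to an aligned, full cacheline. This is an illustrative sketch only, not the authors' EMOGI code; names such as scan_neighbors and EdgeT are hypothetical, and the per-edge "work" is a stand-in for a real traversal kernel such as BFS or SSSP.

```cuda
// Illustrative sketch (not the authors' EMOGI implementation): warp-per-vertex
// CSR neighbor scan over an edge list kept in pinned, mapped host memory.
#include <cstdint>
#include <cstring>
#include <cstdio>
#include <cuda_runtime.h>

using EdgeT = uint32_t;

__global__ void scan_neighbors(const uint64_t *row_ptr,   // device memory (small)
                               const EdgeT    *edge_dst,  // host-pinned, zero-copy (large)
                               uint64_t num_vertices,
                               unsigned long long *touched)
{
    const int lane     = threadIdx.x & 31;                                  // lane within the warp
    const uint64_t wid = (blockIdx.x * blockDim.x + threadIdx.x) >> 5;      // global warp id = vertex id
    if (wid >= num_vertices) return;

    const uint64_t start = row_ptr[wid];
    const uint64_t end   = row_ptr[wid + 1];

    // Align the warp's first access down to a 128-byte boundary so each
    // host-memory read is a full, aligned cacheline (32 x 4-byte edges).
    const uint64_t aligned_start = start & ~31ULL;

    unsigned long long local = 0;
    for (uint64_t i = aligned_start + lane; i < end; i += 32) {
        if (i >= start) {                     // skip padding lanes before the row begins
            EdgeT dst = edge_dst[i];          // coalesced, cacheline-sized host read
            local += dst;                     // stand-in for real traversal work
        }
    }
    atomicAdd(touched, local);
}

int main() {
    // Tiny 3-vertex example graph in CSR form.
    const uint64_t h_row_ptr[] = {0, 2, 5, 6};
    const EdgeT    h_edges[]   = {1, 2, 0, 2, 1, 0};
    const uint64_t V = 3, E = 6;

    uint64_t *d_row_ptr;
    cudaMalloc(&d_row_ptr, sizeof(h_row_ptr));
    cudaMemcpy(d_row_ptr, h_row_ptr, sizeof(h_row_ptr), cudaMemcpyHostToDevice);

    // The edge list stays in pinned host memory mapped into the GPU's address
    // space; the kernel reads it directly instead of migrating pages.
    EdgeT *h_pinned_edges, *d_edges;
    cudaHostAlloc(&h_pinned_edges, E * sizeof(EdgeT), cudaHostAllocMapped);
    memcpy(h_pinned_edges, h_edges, E * sizeof(EdgeT));
    cudaHostGetDevicePointer((void **)&d_edges, h_pinned_edges, 0);

    unsigned long long *d_touched;
    cudaMalloc(&d_touched, sizeof(*d_touched));
    cudaMemset(d_touched, 0, sizeof(*d_touched));

    const int threads = 128;                                   // 4 warps per block
    const int blocks  = (int)((V * 32 + threads - 1) / threads);
    scan_neighbors<<<blocks, threads>>>(d_row_ptr, d_edges, V, d_touched);
    cudaDeviceSynchronize();

    unsigned long long result = 0;
    cudaMemcpy(&result, d_touched, sizeof(result), cudaMemcpyDeviceToHost);
    printf("sum of visited destinations: %llu\n", result);

    cudaFree(d_row_ptr);
    cudaFree(d_touched);
    cudaFreeHost(h_pinned_edges);
    return 0;
}
```

The warp-per-vertex mapping is what makes the host reads align into full 128-byte cachelines, mirroring the coalescing and alignment the abstract attributes EMOGI's bandwidth efficiency to; a page-migration (UVM) scheme would instead move 4 KB or larger chunks for the same sparse accesses.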
