GIST: Distributed Training for Large-Scale Graph Convolutional Networks

02/20/2021
by Cameron R. Wolfe, et al.

The graph convolutional network (GCN) is a go-to solution for machine learning on graphs, but its training is notoriously difficult to scale in terms of both the size of the graph and the number of model parameters. These limitations are in stark contrast to the increasing scale (in data size and model size) of experiments in deep learning research. In this work, we propose GIST, a novel distributed approach that enables efficient training of wide (overparameterized) GCNs on large graphs. GIST is a hybrid layer and graph sampling method, which disjointly partitions the global model into several smaller sub-GCNs that are independently trained across multiple GPUs in parallel. This distributed framework improves model performance and significantly decreases wall-clock training time. GIST seeks to enable large-scale GCN experimentation with the goal of bridging the existing gap in scale between graph machine learning and deep learning.
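To make the partitioning idea concrete, below is a minimal PyTorch sketch (not the authors' released code) of how the hidden units of a wide two-layer GCN could be split into disjoint groups, each defining a narrower sub-GCN whose weight slices are trained independently and then copied back into the global model. The toy graph, two-layer architecture, single local gradient step, and learning rate are illustrative assumptions only; GIST additionally combines this feature-wise split with graph sampling and multi-GPU execution.

    # Minimal sketch (illustrative only) of GIST-style feature-wise partitioning
    # for a two-layer GCN: hidden units are split into disjoint groups, each
    # defining a narrower sub-GCN whose weight slices can be trained
    # independently (e.g., one per GPU) and then copied back.

    import torch

    def normalized_adjacency(adj):
        # Symmetric normalization D^{-1/2} (A + I) D^{-1/2}.
        a_hat = adj + torch.eye(adj.shape[0])
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        return d_inv_sqrt @ a_hat @ d_inv_sqrt

    def gcn_forward(a_norm, x, w1, w2):
        # Two-layer GCN: A * ReLU(A * X * W1) * W2.
        h = torch.relu(a_norm @ x @ w1)
        return a_norm @ h @ w2

    # Global (wide) model weights.
    in_dim, hidden_dim, out_dim, n_sub = 16, 64, 4, 2
    w1 = torch.randn(in_dim, hidden_dim) * 0.1
    w2 = torch.randn(hidden_dim, out_dim) * 0.1

    # Disjointly partition the hidden units among n_sub sub-GCNs.
    perm = torch.randperm(hidden_dim)
    groups = perm.chunk(n_sub)

    sub_models = []
    for idx in groups:
        # Each sub-GCN owns the matching columns of W1 and rows of W2.
        sub_w1 = w1[:, idx].clone().requires_grad_(True)
        sub_w2 = w2[idx, :].clone().requires_grad_(True)
        sub_models.append((idx, sub_w1, sub_w2))

    # Toy graph data (hypothetical, for illustration only).
    adj = (torch.rand(8, 8) > 0.7).float()
    adj = ((adj + adj.t()) > 0).float()
    a_norm = normalized_adjacency(adj)
    x = torch.randn(8, in_dim)
    y = torch.randint(0, out_dim, (8,))

    # Independent local training of each sub-GCN (one SGD step shown).
    # In GIST, each of these loops would run in parallel on its own GPU,
    # typically on sampled subgraphs rather than the full adjacency.
    for idx, sub_w1, sub_w2 in sub_models:
        logits = gcn_forward(a_norm, x, sub_w1, sub_w2)
        loss = torch.nn.functional.cross_entropy(logits, y)
        loss.backward()
        with torch.no_grad():
            sub_w1 -= 0.1 * sub_w1.grad
            sub_w2 -= 0.1 * sub_w2.grad

    # Aggregate: copy the trained slices back into the global model.
    with torch.no_grad():
        for idx, sub_w1, sub_w2 in sub_models:
            w1[:, idx] = sub_w1
            w2[idx, :] = sub_w2

Because the hidden-unit groups are disjoint, each worker holds and updates only a narrow slice of the wide model, which is what allows sub-GCNs to train in parallel without overlapping parameters.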


