
Learn Locally, Correct Globally: A Distributed Algorithm for Training Graph Neural Networks

11/16/2021
by Morteza Ramezani, et al.

Despite the recent success of Graph Neural Networks (GNNs), training GNNs on large graphs remains challenging. The limited resource capacity of existing servers, the dependency between nodes in a graph, and privacy concerns arising from centralized storage and model learning have spurred the need for effective distributed algorithms for GNN training. However, existing distributed GNN training methods impose either excessive communication costs or large memory overheads that hinder their scalability. To overcome these issues, we propose a communication-efficient distributed GNN training technique named Learn Locally, Correct Globally (LLCG). To reduce communication and memory overhead, each local machine in LLCG first trains a GNN on its local data, ignoring the dependency between nodes that reside on different machines, and then sends the locally trained model to the server for periodic model averaging. Ignoring this node dependency, however, can result in significant performance degradation. To address the degradation, we propose applying Global Server Corrections on the server to refine the locally learned models. We rigorously analyze the convergence of distributed methods with periodic model averaging for training GNNs and show that naively applying periodic model averaging while ignoring the dependency between nodes suffers from an irreducible residual error. This residual error can be eliminated by the proposed global corrections, yielding a fast convergence rate. Extensive experiments on real-world datasets show that LLCG significantly improves training efficiency without hurting performance.
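The abstract outlines a three-part loop: local training that ignores cross-machine edges, periodic model averaging on a server, and a server-side correction step on globally sampled data. Below is a minimal, self-contained sketch of that loop in PyTorch. The helper names (`average_models`, `llcg_train`), the use of a plain linear layer in place of a real GNN, and the single-step server correction are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the LLCG training loop described in the abstract. A linear layer
# stands in for a GNN, and full-batch gradient steps stand in for the local
# GNN training; these are assumptions for illustration only.
import copy
import torch
import torch.nn as nn

def average_models(models):
    """Parameter-wise average of the local models (periodic model averaging)."""
    avg = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in avg.named_parameters():
            stacked = torch.stack([dict(m.named_parameters())[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return avg

def llcg_train(global_data, partitions, rounds=50, local_steps=5, lr=0.1):
    """partitions: list of (X, y) tensors, one per machine;
    global_data: samples available to the server for the correction step."""
    in_dim = partitions[0][0].shape[1]
    server_model = nn.Linear(in_dim, 1)
    loss_fn = nn.MSELoss()

    for _ in range(rounds):
        # --- Learn Locally: each machine trains on its own partition only,
        #     ignoring edges that cross machine boundaries ---
        local_models = []
        for X, y in partitions:
            model = copy.deepcopy(server_model)
            opt = torch.optim.SGD(model.parameters(), lr=lr)
            for _ in range(local_steps):
                opt.zero_grad()
                loss_fn(model(X), y).backward()
                opt.step()
            local_models.append(model)

        # --- Periodic model averaging on the server ---
        server_model = average_models(local_models)

        # --- Correct Globally: a server-side step on globally sampled data,
        #     which (per the paper) removes the residual error of naive averaging ---
        Xg, yg = global_data
        opt = torch.optim.SGD(server_model.parameters(), lr=lr)
        opt.zero_grad()
        loss_fn(server_model(Xg), yg).backward()
        opt.step()

    return server_model

if __name__ == "__main__":
    torch.manual_seed(0)
    X = torch.randn(200, 8)
    y = X @ torch.randn(8, 1)
    parts = [(X[:100], y[:100]), (X[100:], y[100:])]
    model = llcg_train((X, y), parts)
    print("final loss:", nn.MSELoss()(model(X), y).item())
```

In the full method, the correction would operate on mini-batches sampled from the global graph, including the cut edges the local machines ignored; here a single gradient step on pooled data stands in for that.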


Related research

10/27/2021 - Node-wise Localization of Graph Neural Networks
05/31/2022 - Distributed Graph Neural Network Training with Periodic Historical Embedding Synchronization
03/02/2023 - Boosting Distributed Full-graph GNN Training with Asynchronous One-bit Communication
08/01/2022 - Locally Supervised Learning with Periodic Global Guidance
03/16/2023 - GLASU: A Communication-Efficient Algorithm for Federated Learning with Vertically Distributed Graph Data
05/10/2022 - SmartSAGE: Training Large-scale Graph Neural Networks using In-Storage Processing Architectures