Scaling R-GCN Training with Graph Summarization

03/05/2022
by Alessandro Generale, et al.

Training of Relational Graph Convolutional Networks (R-GCN) does not scale well with the size of the graph: for real-world graphs, the gradient information that must be stored during training often exceeds the memory available on most GPUs. In this work, we experiment with graph summarization techniques to compress the graph and thereby reduce the memory required. After training the R-GCN on the graph summary, we transfer the learned weights back to the original graph and perform inference on it. We obtain reasonable results on the AIFB, MUTAG and AM datasets. This supports the relevance of graph summarization methods, whose smaller graph representations reduce the computational overhead of training machine learning models on large Knowledge Graphs. However, further experiments are needed to evaluate whether this also holds for very large graphs.
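To make the train-on-summary, transfer, infer-on-original pipeline concrete, below is a minimal sketch in PyTorch Geometric. The colour-based quotient summary, the mean-pooled supernode features, the fixed feature dimension (which is what makes the weights graph-size independent and hence transferable), and all names and sizes are illustrative assumptions, not the authors' exact setup.

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn import RGCNConv


    class RGCN(torch.nn.Module):
        """Two-layer R-GCN; its weights depend only on feature dims and the
        number of relations, never on the number of nodes, so a state dict
        trained on the summary can be reused on the original graph."""
        def __init__(self, in_dim, hidden_dim, num_classes, num_relations):
            super().__init__()
            self.conv1 = RGCNConv(in_dim, hidden_dim, num_relations)
            self.conv2 = RGCNConv(hidden_dim, num_classes, num_relations)

        def forward(self, x, edge_index, edge_type):
            h = F.relu(self.conv1(x, edge_index, edge_type))
            return self.conv2(h, edge_index, edge_type)


    def summarize(x, edge_index, edge_type, colour):
        """Quotient-graph summary: merge all nodes sharing a colour into one
        supernode and keep one typed edge per (colour, colour, relation)
        combination. The colouring itself (e.g. by node attributes) is an
        assumption standing in for the paper's summarization step."""
        colours = colour.unique()
        remap = torch.full((x.size(0),), -1, dtype=torch.long)
        for i, c in enumerate(colours):
            remap[colour == c] = i
        # Supernode features: mean of member-node features (an assumption).
        sx = torch.stack([x[colour == c].mean(dim=0) for c in colours])
        se = remap[edge_index]  # project edge endpoints onto supernodes
        # Deduplicate (src, dst, relation) columns to get the summary edges.
        triples = torch.unique(torch.cat([se, edge_type.unsqueeze(0)]), dim=1)
        return sx, triples[:2], triples[2], remap


    # 1) Train on the (much smaller) summary graph.
    num_relations, num_classes = 45, 4  # illustrative, AIFB-like sizes
    model = RGCN(in_dim=16, hidden_dim=16, num_classes=num_classes,
                 num_relations=num_relations)
    # ... optimiser loop over (sx, s_edge_index, s_edge_type) omitted ...

    # 2) Transfer the weights and run inference on the original graph.
    full_model = RGCN(16, 16, num_classes, num_relations)
    full_model.load_state_dict(model.state_dict())
    full_model.eval()
    # logits = full_model(x, edge_index, edge_type)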


