
Fast Graph Representation Learning with PyTorch Geometric

by   Matthias Fey, et al.
TU Dortmund

We introduce PyTorch Geometric, a library for deep learning on irregularly structured input data such as graphs, point clouds and manifolds, built upon PyTorch. In addition to general graph data structures and processing methods, it contains a variety of recently published methods from the domains of relational learning and 3D data processing. PyTorch Geometric achieves high data throughput by leveraging sparse GPU acceleration, by providing dedicated CUDA kernels and by introducing efficient mini-batch handling for input examples of different size. In this work, we present the library in detail and perform a comprehensive comparative study of the implemented methods in homogeneous evaluation scenarios.





1 Introduction

Graph Neural Networks (GNNs) recently emerged as a powerful approach for representation learning on graphs, point clouds and manifolds (Bronstein et al., 2017; Kipf & Welling, 2017). Similar to the concepts of convolutional and pooling layers on regular domains, GNNs are able to (hierarchically) extract localized embeddings by passing, transforming, and aggregating information (Bronstein et al., 2017; Gilmer et al., 2017; Battaglia et al., 2018; Ying et al., 2018).

However, implementing GNNs is challenging, as high GPU throughput needs to be achieved on highly sparse and irregular data of varying size. Here, we introduce PyTorch Geometric (PyG), a geometric deep learning extension library for PyTorch (Paszke et al., 2017) which achieves high performance by leveraging dedicated CUDA kernels. Following a simple message passing API, it bundles most of the recently proposed convolutional and pooling layers into a single and unified framework. All implemented methods support both CPU and GPU computations and follow an immutable data flow paradigm that enables dynamic changes in graph structures through time. PyG is released under the MIT license and is available on GitHub. It is thoroughly documented and provides accompanying tutorials and examples as a first starting point.

2 Overview

In PyG, we represent a graph by a node feature matrix X ∈ ℝ^(N×F) and a sparse adjacency tuple (I, E), where I ∈ ℕ^(2×E) encodes edge indices in coordinate (COO) format and E ∈ ℝ^(E×D) (optionally) holds D-dimensional edge features. All user-facing APIs, e.g., data loading routines, multi-GPU support, data augmentation or model instantiations, are heavily inspired by PyTorch to keep them as familiar as possible.
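As an illustration of this representation (a plain-Python sketch, not the PyG API, which is tensor-based), a small undirected graph can be written as a feature list plus COO edge indices, where each undirected edge appears as two directed index pairs:

```python
# Sketch of the graph representation described above (not PyG API):
# node feature matrix + edge indices in coordinate (COO) format.

# 3 nodes, 2 features per node
x = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]

# COO edge indices: row 0 holds source nodes, row 1 holds target nodes.
# The undirected edges (0,1) and (1,2) are stored as two directed pairs each.
edge_index = [[0, 1, 1, 2],   # sources
              [1, 0, 2, 1]]   # targets

# optional edge features: one D-dimensional vector per directed edge
edge_attr = [[0.5], [0.5], [0.2], [0.2]]

num_nodes = len(x)
num_edges = len(edge_index[0])
# every edge endpoint must be a valid node index
assert all(0 <= v < num_nodes for row in edge_index for v in row)
```

Storing edges as an index list rather than a dense N×N matrix is what keeps memory linear in the number of edges for sparse graphs.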

Neighborhood Aggregation.

Generalizing the convolutional operator to irregular domains is typically expressed as a neighborhood aggregation or message passing scheme (Gilmer et al., 2017):

x_i^(k) = γ^(k)( x_i^(k−1), □_{j∈N(i)} φ^(k)( x_i^(k−1), x_j^(k−1), e_{i,j} ) ),

where □ denotes a differentiable, permutation-invariant function, e.g., sum, mean or max, and γ and φ denote differentiable functions, e.g., MLPs.
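As a minimal sketch of this scheme (pure Python, with hypothetical choices: φ forwards the neighbor feature, □ is a sum, and γ adds the aggregated message to the previous state):

```python
# One message passing step on scalar node features:
# x_i' = gamma(x_i, sum_{j in N(i)} phi(x_i, x_j))

def message_passing_step(x, edge_index):
    num_nodes = len(x)
    aggregated = [0.0] * num_nodes
    sources, targets = edge_index
    for j, i in zip(sources, targets):
        # phi: the message from source node j to target node i
        # (here simply the neighbor feature itself)
        aggregated[i] += x[j]
    # gamma: combine the previous state with the aggregated messages
    return [xi + agg for xi, agg in zip(x, aggregated)]

x = [1.0, 2.0, 3.0]
edge_index = [[0, 1, 1, 2],  # sources
              [1, 0, 2, 1]]  # targets
print(message_passing_step(x, edge_index))  # [3.0, 6.0, 5.0]
```

In an actual GNN layer, φ and γ would be learned functions (e.g., MLPs) operating on feature vectors, and the loop would be replaced by batched gather/scatter operations on the GPU.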

Figure 1: Computation scheme of a GNN layer by leveraging gather and scatter methods based on edge indices I, hence alternating between node-parallel space and edge-parallel space.

In practice, this can be achieved by gathering and scattering of node features and making use of broadcasting for element-wise computation of φ and γ, as visualized in Figure 1. Although working on irregularly structured input, this scheme can be heavily accelerated by the GPU.

We provide the user with a general MessagePassing interface to allow for rapid and clean prototyping of new research ideas. To use it, users only need to define the methods φ, i.e., message, and γ, i.e., update, as well as choosing an aggregation scheme □. For implementing φ, node features are automatically mapped to the respective source and target nodes.

Almost all recently proposed neighborhood aggregation functions can be lifted to this interface, including (but not limited to) the methods already integrated into PyG: For learning on arbitrary graphs we have already implemented GCN (Kipf & Welling, 2017) and its simplified version (SGC) from Wu et al. (2019), the spectral Chebyshev and ARMA filter convolutions (Defferrard et al., 2016; Bianchi et al., 2019), GraphSAGE (Hamilton et al., 2017), the attention-based operators GAT (Veličković et al., 2018) and AGNN (Thekumparampil et al., 2018), the Graph Isomorphism Network (GIN) from Xu et al. (2019), and the Approximate Personalized Propagation of Neural Predictions (APPNP) operator (Klicpera et al., 2019). For learning on point clouds, manifolds and graphs with multi-dimensional edge features, we provide the relational GCN operator from Schlichtkrull et al. (2018), PointNet++ (Qi et al., 2017), PointCNN (Li et al., 2018), and the continuous kernel-based methods MPNN (Gilmer et al., 2017), MoNet (Monti et al., 2017), SplineCNN (Fey et al., 2018) and the edge convolution operator (EdgeCNN) from Wang et al. (2018b).

Global Pooling.

PyG also supports graph-level outputs as opposed to node-level outputs by providing a variety of readout functions such as global add, mean or max pooling. We additionally offer more sophisticated methods such as set-to-set (Vinyals et al., 2016), sort pooling (Zhang et al., 2018) or the global soft attention layer from Li et al. (2016).

Hierarchical Pooling.

To further extract hierarchical information and to allow deeper GNN models, various pooling approaches can be applied in a spatial or data-dependent manner. We currently provide implementation examples for Graclus (Dhillon et al., 2007; Fagginger Auer & Bisseling, 2011) and voxel grid pooling (Simonovsky & Komodakis, 2017), the iterative farthest point sampling algorithm (Qi et al., 2017) followed by k-NN or query ball graph generation (Qi et al., 2017; Wang et al., 2018b), and differentiable pooling mechanisms such as DiffPool (Ying et al., 2018) and top-k pooling (Gao & Ji, 2018; Cangea et al., 2018).
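The iterative farthest point sampling algorithm mentioned above can be sketched in a few lines (a pure-Python illustration on 2D points, assuming Euclidean distance and a deterministic start at the first point; PyG's version operates on tensors):

```python
# Iterative farthest point sampling (FPS): greedily pick the point
# farthest from the already-selected set, yielding a well-spread subset.

def farthest_point_sampling(points, k):
    def dist2(p, q):
        # squared Euclidean distance (monotone in distance, so the
        # square root can be skipped for argmax purposes)
        return sum((a - b) ** 2 for a, b in zip(p, q))

    selected = [0]  # deterministic start: the first point
    # distance of every point to its nearest already-selected point
    min_d = [dist2(p, points[0]) for p in points]
    while len(selected) < k:
        nxt = max(range(len(points)), key=lambda i: min_d[i])
        selected.append(nxt)
        min_d = [min(d, dist2(p, points[nxt])) for d, p in zip(min_d, points)]
    return selected

pts = [(0, 0), (0.1, 0), (1, 1), (0, 1)]
print(farthest_point_sampling(pts, 2))  # [0, 2]
```

The sampled subset then serves as the coarser node set, on which a new graph is built via k-NN or query ball search.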

Mini-batch Handling.

Our framework supports batches of multiple graph instances (of potentially different size) by automatically creating a single (sparse) block-diagonal adjacency matrix and concatenating feature matrices in the node dimension. Therefore, neighborhood aggregation methods can be applied without modification, since no messages are exchanged between disconnected graphs. In addition, an automatically generated assignment vector ensures that node-level information is not aggregated across graphs, e.g., when executing global aggregation operators.
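The batching described above can be sketched as follows (a pure-Python illustration of the idea; the `collate` helper and its scalar features are hypothetical, while PyG itself performs this on tensors in its data loaders):

```python
# Mini-batching of graphs: concatenate node features, offset edge indices
# so the combined adjacency is block-diagonal, and build a batch
# assignment vector mapping each node to its originating graph.

def collate(graphs):
    x, sources, targets, batch = [], [], [], []
    offset = 0
    for graph_id, (feats, edge_index) in enumerate(graphs):
        x.extend(feats)
        # shifting indices by the running node count makes the merged
        # edge list block-diagonal: no edges cross graph boundaries
        sources.extend(s + offset for s in edge_index[0])
        targets.extend(t + offset for t in edge_index[1])
        batch.extend([graph_id] * len(feats))
        offset += len(feats)
    return x, [sources, targets], batch

g1 = ([1.0, 2.0], [[0, 1], [1, 0]])        # 2-node graph
g2 = ([3.0, 4.0, 5.0], [[0, 2], [2, 0]])   # 3-node graph
x, edge_index, batch = collate([g1, g2])
print(edge_index)  # [[0, 1, 2, 4], [1, 0, 4, 2]]
print(batch)       # [0, 0, 1, 1, 1]
```

Because no edge connects nodes of different graphs, a message passing layer run on the merged graph behaves identically to running it on each graph separately.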

Processing of Datasets.

We provide a consistent data format and an easy-to-use interface for the creation and processing of datasets, both for large datasets and for datasets that can be kept in memory during training. In order to create new datasets, users just need to read/download their data and convert it to the PyG data format in the respective process method. In addition, datasets can be modified by the use of transforms, which take in separate graphs and transform them, e.g., for data augmentation, for enhancing node features with synthetic structural graph properties, to automatically generate graphs from point clouds or to sample point clouds from meshes.

PyG already supports a lot of common benchmark datasets often found in literature which are automatically downloaded and processed on first instantiation. In detail, we provide over 60 graph kernel benchmark datasets (Kersting et al., 2016), e.g., PROTEINS or IMDB-BINARY, the citation graphs Cora, CiteSeer, PubMed and Cora-Full (Sen et al., 2008; Bojchevski & Günnemann, 2018), the Coauthor CS/Physics and Amazon Computers/Photo datasets from Shchur et al. (2018), the molecule datasets QM7b (Montavon et al., 2013) and QM9 (Ramakrishnan et al., 2014), and the protein-protein interaction graphs from Hamilton et al. (2017). In addition, we provide embedded datasets like MNIST superpixels (Monti et al., 2017), FAUST (Bogo et al., 2014), ModelNet10/40 (Wu et al., 2015), ShapeNet (Chang et al., 2015), COMA (Ranjan et al., 2018), and the PCPNet dataset from Guerrero et al. (2018).

3 Empirical Evaluation

We evaluate the correctness of the implemented methods by performing a comprehensive comparative study in homogeneous evaluation scenarios. Descriptions and statistics of all used datasets can be found in Appendix A. For all experiments, we tried to follow the hyperparameter setup of the respective papers as closely as possible. The individual experimental setups can be derived and all experiments can be replicated from the code provided at our GitHub repository.


Semi-supervised Node Classification.

Method Cora CiteSeer PubMed
Fixed Random Fixed Random Fixed Random
Cheby 81.4 ± 0.7 77.8 ± 2.2 70.2 ± 1.0 67.7 ± 1.7 78.4 ± 0.4 75.8 ± 2.2
GCN 81.5 ± 0.6 79.4 ± 1.9 71.1 ± 0.7 68.1 ± 1.7 79.0 ± 0.6 77.4 ± 2.4
GAT 83.1 ± 0.4 81.0 ± 1.4 70.8 ± 0.5 69.2 ± 1.9 78.5 ± 0.3 78.3 ± 2.3
SGC 81.7 ± 0.1 80.2 ± 1.6 71.3 ± 0.2 68.7 ± 1.6 78.9 ± 0.1 76.5 ± 2.4
ARMA 82.8 ± 0.6 80.7 ± 1.4 72.3 ± 1.1 68.9 ± 1.6 78.8 ± 0.3 77.7 ± 2.6
APPNP 83.3 ± 0.5 82.2 ± 1.5 71.8 ± 0.5 70.0 ± 1.4 80.1 ± 0.2 79.4 ± 2.2
Table 1: Semi-supervised node classification with both fixed and random splits.

We perform semi-supervised node classification (cf. Table 1) by reporting average accuracies of (a) 100 runs for the fixed train/val/test split from Kipf & Welling (2017), and (b) 100 runs of randomly initialized train/val/test splits as suggested by Shchur et al. (2018), where we additionally ensure uniform class distribution on the train split.
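A minimal sketch of the random-split procedure with a uniform class distribution on the train set (pure Python; the `uniform_train_split` helper is hypothetical, illustrating the idea of sampling a fixed number of labels per class, as in the 20-labels-per-class planetoid setup):

```python
# Build a random training split that contains exactly `per_class`
# node indices for every class, ensuring a uniform class distribution.

import random

def uniform_train_split(labels, per_class, seed=0):
    rng = random.Random(seed)  # seeded for reproducible splits
    by_class = {}
    for idx, y in enumerate(labels):
        by_class.setdefault(y, []).append(idx)
    train = []
    for y, idxs in sorted(by_class.items()):
        # sample the same number of training nodes from every class
        train.extend(rng.sample(idxs, per_class))
    return sorted(train)

labels = [0, 0, 0, 1, 1, 1, 2, 2, 2]
split = uniform_train_split(labels, per_class=2)
counts = {y: sum(1 for i in split if labels[i] == y) for y in set(labels)}
print(counts)  # {0: 2, 1: 2, 2: 2}
```

The remaining nodes would then be partitioned into validation and test sets; averaging over many such seeds gives the reported random-split accuracies.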

Nearly all experiments show a high reproducibility of the results reported in the respective papers. However, test performance is worse for all models when using random data splits. Among the experiments, the APPNP operator (Klicpera et al., 2019) generally performs best, with ARMA (Bianchi et al., 2019), SGC (Wu et al., 2019), GCN (Kipf & Welling, 2017) and GAT (Veličković et al., 2018) following closely behind.

Graph Classification.

Method MUTAG PROTEINS COLLAB IMDB-BINARY REDDIT-BINARY

Flat GNNs:
GCN 74.6 ± 7.7 73.1 ± 3.8 80.6 ± 2.1 72.6 ± 4.5 89.3 ± 3.3
SAGE 74.9 ± 8.7 73.8 ± 3.6 79.7 ± 1.7 72.4 ± 3.6 89.1 ± 1.9
GIN-0 85.7 ± 7.7 72.1 ± 5.1 79.3 ± 2.7 72.8 ± 4.5 89.6 ± 2.6
GIN-ε 83.4 ± 7.5 72.6 ± 4.9 79.8 ± 2.4 72.1 ± 5.1 90.3 ± 3.0

Hierarchical pooling:
Graclus 77.1 ± 7.2 73.0 ± 4.1 79.6 ± 2.0 72.2 ± 4.2 88.8 ± 3.2
Top-k 76.3 ± 7.5 72.7 ± 4.1 79.7 ± 2.2 72.5 ± 4.6 87.6 ± 2.4
DiffPool 85.0 ± 10.3 75.1 ± 3.5 78.9 ± 2.3 72.6 ± 3.9 92.1 ± 2.6

Global pooling:
SAGE w/o JK 73.7 ± 7.8 72.7 ± 3.6 79.6 ± 2.4 72.1 ± 4.4 87.9 ± 1.9
GlobalAttention 74.6 ± 8.0 72.5 ± 4.5 79.6 ± 2.2 72.3 ± 3.8 87.4 ± 2.5
Set2Set 73.7 ± 6.9 73.6 ± 3.7 79.6 ± 2.3 72.2 ± 4.2 89.6 ± 2.4
SortPool 77.3 ± 8.9 72.4 ± 4.1 77.7 ± 3.1 72.4 ± 3.8 74.9 ± 6.7
Table 2: Graph classification.

We report the average accuracy of 10-fold cross validation on a number of common benchmark datasets (cf. Table 2), where we randomly sample a training fold to serve as a validation set. We only make use of discrete node features. In case they are not given, we use one-hot encodings of node degrees as feature input. For all experiments, we use the global mean operator to obtain graph-level outputs. Inspired by the Jumping Knowledge framework (Xu et al., 2018), we compute graph-level outputs after each convolutional layer and combine them via concatenation. For evaluating the (global) pooling operators, we use the GraphSAGE operator as our baseline. We omit Jumping Knowledge when comparing global pooling operators, and hence report an additional baseline based on global mean pooling. For each dataset, we tune (1) the number of hidden units and (2) the number of layers with respect to the validation set.
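The readout step can be sketched as follows (a pure-Python illustration on scalar node features; the helper names are hypothetical, and PyG performs this with scatter operations on tensors): global mean pooling collapses each graph's nodes using the batch assignment vector, and Jumping-Knowledge-style concatenation stacks the per-layer readouts.

```python
# Graph-level readout: mean over the nodes of each graph in a batch,
# then concatenate the readouts taken after each convolutional layer.

def global_mean_pool(x, batch):
    num_graphs = max(batch) + 1
    sums = [0.0] * num_graphs
    counts = [0] * num_graphs
    for xi, b in zip(x, batch):
        sums[b] += xi
        counts[b] += 1
    return [s / c for s, c in zip(sums, counts)]

# per-layer node features for a batch of 2 graphs
batch = [0, 0, 1]            # node -> graph assignment
layer1 = [1.0, 3.0, 4.0]     # features after conv layer 1
layer2 = [2.0, 4.0, 8.0]     # features after conv layer 2

# readout after each layer, combined by concatenation per graph
readout = [global_mean_pool(layer1, batch), global_mean_pool(layer2, batch)]
graph_repr = [[r[g] for r in readout] for g in range(2)]
print(graph_repr)  # [[2.0, 3.0], [4.0, 8.0]]
```

A final classifier (e.g., an MLP) would then map each concatenated graph representation to class logits.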

Except for DiffPool (Ying et al., 2018), (global) pooling operators do not perform as beneficially as expected compared to their respective (flat) counterparts, especially when baselines are enhanced by Jumping Knowledge (Xu et al., 2018). However, the potential of more sophisticated approaches may not be well-reflected on these simple benchmark tasks (Cai & Wang, 2018). Among the flat GNN approaches, the GIN layer (Xu et al., 2019) generally achieves the best results.

Method ModelNet10
MPNN 92.07
PointNet++ 92.51
EdgeCNN 92.62
SplineCNN 92.65
PointCNN 93.28
Table 3: Point cloud classification.

Dataset Epochs Method DGL PyG
Cora 200 GCN 4.2s 0.7s
Cora 200 GAT 33.4s 2.2s
CiteSeer 200 GCN 3.9s 0.8s
CiteSeer 200 GAT 28.9s 2.4s
PubMed 200 GCN 12.7s 2.0s
PubMed 200 GAT 87.7s 12.3s
MUTAG 50 R-GCN 3.3s 2.4s
Table 4: Training runtime comparison.

Point Cloud Classification.

We evaluate various point cloud methods on ModelNet10 (Wu et al., 2015) where we uniformly sample 1,024 points from mesh surfaces based on face area (cf. Table 3). As hierarchical pooling layers, we use the iterative farthest point sampling algorithm followed by a new graph generation based on a larger query ball (PointNet++ (Qi et al., 2017), MPNN (Gilmer et al., 2017) and SplineCNN (Fey et al., 2018)) or based on a fixed number of nearest neighbors (EdgeCNN (Wang et al., 2018b) and PointCNN (Li et al., 2018)). We have taken care to use approximately the same number of parameters for each model.

All approaches perform nearly identically with PointCNN (Li et al., 2018) taking a slight lead. We attribute this to the fact that all operators are based on similar principles and might have the same expressive power for the given task.

Runtime Experiments.

We conduct several experiments on a number of dataset-model pairs to report the runtime of a whole training procedure obtained on a single NVIDIA GTX 1080 Ti (cf. Table 4). As the results show, PyG is very fast despite working on sparse data. Compared to the Deep Graph Library (DGL) 0.1.3 (Wang et al., 2018a), PyG trains models up to 15 times faster.

4 Roadmap and Conclusion

We presented the PyTorch Geometric framework for fast representation learning on graphs, point clouds and manifolds. We are actively working to further integrate existing methods and plan to quickly integrate future methods into our framework. All researchers and software engineers are invited to collaborate with us in extending its scope.


This work has been supported by the German Research Foundation (DFG) within the Collaborative Research Center SFB 876, Providing Information by Resource-Constrained Analysis, projects A6 and B2. We thank Moritz Ludolph for his contribution to PyTorch Geometric and Christopher Morris for proofreading and helpful advice.


Appendix A Datasets

Dataset Graphs Nodes Edges Features Classes Label rate
Cora 1 2,708 5,278 1,433 7 0.052
CiteSeer 1 3,327 4,552 3,703 6 0.036
PubMed 1 19,717 44,324 500 3 0.003
MUTAG 188 17.93 19.79 7 2 0.800
PROTEINS 1,113 39.06 72.82 3 2 0.800
COLLAB 5,000 74.49 2,457.22 – 3 0.800
IMDB-BINARY 1,000 19.77 96.53 – 2 0.800
REDDIT-BINARY 2,000 429.63 497.75 – 2 0.800
ModelNet10 4,899 1,024 19,440 – 10 0.815
Table 5: Statistics of the datasets used in the experiments.

We give detailed descriptions and statistics (cf. Table 5) of the datasets used in our experiments:

Citation Networks.

In the citation network datasets Cora, CiteSeer and PubMed, nodes represent documents and edges represent citation links. The networks contain bag-of-words feature vectors for each document. We treat the citation links as (undirected) edges. For training, we use 20 labels per class.

Social Network Datasets.

COLLAB is derived from three public scientific collaboration datasets. Each graph corresponds to an ego-network of different researchers from each field, with the task to classify each graph to the field the corresponding researcher belongs to. IMDB-BINARY is a movie collaboration dataset where each graph corresponds to an ego-network of actors/actresses. An edge is drawn between two actors/actresses if they appear in the same movie. The task is to classify the genre of the graph. REDDIT-BINARY is an online discussion dataset where each graph corresponds to a thread. An edge is drawn between two users if one of them responded to another's comment. The task is to classify each graph to the community/subreddit it belongs to.

Bioinformatic Datasets.

MUTAG is a dataset consisting of mutagenic aromatic and heteroaromatic nitro compounds. PROTEINS holds a set of proteins represented by graphs. Nodes represent secondary structure elements (SSEs), which are connected whenever they are neighbors either in the amino acid sequence or in 3D space.

3D Object Datasets.

ModelNet10 is an orientation-aligned dataset of CAD models. Each model corresponds to exactly one out of 10 object categories. Categories were chosen based on a list of the most common object categories in the world.