Efficient Graph Deep Learning in TensorFlow with tf_geometric

01/27/2021 · Jun Hu, et al. · Hefei University of Technology

We introduce tf_geometric, an efficient and friendly library for graph deep learning, which is compatible with both TensorFlow 1.x and 2.x. tf_geometric provides kernel libraries for building Graph Neural Networks (GNNs) as well as implementations of popular GNNs. The kernel libraries consist of infrastructures for building efficient GNNs, including graph data structures, graph map-reduce framework, graph mini-batch strategy, etc. These infrastructures enable tf_geometric to support single-graph computation, multi-graph computation, graph mini-batch, distributed training, etc.; therefore, tf_geometric can be used for a variety of graph deep learning tasks, such as transductive node classification, inductive node classification, link prediction, and graph classification. Based on the kernel libraries, tf_geometric implements a variety of popular GNN models for different tasks. To facilitate the implementation of GNNs, tf_geometric also provides some other libraries for dataset management, graph sampling, etc. Different from existing popular GNN libraries, tf_geometric provides not only Object-Oriented Programming (OOP) APIs, but also Functional APIs, which enable tf_geometric to handle advanced graph deep learning tasks such as graph meta-learning. The APIs of tf_geometric are friendly, and they are suitable for both beginners and experts. In this paper, we first present an overview of tf_geometric's framework. Then, we conduct experiments on some benchmark datasets and report the performance of several popular GNN models implemented by tf_geometric.


1. Introduction

The graph is a powerful data structure for modeling relational data, and it is widely used in real-world applications. In recent years, Graph Neural Networks (GNNs) have emerged as powerful tools for deep learning on graphs, which aims to understand the semantics of graph data. GNNs have been successfully applied to a variety of tasks in different fields, such as recommendation systems (Fan et al., 2019; Wang et al., 2020a; Tan et al., 2020), question answering systems (Li et al., 2019; Fang et al., 2020; Hu et al., 2019), neural machine translation (Bastings et al., 2017; Marcheggiani et al., 2018), traffic prediction (Wang et al., 2020b; Cui et al., 2020), drug discovery and design (Fout et al., 2017; Gaudelet et al., 2020), diagnosis prediction (Li et al., 2020; Sampathkumar, 2021), and physical simulation (Pfaff et al., 2021; Sanchez-Gonzalez et al., 2020).

Figure 1. An Example of Message Aggregation
(a) Common Graph Pooling
(b) Hierarchical Graph Pooling
Figure 2. Examples of Graph Pooling

Due to the properties of graph data, such as sparsity and irregularity, it is challenging to implement efficient and friendly GNN libraries. The most challenging problem in implementing GNNs is the aggregation of graph data. Aggregation is the fundamental operation for most GNNs, and there are two main types of aggregation in GNNs: message aggregation and graph pooling. (1) Message aggregation, which is also called message passing, aims to aggregate the multiple messages passed between a node and its context and reduce them into one element. Fig. 1 shows an example of message aggregation in Graph Convolutional Networks (GCNs). In the example, the context of a node $v$ consists of its neighbor nodes and itself. The context nodes pass multiple messages to $v$, and these messages are reduced to a feature vector, which is then used as the high-order representation of node $v$. (2) Graph pooling aims to aggregate elements in graphs or clusters and reduce them into high-order graph-level or cluster-level representations. Fig. 2(a) shows an example of graph pooling for learning graph-level representations, where the representations of all the nodes in the graph are aggregated to generate the representation of the graph. In some complex graph pooling models, the graph pooling layers are used to obtain a pooled graph rather than a graph representation vector (Ying et al., 2018; Lee et al., 2019; Ranjan et al., 2020). For example, as shown in the hierarchical graph pooling example in Fig. 2(b), the graph pooling operation aggregates the nodes in each cluster and reduces them to a single node in the pooled graph. These two types of aggregation allow researchers and engineers to design complex GNNs, and thus proper solutions for aggregation on graphs are imperative for building elegant GNN models. However, it is non-trivial to design proper aggregation solutions for sparse and irregular graph data. The most intuitive solutions, such as padding and masking, usually suffer from memory and efficiency problems, whereas many efficient solutions, such as sparse matrix multiplication, require complex tricks to accomplish advanced aggregations. Moreover, most efficient solutions require users to organize the graph data with specific data structures.

We develop tf_geometric, an efficient and friendly GNN library for deep learning on sparse and irregular graph data, which is compatible with both TensorFlow 1.x and 2.x. tf_geometric provides a unified solution for GNNs, which mainly consists of kernel libraries for building graph neural networks and implementations of popular GNNs. The kernel libraries contain infrastructures for building efficient GNNs, including graph data structures, a graph map-reduce framework, a graph mini-batch strategy, etc. In particular, the graph data structure and map-reduce framework provide an elegant and efficient way to perform aggregation on graphs. The kernel libraries enable tf_geometric to support single-graph computation, multi-graph computation, graph mini-batch, distributed training, etc.; therefore, tf_geometric can be used for a variety of graph deep learning tasks, such as transductive node classification, inductive node classification, link prediction, and graph classification. Based on the kernel libraries, a variety of popular GNN models for different tasks are implemented as APIs of tf_geometric. To facilitate the implementation of GNNs, tf_geometric also provides other libraries for dataset management, graph sampling, etc. Different from existing popular GNN libraries, tf_geometric provides not only OOP APIs but also Functional APIs, which enable tf_geometric to handle advanced graph deep learning tasks such as graph meta-learning. The APIs of tf_geometric are friendly, and they are suitable for both beginners and experts. tf_geometric is available on GitHub (https://github.com/CrawlScript/tf_geometric). The features of tf_geometric are thoroughly documented (https://tf-geometric.readthedocs.io), and a collection of accompanying tutorials and examples is also provided in the documentation.

2. Overview

tf_geometric mainly consists of kernel libraries for building graph neural networks and implementations of popular GNNs. In addition, other libraries, such as those for dataset management and graph sampling, are provided to facilitate the implementation of GNNs. In this section, we provide an overview of the different parts of tf_geometric.

Figure 3. The Framework of tf_geometric
Figure 4. tfg.Graph and tfg.BatchGraph

2.1. Kernel Libraries

The framework of the kernel libraries is shown in Fig. 3. The kernel libraries consist of several fundamental components that serve as infrastructures for building efficient GNNs, including graph data structures, a graph map-reduce framework, a graph mini-batch strategy, etc. These infrastructures enable tf_geometric to support single-graph computation, multi-graph computation, graph mini-batch, distributed training, etc., and therefore tf_geometric can be used for a variety of graph deep learning tasks, such as transductive node classification, inductive node classification, link prediction, and graph classification. In this section, we introduce these infrastructures in detail.

2.1.1. Graph Data Structure

tf_geometric has two core graph data structures (classes): Graph and BatchGraph, which are used to model a single graph and a batch of graphs, respectively. In this section, we first introduce some notations for graph data in graph deep learning and then show how tf_geometric organizes graph data with its graph data structures.

Generally, a graph can be represented as $G = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V}$ is the set of nodes and $\mathcal{E}$ denotes the set of edges. In graph deep learning, the graph is usually presented as $G = (X, A)$, where $X \in \mathbb{R}^{N \times d}$ and $A \in \mathbb{R}^{N \times N}$ are the node feature matrix and the adjacency matrix, respectively. The node feature matrix contains the features of all $N$ nodes in the graph, and its $i$-th row $X_i$ represents the $d$-dimensional feature vector of the $i$-th node. The adjacency matrix contains the edge information, where a positive entry $A_{ij}$ indicates that there exists an edge from the $i$-th node to the $j$-th node with weight $A_{ij}$. In some tasks, such as node classification and graph classification, node label or graph label information is also required. The label information is denoted as $Y$, and the graph can then be further represented as $G = (X, A, Y)$. Usually, $Y$ is presented as a list of integer label indices or a matrix of encoded label vectors.

The Graph class is used to model a single graph. A graph $G = (X, A, Y)$ can be modeled as a Graph object with attributes x, (edge_index, edge_weight), and y, which correspond to $X$, $A$, and $Y$, respectively. The node feature matrix $X$ and the label information $Y$ are modeled as dense tensors x and y, respectively, while the adjacency matrix $A$ is presented as a sparse matrix in coordinate (COO) format, which consists of the indices of its entries (edge_index) and the values of its entries (edge_weight). It is known that many aggregation operations in GNNs can benefit a lot from the COO-format sparse adjacency matrix. Moreover, COO-format data is friendly to many advanced aggregation operations of TensorFlow, such as the tf.math.segment_* and tf.math.unsorted_segment_* operations, which are important for building efficient GNN models. Note that either the input data or intermediate output tensors can be used to construct Graph objects. In particular, since the construction of Graph objects does not involve any deep-copy operations, it is a differentiable operation that can be applied to any intermediate output tensors that require gradients. For each Graph object, the GNN models in tf_geometric can take advantage of the parallelism capabilities of deep learning frameworks to efficiently process the information in the graph. However, due to the limitations of most deep learning frameworks, it is difficult to process information in different Graph objects simultaneously. Therefore, for tasks dealing with multiple graphs, such as inductive node classification and graph classification, tf_geometric introduces the BatchGraph class, which allows GNN models to process information from multiple graphs in parallel.
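Before turning to BatchGraph, the snippet below shows how a small Graph object can be constructed from a node feature matrix and a COO-format edge list. This is a minimal sketch that assumes the tfg.Graph constructor accepts x, edge_index, y, and edge_weight as in the project README; argument names and defaults may differ across versions.

import numpy as np
import tf_geometric as tfg

# 4 nodes with 16-dimensional features
x = np.random.randn(4, 16).astype(np.float32)
# adjacency matrix in COO format: indices of entries (edge_index) and their values (edge_weight)
edge_index = np.array([[0, 0, 1, 3],   # source node indices
                       [1, 2, 2, 1]])  # target node indices
edge_weight = np.array([1.0, 1.0, 1.0, 1.0], dtype=np.float32)
y = np.array([0, 1, 1, 0])             # node labels

graph = tfg.Graph(x=x, edge_index=edge_index, y=y, edge_weight=edge_weight)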

A BatchGraph object stores the information of multiple Graph objects, and it enables the parallel processing of data from different graphs by virtualizing multiple graphs as a single graph. The BatchGraph class is a subclass of the Graph class; in addition to the attributes x, edge_index, edge_weight, and y inherited from the superclass Graph, it holds a node_graph_index attribute, a list of integers indicating which graph each node belongs to. tf_geometric first leverages a reindexing trick to reassign indices to nodes from different graphs such that each node has a unique index in the BatchGraph. The left part of Fig. 4 shows an example of combining multiple Graph objects into a BatchGraph object. Each node of each graph is reindexed by adding an offset value, which is the number of nodes in all previous graphs. In the example, the offsets of the second and third graphs are 4 and 6, respectively. Therefore, the first node (index 0) in the second graph is reindexed as 0 + 4 = 4, and the second node (index 1) in the third graph is reassigned index 1 + 6 = 7. After reindexing, attributes of the graphs such as edge_index and y are adjusted and combined based on the reassigned node indices. For example, the edge_index tensors of the given graphs are first rewritten with the reindexed node indices and then stacked together to form a new edge_index for the BatchGraph. Note that converting Graph objects into a BatchGraph object modifies neither node features nor the connectivity between nodes. Thus, for most GNNs, applying them to multiple graphs iteratively is equivalent to applying them to the corresponding BatchGraph. As a result, applying GNN operations to a BatchGraph automatically enables the parallel processing of multiple graphs, which brings a dramatic improvement in computational efficiency. As shown in the right part of Fig. 4, the GNNs learn high-order features for nodes from different graphs, which can be further used for different multi-graph tasks. For example, the learned node features can be directly used for inductive node classification tasks. Moreover, the learned node features can be aggregated into graph representations (graph pooling) based on the node_graph_index of the BatchGraph, which can be used for graph-level tasks such as graph classification.
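The reindexing trick can be illustrated in a few lines of NumPy. The sketch below is an illustration of the combination step described above, not the library's internal implementation; the helper name combine_graphs is hypothetical.

import numpy as np

def combine_graphs(xs, edge_indices):
    # offset of each graph = number of nodes in all previous graphs
    num_nodes = [x.shape[0] for x in xs]
    offsets = np.cumsum([0] + num_nodes[:-1])
    # node features are simply stacked
    x = np.concatenate(xs, axis=0)
    # node indices inside each edge_index are shifted by that graph's offset
    edge_index = np.concatenate(
        [e + offset for e, offset in zip(edge_indices, offsets)], axis=1)
    # node_graph_index records which graph each node belongs to
    node_graph_index = np.concatenate(
        [np.full(n, i) for i, n in enumerate(num_nodes)])
    return x, edge_index, node_graph_index

With node_graph_index available, graph-level pooling reduces to a segment operation keyed by the graph index.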

(a) Map-Reduce Workflow for Normalized Attention Scores in GAT
(b) Reduce by Key
Figure 5. Graph Map-Reduce Framework

2.1.2. Graph Map-Reduce Framework

Many complex GNNs can be considered as a combination of simple map-reduce operations on graphs. Usually, map and reduce operations correspond to transformation and aggregation operations on graphs, respectively. The tf_geometric kernel provides a graph map-reduce framework, including basic and advanced map and reduce operations on graphs as well as graph map-reduce workflows.

Fig. 5(a) shows an example of a map-reduce workflow, which computes the normalized attention scores for a Graph Attention Network (GAT). In the example, each colored edge contains the feature vectors of a node and one of its neighbor nodes. The first map operation transforms a batch of edges into unnormalized attention scores in parallel. Map operations do not involve interaction between elements, and most of them can be implemented with general TensorFlow operations; here, the first map operation is implemented with a TensorFlow dense layer. GAT requires the attention scores to be normalized via softmax. To achieve this, a reduce operation is introduced to aggregate the unnormalized attention scores of the neighbors and obtain the denominator of the softmax normalization. Details of the reducer are shown in Fig. 5(b). The reducer aggregates information for each group, and the reduce key indicates which group each element belongs to. In this case, the reduce key is the node index, and the unnormalized attention scores of the neighbor nodes of a given node share the same reduce key. Different from common deep learning models for images and text, where most reduce operations are designed for tensors with regular shapes, graph deep learning models usually require the reducer to deal with irregular data. Thus, many general reduce operations, such as tf.reduce_sum and tf.reduce_max, cannot be used as reducers for GNNs. To address this problem, tf_geometric takes advantage of several advanced TensorFlow APIs to build efficient reducers for irregular graph data. In the example, the sum reducer is implemented with the tf.math.unsorted_segment_sum API, and it can efficiently aggregate the unnormalized attention scores from different numbers of neighbors for each node. The last map operation is simpler than the aforementioned operations, and it can be implemented with a simple TensorFlow division.
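This workflow can be sketched directly with TensorFlow segment operations. The edge_softmax function below is an illustration of the map-reduce pattern described above, not the exact tf_geometric implementation; logits holds one unnormalized attention score per edge, and target_index is the reduce key (the target node index of each edge).

import tensorflow as tf

def edge_softmax(logits, target_index, num_nodes):
    # map: subtract the per-node maximum for numerical stability, then exponentiate
    max_per_node = tf.math.unsorted_segment_max(logits, target_index, num_nodes)
    exp_logits = tf.exp(logits - tf.gather(max_per_node, target_index))
    # reduce by key: sum the scores of all edges sharing the same target node
    denom = tf.math.unsorted_segment_sum(exp_logits, target_index, num_nodes)
    # map: divide each edge score by the aggregated denominator of its target node
    return exp_logits / tf.gather(denom, target_index)

logits = tf.constant([0.3, -1.2, 0.8, 0.5])  # one unnormalized score per edge
target_index = tf.constant([1, 2, 2, 1])     # reduce key: target node of each edge
normalized = edge_softmax(logits, target_index, num_nodes=4)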

2.1.3. Graph Mini-batch Strategy

As with general deep learning models, GNNs can benefit from mini-batch training and inference on graphs. Given a batch of graphs, tf_geometric combines them into a BatchGraph and applies GNNs to it. Since the label information is also combined in the BatchGraph, the combined y can be used directly as the node/graph labels of the batch.

The mini-batch strategy in tf_geometric is flexible: users can mini-batch not only the input graph data but also intermediate output graphs. In the mini-batch process, since the operation of combining graphs into a BatchGraph is differentiable, gradients pass from the BatchGraph back to the given batch of graphs. Moreover, the mini-batch construction operation is fast enough to be executed during each forward propagation.
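A typical training step on a mini-batch of graphs might look like the sketch below. It assumes a BatchGraph.from_graphs-style combination helper and a model whose call signature accepts the combined graph; both names are assumptions and should be checked against the installed version.

import tensorflow as tf
import tf_geometric as tfg

def train_step(model, optimizer, graphs):
    # combine the mini-batch into a single BatchGraph (differentiable and cheap per step)
    batch_graph = tfg.BatchGraph.from_graphs(graphs)
    with tf.GradientTape() as tape:
        logits = model(batch_graph)  # model-specific forward pass on the combined graph
        # the combined y serves directly as the labels of the batch
        loss = tf.reduce_mean(
            tf.nn.sparse_softmax_cross_entropy_with_logits(
                labels=batch_graph.y, logits=logits))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss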

2.1.4. Distribution Strategy

To take advantage of the powerful distribution capabilities of TensorFlow, all the GNN models in tf_geometric are implemented as standard TensorFlow models, which can be distributed with minimal changes to the data-processing code. The distribution of tf_geometric GNN models can be handled easily by TensorFlow distribution strategies, and a model can be distributed in different ways by choosing different distribution strategies. However, the distribution of graph data cannot be solved by simply applying the distribution strategies, because the built-in data sharding mechanism of the distribution strategies is designed for regular tensors and cannot deal with irregular graph data. Nonetheless, we can still easily distribute graph data by customizing distributed graph datasets with tf.data.Dataset for the distribution strategies. This customization usually requires only a few small changes to the local data-processing code.
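For instance, a distributed graph dataset can be built from a Python list of graphs with tf.distribute's dataset-function API. The sketch below is an illustrative assumption about how such customization could look (the manual sharding scheme and the names graphs and num_features are placeholders); it is not code shipped with tf_geometric.

import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

def dataset_fn(input_context):
    # graphs and num_features are assumed to be defined elsewhere;
    # shard the list of graphs by hand, since the default sharding targets regular tensors
    shard = graphs[input_context.input_pipeline_id::input_context.num_input_pipelines]

    def generator():
        for g in shard:
            yield g.x, g.edge_index, g.y

    return tf.data.Dataset.from_generator(
        generator,
        output_signature=(
            tf.TensorSpec(shape=(None, num_features), dtype=tf.float32),
            tf.TensorSpec(shape=(2, None), dtype=tf.int32),
            tf.TensorSpec(shape=(None,), dtype=tf.int32),
        ))

dist_dataset = strategy.distribute_datasets_from_function(dataset_fn)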

2.1.5. Core OOP APIs and Functional APIs

tf_geometric provides both Object-Oriented Programming (OOP) APIs and Functional APIs, with which users can build advanced graph deep learning models:

  • OOP APIs are class-level interfaces for graph neural networks. The GNN classes in tf_geometric are implemented as standard TensorFlow models by subclassing the tf.keras.Model class, where each GNN class defines how to maintain the model parameters and the computational process. An instance of a GNN class holds the parameters of a GNN model, and it can be called as a function to process input data with the GNN algorithm. OOP APIs are convenient since users can apply them to graph data as black boxes without knowing the details of the GNN model, such as the initialization of model parameters and the algorithm. Due to the convenience and customizability of OOP APIs, most popular GNN libraries provide OOP APIs as the main interface for GNNs. However, OOP APIs are insufficient for some advanced tasks, and therefore tf_geometric provides Functional APIs to solve this problem, which are introduced in the next item.

  • Functional APIs are function-level interfaces for graph neural networks; they are functions that implement GNN operations. Different from OOP APIs, which automatically maintain model parameters in GNN layer instances, Functional APIs require users to maintain model parameters outside the GNN functions and pass them, together with the graph data, as inputs to the GNN functions. That is, instead of using fixed tensors as model parameters as in OOP APIs, Functional APIs can dynamically use different tensors as model parameters for each call. This feature of Functional APIs is critical for advanced tasks that require complex maintenance strategies for model parameters, such as graph meta-learning. For example, to implement MAML (Finn et al., 2017) on graphs, a GNN function is called multiple times with different parameters during each forward propagation: it is first called with variable tensors as the initial parameters and then called again with temporary tensors as the updated model parameters, as illustrated by the sketch after this list. Functional APIs are an elegant solution for such tasks, since dynamic parameters are natively supported.
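The sketch below illustrates this pattern with a toy functional GCN-style layer written in plain TensorFlow; it is not the actual tf_geometric Functional API, and the dense normalized adjacency, tensor sizes, and learning rate are placeholder assumptions. The point is that the same function is called twice, first with the variable parameters and then with temporarily updated ones.

import tensorflow as tf

num_nodes, num_features, num_units, inner_lr = 4, 8, 3, 0.1
x = tf.random.normal([num_nodes, num_features])
labels = tf.constant([0, 1, 2, 0])
norm_adj = tf.eye(num_nodes)  # a dense "normalized adjacency" just for illustration

def gcn_layer(x, norm_adj, kernel):
    # transform node features, then aggregate them over the (normalized) adjacency
    return norm_adj @ (x @ kernel)

def loss_fn(logits, labels):
    return tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))

# parameters are held by the user, outside the GNN function
kernel = tf.Variable(tf.random.normal([num_features, num_units]))

# inner step: call the GNN function with the initial (variable) parameters
with tf.GradientTape() as tape:
    inner_loss = loss_fn(gcn_layer(x, norm_adj, kernel), labels)
inner_grad = tape.gradient(inner_loss, kernel)

# outer step: call the same function again with temporarily updated parameters
adapted_kernel = kernel - inner_lr * inner_grad
outer_loss = loss_fn(gcn_layer(x, norm_adj, adapted_kernel), labels)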

Note that the core OOP APIs and functional APIs in the kernel do not involve the implementation of specific GNNs. Instead, they provide some infrastructures that are essential for implementing specific GNN classes or functions, such as abstract classes and functions for graph map-reduce.

2.2. Implementation of Popular GNN Models

Based on the kernel libraries, tf_geometric implements a variety of popular GNN models for different tasks, including node-level models such as Graph Convolutional Network (GCN) (Kipf and Welling, 2017), Graph Attention Network (GAT) (Velickovic et al., 2018), Simple Graph Convolution (SGC) (Wu et al., 2019a), Approximate Personalized Propagation of Neural Predictions (APPNP) (Klicpera et al., 2019), and Deep Graph Infomax (DGI) (Velickovic et al., 2019), and graph-level models such as Set2Set (Vinyals et al., 2016), SortPool (Zhang et al., 2018), Differentiable Pooling (DiffPool) (Ying et al., 2018), and Self-Attention Graph Pooling (SAGPool) (Lee et al., 2019). To avoid redundancy, all the GNN models in tf_geometric are first implemented as Functional APIs, and the OOP APIs are just wrappers of the corresponding Functional APIs. We carefully implement these models and make sure that they can achieve competitive performance with other implementations.

Besides, tf_geometric also provides demos that reproduce the performance of GNNs reported in the literature. The demos contain the complete code for data loading, training, and evaluation. They are implemented in an elegant way and also act as the style guide for tf_geometric.

2.3. Dataset Management Mechanism

tf_geometric provides customizable dataset APIs and many ready-to-use public benchmark datasets.

2.3.1. Dataset Classes and Dataset Instances

Each dataset has a corresponding dataset class, and different instances of a dataset class (dataset instances) can represent different configurations of the same dataset. Each dataset instance can automatically download the raw dataset from the Web and then pre-process it into convenient data formats, which can benefit not only tf_geometric but also other graph deep learning frameworks. Besides, dataset classes provide a caching mechanism, which allows each raw dataset to be processed only once and loaded from the cache on the fly afterwards.

Dataset classes are not just simple wrappers of the raw graph datasets; they may also involve complex feature engineering during pre-processing. For example, node degrees are frequently used as features in graph classification tasks (Wu et al., 2019b). By encapsulating the computation of node degrees in the pre-processing method of a dataset class, users can directly load node degrees as features from the dataset without considering the complex feature engineering process.
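As a tiny example of such a pre-computed feature, node out-degrees can be derived from a COO edge_index in one line; the helper below is illustrative and not the library's own pre-processing code.

import numpy as np

def node_degrees(edge_index, num_nodes):
    # count how many edges leave each source node
    return np.bincount(edge_index[0], minlength=num_nodes)

edge_index = np.array([[0, 0, 1, 3],
                       [1, 2, 2, 1]])
print(node_degrees(edge_index, num_nodes=4))  # [2 1 0 1]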

2.3.2. Built-in Datasets

The provided datasets, also called built-in datasets, include many public benchmark datasets that are frequently used in graph deep learning research. Moreover, the built-in datasets cover various graph deep learning tasks, such as node classification and graph classification.

2.3.3. Customizable Datasets

Users can customize their own datasets by simply subclassing the built-in abstract dataset classes. The built-in abstract dataset classes manage the workflow of dataset processing and already encapsulate general processes such as downloading, file management, and caching. These general processes can be customized via configuration parameters defined in the subclasses, such as the URL of the dataset and whether the pre-processing result should be cached. Since data pre-processing usually differs across datasets, it is defined as abstract methods in the superclasses, and users implement it for their own datasets by overriding these abstract methods in the subclasses, as in the sketch below.
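The subclassing pattern can be sketched as follows. The abstract base class here is a self-contained stand-in for illustration only; tf_geometric's actual abstract dataset classes, their configuration parameters, and their method names may differ.

import os
import pickle
from abc import ABC, abstractmethod

class AbstractGraphDataset(ABC):
    """Manages caching of the pre-processing result; subclasses only implement process()."""

    def __init__(self, cache_path):
        self.cache_path = cache_path

    @abstractmethod
    def process(self):
        """Dataset-specific pre-processing of the raw files."""

    def load(self):
        if os.path.exists(self.cache_path):      # load the cached result if present
            with open(self.cache_path, "rb") as f:
                return pickle.load(f)
        data = self.process()                    # otherwise pre-process once ...
        with open(self.cache_path, "wb") as f:   # ... and cache it for later runs
            pickle.dump(data, f)
        return data

class MyDataset(AbstractGraphDataset):
    def process(self):
        # parse raw files into x / edge_index / y here (placeholder values)
        return {"x": [[1.0]], "edge_index": [[0], [0]], "y": [0]}

data = MyDataset(cache_path="my_dataset_cache.p").load()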

2.4. Utilities

Some important utilities are required for implementing graph deep learning models, including tools for common graph data processing, type conversion, graph sampling, etc. These tools are placed in the utils module of tf_geometric, and most of them are designed not only for tf_geometric but also for general graph deep learning implementations.

3. Comparison to Other GNN Libraries

In recent years, several GNN libraries have been developed for different deep learning frameworks. Among them, popular libraries such as PyTorch Geometric (PyG; https://github.com/rusty1s/pytorch_geometric) (Fey and Lenssen, 2019) and the Deep Graph Library (DGL; https://github.com/dmlc/dgl) (Wang et al., 2019) have been widely used by researchers to handle graph deep learning in different fields. They provide extensible OOP APIs and implement a variety of GNN classes, with which users can easily handle general graph deep learning tasks. As mentioned before, OOP APIs are insufficient for several advanced tasks such as graph meta-learning. Different from these GNN libraries, tf_geometric provides not only OOP APIs but also Functional APIs, which can be used to deal with advanced graph deep learning tasks. Moreover, popular GNN libraries for TensorFlow, such as Spektral (https://github.com/danielegrattarola/spektral) (Grattarola and Alippi, 2021) and StellarGraph (https://github.com/stellargraph/stellargraph), usually only support TensorFlow 2.x, whereas tf_geometric is compatible with both TensorFlow 1.x and 2.x. Furthermore, tf_geometric provides a flexible and friendly caching system that can speed up some GNNs, while existing GNN libraries do not support caching or only support it for a few special cases. For example, PyG only supports layer-level caching for GCN, which means that each PyG GCN layer with caching enabled is bound to a constant graph structure and can usually only be used for transductive learning tasks on a single graph. In contrast, the tf_geometric GCN adopts a graph-level caching mechanism, and it can cache for different graph structures with different GCN normalization configurations.
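The idea behind graph-level caching can be illustrated with a small NumPy sketch: the GCN-normalized edge weights depend only on the graph structure and the normalization configuration, so they can be computed once per graph and reused by every subsequent call. This is a generic illustration of the pattern, not tf_geometric's actual caching API, and it omits details such as self-loops.

import numpy as np

def gcn_normed_edge_weight(edge_index, num_nodes, cache):
    key = "gcn_normed_edge_weight"
    if key not in cache:
        # degree of each node, counted from the source indices
        deg = np.bincount(edge_index[0], minlength=num_nodes).astype(np.float32)
        deg_inv_sqrt = np.zeros_like(deg)
        deg_inv_sqrt[deg > 0] = deg[deg > 0] ** -0.5
        # symmetric normalization D^{-1/2} A D^{-1/2}, stored edge-wise
        cache[key] = deg_inv_sqrt[edge_index[0]] * deg_inv_sqrt[edge_index[1]]
    return cache[key]

edge_index = np.array([[0, 0, 1, 3],
                       [1, 2, 2, 1]])
cache = {}                                         # cache attached to the graph
w1 = gcn_normed_edge_weight(edge_index, 4, cache)  # computed on the first call
w2 = gcn_normed_edge_weight(edge_index, 4, cache)  # reused afterwards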

4. Empirical Evaluation

To provide an overview of how GNN models implemented by tf_geometric perform on common research scenarios, we conduct experiments with several public benchmark datasets on two different tasks.

4.1. Tasks and Evaluation Metrics

We evaluate several tf_geometric models on two different tasks: node classification and graph classification.

Node Classification

We first conduct experiments on a semi-supervised node classification task with three benchmark datasets: Cora, CiteSeer, and Pubmed (Sen et al., 2008). We evaluate GCN (Kipf and Welling, 2017), GAT (Velickovic et al., 2018), SGC (Wu et al., 2019a), APPNP (Klicpera et al., 2019), and DGI (Velickovic et al., 2019) on this task. GCN, GAT, SGC, and APPNP are end-to-end classification models, while DGI is a self-supervised node representation learning model, for which an extra logistic regression classifier is trained on the learned node representations. For the benchmark datasets, we use the same dataset splits as in (Kipf and Welling, 2017), where each dataset is split into a train set, a test set, and a validation set. The validation set is used for early stopping, and its label information is not used for training. We report the classification accuracy scores on the test set.

Graph Classification We also evaluate tf_geometric on a graph classification task with three benchmark datasets: NCI1, NCI109 (Wale et al., 2008), and PROTEINS (Dobson and Doig, 2003; Borgwardt et al., 2005). We evaluate several graph pooling models, including Mean-Max Pool, Set2Set (Vinyals et al., 2016), SortPool (Zhang et al., 2018), DiffPool (Ying et al., 2018), and SAGPool (Lee et al., 2019). Mean-Max Pool is a naive graph pooling model, which obtains graph representations by concatenating the mean-pooling and max-pooling results of GCN outputs. The classification accuracy scores of these models are evaluated on the three benchmark datasets using 10-fold cross-validation, where one training fold is randomly sampled as the validation set. As with the node classification task, the validation set is only used for early stopping. The architectures of graph pooling models are complex, and they may involve components other than the core graph pooling layers. Some of these components are model-agnostic and can be utilized by other GNN models to obtain better performance. For example, hierarchical graph pooling models may benefit from mean-max pooling on both hidden and output layers, whereas the official implementations may only use mean pooling. Therefore, we adjust the architectures of some models for a fair comparison.

Table 1. Performance on Node Classification Tasks (accuracy of GCN, GAT, SGC, APPNP, and DGI on Cora, CiteSeer, and Pubmed).
Table 2. Performance on Graph Classification Tasks (accuracy of Mean-Max Pool, Set2Set, SortPool, DiffPool, and SAGPool on NCI1, NCI109, and PROTEINS).

4.2. Performance

The model performance on node classification is reported in Table 1. The results show that the GNN models provided by tf_geometric achieve performance competitive with the official implementations. In particular, although tf_geometric adopts a Transformer-based GAT algorithm rather than the official version, it still obtains accuracy scores almost identical to those reported in (Velickovic et al., 2018).

For the graph classification task, the results are listed in Table 2. Since the architectures of some models are adjusted, the model performance is sometimes better than that reported in the literature. Note that, with its architecture optimized, the naive Mean-Max Pool outperforms some other graph pooling models in some cases.

5. Conclusions

We introduce tf_geometric, an efficient and friendly library for graph deep learning, which is compatible with both TensorFlow 1.x and 2.x. tf_geometric provides kernel libraries for building graph neural networks as well as implementations of popular GNNs. In particular, the kernel libraries consist of infrastructures for building efficient GNNs, which enable tf_geometric to support single-graph computation, multi-graph computation, graph mini-batch, distributed training, etc. Therefore, tf_geometric can be used for a variety of graph deep learning tasks, such as transductive node classification, inductive node classification, link prediction, and graph classification. tf_geometric exposes both OOP APIs and Functional APIs, with which users can deal with advanced graph deep learning tasks. Moreover, the APIs are friendly, and they are suitable for both beginners and experts. We are actively working to further optimize the kernel libraries and to integrate more existing GNN models into tf_geometric. In the future, we will keep tf_geometric up to date with the latest research findings on GNNs and continually integrate future models into it.

References

  • J. Bastings, I. Titov, W. Aziz, D. Marcheggiani, and K. Sima’an (2017) Graph convolutional encoders for syntax-aware neural machine translation. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pp. 1957–1967. Cited by: §1.
  • K. Borgwardt, C. S. Ong, S. Schönauer, S. Vishwanathan, A. Smola, and H. Kriegel (2005) Protein function prediction via graph kernels. Bioinformatics 21 Suppl 1, pp. i47–56. Cited by: §4.1.
  • Z. Cui, K. Henrickson, R. Ke, and Y. Wang (2020) Traffic graph convolutional recurrent neural network: a deep learning framework for network-scale traffic learning and forecasting. IEEE Trans. Intell. Transp. Syst. 21 (11), pp. 4883–4894. Cited by: §1.
  • P. Dobson and A. Doig (2003) Distinguishing enzyme structures from non-enzymes without alignments.. Journal of molecular biology 330 4, pp. 771–83. Cited by: §4.1.
  • W. Fan, Y. Ma, Q. Li, Y. He, Y. E. Zhao, J. Tang, and D. Yin (2019) Graph neural networks for social recommendation. In The World Wide Web Conference, WWW 2019, San Francisco, CA, USA, May 13-17, 2019, pp. 417–426. Cited by: §1.
  • Y. Fang, S. Sun, Z. Gan, R. Pillai, S. Wang, and J. Liu (2020) Hierarchical graph network for multi-hop question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, pp. 8823–8838. Cited by: §1.
  • M. Fey and J. E. Lenssen (2019) Fast graph representation learning with pytorch geometric. CoRR abs/1903.02428. External Links: Link Cited by: §3.
  • C. Finn, P. Abbeel, and S. Levine (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, Proceedings of Machine Learning Research, Vol. 70, pp. 1126–1135. Cited by: 2nd item.
  • A. Fout, J. Byrd, B. Shariat, and A. Ben-Hur (2017) Protein interface prediction using graph convolutional networks. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pp. 6530–6539. Cited by: §1.
  • T. Gaudelet, B. Day, A. R. Jamasb, J. Soman, C. Regep, G. Liu, J. B. R. Hayter, R. Vickers, C. Roberts, J. Tang, D. Roblin, T. L. Blundell, M. M. Bronstein, and J. P. Taylor-King (2020) Utilising graph machine learning within drug discovery and development. CoRR abs/2012.05716. External Links: Link Cited by: §1.
  • D. Grattarola and C. Alippi (2021) Graph neural networks in tensorflow and keras with spektral [application notes]. IEEE Comput. Intell. Mag. 16 (1), pp. 99–106. Cited by: §3.
  • J. Hu, S. Qian, Q. Fang, and C. Xu (2019) Hierarchical graph semantic pooling network for multi-modal community question answer matching. In Proceedings of the 27th ACM International Conference on Multimedia, MM 2019, Nice, France, October 21-25, 2019, pp. 1157–1165. Cited by: §1.
  • T. N. Kipf and M. Welling (2017) Semi-supervised classification with graph convolutional networks. In 5th International Conference on Learning Representations, ICLR 2017, Toulon, France, April 24-26, 2017, Conference Track Proceedings, Cited by: §2.2, §4.1.
  • J. Klicpera, A. Bojchevski, and S. Günnemann (2019) Predict then propagate: graph neural networks meet personalized pagerank. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, Cited by: §2.2, §4.1.
  • J. Lee, I. Lee, and J. Kang (2019) Self-attention graph pooling. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, Proceedings of Machine Learning Research, Vol. 97, pp. 3734–3743. Cited by: §1, §2.2, §4.1.
  • L. Li, Z. Gan, Y. Cheng, and J. Liu (2019) Relation-aware graph attention network for visual question answering. In 2019 IEEE/CVF International Conference on Computer Vision, ICCV 2019, Seoul, Korea (South), October 27 - November 2, 2019, pp. 10312–10321. Cited by: §1.
  • Y. Li, B. Qian, X. Zhang, and H. Liu (2020) Graph neural network-based diagnosis prediction. Big Data 8 (5), pp. 379–390. Cited by: §1.
  • D. Marcheggiani, J. Bastings, and I. Titov (2018) Exploiting semantics in neural machine translation with graph convolutional networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, New Orleans, Louisiana, USA, June 1-6, 2018, Volume 2 (Short Papers), pp. 486–492. Cited by: §1.
  • T. Pfaff, M. Fortunato, A. Sanchez-Gonzalez, and P. Battaglia (2021) Learning mesh-based simulation with graph networks. In International Conference on Learning Representations, External Links: Link Cited by: §1.
  • E. Ranjan, S. Sanyal, and P. P. Talukdar (2020) ASAP: adaptive structure aware pooling for learning hierarchical graph representations. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pp. 5470–5477. Cited by: §1.
  • V. R. Sampathkumar (2021) ADiag: graph neural network based diagnosis of alzheimer’s disease. CoRR abs/2101.02870. External Links: Link Cited by: §1.
  • A. Sanchez-Gonzalez, J. Godwin, T. Pfaff, R. Ying, J. Leskovec, and P. W. Battaglia (2020) Learning to simulate complex physics with graph networks. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, Proceedings of Machine Learning Research, Vol. 119, pp. 8459–8468. Cited by: §1.
  • P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Gallagher, and T. Eliassi-Rad (2008) Collective classification in network data. AI Mag. 29 (3), pp. 93–106. Cited by: §4.1.
  • Q. Tan, N. Liu, X. Zhao, H. Yang, J. Zhou, and X. Hu (2020) Learning to hash with graph neural networks for recommender systems. In WWW ’20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pp. 1988–1998. Cited by: §1.
  • P. Velickovic, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio (2018) Graph attention networks. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings, Cited by: §2.2, §4.1, §4.2.
  • P. Velickovic, W. Fedus, W. L. Hamilton, P. Liò, Y. Bengio, and R. D. Hjelm (2019) Deep graph infomax. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019, Cited by: §2.2, §4.1.
  • O. Vinyals, S. Bengio, and M. Kudlur (2016) Order matters: sequence to sequence for sets. In 4th International Conference on Learning Representations, ICLR 2016, San Juan, Puerto Rico, May 2-4, 2016, Conference Track Proceedings, External Links: Link Cited by: §2.2, §4.1.
  • N. Wale, I. A. Watson, and G. Karypis (2008) Comparison of descriptor spaces for chemical compound retrieval and classification. Knowl. Inf. Syst. 14 (3), pp. 347–375. Cited by: §4.1.
  • M. Wang, L. Yu, D. Zheng, Q. Gan, Y. Gai, Z. Ye, M. Li, J. Zhou, Q. Huang, C. Ma, Z. Huang, Q. Guo, H. Zhang, H. Lin, J. Zhao, J. Li, A. J. Smola, and Z. Zhang (2019) Deep graph library: towards efficient and scalable deep learning on graphs. CoRR abs/1909.01315. External Links: Link Cited by: §3.
  • W. Wang, W. Zhang, S. Liu, Q. Liu, B. Zhang, L. Lin, and H. Zha (2020a) Beyond clicks: modeling multi-relational item graph for session-based target behavior prediction. In WWW ’20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pp. 3056–3062. Cited by: §1.
  • X. Wang, Y. Ma, Y. Wang, W. Jin, X. Wang, J. Tang, C. Jia, and J. Yu (2020b) Traffic flow prediction via spatial temporal graph neural network. In WWW ’20: The Web Conference 2020, Taipei, Taiwan, April 20-24, 2020, pp. 1082–1092. Cited by: §1.
  • F. Wu, A. H. S. Jr., T. Zhang, C. Fifty, T. Yu, and K. Q. Weinberger (2019a) Simplifying graph convolutional networks. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, Proceedings of Machine Learning Research, Vol. 97, pp. 6861–6871. Cited by: §2.2, §4.1.
  • J. Wu, J. He, and J. Xu (2019b) DEMO-net: degree-specific graph neural networks for node and graph classification. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD 2019, Anchorage, AK, USA, August 4-8, 2019, pp. 406–415. Cited by: §2.3.1.
  • Z. Ying, J. You, C. Morris, X. Ren, W. L. Hamilton, and J. Leskovec (2018) Hierarchical graph representation learning with differentiable pooling. In Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, December 3-8, 2018, Montréal, Canada, pp. 4805–4815. Cited by: §1, §2.2, §4.1.
  • M. Zhang, Z. Cui, M. Neumann, and Y. Chen (2018) An end-to-end deep learning architecture for graph classification. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pp. 4438–4445. Cited by: §2.2, §4.1.