Attacking Graph Convolutional Networks via Rewiring

06/10/2019 · Yao Ma, et al. · Penn State University, IBM, Michigan State University

Graph Neural Networks (GNNs) have boosted the performance of many graph-related tasks such as node classification and graph classification. Recent research shows that graph neural networks are vulnerable to adversarial attacks, which deliberately add carefully crafted, unnoticeable perturbations to the graph structure. Such perturbations are usually created by adding or deleting a few edges, which can be noticeable even when the number of modified edges is small. In this paper, we propose a graph rewiring operation that affects the graph in a less noticeable way than adding/deleting edges. We then use reinforcement learning to learn an attack strategy based on the proposed rewiring operation. Experiments on real-world graphs demonstrate the effectiveness of the proposed framework. To understand the proposed framework, we further analyze how its perturbations to the graph structure affect the output of the target model.


1 Introduction

Graph-structured data are ubiquitous in many real-world applications. Various data from different domains, such as social networks, molecular graphs and transportation networks, can all be modeled as graphs. Recently, increasing effort has been devoted to developing deep neural networks for graph-structured data. This stream of work, known as Graph Neural Networks (GNNs), has been shown to enhance performance in many graph-related tasks such as node classification (Kipf and Welling, 2016; Hamilton et al., 2017) and graph classification (Bruna et al., 2013; Defferrard et al., 2016; Ying et al., 2018; Zhang et al., 2018).

Recent research has shown that deep neural networks are highly vulnerable to adversarial attacks (Szegedy et al., 2013; Goodfellow et al., 2014; Kurakin et al., 2016; Carlini and Wagner, 2017). In computer vision, performing an adversarial attack means adding a deliberately created, but unnoticeable, perturbation to a given image such that the deep model misclassifies the perturbed image. Unlike image data, which can be represented in a continuous space, graph-structured data are discrete. Few efforts have been made to investigate the robustness of graph neural networks against adversarial attacks; only recently has research on adversarial attacks on graph-structured data started to emerge.

Zügner et al. (2018) proposed a greedy algorithm to attack the semi-supervised node classification task. Their method deliberately modifies the graph structure and node features such that the label of a targeted node is changed. Dai et al. (2018) proposed a reinforcement learning based algorithm to attack both the node classification and graph classification tasks by modifying only the graph structure. Zügner and Günnemann (2019) designed a meta-learning based attack method to impair the overall performance of the node classification task. In all these works, the graph structure is modified by adding or deleting edges.

To ensure that the difference between the attacked graph and the original graph is "unnoticeable", the number of actions (adding/deleting edges) that the attacking algorithm may take is usually constrained by a budget. However, even when this budget is small, adding or deleting edges can still make "noticeable" changes to the graph structure. For example, many important graph properties are based on the eigenvalues and eigenvectors of the Laplacian matrix of the graph (Chan and Akoglu, 2016), and adding or deleting a single edge can cause remarkable changes in these eigenvalues/eigenvectors (Ghosh and Boyd, 2006). Thus, in this work, we propose a new operation based on graph rewiring. A single rewiring operation involves three nodes $(v_{fir}, v_{sec}, v_{thi})$: we remove the existing edge between $v_{fir}$ and $v_{sec}$ and add an edge between $v_{fir}$ and $v_{thi}$. Note that $v_{thi}$ is constrained to be a 2-hop neighbor of $v_{fir}$ in our setting. The proposed rewiring operation preserves basic properties of the graph, such as the number of nodes, the number of edges and the total degree of the graph, whereas adding or deleting edges does not. Furthermore, the proposed rewiring operation affects important measures based on the graph Laplacian, such as algebraic connectivity, less than adding/deleting edges does, which we show theoretically in Section 4.1. In addition, rewiring is a more natural way to modify a graph: in biology, for example, the evolution of DNA and amino acid sequences can lead to pervasive rewiring of protein-protein interactions (Zitnik et al., 2019).

In this paper, we aim to construct adversarial examples by performing rewiring operations for the task of graph classification. More specifically, we treat the process of applying a series of rewiring operations to a given graph as a discrete Markov decision process (MDP) and use reinforcement learning to learn how to make these decisions. We demonstrate the effectiveness of the proposed algorithm on real-world graphs and further analyze how the adversarial changes in the graph structure affect both the graph embedding learned by the graph neural network model and the output label.

2 Background

In this section, we introduce notation and the target graph convolutional model we seek to attack. We denote a graph as $G = \{V, E\}$, where $V$ and $E$ are the sets of nodes and edges, respectively. The edges describe the relations between nodes and can be encoded by an adjacency matrix $A \in \{0,1\}^{|V| \times |V|}$, where $A_{ij} = 1$ means that $v_i$ and $v_j$ are connected and $A_{ij} = 0$ otherwise. Each node in the graph is associated with some features, represented as a matrix $X \in \mathbb{R}^{|V| \times d}$, where the $i$-th row of $X$ denotes the features of node $v_i$ and $d$ is the dimension of the features. Thus, an attributed graph can be represented as $\{A, X\}$.

2.1 Graph Classification

In the setting of graph classification, we are given a set of graphs $\mathcal{G} = \{G_i\}$, each associated with a label $y_i$. The task is to build a good classifier from the given set of graphs such that it makes correct predictions on new, unseen graphs. A graph classifier parameterized by $\theta$ can be represented as $\hat{y} = f_\theta(G)$, where $\hat{y}$ denotes the label of graph $G$ predicted by the classifier. The parameters $\theta$ can be learned by solving the optimization problem $\min_\theta \sum_i \mathcal{L}(f_\theta(G_i), y_i)$, where $\mathcal{L}(\cdot, \cdot)$ measures the difference between the predicted and ground-truth labels. Cross entropy is a commonly adopted choice for $\mathcal{L}$.
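As a concrete illustration, below is a minimal NumPy sketch of this cross-entropy objective; the predicted distributions and labels are toy values, and all names are illustrative rather than taken from any released implementation.

```python
import numpy as np

def cross_entropy(pred_probs, true_label):
    """Cross-entropy between a predicted class distribution and the true label."""
    return -np.log(pred_probs[true_label] + 1e-12)  # epsilon for numerical stability

# Toy usage: average the loss over a set of graphs. Here f_theta(G_i) is faked
# with hard-coded distributions over 3 classes.
preds = [np.array([0.7, 0.2, 0.1]), np.array([0.1, 0.8, 0.1])]
labels = [0, 2]
total_loss = np.mean([cross_entropy(p, y) for p, y in zip(preds, labels)])
print(total_loss)
```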

2.2 Graph Convolution Networks

Recently, Graph Neural Networks have been shown to be effective in graph representation learning. These models usually learn node representations by iteratively aggregating, transforming and propagating node information. In this work, we adopt graph convolutional networks (GCN) (Kipf and Welling, 2016). A graph convolutional layer in the GCN framework can be represented as

$$H^{(l+1)} = \mathrm{ReLU}\left(\hat{D}^{-\frac{1}{2}} \hat{A} \hat{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right), \qquad (1)$$

where $\hat{A} = A + I$ is the adjacency matrix with added self-loops, $\hat{D}$ is its diagonal degree matrix, $H^{(l)}$ is the output of the $l$-th layer with $H^{(0)} = X$, and $W^{(l)}$ represents the parameters of this layer. A GCN model usually consists of $L$ graph convolutional layers; the output of the model is $H^{(L)}$, which we denote as $H$ for convenience. To obtain a graph-level embedding $h_G$ for graph $G$ to perform graph classification, we apply a global pooling over the node embeddings:

$$h_G = \mathrm{pool}(H). \qquad (2)$$

Different global pooling functions can be used; we adopt max pooling in this work. A multilayer perceptron (MLP) and a softmax layer are then sequentially applied to the graph embedding to predict the label of the graph:

$$\hat{y} = \mathrm{softmax}(\mathrm{MLP}_\theta(h_G)), \qquad (3)$$

where $\mathrm{MLP}_\theta$ denotes the multilayer perceptron with parameters $\theta$. A GCN-based classifier for graph classification can be described using eqs. (1), (2) and (3) as introduced above. For simplicity, we summarize it as $f_\theta(G)$, where $\theta$ includes all the parameters in the model.
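To make eqs. (1)-(3) concrete, here is a minimal NumPy sketch of such a classifier's forward pass. It is a toy reimplementation under stated simplifications (random weights, a single linear layer standing in for the MLP), not the authors' code.

```python
import numpy as np

def gcn_graph_classifier(A, X, conv_weights, mlp_W, mlp_b):
    """Forward pass of a GCN-based graph classifier: graph convolutions (eq. 1),
    global max-pooling (eq. 2), and a linear layer + softmax (eq. 3)."""
    A_hat = A + np.eye(A.shape[0])                  # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]  # D^-1/2 (A+I) D^-1/2

    H = X
    for W in conv_weights:                          # eq. (1) with ReLU
        H = np.maximum(A_norm @ H @ W, 0.0)

    h_G = H.max(axis=0)                             # eq. (2): global max-pooling

    logits = h_G @ mlp_W + mlp_b                    # one linear layer stands in for the MLP
    e = np.exp(logits - logits.max())
    return e / e.sum()                              # eq. (3): softmax over classes

# Toy usage: a 4-node path graph with 2-d features and 3 output classes.
rng = np.random.default_rng(0)
A = np.zeros((4, 4)); A[[0, 1, 2], [1, 2, 3]] = 1; A += A.T
X = rng.normal(size=(4, 2))
probs = gcn_graph_classifier(A, X, [rng.normal(size=(2, 8))],
                             rng.normal(size=(8, 3)), np.zeros(3))
print(probs)
```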

3 Problem Formulation

In this work, we aim to build an attacker that takes a graph as input and modifies its structure to fool a GCN classifier. Modifying a graph structure is equivalent to modifying its adjacency matrix. The attacker can thus be represented as a function

$$\mathcal{T}(G) = G', \qquad (4)$$

where $G'$ denotes the attacked graph, i.e., the graph $G$ with a modified adjacency matrix. Given a classifier $f$, the goal of the attacker is to modify the graph structure so that the classifier outputs a label different from what it originally predicted. Note that here we drop the $\theta$ inside $f_\theta$, as the classifier is already trained and fixed. Mathematically, the goal of the attacker can be represented as $f(\mathcal{T}(G)) \neq f(G)$.

As described above, the attacker is in fact specifically designed for a given classifier $f$. To reflect this in the notation, we denote the attacker for the classifier $f$ as $\mathcal{T}_f$. In our work, the attacker has limited knowledge of the classifier: the only information the attacker can obtain from the classifier is the predicted label of (modified) graphs. In other words, the classifier is treated as a black-box model by the attacker $\mathcal{T}_f$.

An important constraint on the attacker is that it is only allowed to make "unnoticeable" changes to the graph structure. To account for this, we propose the rewiring operation, which makes more subtle changes than adding or deleting edges. We will show in Section 4.1 that the rewiring operation better preserves many important graph properties than adding or deleting edges does. The definition of the proposed rewiring operation is given below:

Definition 1.

A rewiring operation involves three nodes and is denoted as $a = (v_{fir}, v_{sec}, v_{thi})$, where $(v_{fir}, v_{sec}) \in E$ and $v_{thi} \in \mathcal{N}^2(v_{fir}) \setminus \mathcal{N}^1(v_{fir})$. Here $\mathcal{N}^k(v)$ denotes the $k$-th hop neighbors of $v$ and $\setminus$ stands for set exclusion. The rewiring operation deletes the existing edge between nodes $v_{fir}$ and $v_{sec}$, while adding an edge connecting nodes $v_{fir}$ and $v_{thi}$.
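A minimal sketch of this operation on an adjacency matrix, with the 2-hop constraint of Definition 1 (NumPy; function names are illustrative):

```python
import numpy as np

def two_hop_candidates(A, v_fir):
    """Nodes in N^2(v_fir) \\ N^1(v_fir): reachable in two hops but not adjacent."""
    one_hop = set(np.flatnonzero(A[v_fir]))
    two_hop = set(np.flatnonzero((A @ A)[v_fir]))
    return sorted(two_hop - one_hop - {v_fir})

def rewire(A, v_fir, v_sec, v_thi):
    """Apply one rewiring a = (v_fir, v_sec, v_thi): delete edge (v_fir, v_sec)
    and add edge (v_fir, v_thi). Returns a modified copy of A."""
    assert A[v_fir, v_sec] == 1, "the edge to delete must exist"
    assert v_thi in two_hop_candidates(A, v_fir), "v_thi must be a 2-hop neighbor"
    A_new = A.copy()
    A_new[v_fir, v_sec] = A_new[v_sec, v_fir] = 0
    A_new[v_fir, v_thi] = A_new[v_thi, v_fir] = 1
    return A_new
```

Note that the number of edges (and hence the total degree) is unchanged by construction, whereas a plain edge addition or deletion would change both.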

The attacker is given a budget of $K$ proposed rewiring operations to modify the graph structure. A straightforward way to set $K$ is to choose a small fixed number. However, graphs in a given data set typically vary in size, and the same number of rewiring operations affects graphs of different sizes to very different extents. Thus, it may not be appropriate to use the same $K$ for all graphs. A more suitable way is to allow a flexible number of rewiring operations according to the graph size. We therefore propose to use $K = p \cdot |E|$ for a given graph $G = \{V, E\}$, where $p$ is a fixed ratio.

The attack process on a graph $G$ can now be denoted as

$$\mathcal{T}_f(G) = a_M \circ \cdots \circ a_2 \circ a_1 (G), \qquad (5)$$

where the right-hand side means sequentially applying the rewiring operations $a_1, \ldots, a_M$ to the graph $G$, and $M$ is the number of rewiring operations taken, with $M \leq K$.

4 Rewiring-based Attack to Graph Convolutional Networks

Next, we first discuss the properties of the proposed rewiring operation to show its advantages. We then introduce the proposed attacking framework ReWatt, which is based on reinforcement learning and rewiring.

4.1 Properties of the Proposed Rewiring Operation

The proposed rewiring operation has several advantages over simply adding or deleting edges. One obvious advantage is that it does not change the number of nodes, the number of edges or the total degree of a graph, whereas "adding" or "deleting" edges may change these properties.

Many important graph properties are based on the eigenvalues of the Laplacian matrix of a graph (Chan and Akoglu, 2016), such as algebraic connectivity (Fiedler, 1973) and effective graph resistance (Ellens et al., 2011). A detailed description of algebraic connectivity and effective graph resistance is given in Appendix A. Next, we demonstrate that the proposed rewiring operation is likely to make smaller changes to the eigenvalues, which results in less noticeable changes under graph-Laplacian-based measures. For a graph with adjacency matrix $A$, its Laplacian matrix is defined as $L = D - A$, where $D$ is the diagonal degree matrix (Mohar et al., 1991). Let $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_N$ denote the eigenvalues of the Laplacian matrix arranged in increasing order, with $u_1, u_2, \ldots, u_N$ the corresponding eigenvectors. We now show how a single proposed rewiring operation affects the eigenvalues. Our analysis is based on the following lemma:

Lemma 1.

(Stewart, 1990) Let $(\lambda_i, u_i)$, $i = 1, \ldots, N$, be the eigen-pairs of a symmetric matrix $M \in \mathbb{R}^{N \times N}$. Given a small perturbation $\Delta M$ to $M$, the eigenvalues of $M + \Delta M$ can be approximated by $\lambda_i' \approx \lambda_i + u_i^\top \Delta M \, u_i$.

The proof can be found in (Stewart, 1990). Using this lemma, we have the following corollary:

Corollary 1.

For a given graph $G$ with Laplacian matrix $L$, one proposed rewiring operation $a = (v_{fir}, v_{sec}, v_{thi})$ changes the $i$-th eigenvalue by approximately $\Delta\lambda_i$, for $i = 1, \ldots, N$, where

$$\Delta\lambda_i = \left(u_i[v_{fir}] - u_i[v_{thi}]\right)^2 - \left(u_i[v_{fir}] - u_i[v_{sec}]\right)^2, \qquad (6)$$

and $u_i[v]$ denotes the value of the eigenvector $u_i$ at node $v$.

The proof can be found in Appendix B (in the supplementary file).

Furthermore, each eigenvalue $\lambda_i$ of the Laplacian matrix measures the "smoothness" of its corresponding eigenvector $u_i$ (Shuman et al., 2012; Sandryhaila and Moura, 2014). The "smoothness" of an eigenvector measures how much its elements differ across neighboring nodes. Thus, the first few eigenvectors, which have relatively small eigenvalues, are rather "smooth". Note that in the proposed rewiring operation, $v_{sec}$ is a direct neighbor of $v_{fir}$ and $v_{thi}$ is a 2-hop neighbor of $v_{fir}$. Thus, the difference $u_i[v_{fir}] - u_i[v_{thi}]$ is expected to be smaller than the difference $u_i[v_{fir}] - u_i[v_{far}]$, where $v_{far}$ can be any other node that is further away. This means that the proposed rewiring operation (to 2-hop neighbors) is likely to make smaller changes to the first few eigenvalues than rewiring to nodes further away or adding an edge between two nodes that are far apart.
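The first-order estimate in Corollary 1 is easy to verify numerically. Below is a small self-contained check; the toy graph and the chosen rewiring are our own illustrative choices:

```python
import numpy as np

def laplacian(A):
    return np.diag(A.sum(axis=1)) - A

# Toy graph: a 5-cycle plus a chord (0, 2).
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1

lam, U = np.linalg.eigh(laplacian(A))   # eigenvalues in increasing order

# Rewiring (v_fir, v_sec, v_thi) = (0, 1, 3): node 3 is a 2-hop neighbor of 0.
v_fir, v_sec, v_thi = 0, 1, 3
A_new = A.copy()
A_new[v_fir, v_sec] = A_new[v_sec, v_fir] = 0
A_new[v_fir, v_thi] = A_new[v_thi, v_fir] = 1
lam_new, _ = np.linalg.eigh(laplacian(A_new))

# First-order estimate from eq. (6): one value per eigenvalue.
est = (U[v_fir] - U[v_thi]) ** 2 - (U[v_fir] - U[v_sec]) ** 2
print(lam_new - lam)   # exact change
print(est)             # first-order estimate
```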

4.2 Graph Adversarial Attack with Reinforcement Learning

Given a graph $G$, the attack process is a general decision-making process $\mathcal{M} = (\mathcal{S}, \mathcal{A}, \mathcal{P}, \mathcal{R})$, where $\mathcal{A}$ is the set of actions, consisting of all valid rewiring operations; $\mathcal{S}$ is the set of states, consisting of all possible intermediate and final graphs after rewiring; $\mathcal{P}$ is the transition dynamics, describing how a rewiring action changes the graph structure; and $\mathcal{R}$ is the reward function, which gives the reward for an action taken at a given state. The procedure of attacking a graph can thus be described by a trajectory $(s_1, a_1, s_2, \ldots, a_T, s_{T+1})$, where $s_1 = G$. The key task for the attacker is to learn how to pick a suitable rewiring action $a_t$ when at state $s_t$. This can be done by learning a policy network that outputs the probability $p(a_t \mid s_t, \ldots, s_1)$ and sampling the rewiring operation accordingly. Modeled this way, the decision at a state depends on all its previous states, which can be difficult to handle due to the long-term dependency. However, all intermediate states are predicted to have the same label as the original graph (otherwise the attack would already have terminated). Thus, we can treat each state as a brand-new graph to be attacked, regardless of how it was reached; that is, the decision at state $s_t$ can depend solely on the current state. We therefore model the attack process as a Markov Decision Process (MDP) (Sutton and Barto, 2018) and adopt reinforcement learning to learn how to make effective decisions. We name the proposed framework ReWatt. The key elements of the reinforcement learning environment are defined as follows:

  • State Space: The state space of the environment consists of all the intermediate graphs generated by all possible sequences of rewiring operations.

  • Action Space: The action space consists of all the valid rewiring operations as defined in Definition 1. Note that the valid action space changes with the state, as the $k$-th hop neighbors differ across states.

  • State Transition Dynamics: Given an action (rewiring operation) $a_t = (v_{fir}, v_{sec}, v_{thi})$ at state $s_t$, the next state $s_{t+1}$ is obtained by deleting the edge between $v_{fir}$ and $v_{sec}$ in the current state and adding an edge connecting $v_{fir}$ with $v_{thi}$.

  • Reward Design: The main goal of the attacker is to make the classifier predict a label different from the one originally predicted. We also encourage the attacker to take as few actions as possible so that the modification to the graph structure is minimal. Thus, we assign a positive reward when the attack is successful and a negative reward for each action step taken (a code sketch is given after this list):

    $$R(s_t, a_t) = \begin{cases} 1 & \text{if } f(s_{t+1}) \neq f(G), \\ n_r & \text{otherwise,} \end{cases}$$

    where $n_r < 0$ is the negative reward penalizing each step taken. Similar to the flexible rewiring budget $K$, we propose to use $n_r = -\frac{1}{K}$, which depends on the size of the graph.

  • Termination: The attack process stops either when the number of actions reaches the budget $K$ or when the attacker has successfully changed the predicted label of the (slightly) modified graph.
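A minimal sketch of this reward in Python, assuming the per-step penalty $n_r = -1/K$ described above with budget $K = p \cdot |E|$; the function and argument names are illustrative:

```python
def reward(pred_label, orig_label, num_edges, p=0.03):
    """Step reward for the attack MDP: +1 once the classifier's prediction
    flips; otherwise the per-step penalty n_r = -1/K with K = p * |E|."""
    K = max(1, int(p * num_edges))
    return 1.0 if pred_label != orig_label else -1.0 / K
```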

4.3 Policy Network

In this subsection, we introduce the policy networks that learn the policy on top of graph representations produced by a GCN. Note that this GCN is separate from the one in the target classifier and has its own convolutional layers and parameters. To choose a valid rewiring action, we decompose the action into three steps: 1) choosing an edge $e_t = (v_1, v_2)$ from the edge set $E_t$ of the intermediate graph $s_t$; 2) determining which of $v_1$ or $v_2$ is to be $v_{fir}$, the other becoming $v_{sec}$; and 3) choosing the third node $v_{thi}$ from $\mathcal{N}^2(v_{fir}) \setminus \mathcal{N}^1(v_{fir})$. Correspondingly, we decompose $p(a_t \mid s_t)$ as follows

$$p(a_t \mid s_t) = p(e_t \mid s_t)\; p(v_{fir} \mid s_t, e_t)\; p(v_{thi} \mid s_t, e_t, v_{fir}). \qquad (7)$$

We design three policy networks based on GCN to estimate the three distributions on the right-hand side of eq. (7), which we introduce next. To select an edge from the edge set $E_t$, we generate edge representations from the node representations learned by the GCN. For an edge $e = (v_1, v_2)$, the edge representation is $h_e = [\mathrm{agg}(H_{v_1}, H_{v_2}) \,\|\, h_{s_t}]$, where $h_{s_t}$ is the graph representation of the state $s_t$, $\mathrm{agg}(\cdot, \cdot)$ is a function that combines the two node representations, and $\|$ denotes concatenation. We include $h_{s_t}$ in the edge representation to incorporate graph-level information when making the decision. The representations of all the edges in $E_t$ can be arranged into a matrix $\mathbf{E}_{s_t}$, where each row represents an edge. The probability distribution over all the edges can then be represented as

$$p(e_t \mid s_t) = \mathrm{softmax}(\mathrm{MLP}_e(\mathbf{E}_{s_t})), \qquad (8)$$

where $\mathrm{MLP}_e$ denotes a multilayer perceptron that maps each row of $\mathbf{E}_{s_t}$ to a scalar, producing a vector of $|E_t|$ scores which, after the softmax layer, represent the probability of choosing each edge. Let $e_t = (v_1, v_2)$ denote the edge sampled according to eq. (8). To decide which node is to be the first node, we estimate the probability distribution over these two nodes as

$$p(v \mid s_t, e_t) = \mathrm{softmax}(\mathrm{MLP}_{v}(h_v)), \quad v \in \{v_1, v_2\}, \qquad (9)$$

where $h_v = [H_v \,\|\, h_{s_t}]$ for $v \in \{v_1, v_2\}$. The first node $v_{fir}$ is sampled from the two nodes according to eq. (9), and the other node becomes $v_{sec}$. We then proceed to estimate the probability distribution $p(v_{thi} \mid s_t, e_t, v_{fir})$. Each candidate node $v \in \mathcal{N}^2(v_{fir}) \setminus \mathcal{N}^1(v_{fir})$ is likewise represented by $h_v = [H_v \,\|\, h_{s_t}]$, and the representations of all candidate nodes are arranged into a matrix $\mathbf{C}_{s_t}$ with each row representing a node. The probability distribution over all the candidate nodes can be modeled as:

$$p(v_{thi} \mid s_t, e_t, v_{fir}) = \mathrm{softmax}(\mathrm{MLP}_{thi}(\mathbf{C}_{s_t})). \qquad (10)$$

The third node $v_{thi}$ is sampled from the set of candidate nodes according to the probability distribution in eq. (10). An action $a_t$ is thus generated by sequentially estimating and sampling from the probability distributions in eqs. (8), (9) and (10).
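The three-step sampling procedure of eqs. (8)-(10) can be sketched as follows; the score arrays stand in for the outputs of the GCN-based policy networks, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sample_action(edges, edge_scores, endpoint_scores, cand_nodes, cand_scores):
    """Sample one rewiring action a_t = (v_fir, v_sec, v_thi) in three steps.
    edge_scores:       one score per edge (stand-in for MLP_e, eq. 8)
    endpoint_scores:   two scores for the chosen edge's endpoints (eq. 9)
    cand_nodes/scores: 2-hop candidates for v_fir and their scores (eq. 10)"""
    e_idx = rng.choice(len(edges), p=softmax(edge_scores))        # step 1: pick an edge
    v_i, v_j = edges[e_idx]
    first = rng.choice(2, p=softmax(endpoint_scores))             # step 2: pick v_fir
    v_fir, v_sec = ((v_i, v_j), (v_j, v_i))[first]
    v_thi = cand_nodes[rng.choice(len(cand_nodes),                # step 3: pick v_thi
                                  p=softmax(cand_scores))]
    return v_fir, v_sec, v_thi

# Toy usage with arbitrary scores.
print(sample_action([(0, 1), (1, 2)], np.array([0.2, 1.3]),
                    np.array([0.5, -0.1]), [3, 4], np.array([0.0, 0.7])))
```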

Figure 1: The overall framework of ReWatt

4.4 Proposed Framework - ReWatt

With the rewiring operation and the policy networks defined above, the overall framework is shown in Figure 1. At state $s_t$, the attacker uses a GCN to learn node and edge embeddings, which are fed into the policy networks to decide on the next action. Once a new action is sampled from the policy networks, the rewiring is performed on $s_t$ and we arrive at the new state $s_{t+1}$. We then query the black-box classifier for the prediction $f(s_{t+1})$, which is compared with $f(G)$ to obtain the reward. Policy gradient (Sutton and Barto, 2018) is adopted to learn the policies by maximizing the rewards.

5 Experiment

In this section, we conduct experiments to evaluate the performance of the proposed framework ReWatt. We also carry out a study to analyze how the trained attacker works.

5.1 Attack Performance

To demonstrate the effectiveness of ReWatt, we conduct experiments on three widely used social network data sets (Kersting et al., 2016) for graph classification, i.e., REDDIT-MULTI-12K, REDDIT-MULTI-5K and IMDB-MULTI (Yanardag and Vishwanathan, 2015). The statistics can be found in Appendix C (in the supplementary file).

In this work, the classifier we target is the GCN-based classifier introduced in Section 2, with a fixed number of graph convolutional layers and max-pooling as the pooling function to obtain the graph representation. We first train the classifier on a fraction of the data and then treat it as a black box to be attacked. We use part of the remaining data to train the attacker and the rest to test its performance. Thus, each data set is split into three parts: one to train the classifier, one to train the attacker, and one to test the attacker. As the IMDB-MULTI data set is quite small, we reserve a larger fraction of it for testing than for the REDDIT-MULTI-12K and REDDIT-MULTI-5K data sets, so that there is enough data for evaluation.

We compare the attacking performance of the proposed framework with RL-S2V (Dai et al., 2018), a random selection method, and some variants of our proposed framework. We briefly describe these baselines: 1) RL-S2V is a reinforcement learning based attack framework (Dai et al., 2018) that adds and deletes edges in the graph under a fixed budget shared by all graphs; 2) Random denotes an attacker that performs the proposed rewiring operations randomly; 3) Random-s is also based on random rewiring; since ReWatt can terminate before using its entire budget, we record the actual number of rewiring actions made by our method and allow Random-s to take exactly that number of rewiring actions; 4) ReWatt-n is a variant of ReWatt where the negative per-step reward is fixed to a constant for all graphs in the testing set, rather than depending on the graph size; and 5) ReWatt-a is a variant of ReWatt where any node in the graph may serve as the third node, instead of only 2-hop neighbors.

As RL-S2V only allows a fixed budget for all graphs, when comparing with it we also fix the number of proposed rewiring operations of ReWatt to a fixed number $K$ for all graphs. Note that a single proposed rewiring operation involves two edges; thus, for a fair comparison, we allow RL-S2V to take $2K$ actions (adding/deleting edges). We set $K = 1, 2, 3$ in the experiments. To compare with the random selection methods and the variants of ReWatt, we use a flexible budget: we allow at most $p \cdot |E|$ proposed rewiring operations for a graph $G = \{V, E\}$, where $p$ is a fixed percentage set to $1\%$, $2\%$ and $3\%$ in our experiments. We use the success rate as the measure to evaluate the performance of the attacker: a graph is said to be successfully attacked if its predicted label is changed when it is modified within the given budget.


           REDDIT-MULTI-12K       REDDIT-MULTI-5K        IMDB-MULTI
K          1      2      3        1      2      3        1      2      3
ReWatt     14.4%  21.6%  23.4%    8.99%  16.9%  18.0%    23.0%  23.3%  23.3%
RL-S2V     9.46%  18.5%  21.1%    4.49%  16.9%  18.0%    2.00%  6.00%  3.33%
p          1%     2%     3%       1%     2%     3%       1%     2%     3%
ReWatt     25.2%  32.9%  38.7%    11.2%  20.2%  27.0%    23.0%  23.0%  23.3%
ReWatt-a   26.1%  35.1%  42.8%    5.60%  21.3%  30.3%    24.3%  25.0%  25.6%
ReWatt-n   17.6%  25.7%  31.1%    5.60%  14.6%  19.1%    21.3%  21.3%  21.6%
Random     10.3%  15.7%  21.6%    3.30%  12.4%  16.9%    1.33%  1.33%  1.66%
Random-s   6.30%  6.70%  9.45%    5.60%  6.74%  11.0%    1.00%  1.33%  1.66%

Table 1: Performance comparison in terms of success rate

The results are shown in Table 1. From the table, we can make the following observations: 1) Compared to RL-S2V, ReWatt performs more effective attacks, especially on the IMDB-MULTI data set, where ReWatt outperforms RL-S2V by a large margin; 2) ReWatt outperforms the Random method, as expected. In particular, ReWatt is much more effective than Random-s, which performs exactly the same number of proposed rewiring operations as ReWatt. This also indicates that the Random method needs more rewiring operations than ReWatt for a successful attack; 3) The variant ReWatt-a outperforms ReWatt, which means that the performance of ReWatt can be further improved if we do not constrain the rewiring operation to 2-hop neighbors. However, as discussed in earlier sections, this may lead to more "noticeable" changes of the graph structure; and 4) ReWatt-n performs worse than ReWatt, which shows the advantage of the flexible reward design.

5.2 Attacker Analysis

In this subsection, we carry out experiments to analyze how ReWatt's changes to the graph structure affect the graph representation $h_G$ computed by eq. (2) and the logits (the output immediately after the softmax layer of the classifier). For convenience, in this section we denote the original graph as $G$ and the attacked graph as $G'$. Correspondingly, the graph representations of the original and attacked graphs are denoted as $h_G$ and $h_{G'}$, and their logits as $l_G$ and $l_{G'}$, respectively. To measure the difference in graph representation, we use the relative difference in terms of the $\ell_2$-norm, defined as $\frac{\|h_G - h_{G'}\|_2}{\|h_G\|_2}$. The logits describe the probability distribution of the given graph over the classes. Thus, we use the KL divergence (Kullback, 1997) to measure the difference between the logits of the original and attacked graphs: $KL(l_G \,\|\, l_{G'}) = \sum_{i=1}^{c} l_G[i] \log \frac{l_G[i]}{l_{G'}[i]}$, where $c$ is the number of classes in the data set and $l_G[i]$ denotes the logit for the $i$-th class.
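Both measures are straightforward to compute from the model outputs; a minimal sketch:

```python
import numpy as np

def rel_diff(h_G, h_G_prime):
    """Relative change of the graph representation in l2 norm."""
    return np.linalg.norm(h_G - h_G_prime) / np.linalg.norm(h_G)

def kl_divergence(l_G, l_G_prime, eps=1e-12):
    """KL divergence between the class distributions (post-softmax outputs)
    of the original and attacked graphs."""
    p = np.asarray(l_G) + eps
    q = np.asarray(l_G_prime) + eps
    return float(np.sum(p * np.log(p / q)))
```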

We perform the experiments on the REDDIT-MULTI-12K data set under the flexible-budget setting, allowing at most $p \cdot |E|$ proposed rewiring operations per graph. The results for the graph representation and the logits are shown in Figure 2 and Figure 3, respectively. The graphs in the testing set are separated into two groups: one contains all the graphs successfully attacked by ReWatt (shown in Figure 2(a) and Figure 3(a)), and the other contains those that survived ReWatt's attack (shown in Figure 2(b) and Figure 3(b)). Note that, for comparison, we also include the results of Random-s on these two groups of graphs. In these figures, each point represents a testing graph; the x-axis is the ratio $M/K$, where $M$ is the number of rewiring operations ReWatt used before the attack process terminated. Note that $M$ can be smaller than the budget $K$, as the process terminates once the attack succeeds.

As we can observe from the figures, compared with Random-s, ReWatt makes larger changes to both the graph representation and the logits while using exactly the same number of proposed rewiring operations. Comparing Figure 2(a) with Figure 2(b), we find that the perturbation generated by ReWatt strongly affects the graph representation even when the attack fails. This suggests that our attacker perturbs the graph structure in the right direction to fool the classifier even when it fails, potentially due to the limited budget. A similar observation can be made when comparing Figure 3(a) with Figure 3(b).

(a) Succeeded graphs
(b) Failed graphs
Figure 2: The change of graph representation after attack
(a) Succeeded graphs
(b) Failed graphs
Figure 3: The change of logits after attack

6 Related Work

In recent years, adversarial attacks on deep learning models have attracted increasing attention in the area of computer vision. Many deep models are found to be easily fooled by adversarial samples, which are generated by adding deliberately designed, unnoticeable perturbations to normal images (Szegedy et al., 2013; Goodfellow et al., 2014). More algorithms with different levels of access to the target classifier have since been proposed, including white-box attack models, which have access to the gradients (Moosavi-Dezfooli et al., 2016; Kurakin et al., 2016; Carlini and Wagner, 2017), and black-box attack models, which have limited access to the target classifier (Chen et al., 2017; Cheng et al., 2018; Ilyas et al., 2018).

Most of the aforementioned works focus on the computer vision domain, where data samples can be represented in a continuous space. Little attention has been paid to discrete data structures such as graphs. Graph Neural Networks have brought impressive advances to many graph-related tasks such as node classification and graph classification. Recent research shows that graph neural networks are also vulnerable to adversarial attacks. Zügner et al. (2018) proposed a greedy algorithm to perform adversarial attacks on the node classification task; their algorithm tries to change the label of a target node by modifying both the graph structure and node features. Dai et al. (2018) proposed a deep reinforcement learning based attacker to attack both the node classification and graph classification tasks. Zügner and Günnemann (2019) designed an algorithm based on meta learning to impair the overall performance of node classification. All three methods modify the graph structure by adding or deleting edges. A more recent work on attacking node classification (Wang et al., 2018) proposed to modify the graph structure by adding fake nodes. In this work, we propose to modify the graph structure by rewiring, which is shown to make less noticeable changes to the graph structure.

7 Conclusion

In this paper, we proposed a graph rewiring operation, which affects the graph structure in a less noticeable way than adding/deleting edges. The rewiring operation preserves basic graph properties such as the number of nodes and the number of edges. We then designed an attacker, ReWatt, based on the rewiring operation and reinforcement learning. Experiments on real-world data sets show the effectiveness of the proposed framework. Analyzing how the graph representation and logits change while a graph is being attacked provides further insight into the attacker's behavior.

References

Appendix A Graph Laplacian Based Measures

Many important graph properties are based on the eigenvalues of the Laplacian matrix of a graph [Chan and Akoglu, 2016]. Here we list a few:

  • Algebraic Connectivity: The algebraic connectivity $\lambda_2$ of a graph is the second-smallest eigenvalue of its Laplacian matrix [Fiedler, 1973]. Note that we only consider connected graphs in this work, so it is always larger than $0$. The larger the algebraic connectivity, the more difficult it is to separate the graph into components (i.e., the more edges need to be removed). The algebraic connectivity has previously been applied to measure network robustness [Sydney et al., 2013].

  • Effective Graph Resistance: The effective graph resistance is a graph measure derived from the field of electric circuit analysis, where it is defined as the sum of the effective resistances over all node pairs [Ellens et al., 2011]. It can be expressed in terms of the eigenvalues of the Laplacian matrix as follows [Ellens et al., 2011]:

    $$R_G = N \sum_{i=2}^{N} \frac{1}{\lambda_i}. \qquad (11)$$

By Corollary 2, we can represent the change of the algebraic connectivity as:

$$\Delta\lambda_2 = \left(u_2[v_{fir}] - u_2[v_{thi}]\right)^2 - \left(u_2[v_{fir}] - u_2[v_{sec}]\right)^2. \qquad (12)$$

According to the discussion above, $u_2[v_{fir}] - u_2[v_{thi}]$ is expected to be small when rewiring to a 2-hop neighbor. Thus, rewiring to a 2-hop neighbor is expected to perturb the algebraic connectivity less than adding an edge between two nodes that are far away from each other. A similar argument can be made for the effective graph resistance.
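Both measures can be computed directly from the Laplacian spectrum; a minimal sketch for a connected graph (illustrative names):

```python
import numpy as np

def laplacian_measures(A):
    """Algebraic connectivity (lambda_2) and effective graph resistance
    (eq. 11) of a connected graph with adjacency matrix A."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L)     # increasing order; lam[0] is (numerically) 0
    N = A.shape[0]
    return lam[1], N * np.sum(1.0 / lam[1:])
```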

Appendix B Proof of Corollary 1

Corollary 2.

For a given graph $G$ with Laplacian matrix $L$, one proposed rewiring operation $a = (v_{fir}, v_{sec}, v_{thi})$ changes the $i$-th eigenvalue by approximately $\Delta\lambda_i$, for $i = 1, \ldots, N$, where

$$\Delta\lambda_i = \left(u_i[v_{fir}] - u_i[v_{thi}]\right)^2 - \left(u_i[v_{fir}] - u_i[v_{sec}]\right)^2, \qquad (13)$$

and $u_i[v]$ denotes the value of the eigenvector $u_i$ at node $v$.

Proof.

Let $\Delta L$ denote the change in the Laplacian matrix after applying the rewiring operation $a = (v_{fir}, v_{sec}, v_{thi})$ to graph $G$. Then we have $\Delta L_{v_{sec}, v_{sec}} = -1$, $\Delta L_{v_{thi}, v_{thi}} = 1$, $\Delta L_{v_{fir}, v_{sec}} = \Delta L_{v_{sec}, v_{fir}} = 1$, $\Delta L_{v_{fir}, v_{thi}} = \Delta L_{v_{thi}, v_{fir}} = -1$, and $0$ elsewhere. Thus, by Lemma 1,

$$\Delta\lambda_i = u_i^\top \Delta L \, u_i = -u_i[v_{sec}]^2 + u_i[v_{thi}]^2 + 2 u_i[v_{fir}] u_i[v_{sec}] - 2 u_i[v_{fir}] u_i[v_{thi}] = \left(u_i[v_{fir}] - u_i[v_{thi}]\right)^2 - \left(u_i[v_{fir}] - u_i[v_{sec}]\right)^2,$$

which completes the proof. ∎

Appendix C Statistics of the Datasets

The statistics of the datasets are given in Table 2.

                   # graphs   # labels
REDDIT-MULTI-12K   11,929     12
REDDIT-MULTI-5K    4,999      5
IMDB-MULTI         1,500      3

Table 2: Statistics of the data sets