Graph-structured data are ubiquitous in many real-world applications. Data from different domains, such as social networks, molecular graphs and transportation networks, can all be modeled as graphs. Recently, increasing effort has been devoted to developing deep neural networks on graph-structured data. This stream of work, known as Graph Neural Networks (GNNs), has been shown to enhance performance in many graph-related tasks such as node classification (Kipf and Welling, 2016; Hamilton et al., 2017) and graph classification (Bruna et al., 2013; Defferrard et al., 2016; Ying et al., 2018; Zhang et al., 2018).
Meanwhile, deep neural networks have been shown to be vulnerable to adversarial attacks. In computer vision, performing an adversarial attack means adding a deliberately crafted, but unnoticeable, perturbation to a given image so that the deep model misclassifies the perturbed image. Unlike image data, which can be represented in a continuous space, graph-structured data are discrete. Few efforts have been made to investigate the robustness of graph neural networks against adversarial attacks, and such research has only recently started to emerge. Zügner et al. (2018) proposed a greedy algorithm to attack the semi-supervised node classification task; their method deliberately modifies the graph structure and node features so that the label of a targeted node is changed. Dai et al. (2018) proposed a reinforcement learning based algorithm to attack both the node classification and graph classification tasks by modifying only the graph structure. Zügner and Günnemann (2019) designed a meta-learning based attack method to impair the overall performance of the node classification task. In all of these works, the graph structure is modified by adding or deleting edges.
To ensure the difference between the attacked graph and the original graph is “unnoticeable”, the number of actions (adding/deleting edges) that the attacking algorithm can take is usually constrained by a budget. However, even when this budget is small, adding or deleting edges can still make “noticeable” changes to the graph structure. For example, many important graph properties are based on the eigenvalues and eigenvectors of the Laplacian matrix of the graph (Chan and Akoglu, 2016), while adding or deleting a single edge can remarkably change the eigenvalues/eigenvectors of the graph Laplacian (Ghosh and Boyd, 2006). Thus, in this work, we propose a new operation based on graph rewiring. A single rewiring operation involves three nodes $(v_{fir}, v_{sec}, v_{thi})$: we remove the existing edge between $v_{fir}$ and $v_{sec}$ and add an edge between $v_{fir}$ and $v_{thi}$. Note that $v_{thi}$ is constrained to be a 2-hop neighbor of $v_{fir}$ in our setting. The proposed rewiring operation clearly preserves some basic properties of the graph, such as the number of nodes, the number of edges and the total degree of the graph, while adding or deleting edges does not. Furthermore, the rewiring operation affects important Laplacian-based measures such as algebraic connectivity less than adding/deleting edges, as we theoretically show in Section 4.1. In addition, rewiring is a more natural way to modify a graph: in biology, for example, the evolution of DNA and amino acid sequences can lead to pervasive rewiring of protein–protein interactions (Zitnik et al., 2019).
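The rewiring operation described above can be sketched in a few lines of code. The following is a minimal illustration, not the paper's implementation: the graph is stored as an adjacency-set dict, and the helper names (`two_hop_candidates`, `rewire`) are ours.

```python
# Sketch of a single rewiring operation (v_fir, v_sec, v_thi) on an
# undirected graph stored as {node: set_of_neighbors}.

def two_hop_candidates(adj, v_fir):
    """Nodes exactly 2 hops from v_fir: neighbors-of-neighbors, excluding
    v_fir itself and its direct neighbors."""
    one_hop = adj[v_fir]
    two_hop = set()
    for u in one_hop:
        two_hop |= adj[u]
    return two_hop - one_hop - {v_fir}

def rewire(adj, v_fir, v_sec, v_thi):
    """Delete edge (v_fir, v_sec) and add edge (v_fir, v_thi).
    Node count, edge count and total degree are preserved."""
    assert v_sec in adj[v_fir]                      # must be a direct neighbor
    assert v_thi in two_hop_candidates(adj, v_fir)  # must be a 2-hop neighbor
    adj[v_fir].discard(v_sec)
    adj[v_sec].discard(v_fir)
    adj[v_fir].add(v_thi)
    adj[v_thi].add(v_fir)
    return adj
```

On a path graph 0-1-2-3, for instance, node 2 is a valid third node for `v_fir = 0`, and rewiring `(0, 1, 2)` leaves the edge count unchanged.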
In this paper, we aim to construct adversarial examples by performing rewiring operations for the task of graph classification. More specifically, we treat the process of applying a series of rewiring operations to a given graph as a discrete Markov decision process (MDP) and use reinforcement learning to learn how to make these decisions. We demonstrate the effectiveness of the proposed algorithm on real-world graphs and further analyze how the adversarial changes in the graph structure affect both the graph embedding learned by the graph neural network model and the output label.
In this section, we introduce notations and the target graph convolutional model we seek to attack. We denote a graph as $G = \{V, E\}$, where $V$ and $E$ are the sets of nodes and edges, respectively. The edges describe the relations between nodes, which can also be described by an adjacency matrix $A \in \{0,1\}^{N \times N}$ with $N = |V|$: $A_{ij} = 1$ means nodes $v_i$ and $v_j$ are connected, and $A_{ij} = 0$ otherwise. Each node in the graph is associated with some features, represented as a matrix $X \in \mathbb{R}^{N \times d}$, where the $i$-th row of $X$ denotes the features of node $v_i$ and $d$ is the dimension of the features. Thus, an attributed graph can be represented as $G = \{A, X\}$.
2.1 Graph Classification
In the setting of graph classification, we are given a set of graphs $\mathcal{G} = \{G_1, \dots, G_n\}$, each associated with a label $y_i$. The task is to build a good classifier using the given set of graphs such that it makes correct predictions on new, unseen graphs. A graph classifier parameterized by $\theta$ can be represented as $y = f_\theta(G)$, where $y$ denotes the label of graph $G$ predicted by the classifier. The parameters of the classifier can be learned by solving the optimization problem $\min_\theta \sum_{i=1}^{n} \mathcal{L}(f_\theta(G_i), y_i)$, where $\mathcal{L}(\cdot,\cdot)$ measures the difference between the predicted and ground-truth labels. Cross-entropy is a commonly adopted choice for $\mathcal{L}$.
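The cross-entropy objective mentioned above is straightforward to compute from the classifier's predicted class distributions. A minimal numpy sketch (illustrative only; `probs[i]` stands for the classifier output $f_\theta(G_i)$):

```python
import numpy as np

def cross_entropy(probs, labels):
    """Mean cross-entropy between predicted class distributions (rows of
    `probs`) and integer ground-truth labels."""
    probs = np.clip(probs, 1e-12, 1.0)  # numerical safety for log
    return -np.mean(np.log(probs[np.arange(len(labels)), labels]))
```

For example, with predictions `[[0.9, 0.1], [0.2, 0.8]]` and labels `[0, 1]`, the loss is $-\frac{1}{2}(\ln 0.9 + \ln 0.8) \approx 0.164$.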
2.2 Graph Convolution Networks
Recently, Graph Neural Networks have been shown to be effective in graph representation learning. These models usually learn node representations by iteratively aggregating, transforming and propagating node information. In this work, we adopt graph convolutional networks (GCN) (Kipf and Welling, 2016). A graph convolutional layer in the GCN framework can be represented as

$H^{(l+1)} = \sigma\left(\tilde{D}^{-\frac{1}{2}} \tilde{A} \tilde{D}^{-\frac{1}{2}} H^{(l)} W^{(l)}\right), \qquad (1)$

where $\tilde{A} = A + I$ is the adjacency matrix with self-loops, $\tilde{D}$ is its diagonal degree matrix, $\sigma(\cdot)$ is a nonlinear activation, $H^{(l)}$ is the output of the $l$-th layer with $H^{(0)} = X$, and $W^{(l)}$ represents the parameters of this layer. A GCN model usually consists of $L$ graph convolutional layers. The output of the GCN model is $H^{(L)}$, which we denote as $H$ for convenience. To obtain a graph-level embedding of $G$ to perform graph classification, we apply a global pooling over the node embeddings.
Different global pooling functions can be used; we adopt max pooling in this work:

$h_G = \mathrm{maxpool}(H), \qquad (2)$

where $h_G$ is the graph embedding. A multilayer perceptron (MLP) and a softmax layer are then sequentially applied to the graph embedding to predict the label of the graph:

$y = \mathrm{softmax}\left(\mathrm{MLP}_{\theta_m}(h_G)\right), \qquad (3)$

where $\mathrm{MLP}_{\theta_m}$ denotes the multilayer perceptron with parameters $\theta_m$. A GCN-based classifier for graph classification can be described using eqs. (1), (2) and (3) as introduced above. For simplicity, we summarize it as $f_\theta(G)$, where $\theta$ includes all the parameters in the model.
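The data flow above (graph convolutions, max pooling, softmax readout) can be illustrated at the shape level with plain numpy. This is a hedged sketch with random, untrained weights; the function names are ours and only the tensor shapes and operations mirror eqs. (1)-(3).

```python
import numpy as np

def gcn_layer(A_hat, H, W):
    """One graph convolution: ReLU(A_hat @ H @ W), cf. eq. (1)."""
    return np.maximum(A_hat @ H @ W, 0.0)

def graph_classifier(A, X, weights, W_out):
    """Forward pass of a GCN graph classifier sketch."""
    A_tilde = A + np.eye(A.shape[0])            # add self-loops
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(d ** -0.5)
    A_hat = D_inv_sqrt @ A_tilde @ D_inv_sqrt   # symmetric normalization
    H = X
    for W in weights:                           # stacked GCN layers
        H = gcn_layer(A_hat, H, W)
    h_G = H.max(axis=0)                         # max pooling -> graph embedding
    logits = h_G @ W_out                        # linear readout
    p = np.exp(logits - logits.max())
    return p / p.sum()                          # class probabilities
```

The output is a probability vector over classes; the predicted label is its argmax.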
3 Problem Formulation
In this work, we aim to build an attacker that takes a graph as input and modifies the structure of the graph to fool a GCN classifier. Modifying a graph's structure is equivalent to modifying its adjacency matrix. The attacker can thus be represented as a function

$\mathcal{T}(G) = G', \quad \text{with } G = \{A, X\} \text{ and } G' = \{A', X\}, \qquad (4)$

which changes only the adjacency matrix. Given a classifier $f$, the goal of the attacker is to modify the graph structure so that the classifier outputs a different label from the one it originally predicted. Note that we omit $\theta$ in $f_\theta$, as the classifier is already trained and fixed. Mathematically, the goal of the attacker can be represented as $f(\mathcal{T}(G)) \neq f(G)$.
As described above, the attacker is in fact specifically designed for a given classifier $f$. To reflect this in the notation, we denote the attacker for the classifier $f$ as $\mathcal{T}_f$. In our work, the attacker has limited knowledge of the classifier: the only information it can obtain from the classifier is the label of (modified) graphs. In other words, the classifier $f$ is treated as a black-box model by the attacker $\mathcal{T}_f$.
An important constraint on the attacker is that it is only allowed to make “unnoticeable” changes to the graph structure. To account for this, we propose the rewiring operation, which makes more subtle changes than adding or deleting edges. In Section 4.1, we will show that the rewiring operation better preserves many important properties of the graph than adding or deleting edges. The definition of the proposed rewiring is given below:
Definition 1. A rewiring operation involves three nodes and can be denoted as $a = (v_{fir}, v_{sec}, v_{thi})$, where $v_{sec} \in \mathcal{N}^1(v_{fir})$ and $v_{thi} \in \mathcal{N}^2(v_{fir}) \setminus \mathcal{N}^1(v_{fir})$. Here $\mathcal{N}^k(v)$ denotes the $k$-th hop neighbors of node $v$ and $\setminus$ stands for set exclusion. The rewiring operation deletes the existing edge between nodes $v_{fir}$ and $v_{sec}$, while adding an edge to connect nodes $v_{fir}$ and $v_{thi}$.
The attacker is given a budget $K$ of rewiring operations to modify the graph structure. A straightforward way to set $K$ is to choose a small fixed number. However, graphs in a given data set typically have various sizes, and the same number of rewiring operations affects graphs of different sizes to different extents. Thus, it may not be appropriate to use the same $K$ for all graphs. A more suitable way is to allow a flexible number of rewiring operations according to the graph size. We therefore propose to use $K = p \cdot |E|$ for a given graph $G = \{V, E\}$, where $p$ is a ratio.
The process of the attacker on a graph $G$ can now be denoted as

$\mathcal{T}_f(G) = a_M \circ \cdots \circ a_2 \circ a_1(G), \qquad (5)$

where the right-hand side means sequentially applying the rewiring operations $a_1, \dots, a_M$ to the graph $G$, and $M$ is the number of rewiring operations taken, with $M \leq K$.
4 Rewiring-based Attack to Graph Convolutional Networks
Next, we first discuss the properties of the proposed rewiring operation to show its advantages. We then introduce the proposed attacking framework ReWatt, which is based on reinforcement learning and rewiring.
4.1 Properties of the Proposed Rewiring Operation
The proposed rewiring operation has several advantages over simply adding or deleting edges. One obvious advantage is that it does not change the number of nodes, the number of edges or the total degree of a graph, whereas operations like adding or deleting edges may change these properties.
Many important graph properties are based on the eigenvalues of the Laplacian matrix of a graph (Chan and Akoglu, 2016), such as algebraic connectivity (Fiedler, 1973) and effective graph resistance (Ellens et al., 2011); a detailed description of both is given in Appendix A. Next, we demonstrate that the proposed rewiring operation is likely to make smaller changes to the eigenvalues, resulting in less noticeable changes under Laplacian-based measures. For a graph $G$ with adjacency matrix $A$, its Laplacian matrix is defined as $L = D - A$, where $D$ is the diagonal degree matrix (Mohar et al., 1991). Let $\lambda_1 \leq \lambda_2 \leq \cdots \leq \lambda_N$ denote the eigenvalues of the Laplacian matrix arranged in increasing order, with $u_1, \dots, u_N$ the corresponding eigenvectors. We now show how a single proposed rewiring operation affects the eigenvalues. Our analysis is based on the following lemma:
Lemma 1 (Stewart, 1990). Let $(\lambda_i, u_i)$ be the eigen-pairs of a symmetric matrix $A$. Given a small perturbation $\Delta A$ to the matrix $A$, the change of its eigenvalues can be approximated by

$\Delta \lambda_i \approx u_i^T \, \Delta A \, u_i.$
The proof can be found in (Stewart, 1990). Using this lemma, we have the following corollary
Corollary 2. For a given graph $G$ with Laplacian matrix $L$, one proposed rewiring operation $a = (v_{fir}, v_{sec}, v_{thi})$ affects the eigenvalue $\lambda_i$ by $\Delta\lambda_i$, for $i = 1, \dots, N$, where

$\Delta\lambda_i \approx (u_{i,sec} - u_{i,thi})\left(2u_{i,fir} - u_{i,sec} - u_{i,thi}\right),$

where $u_{i,j}$ denotes the $j$-th value of the eigenvector $u_i$.
The proof can be found in Appendix B (in the supplementary file).
Furthermore, each eigenvalue of the Laplacian matrix measures the “smoothness” of its corresponding eigenvector (Shuman et al., 2012; Sandryhaila and Moura, 2014), i.e., how different its elements are from those of neighboring nodes. Thus, the first few eigenvectors, which have relatively small eigenvalues, are rather “smooth”. Note that in the proposed rewiring operation, $v_{sec}$ is a direct neighbor of $v_{fir}$ and $v_{thi}$ is a 2-hop neighbor of $v_{fir}$. Thus, the difference $u_{i,fir} - u_{i,thi}$ is expected to be smaller than the difference $u_{i,fir} - u_{i,far}$, where $v_{far}$ can be any other node that is further away. This means that the proposed rewiring operation (to 2-hop neighbors) is likely to make smaller changes to the first few eigenvalues than rewiring to nodes further away, or than adding an edge between two nodes that are far apart.
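The first-order perturbation result above can be sanity-checked numerically. The following sketch (our own toy example, not from the paper) applies a rewiring to a 5-node cycle and compares the exact eigenvalue change of the Laplacian with the first-order estimate $u_i^T \Delta L\, u_i$; one identity we can verify exactly is that both changes sum to zero, since rewiring preserves the total degree and hence the trace of $L$.

```python
import numpy as np

def laplacian(A):
    """Graph Laplacian L = D - A."""
    return np.diag(A.sum(axis=1)) - A

# 5-node cycle: edges 0-1, 1-2, 2-3, 3-4, 4-0.
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]:
    A[i, j] = A[j, i] = 1.0

lam, U = np.linalg.eigh(laplacian(A))

# Rewiring (v_fir, v_sec, v_thi) = (0, 1, 2): delete edge (0,1),
# add edge (0,2); node 2 is a 2-hop neighbor of node 0 via node 1.
A2 = A.copy()
A2[0, 1] = A2[1, 0] = 0.0
A2[0, 2] = A2[2, 0] = 1.0

delta_L = laplacian(A2) - laplacian(A)
approx = np.array([U[:, i] @ delta_L @ U[:, i] for i in range(5)])  # 1st order
exact = np.linalg.eigh(laplacian(A2))[0] - lam                      # exact
```

Because $\mathrm{tr}(\Delta L) = 0$ for a rewiring, both `approx.sum()` and `exact.sum()` vanish (up to floating-point error), while an edge addition alone would shift the trace by 2.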
4.2 Graph Adversarial Attack with Reinforcement Learning
Given a graph $G$, the process of the attacker is a general decision making process $M = (S, A, P, R)$, where $A$ is the set of actions, consisting of all valid rewiring operations; $S$ is the set of states, consisting of all possible intermediate and final graphs after rewiring; $P$ is the transition dynamics describing how a rewiring action changes the graph structure, $p(s_{t+1} \mid s_t, a_t)$; and $R$ is the reward function, which gives the reward for the action taken at a given state. The procedure of attacking a graph can thus be described by a trajectory $(s_1, a_1, s_2, \dots, a_M, s_{M+1})$, where $s_1 = G$. The key point for the attacker is to learn how to pick a suitable rewiring action $a_t$ when at state $s_t$. This can be done by learning a policy network that outputs the probability $\pi(a_t \mid s_t, \dots, s_1)$ and sampling the rewiring operation accordingly. Modeled this way, the decision at a state depends on all its previous states, which can be difficult to handle due to the long-term dependency. However, note that the intermediate states are all predicted to have the same label as the original graph. Thus, we can treat each state as a brand-new graph to be attacked, regardless of how it was reached. That is to say, the decision at state $s_t$ can depend solely on the current state, $\pi(a_t \mid s_t)$. We therefore model the attack process as a Markov decision process (MDP) (Sutton and Barto, 2018) and adopt reinforcement learning to learn how to make effective decisions. We name the proposed framework ReWatt. The key elements of the reinforcement learning environment are defined as follows:
State Space: The state space of the environment consists of all the intermediate graphs generated by all possible rewiring operations.

Action Space: The action space consists of all valid rewiring operations as defined in Definition 1. Note that the valid action space changes with the state, since the 2-hop neighbors differ between states.

State Transition Dynamics: Given an action $a_t = (v_{fir}, v_{sec}, v_{thi})$ at state $s_t$, the next state $s_{t+1}$ is obtained by deleting the edge between $v_{fir}$ and $v_{sec}$ in the current state and adding an edge connecting $v_{fir}$ and $v_{thi}$.

Reward Design: The main goal of the attacker is to make the classifier predict a label different from the one originally predicted. We also encourage the attacker to take as few actions as possible so that the modification to the graph structure is minimal. Thus, we assign a positive reward when the attack is successful and a negative reward for each action step taken:

$R(s_t, a_t) = \begin{cases} 1 & \text{if } f(s_{t+1}) \neq f(s_1), \\ n_r & \text{otherwise}, \end{cases} \qquad (6)$

where $n_r < 0$ is the negative reward penalizing each step taken. Similar to how we set a flexible rewiring budget $K = p \cdot |E|$, here we propose to use $n_r = -\frac{1}{p \cdot |E|}$, which depends on the size of the graph.

Termination: The attack process stops either when the number of actions reaches the budget $K$ or when the attacker successfully changes the label of the slightly modified graph.
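The reward design above is simple enough to write down directly. A minimal sketch (the function name and default ratio `p` are illustrative assumptions, not values fixed by the paper):

```python
def reward(pred_new, pred_orig, num_edges, p=0.02):
    """Reward for one attack step, cf. eq. (6): +1 if the classifier's
    prediction on the modified graph differs from the original prediction,
    otherwise a graph-size-dependent per-step penalty -1/(p * |E|)."""
    if pred_new != pred_orig:
        return 1.0                      # successful attack
    return -1.0 / (p * num_edges)       # penalize each extra rewiring step
```

Larger graphs thus incur a smaller per-step penalty, matching the flexible budget: the same ratio of rewiring operations is tolerated regardless of graph size.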
4.3 Policy Network
In this subsection, we introduce the policy networks that learn the policy on top of graph representations learned by a GCN. Note that this GCN is separate from the one in the target classifier and has its own convolutional layers and parameters. To choose a valid rewiring action, we decompose the action into three steps: 1) choosing an edge $e_t = (v_1, v_2)$ from the edge set $\mathcal{E}_t$ of the intermediate graph $G_t$; 2) determining which of $v_1$ and $v_2$ is $v_{fir}$, the other being $v_{sec}$; and 3) choosing the third node $v_{thi}$ from $\mathcal{N}^2(v_{fir}) \setminus \mathcal{N}^1(v_{fir})$. Correspondingly, we decompose $\pi(a_t \mid s_t)$ as

$\pi(a_t \mid s_t) = p(e_t \mid s_t)\; p(v_{fir} \mid e_t, s_t)\; p(v_{thi} \mid v_{fir}, e_t, s_t). \qquad (7)$

We design three policy networks based on GCN to estimate the three distributions on the right-hand side of eq. (7), which we introduce next. To select an edge from the edge set $\mathcal{E}_t$, we generate edge representations from the node representations learned by the GCN. For an edge $e = (v_i, v_j)$, the edge representation is $h_e = [g(h_{v_i}, h_{v_j}), h_{s_t}]$, where $h_{s_t}$ is the graph representation of the state $s_t$, $g(\cdot,\cdot)$ is a function combining the two node representations and $[\cdot,\cdot]$ denotes concatenation. We include $h_{s_t}$ in the edge representation to incorporate graph-level information when making the decision. The representations of all edges in $\mathcal{E}_t$ form a matrix $E_t$, where each row represents an edge. The probability distribution over all the edges can be represented as

$p(e_t \mid s_t) = \mathrm{softmax}\left(\mathrm{MLP}_e(E_t)\right), \qquad (8)$

where $\mathrm{MLP}_e$ denotes a multilayer perceptron that maps $E_t$ to a vector in $\mathbb{R}^{|\mathcal{E}_t|}$, which, after going through the softmax layer, gives the probability of choosing each edge. Let $e_t = (v_1, v_2)$ denote the edge sampled according to eq. (8). To decide which node is the first node, we estimate the probability distribution over these two nodes as

$p(v \mid e_t, s_t) = \mathrm{softmax}\left(\mathrm{MLP}_{fir}([h_v, h_{e_t}])\right), \quad v \in \{v_1, v_2\}, \qquad (9)$

and the first node $v_{fir}$ can be sampled from the two nodes according to eq. (9), the other node becoming $v_{sec}$. We then proceed to estimate the probability distribution $p(v_{thi} \mid v_{fir}, e_t, s_t)$. For any candidate node $v \in \mathcal{N}^2(v_{fir}) \setminus \mathcal{N}^1(v_{fir})$, we use $[h_v, h_{e_t}]$ to represent it. The representations of all candidate nodes form a matrix $C_t$, with each row representing a node. The probability distribution of choosing the third node over all candidate nodes can be modeled as

$p(v_{thi} \mid v_{fir}, e_t, s_t) = \mathrm{softmax}\left(\mathrm{MLP}_{thi}(C_t)\right). \qquad (10)$

The third node $v_{thi}$ can be sampled from the set of candidate nodes according to the probability distribution in eq. (10). An action is generated by sequentially estimating and sampling from the probability distributions in eqs. (8), (9) and (10).
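The three-step sampling can be sketched with plain numpy. In this illustration the score vectors stand in for the MLP outputs of eqs. (8)-(10); the helper names are ours, not the paper's.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def sample_action(rng, edge_scores, node_pair_scores, candidate_scores):
    """Sequentially sample (edge index, first-node index in {0, 1},
    candidate-node index), mirroring the decomposition in eq. (7)."""
    e_idx = rng.choice(len(edge_scores), p=softmax(edge_scores))    # eq. (8)
    fir_idx = rng.choice(2, p=softmax(node_pair_scores))            # eq. (9)
    thi_idx = rng.choice(len(candidate_scores),
                         p=softmax(candidate_scores))               # eq. (10)
    return e_idx, fir_idx, thi_idx
```

In the full framework each score vector would be produced by its own MLP over the edge, node-pair and candidate-node representations; here uniform scores simply yield uniform sampling.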
4.4 Proposed Framework - ReWatt
With the rewiring operation and the policy networks defined above, our overall framework is shown in Figure 1. At state $s_t$, the attacker uses a GCN to learn node and edge embeddings, which are fed into the policy networks to decide on the next action. Once an action is sampled from the policy networks, the rewiring is performed on $s_t$ and we arrive at the new state $s_{t+1}$. We query the black-box classifier to get the prediction $f(s_{t+1})$, which is compared with $f(s_1)$ to compute the reward. Policy gradient (Sutton and Barto, 2018) is adopted to learn the policies by maximizing the rewards.
5 Experiments

In this section, we conduct experiments to evaluate the performance of the proposed framework ReWatt. We also carry out a study to analyze how the trained attacker works.
5.1 Attack Performance
To demonstrate the effectiveness of ReWatt, we conduct experiments on three widely used social network data sets (Kersting et al., 2016) for graph classification, i.e., REDDIT-MULTI-12K, REDDIT-MULTI-5K and IMDB-MULTI (Yanardag and Vishwanathan, 2015). The statistics can be found in Appendix C (in the supplementary file).
In this work, the classifier we target is the GCN-based classifier introduced in Section 2. We fix the number of GCN layers and use max pooling as the pooling function to obtain the graph representation. Note that we first train the classifier using a fraction of the data and then treat it as a black box to be attacked. We use part of the remaining data to train the attacker and the rest to test the attacker's performance. Thus, each data set is split into three parts: the first is used to train the classifier, the second to train the attacker, and the remainder to test the attacker. As the IMDB-MULTI data set is quite small, we reserve a larger fraction of it for testing than for the REDDIT-MULTI-12K and REDDIT-MULTI-5K data sets.
We compare the attack performance of the proposed framework with RL-S2V (Dai et al., 2018), a random selection method and several variants of our framework. We briefly describe these baselines: 1) RL-S2V is a reinforcement learning based attack framework (Dai et al., 2018) that adds and deletes edges with a fixed budget for all graphs; 2) Random denotes an attacker that performs the proposed rewiring operations randomly; 3) Random-s is also based on random rewiring, but since ReWatt can terminate before using its full budget, we record the actual number of rewiring actions ReWatt takes and allow Random-s exactly that many rewiring actions; 4) ReWatt-n is a variant of ReWatt in which the negative reward $n_r$ is fixed to a constant for all graphs in the test set; and 5) ReWatt-a is a variant of ReWatt in which any node in the graph may serve as the third node, instead of only 2-hop neighbors.
Since RL-S2V only allows a fixed budget for all graphs, when comparing with it we also fix the number of rewiring operations ReWatt may take to a constant $K$ for all graphs. Note that a single rewiring operation involves two edges; thus, for a fair comparison, we allow RL-S2V to take $2K$ actions (adding/deleting edges). When comparing with the random selection methods and the ReWatt variants, we use a flexible budget: we allow at most $p \cdot |E_i|$ rewiring operations for graph $G_i$, where $p$ is a fixed percentage. We use the success rate as the evaluation measure. A graph is said to be successfully attacked if its label changes when it is modified within the given budget.
The results are shown in Table 1. From the table, we make the following observations: 1) Compared to RL-S2V, ReWatt performs more effective attacks, outperforming RL-S2V by a large margin on the IMDB-MULTI data set in particular; 2) ReWatt outperforms the Random method, as expected. In particular, ReWatt is much more effective than Random-s, which performs exactly the same number of rewiring operations as ReWatt; this also indicates that the Random method needs more rewiring operations than ReWatt to attack successfully; 3) The variant ReWatt-a outperforms ReWatt, which means that if we do not constrain the rewiring operation to 2-hop neighbors, the performance of ReWatt can be further improved. However, as discussed in earlier sections, this may lead to more “noticeable” changes of the graph structure; and 4) ReWatt-n performs worse than ReWatt, which shows the advantage of the flexible reward design.
5.2 Attacker Analysis
In this subsection, we carry out experiments to analyze how ReWatt's changes to the graph structure affect the graph representation computed by eq. (2) and the logits (the output immediately after the softmax layer of the classifier). For convenience, we denote the original graph as $G$ and the attacked graph as $G'$ in this section. Correspondingly, the graph representations and logits of the original (attacked) graph are denoted as $h_G$ ($h_{G'}$) and $l_G$ ($l_{G'}$), respectively. To measure the difference in graph representation, we use the relative difference in terms of the $\ell_2$-norm, defined as $\|h_G - h_{G'}\|_2 / \|h_G\|_2$. The logits give the probability that the given graph belongs to each of the classes. Thus, we use the KL-divergence (Kullback, 1997) to measure the difference between the logits of the original and attacked graphs, $\mathrm{KL}(l_G \| l_{G'}) = \sum_{k=1}^{C} l_G[k] \log \frac{l_G[k]}{l_{G'}[k]}$, where $C$ is the number of classes in the data set and $l_G[k]$ denotes the logit for the $k$-th class.
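The two measures just defined are one-liners in numpy. A minimal sketch (function names are ours), where `h_*` are graph embeddings and `p_*` are the post-softmax class distributions:

```python
import numpy as np

def rel_diff(h_orig, h_att):
    """Relative L2 change of the graph representation."""
    return np.linalg.norm(h_orig - h_att) / np.linalg.norm(h_orig)

def kl_div(p_orig, p_att, eps=1e-12):
    """KL divergence between original and attacked class distributions."""
    p = np.clip(p_orig, eps, 1.0)
    q = np.clip(p_att, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))
```

Both measures are zero when the attack leaves the embedding or the prediction unchanged, and grow as the perturbation moves the graph in embedding or label space.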
We perform the experiments on the REDDIT-MULTI-12K data set, allowing at most $p \cdot |E|$ rewiring operations per graph. The results for the graph representation and the logits are shown in Figure 2 and Figure 3, respectively. The graphs in the test set are separated into two groups: one containing all graphs successfully attacked by ReWatt (shown in Figure 2(a) and Figure 3(a)) and the other containing those that survived ReWatt's attack (shown in Figure 2(b) and Figure 3(b)). Note that, for comparison, we also include the results of Random-s on these two groups of graphs. In these figures, each point represents a test graph; the x-axis is the ratio $M / |E|$, where $M$ is the number of rewiring operations ReWatt used before the attack process terminated. Note that $M$ can be smaller than the budget, as the process terminates once the attack succeeds.
As we can observe from the figures, compared with Random-s, ReWatt makes larger changes to both the graph representation and the logits using exactly the same number of rewiring operations. Comparing Figure 2(a) with Figure 2(b), we find that the perturbation generated by ReWatt strongly affects the graph representation even when it fails to change the label. This suggests that our attacker perturbs the graph structure in the right direction to fool the classifier, and that failures are potentially due to the limited budget. A similar observation can be made when comparing Figure 3(a) with Figure 3(b).
6 Related Work
In recent years, adversarial attacks on deep learning models have attracted increasing attention in computer vision. Many deep models have been found to be easily fooled by adversarial samples, which are generated by adding deliberately designed, unnoticeable perturbations to normal images (Szegedy et al., 2013; Goodfellow et al., 2014). Algorithms with different levels of access to the target classifier have been proposed, including white-box attack models, which have access to the gradients (Moosavi-Dezfooli et al., 2016; Kurakin et al., 2016; Carlini and Wagner, 2017), and black-box attack models, which have limited access to the target classifier (Chen et al., 2017; Cheng et al., 2018; Ilyas et al., 2018).
Most of the aforementioned works focus on the computer vision domain, where data samples are represented in a continuous space; little attention has been paid to discrete data structures such as graphs. Graph neural networks have brought impressive advances to many graph-related tasks such as node classification and graph classification, but recent research shows that they are also vulnerable to adversarial attacks. Zügner et al. (2018) proposed a greedy algorithm to perform adversarial attacks on the node classification task; their algorithm tries to change the label of a target node by modifying both the graph structure and the node features. Dai et al. (2018) proposed a deep reinforcement learning based attacker for both the node classification and graph classification tasks. Zügner and Günnemann (2019) designed a meta-learning based algorithm to impair the overall performance of node classification. All three methods modify the graph structure by adding or deleting edges. A more recent work on attacking node classification (Wang et al., 2018) proposed to modify the graph structure by adding fake nodes. In this work, we propose to modify the graph structure using rewiring, which is shown to make less noticeable changes to the graph structure.
7 Conclusion

In this paper, we proposed a graph rewiring operation that affects the graph structure in a less noticeable way than adding/deleting edges. The rewiring operation preserves basic graph properties such as the number of nodes and the number of edges. We then designed an attacker, ReWatt, based on the rewiring operations and reinforcement learning. Experiments on real-world data sets show the effectiveness of the proposed framework. Our analysis of how the graph representation and logits change while a graph is being attacked provides further insight into the attacker.
- Bruna et al.  Joan Bruna, Wojciech Zaremba, Arthur Szlam, and Yann LeCun. Spectral networks and locally connected networks on graphs. arXiv preprint arXiv:1312.6203, 2013.
- Carlini and Wagner  Nicholas Carlini and David Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
- Chan and Akoglu  Hau Chan and Leman Akoglu. Optimizing network robustness by edge rewiring: a general framework. Data Mining and Knowledge Discovery, 30(5):1395–1425, 2016.
- Chen et al.  Pin-Yu Chen, Huan Zhang, Yash Sharma, Jinfeng Yi, and Cho-Jui Hsieh. Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, pages 15–26. ACM, 2017.
- Cheng et al.  Minhao Cheng, Thong Le, Pin-Yu Chen, Jinfeng Yi, Huan Zhang, and Cho-Jui Hsieh. Query-efficient hard-label black-box attack: An optimization-based approach. arXiv preprint arXiv:1807.04457, 2018.
- Dai et al.  Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. Adversarial attack on graph structured data. In International Conference on Machine Learning, pages 1123–1132, 2018.
- Defferrard et al.  Michaël Defferrard, Xavier Bresson, and Pierre Vandergheynst. Convolutional neural networks on graphs with fast localized spectral filtering. In Advances in neural information processing systems, pages 3844–3852, 2016.
- Ellens et al.  Wendy Ellens, FM Spieksma, P Van Mieghem, A Jamakovic, and RE Kooij. Effective graph resistance. Linear algebra and its applications, 435(10):2491–2506, 2011.
- Fiedler  Miroslav Fiedler. Algebraic connectivity of graphs. Czechoslovak mathematical journal, 23(2):298–305, 1973.
- Ghosh and Boyd  Arpita Ghosh and Stephen Boyd. Growing well-connected graphs. In Proceedings of the 45th IEEE Conference on Decision and Control, pages 6605–6611. IEEE, 2006.
- Goodfellow et al.  Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.
- Hamilton et al.  Will Hamilton, Zhitao Ying, and Jure Leskovec. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems, pages 1024–1034, 2017.
- Ilyas et al.  Andrew Ilyas, Logan Engstrom, Anish Athalye, and Jessy Lin. Black-box adversarial attacks with limited queries and information. arXiv preprint arXiv:1804.08598, 2018.
- Kersting et al.  Kristian Kersting, Nils M. Kriege, Christopher Morris, Petra Mutzel, and Marion Neumann. Benchmark data sets for graph kernels, 2016.
- Kipf and Welling  Thomas N Kipf and Max Welling. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907, 2016.
- Kullback  Solomon Kullback. Information theory and statistics. Courier Corporation, 1997.
- Kurakin et al.  Alexey Kurakin, Ian Goodfellow, and Samy Bengio. Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533, 2016.
- Mohar et al.  Bojan Mohar, Y Alavi, G Chartrand, and OR Oellermann. The laplacian spectrum of graphs. Graph theory, combinatorics, and applications, 2(871-898):12, 1991.
- Moosavi-Dezfooli et al.  Seyed-Mohsen Moosavi-Dezfooli, Alhussein Fawzi, and Pascal Frossard. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 2574–2582, 2016.
- Sandryhaila and Moura  Aliaksei Sandryhaila and Jose MF Moura. Discrete signal processing on graphs: Frequency analysis. IEEE Transactions on Signal Processing, 62(12):3042–3054, 2014.
- Shuman et al.  David I Shuman, Sunil K Narang, Pascal Frossard, Antonio Ortega, and Pierre Vandergheynst. The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains. arXiv preprint arXiv:1211.0053, 2012.
- Stewart  Gilbert W Stewart. Matrix perturbation theory. 1990.
- Sutton and Barto  Richard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
- Sydney et al.  Ali Sydney, Caterina Scoglio, and Don Gruenbacher. Optimizing algebraic connectivity by edge rewiring. Applied Mathematics and computation, 219(10):5465–5479, 2013.
- Szegedy et al.  Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199, 2013.
- Wang et al.  Xiaoyun Wang, Joe Eaton, Cho-Jui Hsieh, and Felix Wu. Attack graph convolutional networks by adding fake nodes. arXiv preprint arXiv:1810.10751, 2018.
- Yanardag and Vishwanathan  Pinar Yanardag and SVN Vishwanathan. Deep graph kernels. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 1365–1374. ACM, 2015.
- Ying et al.  Rex Ying, Jiaxuan You, Christopher Morris, Xiang Ren, William L Hamilton, and Jure Leskovec. Hierarchical graph representation learning withdifferentiable pooling. arXiv preprint arXiv:1806.08804, 2018.
- Zhang et al.  Muhan Zhang, Zhicheng Cui, Marion Neumann, and Yixin Chen. An end-to-end deep learning architecture for graph classification. In Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
- Zitnik et al.  Marinka Zitnik, Rok Sosič, Marcus W. Feldman, and Jure Leskovec. Evolution of resilience in protein interactomes across the tree of life. Proceedings of the National Academy of Sciences, 116(10):4426–4433, 2019.
- Zügner et al.  Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. Adversarial attacks on neural networks for graph data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pages 2847–2856. ACM, 2018.
- Zügner and Günnemann  Daniel Zügner and Stephan Günnemann. Adversarial attacks on graph neural networks via meta learning. In International Conference on Learning Representations, 2019.
Appendix A Graph Laplacian Based Measures
Many important graph properties are based on the eigenvalues of the Laplacian matrix of a graph [Chan and Akoglu, 2016]. Here we list a few:
Algebraic Connectivity: The algebraic connectivity of a graph is the second-smallest eigenvalue $\lambda_2$ of its Laplacian matrix [Fiedler, 1973]. Note that we only consider connected graphs in this work, so it is always larger than 0. The larger the algebraic connectivity, the more difficult it is to separate the graph into components (i.e., the more edges need to be removed). The algebraic connectivity has previously been applied to measure network robustness (Sydney et al., 2013).
Effective Graph Resistance: The effective graph resistance is a graph measure derived from the field of electric circuit analysis, where it is defined as the sum of the effective resistances over all node pairs [Ellens et al., 2011]. It can be represented using the eigenvalues of the Laplacian matrix as follows [Ellens et al., 2011]:

$R = N \sum_{i=2}^{N} \frac{1}{\lambda_i}.$
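Both measures can be computed directly from the Laplacian spectrum. A small verification sketch (our own example): for the complete graph $K_3$, the Laplacian eigenvalues are $\{0, 3, 3\}$, so the algebraic connectivity is 3 and the effective graph resistance is $3 \cdot (1/3 + 1/3) = 2$, matching the pairwise circuit calculation (resistance $2/3$ between each of the three pairs).

```python
import numpy as np

def laplacian_measures(A):
    """Algebraic connectivity (second-smallest Laplacian eigenvalue) and
    effective graph resistance R = N * sum_{i>=2} 1/lambda_i."""
    L = np.diag(A.sum(axis=1)) - A
    lam = np.linalg.eigvalsh(L)          # ascending eigenvalues
    algebraic_connectivity = lam[1]
    effective_resistance = len(A) * np.sum(1.0 / lam[1:])
    return algebraic_connectivity, effective_resistance

# Complete graph K3: every pair of the 3 nodes is connected.
A_k3 = np.ones((3, 3)) - np.eye(3)
ac, R = laplacian_measures(A_k3)
```

Note that `lam[1:]` skips the zero eigenvalue, which is valid only for connected graphs, consistent with the setting of this work.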
By Corollary 2, we can represent the change of the algebraic connectivity as

$\Delta \lambda_2 \approx (u_{2,sec} - u_{2,thi})\left(2u_{2,fir} - u_{2,sec} - u_{2,thi}\right).$
According to the above discussion, the differences $u_{2,sec} - u_{2,thi}$ and $u_{2,fir} - u_{2,thi}$ are expected to be smaller when $v_{thi}$ is a 2-hop neighbor. Thus, the rewiring-to-2-hop-neighbor operation is expected to perturb the algebraic connectivity less than adding an edge between two nodes that are far away from each other. A similar argument can be made for the effective graph resistance.
Appendix B Proof of Corollary 2
For a given graph $G$ with Laplacian matrix $L$, one proposed rewiring operation $a = (v_{fir}, v_{sec}, v_{thi})$ affects the eigenvalue $\lambda_i$ by $\Delta\lambda_i \approx (u_{i,sec} - u_{i,thi})(2u_{i,fir} - u_{i,sec} - u_{i,thi})$, for $i = 1, \dots, N$, where $u_{i,j}$ denotes the $j$-th value of the eigenvector $u_i$.

Proof. Let $\Delta L$ denote the change in the Laplacian matrix after applying the rewiring operation $a = (v_{fir}, v_{sec}, v_{thi})$ to graph $G$. Then we have $\Delta L_{sec,sec} = -1$, $\Delta L_{thi,thi} = 1$, $\Delta L_{fir,sec} = \Delta L_{sec,fir} = 1$, $\Delta L_{fir,thi} = \Delta L_{thi,fir} = -1$, and $\Delta L_{jk} = 0$ elsewhere. Thus, by the lemma in Section 4.1,

$\Delta\lambda_i \approx u_i^T \Delta L\, u_i = -u_{i,sec}^2 + u_{i,thi}^2 + 2u_{i,fir}u_{i,sec} - 2u_{i,fir}u_{i,thi} = (u_{i,sec} - u_{i,thi})\left(2u_{i,fir} - u_{i,sec} - u_{i,thi}\right),$

which completes the proof. ∎
Appendix C Statistics of the Datasets
The statistics of the datasets are given in Table 2.