
Node Injection Attacks on Graphs via Reinforcement Learning

by   Yiwei Sun, et al.
Penn State University

Real-world graph applications, such as advertising and product recommendation, make profits based on accurately classifying the labels of nodes. In such scenarios, however, there are strong incentives for adversaries to attack the graph to degrade node classification performance. Previous work on graph adversarial attacks focuses on modifying existing graph structures, which is infeasible in most real-world applications. In contrast, it is more practical to inject adversarial nodes into existing graphs, which can likewise reduce the performance of the classifier. In this paper, we study the novel node injection poisoning attack problem, which aims to poison the graph. We describe a reinforcement learning based method, namely NIPA, to sequentially modify the adversarial information of the injected nodes. We report the results of experiments on several benchmark datasets that show the superior performance of the proposed method NIPA relative to existing state-of-the-art methods.





1. Introduction

Graphs, in which nodes and their attributes denote real-world entities (e.g., individuals) and links encode different types of relationships (e.g., friendship) between entities, are ubiquitous in many domains, such as social networks, electronic commerce, politics, and counter-terrorism. Many real-world applications, e.g., targeted advertising and product recommendation, rely on accurate methods for node classification (Aggarwal, 2011; Bhagat et al., 2011). However, in high-stakes scenarios, such as political campaigns and e-commerce, there are significant political, financial, or other incentives for adversaries to attack such graphs to achieve their goals. For example, political adversaries may want to propagate fake news on social media to damage an opponent's electoral prospects (Allcott and Gentzkow, 2017). The success of such attacks depends to a large extent on the adversaries' ability to make the graph classifier misclassify nodes.

Figure 1. (a) shows a toy graph where the color of a node represents its label; (b) shows the poisoning injection attack performed by a dummy attacker; (c) shows the poisoning injection attack performed by a smart attacker. The injected nodes are circled with dashed lines.

Recent works (Zügner et al., 2018; Wu et al., 2019; Dai et al., 2018) have shown that even state-of-the-art graph classifiers are susceptible to attacks that aim to adversely impact node classification accuracy. Because graph classifiers are trained on node attributes and the link structure of the graph, an adversary can attack the classifier by poisoning the graph data used for training. Such an attack can be (i) node specific, as in a targeted evasion attack (Zügner et al., 2018; Wu et al., 2019) designed to fool the node classifier into misclassifying a specific node; or (ii) non-targeted (Dai et al., 2018), as in attacks that aim to reduce node classification accuracy across the graph. As shown by (Zügner et al., 2018; Wu et al., 2019; Dai et al., 2018), both node-specific and non-targeted attacks can be executed by selectively adding fake (adversarial) edges or selectively removing (genuine) edges between existing nodes so as to adversely impact the accuracy of the resulting graph classifiers. However, the success of such an attack strategy requires that the adversary be able to manipulate the connectivity between the nodes in the graph, e.g., Facebook, which requires breaching the security of the requisite subset of members (so as to modify their connectivity), breaching the security of the database that stores the graph data, or manipulating the requisite members into adding or deleting their links to other selected members. Consequently, such an attack strategy is expensive for the adversary to execute without being caught.

In this paper, we introduce a novel non-targeted graph attack aimed at adversely impacting the accuracy of a graph classifier. We describe a node injection poisoning attack procedure that provides an effective way to attack a graph by introducing fake nodes with fake labels that link to genuine nodes, so as to poison the graph data. Unlike previously studied graph attacks, the proposed strategy enables an attacker to boost the node misclassification rate without changing the link structure between the existing nodes in the graph. For example, in the Facebook network, an attacker could simply create fake member profiles and manipulate real members into linking to them, so as to change the predicted labels of some of the real Facebook members. Such an attack is easier and less expensive to execute than attacks that require manipulating the links between genuine nodes in the graph.

Establishing links from an injected adversarial (fake) node to existing nodes in the original graph, or to other injected adversarial nodes, is a non-trivial task. As shown in Figure 1, the attackers in both (b) and (c) want to inject two fake nodes into the clean graph in (a). However, Figure 1 clearly shows that the "smart attacker", who carefully designs the links and labels of the injected nodes, can poison the graph better than the "dummy attacker", who generates the links and labels at random. We also observe that this task is naturally formulated as a Markov decision process (MDP), and reinforcement learning algorithms, e.g., Q-learning (Watkins and Dayan, 1992), offer a natural framework for solving such problems (Cai et al., 2017; Wei et al., 2017; Sutton and Barto, 2018). However, a representation that directly encodes the graph structure as states and the addition and deletion of links as actions leads to an intractable MDP. Hence, we adopt a hierarchical Q-learning network (HQN) to learn and exploit a compact yet accurate encoding of the Q function to manipulate the labels of adversarial nodes as well as their connectivity to other nodes. We propose a framework named NIPA to execute the Node Injection Poisoning Attack. Training NIPA presents some non-trivial challenges: (i) NIPA has to sequentially guide fake nodes to introduce fake links to other (fake or genuine) nodes and then adversarially manipulate the labels of the fake nodes; (ii) the reward function needs to be carefully designed to steer NIPA toward an effective node injection attack.

The key contributions of the paper are as follows:

  • We study the novel non-target graph node injection attack problem to adversely impact the accuracy of a node classifier without manipulating the link structure of the original graph.

  • We propose a new framework NIPA, a hierarchical Q-learning based method that can be executed by an adversary to effectively perform the poisoning attack. NIPA successfully addresses several non-trivial challenges presented by the resulting reinforcement learning problem.

  • We present results of experiments on several real-world graph datasets that show that NIPA outperforms the state-of-the-art non-targeted attacks on graphs.

The rest of the paper is organized as follows. Section 2 reviews the related work. Section 3 formally defines the non-targeted node injection poisoning attack problem. Section 4 gives the details of the proposed NIPA. Section 5 presents the empirical evaluation with discussion, and Section 6 concludes with future work.

2. Related Work

Our study falls in the general area of data poisoning attacks (Biggio and Roli, 2018), which aim to attack a model by corrupting its training data. Data poisoning attacks have been extensively studied for non-graph-structured data, including supervised learning (Biggio et al., 2012; Mei and Zhu, 2015; Li et al., 2016), unsupervised feature selection (Xiao et al., 2015), and reinforcement learning (Gleave et al., 2019; Jun et al., 2018; Ma et al., 2018). However, little attention has been given to understanding how to poison graph-structured data.

2.1. Adversarial Attacks on Graphs

Previous works (Szegedy et al., 2013; Goodfellow et al., 2015) have shown the intriguing properties of neural networks, namely that they are "vulnerable to adversarial examples", in the computer vision domain. For example, in (Goodfellow et al., 2015), the authors show that some deep models are not resistant to adversarial perturbation and propose the Fast Gradient Sign Method (FGSM) to generate adversarial image samples to attack such models. Such "intriguing properties" have recently been observed in the graph mining domain as well: the research community has shown that graph neural networks are also vulnerable to adversarial attacks. Nettack (Zügner et al., 2018) is one of the first methods that perturbs the graph data to perform a poisoning/training-time attack on the GCN (Kipf and Welling, 2016) model. RL-S2V (Dai et al., 2018) adopts reinforcement learning for evasion/testing-time attacks on graph data. Different from previous methods, (Chen et al., 2018) and (Wu et al., 2019) focus on poisoning attacks using gradient information. (Chen et al., 2018) attacks the graph in the embedding space by iteratively modifying the connections of nodes with the maximum absolute gradient. (Wu et al., 2019) proposes to attack graph-structured data by using integrated gradients to approximate the gradients computed by the model, and perturbs the data by flipping binary values. (Zügner and Günnemann, 2019) modifies the training data and performs poisoning attacks via meta-learning. Though these graph adversarial attacks are effective, they focus on manipulating links among existing nodes in a graph, which is impractical as these nodes/individuals are not controlled by the attacker.

Our framework is inherently different from existing work. Instead of manipulating links among existing nodes, our framework injects fake nodes into the graph (say, fake accounts on Facebook) and manipulates the labels and links of the fake nodes to poison the graph.

2.2. Reinforcement Learning in Graph

Reinforcement learning (RL) has achieved significant successes in solving challenging problems such as continuous robotic control (Schulman et al., 2015) and playing Atari games (Mnih et al., 2015). However, there has been little previous work exploring RL in the graph mining domain. Graph Convolutional Policy Network (GCPN) (You et al., 2018) is one of the works that adopts RL in graph mining: the RL agent is trained in a chemistry-aware graph environment and learns to generate molecular graphs. (Do et al., 2019) is another work that defines a chemical molecular reaction environment and trains an RL agent to predict the products of chemical reactions. The work most similar to ours is RL-S2V (Dai et al., 2018), which adopts RL for targeted evasion attacks on graphs by manipulating the links among existing nodes, while we investigate RL for non-targeted injection poisoning attacks and manipulate the labels and links of fake nodes.

3. Problem Definition

In this section, we formally define the problem we target. We begin by introducing the definition of semi-supervised node classification, since we aim to poison the graph to manipulate the label predictions of graph classifiers. Note that the proposed framework is general and can also be used to poison the graph for other tasks; we leave those as future work.

Definition 3.1.

(Semi-Supervised Node Classification) Let G = (V, E, X) be an attributed graph, where V denotes the node set, E the edge set, and X the node features. V_L is the labeled node set and V_U = V \ V_L is the unlabeled node set. The semi-supervised node classification task aims at correctly labeling the unlabeled nodes in V_U with the graph classifier C.

In the semi-supervised node classification task, the graph classifier C, which learns the mapping from nodes to the label set L, aims to correctly assign the label y_v to node v by aggregating the structure and feature information. The classifier is parameterized by θ, and we denote it as C_θ. For simplicity of notation, we use ŷ_v for the classifier's prediction on v and y_v for the ground-truth label of v. In the training process, we aim to learn the optimal classifier with parameters θ* defined as follows:

  θ* = argmin_θ Σ_{v ∈ V_L} ℓ( C_θ(G)_v, y_v )        (1)

where ℓ is the loss function, such as cross entropy. To attack the classifier, there are two main attack settings: poisoning (training-time) attacks and evasion (testing-time) attacks.

In poisoning attacks, the classifier is trained on the poisoned graph, while in evasion attacks, adversarial examples are included in the testing samples after the classifier has been trained on the clean graph. In this paper, we focus on the non-targeted graph poisoning attack problem, where the attacker poisons the graph before training time to reduce the performance of the graph classifier over the unlabeled node set V_U.

Definition 3.2.

(Graph Non-Targeted Poisoning Attack) Given the attributed graph G, the labeled node set V_L, the unlabeled node set V_U, and the graph classifier C, the attacker aims to modify the graph within a budget Δ to reduce the accuracy of the classifier C on V_U.

As the attack is supposed to be unnoticeable, the number of modifications the attacker is allowed to make to G is constrained by the budget Δ. Based on this problem, we propose the node injection poisoning method, which injects a set of adversarial nodes V_A into the node set V to perform a graph non-targeted poisoning attack.

Definition 3.3.

(Node Injection Poisoning Attack) Given the clean graph G = (V, E, X), the attacker injects the poisoning node set V_A, with its adversarial features X_A and labels Y_A, into the clean node set V. After injecting V_A, the attacker creates adversarial edges E_A to poison G. G′ = (V′, E′, X′) is the poisoned graph, where V′ = V ∪ V_A, E′ = E ∪ E_A, and X′ = X ∥ X_A, with ∥ the append operator; V_L′ = V_L ∪ V_A is the labeled set. In the poisoning attack, the graph classifier is trained on the poisoned graph G′.

With the above definitions and notations, the objective function for the non-targeted node injection poisoning attack is defined as:

  max_{E_A, Y_A}  Σ_{v ∈ V_U} 1( C_{θ*}(G′)_v ≠ y_v )                    (2)
  s.t.  θ* = argmin_θ Σ_{v ∈ V_L′} ℓ( C_θ(G′)_v, y_v )                   (3)
        |E_A| ≤ Δ                                                        (4)

Here y_v represents the label of the unlabeled node v ∈ V_U. If the attacker has the ground truth for the unlabeled data, then y_v is the ground-truth label; if the attacker does not have access to the ground truth, then y_v is predicted by a graph classifier trained on the clean graph. 1(·) is the indicator function, equal to 1 if its argument is true and 0 otherwise. The attacker maximizes the prediction error for the unlabeled nodes in V_U as in Eq. (2), subject to two constraints: constraint (3) enforces that the classifier is learned from the poisoned graph G′, and constraint (4) restricts the number of adversarial edges added by the attacker to the budget Δ.

In this paper, we use the Graph Convolutional Network (GCN) (Kipf and Welling, 2016) as the graph classifier to illustrate our framework, as it is a widely adopted graph neural model for node classification. In each convolutional layer of GCN, nodes first aggregate information from their neighbors, followed by a non-linear transformation such as ReLU. A two-layer GCN is defined as:

  f(A, X) = softmax( Â · ReLU( Â X W^(1) ) · W^(2) )                     (5)

where Â = D̃^{-1/2} Ã D̃^{-1/2} denotes the normalized adjacency matrix, Ã = A + I_N denotes the adjacency matrix A with the identity matrix I_N added (self-loops), and D̃ is the diagonal degree matrix with on-diagonal elements D̃_ii = Σ_j Ã_ij. W^(1) and W^(2) are the weights of the first and second GCN layers, respectively. ReLU(x) = max(0, x) is adopted as the activation. The loss function in GCN is cross entropy.
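As a concrete illustration, the two-layer GCN forward pass described above can be sketched in NumPy. This is a minimal sketch of the standard formulation, not the authors' implementation; function names are our own.

```python
import numpy as np

def normalize_adjacency(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}: add self-loops, then
    symmetrically normalize by the degree matrix."""
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)                     # degrees incl. self-loop
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A, X, W0, W1):
    """Two-layer GCN: softmax(A_hat ReLU(A_hat X W0) W1),
    returning per-node class probabilities."""
    A_hat = normalize_adjacency(A)
    H = np.maximum(A_hat @ X @ W0, 0.0)         # first layer + ReLU
    return softmax(A_hat @ H @ W1)
```

Note how node injection affects every term here: new rows/columns in A change Â, and new feature rows in X propagate through both layers.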

4. Proposed Framework

Figure 2. An overview of the Proposed Framework NIPA for Node Injection Attack on Graphs

To perform the non-targeted node injection poisoning attack, we propose to solve the optimization problem in Eq. (2) via deep reinforcement learning. Compared with directly optimizing the adjacency matrix with traditional matrix optimization techniques, the advantages of adopting deep reinforcement learning are twofold: (i) adding edges and changing the labels of fake nodes is naturally a sequential decision-making process, so deep reinforcement learning is a good fit for the problem; (ii) the underlying structures of graphs are usually highly non-linear (Wang et al., 2016), which adds non-linearity to the decision-making process, and the deep non-linear Q network can better capture this non-linearity and learn the semantics of the graph to make better decisions.

An illustration of the proposed framework is shown in Figure 2. The key idea of our framework is to train a deep reinforcement learning agent that iteratively performs actions to poison the graph. The actions include adding adversarial edges and modifying the labels of the injected nodes. More specifically, the agent first picks one node from the injected node set and selects another node from the poisoned node set to add an adversarial edge, and then modifies the label of the injected node to attack the classifier C. We design the reinforcement learning environment and reward according to the optimization function in Eq. (2) to achieve this.

Next, we describe the details of the proposed method, presenting the design of the RL environment and the hierarchical deep Q network.

4.1. Attacking Environment

We model the proposed poisoning attack procedure as a finite-horizon Markov Decision Process (S, A, P, R, γ), consisting of the state space S, the action set A, the transition probability P, the reward R, and the discount factor γ.

4.1.1. State

The state s_t contains the intermediate poisoned graph G′_t and the label information of the injected nodes at time t. To capture the highly non-linear information and the non-Euclidean structure of the poisoned graph G′_t, we embed it as e(G′_t) by aggregating the graph structure information via designed graph neural networks, and we encode the adversarial label information with neural networks. The details of the state representation are described in Section 4.2.

Since the node set remains identical in the injection poisoning environment, the DRL agent actually performs the poisoning on the edge set E′.

4.1.2. Action

In the poisoning attack environment, the agent is allowed to (1) add adversarial edges within the injected nodes or between the injected nodes and the clean nodes; and (2) modify the adversarial labels of the injected nodes. However, directly adding one adversarial edge has O(|V_A| · |V′|) possible choices, and modifying the adversarial label of one injected node has |L| choices, where |L| is the number of label categories. Thus, performing one joint action that both adds an adversarial edge and changes a node's label has a search space of O(|V_A| · |V′| · |L|), which is extremely expensive, especially in large graphs. We therefore adopt hierarchical actions to decompose such a joint action and reduce the action space for efficient exploration, inspired by previous work (Dai et al., 2018).

As shown in Figure 2, in NIPA the agent first performs an action a^(1) to select one injected node from V_A. The agent then picks another node from the whole node set V′ via action a^(2), and connects the two selected nodes to forge the adversarial edge. Finally, the agent modifies the label of the selected fake node through action a^(3). With such hierarchical actions, the action space is reduced from O(|V_A| · |V′| · |L|) to O(|V_A| + |V′| + |L|). With the hierarchical actions a^(1), a^(2), a^(3), the trajectory of the proposed MDP is (s_0, a_0^(1), a_0^(2), a_0^(3), r_0, …, s_{T−1}, a_{T−1}^(1), a_{T−1}^(2), a_{T−1}^(3), r_{T−1}, s_T).
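The action-space reduction above is easy to quantify. The sketch below compares the joint search space against the hierarchical one; the example numbers (a CITESEER-scale graph with 21 injected nodes, 2,131 total nodes, and 6 labels) are illustrative assumptions, not figures from the paper.

```python
def flat_action_space(n_injected, n_total, n_labels):
    """Size of the joint action space when a single action picks an
    injected node, a second edge endpoint, and a new label at once."""
    return n_injected * n_total * n_labels

def hierarchical_action_space(n_injected, n_total, n_labels):
    """Total candidates scored per step when the action is decomposed
    into three sequential choices a(1), a(2), a(3), each handled by
    its own DQN."""
    return n_injected + n_total + n_labels
```

For the illustrative numbers above, the joint space has 21 * 2131 * 6 = 268,506 candidates per step, while the hierarchical decomposition scores only 21 + 2131 + 6 = 2,158.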

4.1.3. Policy network

As both previous work (Dai et al., 2018) and our preliminary experiments show that Q-learning is more stable than other policy optimization methods such as Advantage Actor-Critic, we model the policy network with Q-learning. Q-learning is an off-policy optimization method which fits the Bellman optimality equation:

  Q*(s_t, a_t) = r(s_t, a_t) + γ max_{a′} Q*(s_{t+1}, a′)                (6)

The greedy policy to select the action a_t with respect to Q* is:

  π(a_t | s_t) = argmax_{a} Q*(s_t, a)                                   (7)

As explained in the previous subsection, performing one joint poisoning action requires searching a space of size O(|V_A| · |V′| · |L|), and we perform hierarchical actions rather than a single joint action, so we cannot directly follow the policy network in Eq. (6) and Eq. (7). We therefore adopt hierarchical Q functions for the actions and propose a hierarchical framework that integrates three DQNs. The details of the proposed DQNs are presented in Section 4.3.
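The Bellman backup and greedy policy of Eq. (6) and Eq. (7) can be sketched in a few lines. This is a generic Q-learning sketch, not the paper's hierarchical variant; function names are our own.

```python
import numpy as np

def bellman_backup(reward, next_q_values, gamma):
    """One Bellman optimality backup (Eq. (6)):
    Q*(s, a) = r + gamma * max_a' Q*(s', a')."""
    return reward + gamma * float(np.max(next_q_values))

def greedy_action(q_values):
    """Greedy policy (Eq. (7)): pick the action with the largest Q value."""
    return int(np.argmax(q_values))
```

NIPA applies these same two operations three times per step, once for each hierarchical action, as described in Section 4.3.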

4.1.4. Reward

As the RL agent is trained to enforce misclassification by the graph classifier C, we need to design the reward accordingly to guide the agent. The reasons we design a novel reward function, rather than using the widely adopted binary sparse reward, are twofold: (1) since trajectories in the attacking environment are usually long, we need intermediate rewards that give the RL agent feedback on how to improve its performance at each state; (2) unlike a targeted attack, where we know whether the attack on one targeted node succeeds, we perform a non-targeted attack over the whole graph, so success is not binary. The reward for the current state and actions is designed according to the poisoning objective function in Eq. (2). For each state s_t, we define the attack success rate as:

  A(s_t) = (1/|V_valid|) Σ_{v ∈ V_valid} 1( C′(G′_t)_v ≠ y_v )           (8)

Here V_valid is the validation set used to compute the reward. Note that C′ is not the graph classifier that evaluates the final classification accuracy; it is a simulated graph classifier designed by the attacker to estimate the reward of states and actions. However, directly using the success rate as the reward would destabilize the training process, since the accuracy might not differ much between two consecutive states. We therefore design a guiding binary reward that is one if the actions reduce the accuracy of the attacker's simulated graph classifier C′, and negative one otherwise:

  r_t = +1 if A(s_{t+1}) > A(s_t);  r_t = −1 otherwise.                  (10)

Our preliminary experimental results show that such a guiding reward is effective in our case.
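The success rate and guiding reward described above are simple to compute; a minimal sketch (our own function names, with the misclassification rate measured on a held-out validation set as in the text):

```python
import numpy as np

def attack_success_rate(predicted, labels):
    """Fraction of validation nodes misclassified by the attacker's
    simulated classifier -- the success rate defined above."""
    predicted = np.asarray(predicted)
    labels = np.asarray(labels)
    return float(np.mean(predicted != labels))

def guiding_reward(rate_next, rate_curr):
    """Guiding binary reward: +1 if the hierarchical action increased
    the attack success rate between consecutive states, -1 otherwise."""
    return 1.0 if rate_next > rate_curr else -1.0
```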

4.1.5. Terminal

In the poisoning attack problem, the number of adversarial edges that can be added is constrained by the budget Δ for unnoticeability. So in the poisoning reinforcement learning environment, once the agent has added Δ edges, it stops taking actions. In the terminal state s_T, the poisoned graph G′ contains Δ more adversarial edges than the clean graph G.

4.2. State Representation

As mentioned above, the state s_t contains the poisoned graph G′_t and the injected node labels at time t. To represent the non-Euclidean structure of the poisoned graph G′_t with a vector, the latent embedding e(v) of each node v in G′_t is first learned by struct2vec (Dai et al., 2016) using the discriminative information. Then the state vector representation e(G′_t) is obtained by aggregating the embeddings of the nodes:

  e(G′_t) = Σ_{v ∈ V′} e(v)

To represent the labels of the injected nodes, we use a two-layer neural network to encode them as e(Y_A). For compactness and consistency of notation, in the remainder of the paper e(s_t) represents the embedding of the state, and e(a^(2)) and e(a^(3)) are the embeddings of the node selected by action a^(2) and the label selected by action a^(3), respectively.
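The aggregation step can be sketched as follows. Assumptions to flag: the node embeddings are taken as given (in the paper they come from struct2vec), sum-pooling follows the aggregation equation above, and concatenating the label embedding onto the pooled graph vector is our reading of how the two parts form one state vector.

```python
import numpy as np

def state_embedding(node_embeddings, label_embedding):
    """Sum-pool per-node embeddings into one graph vector, then attach
    the encoded adversarial-label vector to form the state embedding."""
    graph_vec = np.asarray(node_embeddings).sum(axis=0)   # sum over nodes
    return np.concatenate([graph_vec, np.asarray(label_embedding)])
```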

4.3. Hierarchical Q Network

In the Q-learning process, given the state s_t and action a_t, the action-value function Q(s_t, a_t) is supposed to score the current state and selected action to guide the RL agent. However, since the action is decomposed into three hierarchical actions for efficient search, it is hard to directly design Q(s_t, a_t) and apply a single policy network to select the hierarchical actions.

To overcome this problem, we adopt hierarchical deep Q networks that integrate three DQNs to model the Q values over the actions. Figure 2 illustrates the framework for selecting actions at time t. The first DQN guides the policy to select a node from the injected node set V_A; based on a^(1), the second DQN learns the policy to select a second node from the node set V′, which completes an edge injection by connecting the two nodes. The third DQN learns the policy to modify the label of the first selected injected node.

The agent first selects one node a_t^(1) from the injected node set V_A and calculates the value based on the action-value function:

  Q_1(s_t, a_t^(1); θ^(1)) = W_2^(1) σ( W_1^(1) [ e(s_t) ∥ e(a_t^(1)) ] )        (12)

where θ^(1) = {W_1^(1), W_2^(1)} represents the trainable weights of the first DQN and ∥ is the concatenation operation. The action-value function Q_1 estimates the Q value given the state and action. The greedy policy to select the action a_t^(1) based on the optimal action-value function in Eq. (12) is defined as follows:

  a_t^(1) = argmax_{a ∈ V_A} Q_1(s_t, a; θ^(1))                                  (13)
With the first action a_t^(1) selected, the agent picks the second action hierarchically based on:

  Q_2(s_t, a_t^(1), a_t^(2); θ^(2)) = W_2^(2) σ( W_1^(2) [ e(s_t) ∥ e(a_t^(1)) ∥ e(a_t^(2)) ] )   (14)

where θ^(2) is the trainable weights. The action-value function Q_2 scores the state and the actions a_t^(1) and a_t^(2). The greedy policy to make the second action with the optimal Q_2 in Eq. (14) is defined as follows:

  a_t^(2) = argmax_{a ∈ V′} Q_2(s_t, a_t^(1), a; θ^(2))                          (15)
Note that since the agent only modifies the label of the selected injected node a_t^(1), the action-value function for the third action is not related to the action a_t^(2). The action-value function is defined as follows:

  Q_3(s_t, a_t^(1), a_t^(3); θ^(3)) = W_2^(3) σ( W_1^(3) [ e(s_t) ∥ e(a_t^(1)) ∥ e(a_t^(3)) ] )   (16)

In Eq. (16), θ^(3) represents the trainable weights in Q_3. The action-value function Q_3 models the score of changing the label of the injected node a_t^(1). The greedy policy for this action is defined as follows:

  a_t^(3) = argmax_{a ∈ L} Q_3(s_t, a_t^(1), a; θ^(3))                           (17)
4.4. Training Algorithm

To train the proposed hierarchical DQNs and the graph embedding method struct2vec, we use the experience replay technique with a memory buffer M. The high-level idea is to simulate the selection process to generate training data, which is stored in the memory buffer during RL training. During training, an experience (s_t, a_t, r_t, s_{t+1}), where a_t = (a_t^(1), a_t^(2), a_t^(3)), is drawn uniformly at random from the stored memory buffer M. The Q-learning loss function is similar to (Mnih et al., 2015):

  L(θ) = E_{(s_t, a_t, r_t, s_{t+1}) ~ M} [ ( r_t + γ max_{a′} Q̂(s_{t+1}, a′; θ⁻) − Q(s_t, a_t; θ) )² ]   (18)

where Q̂ represents the target action-value function, whose parameters θ⁻ are updated to θ every C steps. To improve the stability of the algorithm, we clip the error term between −1 and 1. The agent adopts an ε-greedy policy that selects a random action with probability ε. The overall training framework is summarized in Algorithm 1.
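The clipped temporal-difference error inside the loss above can be sketched as follows; this is the standard DQN form (Mnih et al., 2015), with our own function name.

```python
import numpy as np

def clipped_td_error(q_value, reward, next_q_max, gamma, terminal=False):
    """One-step TD error (r + gamma * max_a' Q_target - Q), clipped to
    [-1, 1] for training stability; at a terminal state the target is
    just the reward."""
    target = reward if terminal else reward + gamma * next_q_max
    return float(np.clip(target - q_value, -1.0, 1.0))
```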

Input: clean graph G = (V, E, X), labeled node set V_L, budget Δ, number of injected nodes |V_A|, training iterations K
Output: adversarial edges E_A and adversarial labels Y_A
1 Initialize the action-value functions Q_1, Q_2, Q_3 with random parameters θ;
2 Set the target functions Q̂_1, Q̂_2, Q̂_3 with parameters θ⁻ = θ;
3 Initialize the replay memory buffer M;
4 Randomly assign the adversarial labels Y_A;
5 while episode < K do
6       while |E_A| < Δ do
7             Select a_t^(1) based on Eq. (12);
8             Select a_t^(2) and a_t^(3) based on Eq. (14) and Eq. (16);
9             Compute r_t according to Eq. (8) and Eq. (10);
10             Set s_{t+1} to the updated poisoned graph and labels;
11             E_A ← E_A ∪ {(a_t^(1), a_t^(2))}, Y_A(a_t^(1)) ← a_t^(3);
12             Store (s_t, a_t^(1), a_t^(2), a_t^(3), r_t, s_{t+1}) in memory M;
13             Sample a minibatch of transitions randomly from M;
14             Update parameters θ according to Eq. (18);
15             Every C steps, set θ⁻ ← θ;
16       end while
17 end while
Algorithm 1 The training algorithm of the framework NIPA

In the proposed model, we use two-layer multi-layer perceptrons to implement all the trainable parameters in the action-value functions Q_1, Q_2, Q_3 and in struct2vec. More complex deep neural networks could replace the models outlined here; we leave exploring such architectures as a future direction.
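A two-layer MLP head of the kind described above can be sketched as follows. The ReLU hidden layer and linear output are assumptions consistent with the Q-function equations; the input x would be the concatenated state/action embeddings.

```python
import numpy as np

def mlp_q_head(x, W1, b1, W2, b2):
    """Two-layer MLP used as an action-value head: one ReLU hidden
    layer followed by a linear layer producing the Q score."""
    h = np.maximum(np.asarray(x) @ W1 + b1, 0.0)   # hidden layer + ReLU
    return h @ W2 + b2                             # linear Q-value output
```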

5. Experiments

In this section, we introduce the experimental settings, including the benchmark datasets and the poisoning attack methods we compare against. We then conduct experiments and present results to answer the following research questions: (RQ1) Can NIPA effectively poison the graph data via node injection? (RQ2) Does the poisoned graph preserve its key statistics after the poisoning attack? (RQ3) How does the proposed framework perform under different scenarios? Next, we first introduce the experimental settings, followed by experimental results answering the three questions.

5.1. Experiment Setup

5.1.1. Datasets

We conduct experiments on three widely used benchmark datasets for node classification: CORA-ML (McCallum et al., 2000; Bojchevski and Günnemann, 2018), CITESEER (Giles et al., 1998), and PUBMED. Following (Zügner and Günnemann, 2019), we only consider the largest connected component (LCC) of each graph. The statistics of the datasets are summarized in Table 1. For each dataset, we randomly split the nodes into labeled (20%) nodes for the training procedure and unlabeled (80%) nodes as the test set to evaluate the model. The labeled nodes are further split equally into training and validation sets. We perform the random split five times and report averaged results.

Dataset | #Nodes | #Edges | #Labels
CITESEER | 2,110 | 3,757 | 6
CORA-ML | 2,810 | 7,981 | 7
PUBMED | 19,717 | 44,324 | 3
Table 1. Statistics of benchmark datasets

5.1.2. Baseline Methods

Though there are several adversarial attack algorithms on graphs, such as Nettack (Zügner et al., 2018) and RL-S2V (Dai et al., 2018), most of them are developed for manipulating links among existing nodes and cannot easily be adapted to our node injection attack setting, so we do not compare with them. Since node injection attack on graphs is a novel task, there are very few baselines we can compare with. We select the following four baselines: two from classical graph generation models, one that applies the fast gradient attack technique, and a variant of NIPA.


  • Random Attack: The attacker first adds adversarial edges between the injected nodes according to the Erdős–Rényi model (Erdős and Rényi, 1960), with the edge probability chosen based on the average degree of the clean graph so that the density of the injected graph is similar to that of the clean graph. The attacker then randomly adds adversarial edges connecting the injected graph and the clean graph until the budget Δ is used up.

  • Preferential attack (Barabási and Albert, 1999): The attacker iteratively adds adversarial edges according to the preferential attachment mechanism. The probability of connecting an injected node to another node is proportional to the node degree. The number of adversarial edges is constrained by the budget Δ.

  • Fast Gradient Attack (FGA) (Chen et al., 2018): Gradient-based methods attack the graph data guided by gradient information. In FGA, the attacker removes/adds adversarial edges guided by the edge gradient.

  • NIPA-w/o: A variant of the proposed framework NIPA in which we do not optimize with respect to the labels of the fake nodes, i.e., the labels of the fake nodes are randomly assigned.
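The two graph-generation baselines can be sketched as follows. These are our own minimal implementations of the descriptions above; the wiring details (how ties and remaining budget are handled) are assumptions.

```python
import random

def random_attack_edges(injected, clean, p, budget, seed=0):
    """Random-attack baseline sketch: Erdos-Renyi edges among the
    injected nodes with probability p, then random injected-to-clean
    edges until the budget is used up."""
    rng = random.Random(seed)
    edges = [(u, v) for i, u in enumerate(injected)
             for v in injected[i + 1:] if rng.random() < p]
    while len(edges) < budget:
        edges.append((rng.choice(injected), rng.choice(clean)))
    return edges[:budget]

def preferential_attack_edges(injected, clean, degree, budget, seed=0):
    """Preferential-attachment baseline sketch: connect injected nodes
    to existing nodes with probability proportional to node degree."""
    rng = random.Random(seed)
    weights = [degree[v] for v in clean]
    return [(rng.choice(injected), rng.choices(clean, weights=weights)[0])
            for _ in range(budget)]
```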

Dataset | Method | accuracy (mean ± std) under increasing injection ratio
CITESEER | Random | 0.7582 ± 0.0082 | 0.7532 ± 0.0130 | 0.7447 ± 0.0033 | 0.7147 ± 0.0122
CITESEER | Preferential | 0.7578 ± 0.0060 | 0.7232 ± 0.0679 | 0.7156 ± 0.0344 | 0.6814 ± 0.0131
CITESEER | FGA | 0.7129 ± 0.0159 | 0.7117 ± 0.0052 | 0.7103 ± 0.0214 | 0.6688 ± 0.0075
CITESEER | NIPA-w/o (ours) | 0.7190 ± 0.0209 | 0.6914 ± 0.0227 | 0.6778 ± 0.0162 | 0.6301 ± 0.0182
CITESEER | NIPA (ours) | 0.7010 ± 0.0123 | 0.6812 ± 0.0313 | 0.6626 ± 0.0276 | 0.6202 ± 0.0263
CORA-ML | Random | 0.8401 ± 0.0226 | 0.8356 ± 0.0078 | 0.8203 ± 0.0091 | 0.7564 ± 0.0192
CORA-ML | Preferential | 0.8272 ± 0.0486 | 0.8380 ± 0.0086 | 0.8038 ± 0.0129 | 0.7738 ± 0.0151
CORA-ML | FGA | 0.8205 ± 0.0044 | 0.8146 ± 0.0041 | 0.7945 ± 0.0117 | 0.7623 ± 0.0079
CORA-ML | NIPA-w/o (ours) | 0.8042 ± 0.0190 | 0.7948 ± 0.0197 | 0.7631 ± 0.0412 | 0.7206 ± 0.0381
CORA-ML | NIPA (ours) | 0.7902 ± 0.0219 | 0.7842 ± 0.0193 | 0.7461 ± 0.0276 | 0.6981 ± 0.0314
PUBMED | Random | 0.8491 ± 0.0030 | 0.8388 ± 0.0035 | 0.8145 ± 0.0076 | 0.7702 ± 0.0126
PUBMED | Preferential | 0.8487 ± 0.0024 | 0.8445 ± 0.0035 | 0.8133 ± 0.0099 | 0.7621 ± 0.0096
PUBMED | FGA | 0.8420 ± 0.0182 | 0.8312 ± 0.0148 | 0.8100 ± 0.0217 | 0.7549 ± 0.0091
PUBMED | NIPA-w/o (ours) | 0.8412 ± 0.0301 | 0.8164 ± 0.0209 | 0.7714 ± 0.0195 | 0.7042 ± 0.0810
PUBMED | NIPA (ours) | 0.8242 ± 0.0140 | 0.8096 ± 0.0155 | 0.7646 ± 0.0065 | 0.6901 ± 0.0203
Table 2. Node classification accuracy after attack

The Fast Gradient Attack (FGA) (Chen et al., 2018) is not directly applicable in the injection poisoning setting, since the injected nodes are isolated at the beginning and would be filtered out by the graph classifier. Here we modify FGA for a fair comparison: the FGA method is performed on the graph poisoned by the preferential attack. After calculating the gradients, the attacker adds/removes the adversarial edges with the largest positive/negative gradient. The attacker only adds and removes one feasible adversarial edge at each iteration, so the number of adversarial edges remains constrained by the budget Δ. The attacker is allowed to perform 20 times as many modification steps in total, as suggested by (Chen et al., 2018).

5.2. Attack Performance Comparison

To answer RQ1, we evaluate how much the node classification accuracy degrades on the poisoned graph compared with the performance on the clean graph. The larger the performance decrease on the poisoned graph, the more effective the attack.

Node Classification on Clean Graph. As Nettack (Zügner et al., 2018) points out, “poisoning attacks are in general harder and match better the transductive learning scenario”, so we follow the same transductive poisoning setting in this paper. The parameters of GCN are trained according to Eq. (1). We report the node classification accuracy averaged over five runs in Table 3 to present the GCN performance on the clean graph. Note that if poisoning nodes are injected with no budget for adversarial edges, such isolated nodes would be filtered out by GCN and the classification results remain the same as in Table 3.
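The filtering step referred to above (isolated injected nodes never reach the classifier) can be sketched as follows; `filter_isolated` and the dense-adjacency interface are illustrative assumptions, not the authors' code.

```python
import numpy as np

def filter_isolated(adj):
    """Drop degree-0 nodes before training the classifier (sketch).
    Returns the reduced adjacency matrix and the indices of the
    nodes that were kept."""
    A = np.asarray(adj)
    deg = A.sum(axis=1)
    keep = np.where(deg > 0)[0]          # nodes with at least one edge
    return A[np.ix_(keep, keep)], keep
```

This is why a zero-budget injection attack is a no-op: every fake node has degree 0 and is removed here.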

           | CITESEER        | CORA-ML         | PUBMED
Clean data | 0.7730 ± 0.0059 | 0.8538 ± 0.0038 | 0.8555 ± 0.0010
Table 3. Node classification results on the clean graphs (mean ± std.).

Node Classification on Poisoned Graph

In the poisoning attack process, the attack budget, which controls the number of added adversarial edges, is one important factor. On the one hand, if the budget is too limited, some injected nodes remain isolated; clearly, isolated nodes have no effect on label prediction, as they are not really injected into the environment. On the other hand, if the budget is large, the density of the poisoned graph differs from that of the clean graph, and the injected nodes might be detected by defense methods. Here, to give the poisoned graph a similar density to the clean graph and to simulate a real-world poisoning scenario, we set the budget so that each injected node has, on average, the same degree as the nodes in the clean graph, with the number of injected nodes being a fraction r of the number of clean nodes. We evaluate how effective the attack is when the injected nodes have different degrees in Section 5.4.1. To compare the methods comprehensively, we vary the injected node ratio r over {0.01, 0.02, 0.05, 0.10}. We do not consider larger ratios, since too many injected nodes could easily be noticed in real-world scenarios. For the same unnoticeability consideration, the features of the injected nodes are designed to be similar to the clean node features: for each injected node, we calculate the mean of the clean node features and apply Gaussian noise to this average, so the features of the injected nodes resemble those in the clean graph. We leave the generation of node features as future work. As the other baseline methods cannot modify the adversarial labels of the injected nodes, we also provide the variant NIPA-w/o, which does not manipulate the adversarial labels, for a fair comparison; for the baseline methods, the adversarial labels are generated uniformly at random from the label set. In both NIPA and NIPA-w/o we use the same discount factor, and in all methods the injected nodes appear only in the training phase.
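The feature generation step above (mean of the clean features plus Gaussian noise) can be sketched as follows; the function name `inject_node_features`, the `noise_scale` hyperparameter, and the fixed seed are illustrative assumptions, not values from the paper.

```python
import numpy as np

def inject_node_features(X, n_inject, noise_scale=0.01, seed=0):
    """Generate features for injected nodes (sketch): each fake node
    receives the mean of the clean node features X plus small
    Gaussian noise, so its features stay close to the clean ones."""
    rng = np.random.default_rng(seed)
    mean_feat = X.mean(axis=0)                       # average clean features
    noise = rng.normal(0.0, noise_scale, size=(n_inject, X.shape[1]))
    return mean_feat + noise
```

A smaller `noise_scale` makes the fake nodes harder to distinguish from the clean-feature average, at the cost of the injected features being nearly identical to one another.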

The averaged results with standard deviations for all methods are reported in Table 2. From Tables 3 and 2, we observe that: (1) For all attacking methods, injecting more nodes reduces the node classification accuracy further, which matches our expectation. (2) Compared with the Random and Preferential attacks, FGA is relatively more effective in attacking the graph, though the performance gap is marginal. This is because the random and preferential attacks do not learn any information from the clean graph and simply insert fake nodes following a predefined rule; thus, neither is as effective as FGA, which injects nodes in a way that decreases performance. (3) The proposed framework outperforms the other methods. In particular, both FGA and NIPA are optimization-based approaches, yet NIPA significantly outperforms FGA, which demonstrates the effectiveness of designing hierarchical deep reinforcement learning to solve the decision-making optimization problem. (4) NIPA outperforms NIPA-w/o, which shows the necessity of optimizing w.r.t. the labels for node injection attacks.

Dataset  | r    | Gini Coefficient | Characteristic Path Length | Distribution Entropy | Power Law Exp.  | Triangle Count
---------|------|------------------|----------------------------|----------------------|-----------------|----------------
CORA     | 0.00 | 0.3966 ± 0.0000  | 6.3110 ± 0.0000            | 0.9559 ± 0.0000      | 1.8853 ± 0.0000 | 1558.0 ± 0.0
         | 0.01 | 0.4040 ± 0.0007  | 6.0576 ± 0.1616            | 0.9549 ± 0.0004      | 1.8684 ± 0.0016 | 1566.2 ± 7.4
         | 0.02 | 0.4075 ± 0.0002  | 6.1847 ± 0.1085            | 0.9539 ± 0.0002      | 1.8646 ± 0.0006 | 1592.0 ± 17.4
         | 0.05 | 0.4267 ± 0.0014  | 5.8165 ± 0.1018            | 0.9458 ± 0.0009      | 1.8429 ± 0.0027 | 1603.8 ± 12.8
         | 0.10 | 0.4625 ± 0.0005  | 6.1397 ± 0.0080            | 0.9261 ± 0.0007      | 1.8399 ± 0.0017 | 1612.4 ± 22.2
CITESEER | 0.00 | 0.4265 ± 0.0000  | 9.3105 ± 0.0000            | 0.9542 ± 0.0000      | 2.0584 ± 0.0000 | 1083.0 ± 0.0
         | 0.01 | 0.4270 ± 0.0012  | 8.3825 ± 0.3554            | 0.9543 ± 0.0001      | 2.0296 ± 0.0024 | 1091.2 ± 6.6
         | 0.02 | 0.4346 ± 0.0007  | 8.3988 ± 0.2485            | 0.9529 ± 0.0005      | 2.0161 ± 0.0007 | 1149.8 ± 32.4
         | 0.05 | 0.4581 ± 0.0026  | 8.0907 ± 0.7710            | 0.9426 ± 0.0009      | 1.9869 ± 0.0073 | 1174.2 ± 42.8
         | 0.10 | 0.4866 ± 0.0025  | 7.3692 ± 0.6818            | 0.9279 ± 0.0012      | 1.9407 ± 0.0088 | 1213.6 ± 61.8
PUBMED   | 0.00 | 0.6037 ± 0.0000  | 6.3369 ± 0.0000            | 0.9268 ± 0.0000      | 2.1759 ± 0.0000 | 12520.0 ± 0.0
         | 0.01 | 0.6076 ± 0.0005  | 6.3303 ± 0.0065            | 0.9253 ± 0.0004      | 2.1562 ± 0.0013 | 12570.8 ± 29.2
         | 0.02 | 0.6130 ± 0.0006  | 6.3184 ± 0.0046            | 0.9213 ± 0.0004      | 2.1417 ± 0.0009 | 13783.4 ± 101.8
         | 0.05 | 0.6037 ± 0.0000  | 6.3371 ± 0.0007            | 0.9268 ± 0.0000      | 2.1759 ± 0.0001 | 14206.6 ± 152.8
         | 0.10 | 0.6035 ± 0.0003  | 6.2417 ± 0.1911            | 0.9263 ± 0.0010      | 2.1686 ± 0.0141 | 14912.0 ± 306.8
Table 4. Statistics of the clean graphs (r = 0.00) and the graphs poisoned by NIPA, averaged over 5 runs.

5.3. Key Statistics of the Poisoned Graphs

To answer RQ2, we analyze some key statistics of the poisoned graphs, which helps us understand the attack behavior. One desired property of a poisoning attack is that the poisoned graph has similar graph statistics to the clean graph. We use the same graph statistics as (Bojchevski et al., 2018) to measure the poisoned graphs for the three datasets; the results are reported in Table 4. From the graph statistics, we conclude that: (1) The poisoned graph has a very similar distribution to the clean graph. For example, the similar exponents of the power-law degree distribution indicate that the poisoned graph and the clean graph share a similar degree distribution. (2) More injected nodes make the poisoning attack more noticeable: as r increases, the poisoned graph diverges more and more from the original graph. (3) The number of triangles increases, which shows that the attack does not simply connect fake nodes to other nodes, but connects them in a way that forms triangles, so that each connection can affect more nodes.
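Two of the statistics in Table 4 can be computed directly from the degree sequence and the adjacency matrix. The sketch below uses the standard Gini-coefficient and trace-based triangle-count formulas; the function names are our own, and this is not the evaluation code used in the paper.

```python
import numpy as np

def degree_gini(degrees):
    """Gini coefficient of a degree sequence: 0 means perfectly
    uniform degrees, values near 1 mean highly unequal degrees.
    Uses the closed form sum_i (2i - n - 1) d_(i) / (n * sum d)
    over the sorted degrees d_(1) <= ... <= d_(n)."""
    d = np.sort(np.asarray(degrees, dtype=float))
    n = d.size
    idx = np.arange(1, n + 1)
    return np.sum((2 * idx - n - 1) * d) / (n * d.sum())

def triangle_count(adj):
    """Triangle count of an undirected 0/1 graph: trace(A^3) / 6,
    since each triangle is counted once per vertex and direction."""
    A = np.asarray(adj, dtype=float)
    return int(round(np.trace(A @ A @ A) / 6))
```

Comparing these values before and after injection is a quick sanity check that the attack stays statistically unnoticeable.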

5.4. Attack Effects Under Different Scenarios

In this subsection, we conduct experiments to answer RQ3, i.e., how effective the attack by NIPA is under different scenarios.

5.4.1. Average Degrees of Injected Nodes

As discussed above, the budget is essential to the poisoning attack, so we investigate the node classification accuracy while varying the average degree of the injected nodes. The experimental results on CITESEER and CORA are shown in Fig. 3(a) and Fig. 3(b), respectively. From the figures, we observe that as the average degree of the injected nodes increases, the node classification accuracy decreases sharply. This matches our expectation, because the more links a fake node has, the more likely it is to poison the graph.

Figure 3. Node classification performance on (a) CITESEER and (b) CORA by varying average node degree of injected nodes

5.4.2. Sparsity of the Origin Graph

We further investigate how the proposed framework performs under different levels of network sparsity. Without loss of generality, we set the average degree of the injected nodes equal to the average degree of the real nodes. To simulate sparsity, we randomly remove edges from the original graph. The results on CITESEER and CORA are shown in Fig. 4. They show that as the graph becomes more sparse, the proposed framework attacks the graph more effectively. This is because in a sparser graph each clean node has fewer neighbors, which makes it easier for fake nodes to change the labels of the unlabeled nodes.
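The sparsity simulation above can be sketched as a uniform random edge drop; `sparsify_edges`, its edge-list interface, and the fixed seed are illustrative assumptions.

```python
import random

def sparsify_edges(edges, remove_frac, seed=0):
    """Randomly remove a fraction of a graph's edges (sketch),
    simulating a sparser version of the clean graph before the
    attack is run."""
    rng = random.Random(seed)
    edges = list(edges)
    n_keep = len(edges) - int(remove_frac * len(edges))
    return rng.sample(edges, n_keep)   # uniform sample without replacement
```

Note that dropping edges uniformly can disconnect the graph; a more careful simulation might preserve a spanning structure, but that refinement is beyond what the experiment requires.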

Figure 4. Node classification performance on (a) CITESEER and (b) CORA with varying graph sparsity

6. Conclusion

In this paper, we study a novel problem of non-targeted graph poisoning attack via node injection. We propose NIPA, a deep reinforcement learning based method that simulates the attack process and manipulates the adversarial edges and labels of the injected nodes. Specifically, we design a reward function and hierarchical DQNs to better interact with the reinforcement learning environment and perform the poisoning attack. Experimental results on node classification demonstrate the effectiveness of the proposed framework in poisoning the graph. The poisoned graph has very similar properties to the original clean graph, such as its Gini coefficient and distribution entropy. Further experiments examine how the proposed framework performs under different scenarios, such as on very sparse graphs.

There are several interesting directions that need further investigation. First, in this paper we use the mean of the node features as the features of the fake nodes; we would like to extend the proposed model to simultaneously generate features for the fake nodes. Second, we would like to extend the proposed framework to more complicated graph data, such as heterogeneous information networks and dynamic networks.


  • Aggarwal (2011) Charu C Aggarwal. 2011. An introduction to social network data analytics. In Social network data analytics. Springer, 1–15.
  • Allcott and Gentzkow (2017) Hunt Allcott and Matthew Gentzkow. 2017. Social media and fake news in the 2016 election. Journal of economic perspectives 31, 2 (2017), 211–36.
  • Barabási and Albert (1999) Albert-László Barabási and Réka Albert. 1999. Emergence of scaling in random networks. science 286, 5439 (1999), 509–512.
  • Bhagat et al. (2011) Smriti Bhagat, Graham Cormode, and S Muthukrishnan. 2011. Node classification in social networks. In Social network data analytics. Springer, 115–148.
  • Biggio et al. (2012) Battista Biggio, Blaine Nelson, and Pavel Laskov. 2012. Poisoning attacks against support vector machines. In 29th Int’l Conf. on Machine Learning (ICML).
  • Biggio and Roli (2018) Battista Biggio and Fabio Roli. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition 84 (2018), 317–331.
  • Bojchevski and Günnemann (2018) Aleksandar Bojchevski and Stephan Günnemann. 2018. Deep Gaussian Embedding of Graphs: Unsupervised Inductive Learning via Ranking. In International Conference on Learning Representations.
  • Bojchevski et al. (2018) Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, and Stephan Günnemann. 2018. Netgan: Generating graphs via random walks. arXiv preprint arXiv:1803.00816 (2018).
  • Cai et al. (2017) Han Cai, Kan Ren, Weinan Zhang, Kleanthis Malialis, Jun Wang, Yong Yu, and Defeng Guo. 2017. Real-time bidding by reinforcement learning in display advertising. In Proceedings of the Tenth ACM International Conference on Web Search and Data Mining. ACM, 661–670.
  • Chen et al. (2018) Jinyin Chen, Yangyang Wu, Xuanheng Xu, Yixian Chen, Haibin Zheng, and Qi Xuan. 2018. Fast gradient attack on network embedding. arXiv preprint arXiv:1809.02797 (2018).
  • Dai et al. (2016) Hanjun Dai, Bo Dai, and Le Song. 2016. Discriminative embeddings of latent variable models for structured data. In International conference on machine learning. 2702–2711.
  • Dai et al. (2018) Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, and Le Song. 2018. Adversarial attack on graph structured data. arXiv preprint arXiv:1806.02371 (2018).
  • Do et al. (2019) Kien Do, Truyen Tran, and Svetha Venkatesh. 2019. Graph transformation policy network for chemical reaction prediction. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 750–760.
  • Erdős and Rényi (1960) Paul Erdős and Alfréd Rényi. 1960. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci 5, 1 (1960), 17–60.
  • Giles et al. (1998) C Lee Giles, Kurt D Bollacker, and Steve Lawrence. 1998. CiteSeer: An Automatic Citation Indexing System.. In ACM DL. 89–98.
  • Gleave et al. (2019) Adam Gleave, Michael Dennis, Neel Kant, Cody Wild, Sergey Levine, and Stuart Russell. 2019. Adversarial Policies: Attacking Deep Reinforcement Learning. arXiv preprint arXiv:1905.10615 (2019).
  • Goodfellow et al. (2015) Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In International Conference on Learning Representations.
  • Jun et al. (2018) Kwang-Sung Jun, Lihong Li, Yuzhe Ma, and Jerry Zhu. 2018. Adversarial attacks on stochastic bandits. In Advances in Neural Information Processing Systems. 3640–3649.
  • Kipf and Welling (2016) Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
  • Li et al. (2016) Bo Li, Yining Wang, Aarti Singh, and Yevgeniy Vorobeychik. 2016. Data poisoning attacks on factorization-based collaborative filtering. In Advances in neural information processing systems. 1885–1893.
  • Ma et al. (2018) Yuzhe Ma, Kwang-Sung Jun, Lihong Li, and Xiaojin Zhu. 2018. Data poisoning attacks in contextual bandits. In International Conference on Decision and Game Theory for Security. Springer, 186–204.
  • McCallum et al. (2000) Andrew Kachites McCallum, Kamal Nigam, Jason Rennie, and Kristie Seymore. 2000. Automating the construction of internet portals with machine learning. Information Retrieval 3, 2 (2000), 127–163.
  • Mei and Zhu (2015) Shike Mei and Xiaojin Zhu. 2015. Using Machine Teaching to Identify Optimal Training-Set Attacks on Machine Learners. In The 29th AAAI Conference on Artificial Intelligence.
  • Mnih et al. (2015) Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529.
  • Pan et al. (2016) Shirui Pan, Jia Wu, Xingquan Zhu, Chengqi Zhang, and Yang Wang. 2016. Tri-party deep network representation. Network 11, 9 (2016), 12.
  • Schulman et al. (2015) John Schulman, Sergey Levine, Pieter Abbeel, Michael Jordan, and Philipp Moritz. 2015. Trust region policy optimization. In International conference on machine learning. 1889–1897.
  • Sutton and Barto (2018) Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction.
  • Szegedy et al. (2013) Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013).
  • Wang et al. (2016) Daixin Wang, Peng Cui, and Wenwu Zhu. 2016. Structural deep network embedding. In Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining. ACM, 1225–1234.
  • Watkins and Dayan (1992) Christopher JCH Watkins and Peter Dayan. 1992. Q-learning. Machine learning 8, 3-4 (1992), 279–292.
  • Wei et al. (2017) Zeng Wei, Jun Xu, Yanyan Lan, Jiafeng Guo, and Xueqi Cheng. 2017. Reinforcement learning to rank with Markov decision process. In Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval. ACM, 945–948.
  • Wu et al. (2019) Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, and Liming Zhu. 2019. Adversarial Examples on Graph Data: Deep Insights into Attack and Defense. In Proceedings of the 28th International Joint Conference on Artificial Intelligence.
  • Xiao et al. (2015) Huang Xiao, Battista Biggio, Gavin Brown, Giorgio Fumera, Claudia Eckert, and Fabio Roli. 2015. Is feature selection secure against training data poisoning?. In International Conference on Machine Learning. 1689–1698.
  • You et al. (2018) Jiaxuan You, Bowen Liu, Zhitao Ying, Vijay Pande, and Jure Leskovec. 2018. Graph convolutional policy network for goal-directed molecular graph generation. In Advances in Neural Information Processing Systems. 6410–6421.
  • Zügner et al. (2018) Daniel Zügner, Amir Akbarnejad, and Stephan Günnemann. 2018. Adversarial Attacks on Neural Networks for Graph Data. In SIGKDD. 2847–2856.
  • Zügner and Günnemann (2019) Daniel Zügner and Stephan Günnemann. 2019. Adversarial Attacks on Graph Neural Networks via Meta Learning. In International Conference on Learning Representations (ICLR).