Graphs are a core component of many important applications, ranging from recommendation and customer-type analysis in social networks to anomaly detection and behavior analysis in sensor networks. Even when a graph is not explicitly given, an estimated latent graph can benefit such applications because it captures the relationships and interactions between nodes. One of the most frequently applied tasks on graph data is node classification: given a single large attributed graph and the class labels of a subset of its nodes, predict the labels of the remaining nodes.
In recent years, deep neural networks for large graphs have achieved strong performance on node classification problems. One of the best-known approaches to node classification is the graph convolutional neural network (GCN). GCNs utilize not only node features but also relational information on the graph to perform the classification task.
On the other hand, many researchers have recently noticed that deep learning architectures can easily be fooled. Even slight, deliberate perturbations can lead a machine learning model to misclassification. Such a perturbation is called an adversarial perturbation, and a sample carrying it is known as an adversarial example. Adversarial examples are a potentially critical safety issue in any machine-learning-based system, so studies on generating adversarial examples are important for evaluating the robustness of the target machine learning models. Like typical deep learning architectures, GCNs are also highly vulnerable to adversarial perturbations.
Compared with typical neural networks, adversarial perturbations for GCNs have several distinctive characteristics. First, we can add perturbations to both features and edges. Second, we can cause misclassification not only by direct perturbations of the target but also by indirect perturbations of the target's neighbors.
Prior work proposed an adversarial perturbation method performing both direct and indirect attacks on GCNs in the semi-supervised learning setting. The indirect attack iteratively perturbs either a feature or an edge of a given number of 1-hop neighbors.
Additionally, we consider another possibility for adversarial examples on graphs. A series of graph convolutions propagates a node's information along chains of edges. Thus, even without a direct connection, poisoned information can influence a node far from the poisoned node. Hence, we consider adversarial robustness not only against directly connected neighbors but also against such indirect attacks from remote nodes. Further, to evaluate an upper bound on the robustness of GCNs, we need an attack that is sufficiently strong to deceive them; perturbing a few nodes far from the target is such a strong attack. To evaluate the robustness of node classifiers with graph convolutions, we therefore need a method that generates such strong indirect perturbations.
In this work, we attempt to close these gaps. The question we address is: given an attributed graph and a node classifier with GCN layers, how can we craft a high-confidence adversarial perturbation that causes a target node to be misclassified by poisoning a single node far from the target? This problem is of significant importance for evaluating the adversarial robustness of GCNs.
Present Work. To answer this question, we introduce an adversarial perturbation method, PoisonProbe, which poisons just a single node's features to cause misclassification of a target node more than one hop away from the poisoned node. The proposed method enables us to evaluate GCNs' robustness against indirect adversarial perturbations.
Contributions. This paper makes the following contributions:
We introduce a new attack named PoisonProbe that deceives node classifiers with GCN layers. PoisonProbe poisons a single node's features to cause misclassification of a target more than one hop away from the poisoned node.
We also introduce an approach to find the poisoning node that is most likely to yield smaller perturbations than the other candidates.
Our proposed attack is significantly more effective than the previous approach. In our experiments, the proposed method with a randomly selected poisoning node achieves at least a 92% attack success rate within two hops of the target on two datasets. With the poisoning node selection, it achieves a 99% attack success rate at two hops.
We reveal that $k$-layer GCNs can be deceived by our attack from within $k$ hops of the target.
The proposed attack can be used as a benchmark in future defense efforts to develop graph convolutional neural networks with adversarial robustness.
II Related Work
Deep Learning for Graphs. Research on deep learning for graphs can be divided into two streams: node embeddings and graph neural networks. We focus on the latter, especially graph convolutional neural networks (GCNs) and adversarial attacks against them.
While many classical approaches have been introduced in the past, in recent years deep neural networks for large graphs have achieved strong performance on node classification problems. The core idea behind GCNs is to learn how to aggregate feature information over local graph neighborhoods using neural networks. A single graph convolution operation aggregates feature information over a node's one-hop neighbors on a graph, and by stacking multiple convolutions this information can be propagated through the graph.
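The neighborhood aggregation described above can be sketched on a toy graph. This is an illustrative mean aggregator (the actual GCN uses symmetric degree normalization and learned weights); it shows how stacking two convolutions lets a node two hops away influence the output.

```python
# Sketch of neighborhood aggregation in a graph convolution (illustrative
# only: plain mean aggregation instead of the GCN's normalized, weighted
# form). Each node mixes its own features with its 1-hop neighbors', so
# stacking k layers propagates information across k hops.

def graph_convolution(adj, feats):
    """Mean-aggregate each node's features with its 1-hop neighbors.

    adj   -- adjacency list {node: [neighbors]}
    feats -- {node: [feature values]}
    """
    out = {}
    for v, x in feats.items():
        neigh = [v] + adj[v]  # self-loop plus 1-hop neighbors
        dim = len(x)
        out[v] = [sum(feats[u][i] for u in neigh) / len(neigh)
                  for i in range(dim)]
    return out

# Path graph 0 - 1 - 2: after one convolution node 0 still sees nothing
# of node 2; after two convolutions node 2's feature reaches node 0.
adj = {0: [1], 1: [0, 2], 2: [1]}
feats = {0: [0.0], 1: [0.0], 2: [1.0]}
h1 = graph_convolution(adj, feats)
h2 = graph_convolution(adj, h1)
```

After one layer node 0's feature is unchanged, but after two layers it becomes nonzero, which is exactly the propagation path an indirect attack exploits.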
Like typical deep learning architectures, GCNs are vulnerable to adversarial perturbations. This paper also tackles the crafting of adversarial perturbations against GCNs, but focuses on indirect perturbations on a node far from the target node.
Adversarial Examples. An adversarial example is an input crafted to deceive deep neural networks. It is a potentially critical safety issue in machine-learning-based systems, so we need to evaluate adversarial robustness when developing machine learning models. One simple defense against adversarial examples is to mask gradients. However, this provides a false sense of security. Adversarial training, which injects adversarial examples with correct labels into the training samples, is a simple way to re-train a model. Recently, several certified robust learning approaches have been proposed. These certified defense mechanisms do not yet provide sufficient robustness in practice, but they give certifiable robustness around training points.
On the other hand, studies on generating adversarial examples and perturbations are important for evaluating the robustness of the target machine learning models. The best-known method is FGSM (Fast Gradient Sign Method). FGSM finds adversarial perturbations optimized for the $L_\infty$ distance metric; it is a lightweight method and thus a standard first attempt at finding adversarial examples. An iterative approach extending FGSM was later proposed. DeepFool is an untargeted attack optimized efficiently for the $L_2$ distance metric. The CW attack discovers adversarial perturbations of small size and has variants for the $L_0$, $L_2$, and $L_\infty$ distance metrics; in this paper we assume the $L_2$ variant. The CW attack is used as a standard benchmark of the adversarial robustness of neural networks.
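The FGSM idea can be sketched on a toy model. The following is a minimal illustration on a hypothetical one-dimensional logistic classifier (not any model from this paper): the perturbation is a step of size epsilon in the direction of the sign of the loss gradient with respect to the input.

```python
# Minimal FGSM sketch on a hypothetical 1-D logistic model. FGSM adds
# eps * sign(d loss / d input) to the input, which is the optimal single
# step under an L-infinity perturbation budget.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_step(x, w, b, y, eps):
    """One FGSM step for logistic regression with cross-entropy loss.

    For this model d(loss)/dx = (sigmoid(w*x + b) - y) * w.
    """
    grad = (sigmoid(w * x + b) - y) * w
    sign = 1.0 if grad > 0 else (-1.0 if grad < 0 else 0.0)
    return x + eps * sign

# A correctly classified positive example (y = 1) is pushed toward the
# decision boundary by the attack.
w, b = 2.0, 0.0
x, y = 1.0, 1.0
x_adv = fgsm_step(x, w, b, y, eps=0.5)
```

The adversarial input lowers the model's confidence in the true class, illustrating why a single gradient-sign step is such a cheap baseline attack.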
Adversarial Perturbations on Graphs. Only a few works address adversarial attacks for graph learning tasks. One introduced an adversarial attack that exploits reinforcement learning ideas by deleting edges; another proposed an untargeted attack on graphs via meta learning. Nettack crafts adversarial perturbations on a graph to deceive its node classifier by perturbing both features and edges. Nettack can deceive a node's classification result in two ways: directly and indirectly. The direct attack perturbs both the features and edges of the target node, while the indirect attack, called the influencer attack, picks the target's 1-hop neighbors (called influencer nodes) and then iteratively perturbs either a feature or an edge of the influencer nodes within a perturbation budget. Nettack tends to choose edge perturbations in early iterations, because adding or removing an edge moves closer to a successful adversarial example than perturbing a single feature; the one-by-one feature perturbations are not powerful enough to find successful adversarial examples. The perturbations of Nettack can be seen as data poisoning on a graph aimed at deceiving node classifiers. Robust graph convolutional networks (RGCN) were later proposed as a defense against such adversarial attacks.
Difference from Existing Works. In this paper we mainly study adversarial perturbations on a single node more than one hop from the target, whereas Nettack perturbs multiple one-hop neighbors in its indirect attack. Since typical GCNs have two layers of graph convolutions, there is a chance to deliver poisoned feature information from two or more hops away. Our goal is to clarify a potential vulnerability of GCNs and to build a stronger attack for evaluating their adversarial robustness. Such strong attacks are important for measuring the robustness of GCNs, including RGCN. We also assume no changes to the graph structure: many applications, such as sensor networks, assume a stable graph, and we study the vulnerability of GCNs under this assumption. Further, our study does not explicitly assume the perturbation budget that Nettack introduced, but the proposed method can behave as if it had a budget by rejecting results that exceed it.
We consider the task of (semi-supervised) node classification in a single large graph with binary node features. Formally, let $G = (A, X)$ be an attributed graph, where $A$ is the adjacency matrix representing the connections and $X$ represents the nodes' features.
Node Classification with GCNs.
GCN (Graph Convolutional Neural Network) is a semi-supervised learning method for classifying nodes, given the feature matrix $X$, the adjacency matrix $A$, and labels for a subset of the nodes in the graph. Several variants of GCN exist, but we assume the most common formulation. The GCN layer is defined as follows:
$H^{(l+1)} = \sigma(\hat{A} H^{(l)} W^{(l)})$, where $\hat{A} = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2}$, $\tilde{A} = A + I_N$, $I_N$ is the identity matrix, $\tilde{D}$ is the diagonal degree matrix of $\tilde{A}$, and
$\sigma$ is an activation function; we assume $\sigma = \mathrm{ReLU}$. Here, let $F(X, A)$ be the output of the neural network before the softmax layer. A commonly used instantiation is the two-layer GCN, $Z = \mathrm{softmax}(\hat{A}\,\mathrm{ReLU}(\hat{A} X W^{(0)}) W^{(1)})$, where $H^{(0)} = X$.
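The two-layer GCN forward pass can be sketched numerically. This is a self-contained illustration with arbitrary (untrained) weights, assuming the standard symmetric normalization; it is not the trained classifier used in the experiments.

```python
# Sketch of the two-layer GCN forward pass
#   Z = softmax( A_hat * ReLU(A_hat * X * W0) * W1 ),
# where A_hat is the symmetrically normalized adjacency with self-loops.
# Weights here are random placeholders, not trained parameters.
import numpy as np

def normalize_adj(A):
    """A_hat = D~^{-1/2} (A + I) D~^{-1/2}, D~ = degree matrix of A + I."""
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    return A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_forward(A, X, W0, W1):
    A_hat = normalize_adj(A)
    H = np.maximum(A_hat @ X @ W0, 0.0)        # first layer with ReLU
    logits = A_hat @ H @ W1                    # second layer (pre-softmax)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)    # row-wise softmax

# Path graph 0 - 1 - 2, two features, two classes.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 1.], [1., 1.]])
rng = np.random.default_rng(0)
W0, W1 = rng.standard_normal((2, 4)), rng.standard_normal((4, 2))
Z = gcn_forward(A, X, W0, W1)
```

Because `A_hat` appears twice in the forward pass, node 0's output row depends on node 2's features even though they share no edge, which is the propagation the proposed attack exploits.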
Adversarial Examples against GCNs.
We introduce adversarial examples on graphs against GCNs. Let a positive adversarial example be an adversarial example that satisfies the following condition:
$Z(v_{tgt})_{y_{tgt}} > \max_{c \neq y_{tgt}} Z(v_{tgt})_c$,
where $Z(v)_c$ is $v$'s logit value for class $c$. Positive adversarial examples cause the target node to be misclassified as the target class $y_{tgt}$. The untargeted attack is also described as:
$\operatorname{arg\,max}_c Z(v_{tgt})_c \neq y_{orig}$,
where $y_{orig}$ is $v_{tgt}$'s legitimate output label.
To ensure that the modification by adversarial perturbations yields a valid input, we impose a constraint that each perturbed feature stays in $[0, 1]$. The following change-of-variables was introduced:
$x_i = \tfrac{1}{2}(\tanh(w_i) + 1)$.
Since $-1 \le \tanh(w_i) \le 1$, it follows that $0 \le x_i \le 1$. In this notation we can optimize over $w$ to find a valid solution, with a smoothing effect over clipped gradient descent that eliminates the problem of getting stuck in extreme regions. This method allows us to use an optimization algorithm that does not natively support box constraints. We employ the Adam optimizer with this change-of-variables in our attack.
IV Proposed Node Poisoning
Given the node classification setting described in Section III, our goal is to find small perturbations on the features of one node that cause misclassification of another node, even when the two nodes have no direct connection. Hence, we assume $v_{tgt} \neq v_{psn}$, where $v_{tgt}$ is the target node and $v_{psn}$ is the node to which we add the perturbations. We also assume no structural changes to the graph.
To solve this problem, an optimization-based approach is introduced in Section IV-A. We also introduce a poisoning node selection that discovers the poisoning node most likely to yield smaller perturbations than the other candidates (Section IV-B).
Here we propose a new attack, PoisonProbe, that solves the above problem. The objective function PoisonProbe minimizes, Eq. (4), is a weighted sum of the perturbation size $\|\delta\|_2$ and a CW-style misclassification loss with trade-off constant $c$, where $x_{psn}$ is the feature vector of $v_{psn}$, $\delta$ represents the perturbations on $x_{psn}$, $y_{tgt}$ is the target class label, and $e_{psn}$ is the unit vector whose $psn$-th element is 1. We then solve this optimization problem, Eq. (5), to find high-confidence adversarial perturbations on $v_{psn}$ targeting $v_{tgt}$.
In the optimization, we employ the transformation (3) for $\delta$ to find a valid solution, with the smoothing effect over clipped gradient descent that avoids getting stuck in extreme regions. To find indirect adversarial perturbations, we need to estimate the gradient of the objective over $w$, the variable obtained by transforming $\delta$. When the connection between $v_{psn}$ and $v_{tgt}$ is indirect, a perturbation on $v_{psn}$ is propagated to $v_{tgt}$ via graph convolutions with damping; hence it is not easy to find a solution that turns $v_{tgt}$'s output into the targeted class. To find a valid solution satisfying (5), we employ binary search to discover an effective value of $c$.
The detailed algorithm of PoisonProbe is described in Algorithm 1. It consists of an outer loop and an inner loop. The inner loop searches for smaller perturbations that achieve the desired targeted attack while several parameters are held fixed. The outer loop then updates those parameters iteratively by binary search: if the inner loop successfully finds an adversarial perturbation, it decreases the constant $c$; if not, it increases $c$, shifting the focus toward finding perturbations satisfying (5) rather than reducing the perturbation size.
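The outer binary search over the trade-off constant can be sketched as follows. All names are illustrative, and `try_attack` stands in for the inner optimization loop, which either returns a perturbation size or fails:

```python
# Sketch of the outer binary search over the trade-off constant c
# (illustrative, not the paper's Algorithm 1 verbatim). On success we
# shrink c to favor smaller perturbations; on failure we grow c to put
# more weight on satisfying the misclassification condition.

def binary_search_c(try_attack, c_init=1.0, steps=9):
    lo, hi = 0.0, float("inf")
    c = c_init
    best = None                                # smallest perturbation seen
    for _ in range(steps):
        result = try_attack(c)                 # inner loop: size or None
        if result is not None:                 # attack succeeded
            if best is None or result < best:
                best = result
            hi = c
            c = (lo + hi) / 2.0                # decrease c
        else:                                  # attack failed
            lo = c
            c = c * 10.0 if hi == float("inf") else (lo + hi) / 2.0
    return best

# Toy inner loop: succeeds only when c >= 4, and a larger c yields a
# larger (worse) perturbation of size c.
found = binary_search_c(lambda c: c if c >= 4.0 else None)
```

In the toy run the search converges to the region just above the smallest constant that still succeeds, mirroring how the outer loop trades off perturbation size against attack success.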
IV-B Poisoning Node Selection
Here we discuss how to choose the poisoning nodes so as to achieve the misclassification with smaller perturbations. In a graph convolutional neural network, a graph convolution layer aggregates the features of both a (center) node and its 1-hop neighbors. We assume every 1-hop neighbor can equally deliver its features to the center node through graph convolution.
(Poisoning efficiency of a 1-hop neighbor) Let $N_1(v_{tgt})$ be the set of 1-hop neighbors around $v_{tgt}$. The poisoning efficiency of $v \in N_1(v_{tgt})$ for leading misclassification towards $v_{tgt}$ is defined by Eq. (6).
Next, we consider the poisoning efficiency of a node more than one hop away. To simplify, we only consider shortest paths, ignoring edges between nodes that are the same distance from the target. This simplification enables us to transform the graph into a tree whose root is the target $v_{tgt}$; we call this tree the neighborhood tree.
(Poisoning efficiency of a $k$-hop neighbor) Let $N_k(v_{tgt})$ be the set of $k$-hop neighbors around $v_{tgt}$ and $anc(v)$ be the ancestor node of $v$ in the neighborhood tree rooted at $v_{tgt}$. The poisoning efficiency of $v \in N_k(v_{tgt})$ for leading misclassification towards $v_{tgt}$ is defined by Eq. (8).
Strictly, (6) is a better interpretation of the poisoning efficiency of a 1-hop neighbor than (8); however, we use (8) for $k$-hop neighbors because (6) is identical for all 1-hop neighbors.
The poisoning efficiency defined above can be seen as the bandwidth of the path delivering poisoned information from the candidate to the target. If the score is small, the poisoned information will shrink along the path, so we need to enlarge the perturbation to achieve the adversarial attack. Conversely, if we select a path whose poisoning efficiency is high, we have a chance to reduce the amount of perturbation required.
From $v_{tgt}$'s $k$-hop neighbors $N_k(v_{tgt})$, we pick the node with the maximum poisoning efficiency: $v_{psn} = \operatorname{arg\,max}_{v \in N_k(v_{tgt})} \mathrm{eff}(v)$.
If multiple nodes share the maximum poisoning efficiency, we randomly select one of them. For 1-hop neighbors, we always pick a node from $N_1(v_{tgt})$ at random, because all candidates share the same poisoning efficiency.
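The selection step can be sketched with an illustrative surrogate score. The exact efficiency formula is the paper's Eq. (8); here, as a stated assumption, we use the product of $1/(\deg(u)+1)$ along the candidate's path to the target, which captures the "bandwidth" intuition that information is damped at every high-degree hop:

```python
# Sketch of poisoning-node selection. The efficiency score below is an
# illustrative surrogate for the paper's Eq. (8): the product of
# 1/(deg(u)+1) over the nodes on the path from the candidate's ancestor
# chain down to the target (the +1 accounts for the self-loop).

def poisoning_efficiency(path, degree):
    """path: nodes from the candidate's parent down to the target."""
    eff = 1.0
    for u in path:
        eff *= 1.0 / (degree[u] + 1.0)
    return eff

def select_poisoning_node(candidates, paths, degree):
    """Pick the k-hop candidate with maximum poisoning efficiency."""
    return max(candidates, key=lambda v: poisoning_efficiency(paths[v], degree))

# Two 2-hop candidates reach target "t" through different intermediate
# nodes; the one routed through the low-degree node is less damped.
degree = {"a": 2, "b": 9, "t": 3}
paths = {"v1": ["a", "t"], "v2": ["b", "t"]}
chosen = select_poisoning_node(["v1", "v2"], paths, degree)
```

Here the candidate routed through the degree-2 intermediate wins over the one routed through the degree-9 hub, matching the intuition that hubs dilute poisoned information across many neighbors.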
Here we describe simple extensions of PoisonProbe.
V-A Extension 1: Multiple Node Perturbation
We introduce an extension that perturbs multiple nodes; here we describe the differences from the original PoisonProbe.
V-A1 Targeted Perturbation
V-A2 Multiple Node Selection
If we wish to perturb $m$ nodes, a simple solution is to choose the node having the best poisoning efficiency from the remaining candidates $m$ times.
V-B Extension 2: Suppression of Infections
This is an extension of PoisonProbe. Whenever we add perturbations to a node, the poisoned information may propagate through graph convolutions to other nodes; we call this unfortunate propagation infection.
To mitigate the number of infected nodes, we introduce a penalty into (4). The penalty, Eq. (10), is defined over the set of nodes other than $v_{tgt}$ and $v_{psn}$: it adds a loss whenever a node's output label, originally the non-deceived label, is changed by the perturbations. Finally, we solve the objective function (11), which adds this penalty to (4).
This section demonstrates the effectiveness of our proposed attack, PoisonProbe, on two datasets. The experimental evaluations were designed to answer the following questions:
How successful is our method at causing misclassification in node classifiers with GCNs?
From how far away can our method succeed?
How successful is our method at choosing a poisoning node that achieves the adversarial attack with smaller perturbations?
Following prior work, we use the CORA-ML and CiteSeer networks, whose characteristics are described in Table I. We split each network into labeled (20%) and unlabeled (80%) nodes, and further split the labeled nodes into equal training and validation sets to train the node classifiers that our attack tries to deceive.
We employ the well-known GCN with two graph convolutional layers in the semi-supervised setting described above (https://github.com/tkipf/pygcn). In the following evaluations, we also use GCNs with 3 and 4 layers. The model architectures of these GCNs are described in Table II. In training these GCNs, we set the learning rate to 0.01 and the dropout rate to 0.5, and train for up to 200 epochs.
Setting of Our Attack
Our attack PoisonProbe iteratively searches for adversarial perturbations with smaller modifications via binary search. We set max_search_steps=9 and max_iter=1000, where max_search_steps is the number of binary search steps; the remaining constants, including the initial value of $c$, were fixed across runs. We developed PoisonProbe in Python 3.6 and PyTorch 1.0.0.
We use Nettack's indirect attack as the baseline for our proposed attack. The indirect attack automatically chooses a given number of influencer nodes around the target; we set the number of influencer nodes to 1. Since Nettack perturbs features within a given budget, we use linear search to find positive adversarial examples: the search iteratively increases the budget from 1 while the score (2) decreases and no positive adversarial example has been found.
[Table II: model architectures. Each row is a GConv + ReLU layer (e.g., 256 and 64 hidden units); check marks indicate which layers each GCN variant includes.]
VI-A Attack Success Rate
Here we answer the question: how successful is our method at causing the target node to be misclassified via its neighbors?
To evaluate the effectiveness of PoisonProbe, we measure the attack success rate: the fraction of crafted examples that are positive adversarials whose perturbation size is below a threshold.
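The success-rate computation can be sketched directly from that definition (the data below is made up for illustration, not from the experiments):

```python
# Attack success rate: the fraction of crafted examples that are positive
# adversarials (misclassified into the target class) with perturbation
# size at or below a threshold.

def attack_success_rate(results, threshold):
    """results: list of (succeeded, perturbation_size) pairs."""
    hits = sum(1 for ok, size in results if ok and size <= threshold)
    return hits / len(results)

# Illustrative outcomes for four crafted perturbations.
results = [(True, 0.5), (True, 2.0), (False, 0.3), (True, 0.9)]
rate = attack_success_rate(results, threshold=1.0)
```

Sweeping the threshold from small to large produces the success-rate-versus-perturbation-size curves plotted in the figures below.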
We measure the attack success rates for each distance of the poisoning neighbors. To do so, we crafted 200 adversarial perturbations by randomly choosing triples of (target node, target class, poisoning node) for each case. Here we do not employ the proposed poisoning node selection.
We show the attack success rate under GCN(2) for CORA-ML and CiteSeer in Figures 3(a) and 3(b), respectively. In both figures we plot the attack success rates when PoisonProbe poisons 1-hop neighbors (blue line) and 2-hop neighbors (yellow line).
Figure 3(a) shows very high attack success rates for both 1-hop and 2-hop neighbors. For 1-hop neighbors, even when the perturbation size is less than 1.0, the attack success rate exceeds 90%, and poisoning 1-hop neighbors achieves complete attack success once the perturbation size reaches around 50. Adversarial perturbations on 2-hop neighbors show a 92% attack success rate overall. The fact that we can deceive a node's classification result by poisoning a single node with no direct connection to the target is very important. Figure 3(b) also shows very high attack success rates: both 1-hop and 2-hop poisoning eventually reach complete attack success, and when the perturbation size is less than 1.0, 1-hop poisoning exceeds 95% while 2-hop poisoning reaches around 80%. Tables III(a) and III(b) show the overall attack success rates compared with Nettack; our attack shows a higher success rate than Nettack in the 1-influencer setting, which iteratively perturbs one feature at a time.
A remarkable point here is that a two-layer GCN can be deceived by poisoning nodes two hops from the target node. We can say that GCNs are vulnerable not only to modifications of directly connected neighbors but also to nodes two hops away. In social networks, 2-hop neighbors are friends of friends, mostly unknown people we pay no attention to, so it is very hard to notice becoming a victim. Graph convolutional neural networks are very powerful machine learning tools, but we need to consider the risks of adversarial perturbations.
VI-B Perturbations on Remote Nodes
Beyond 2-hops from the target, can PoisonProbe generate positive adversarial perturbations?
Tables III(a) and III(b) demonstrate the attack success rates of poisoning 2-hop neighbors under GCN(2), 3-hop neighbors under GCN(3), and 4-hop neighbors under GCN(4). On CORA-ML, the success rate at 3 hops under GCN(3) is more than 50%, versus 0% under GCN(2). Similarly, the success rate at 4 hops under GCN(4) is 17%, versus 0% under GCN(2) and GCN(3). Figures 3(a) and 3(b) also show the attack success rate as a function of perturbation size: positive adversarials at 3 hops consumed much more perturbation than those at 2 hops, and those at 4 hops likewise consumed a large amount.
Based on these experimental results, it is possible to craft positive adversarials at nodes far from the target even with multi-layer GCNs, though PoisonProbe cannot craft such positive adversarials with high confidence. We demonstrated that a $k$-layer GCN can be deceived from within $k$ hops.
VI-C Effectiveness of Poisoning Node Selection
Here we answer the question: how successful is our method at choosing a poisoning node that achieves the adversarial attack with smaller perturbations?
To evaluate the effectiveness of our poisoning node selection, we measure the attack success rate and the average perturbation size. We compare attacks that perturb the top-1 node by poisoning efficiency, the top-2 nodes, the top-3 nodes, the bottom-1 node, and a random node from the 2-hop neighbors of each target. The top-k result is produced by attacks perturbing the k nodes with the highest scores; random is identical to the result in Figure 3. In this evaluation, we again crafted 200 adversarial perturbations by randomly choosing (target node, target class) pairs for each case.
Observation 1 (Success Rate with Node Selection)
In Figure 4(a), PoisonProbe with top-1 selection outperforms random selection and bottom-1. At perturbation size 1.0, PoisonProbe with top-1 shows around a 65% success rate, higher than the roughly 40% of random selection in Figure 3. Figure 4(b) also shows very high attack success rates, and the differences between top-1 and bottom-1 are again large. In Figures 4(a) and 4(b), PoisonProbe poisoning the top-2 and top-3 nodes outperforms top-1; those attacks craft higher-confidence adversarials at a given level of perturbation. At perturbation size 1.0, PoisonProbe with the top-3 nodes exceeds an 80% success rate on both datasets.
Next, to check the effectiveness of our poisoning node selection, we measure the recall in finding the node that yields the smallest perturbation. Figure 5 shows the recall at the top-k highest poisoning-efficiency nodes. In this evaluation, we randomly pick 200 target nodes. For each target, we compare all attacks that perturb a single node with a distinct poisoning efficiency among the 2-hop neighbors. Note that, in our pre-study, nodes sharing the same poisoning efficiency tended to result in very similar perturbation sizes.
Observation 2 (Recall in Discovering the Smallest Perturbation Node)
Figure 5 shows that 85% of attacks achieved the smallest perturbations on CORA-ML, and 90% on CiteSeer. At top-2, our method achieves more than 96% recall.
We further evaluate the rank correlation between ranks by poisoning efficiency and ranks by perturbation size. To measure it, we compute Spearman's rank correlation coefficient, defined as follows:
$\rho = 1 - \frac{6 \sum_i d_i^2}{n(n^2 - 1)}$,
where $d_i$ is the difference between the two ranks of element $i$ and $n$ is the number of elements. The output lies within $[-1, 1]$: +1 indicates a perfect association, 0 indicates no association, and -1 indicates a perfect negative association between the two ranks.
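The coefficient can be sketched directly from this formula (valid for rankings without ties):

```python
# Spearman's rank correlation: rho = 1 - 6 * sum(d_i^2) / (n (n^2 - 1))
# for rankings without ties, where d_i is the difference between the two
# ranks of item i.

def spearman_rho(rank_x, rank_y):
    n = len(rank_x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(rank_x, rank_y))
    return 1.0 - 6.0 * d2 / (n * (n * n - 1))

identical = spearman_rho([1, 2, 3, 4], [1, 2, 3, 4])   # perfect association
reversed_ = spearman_rho([1, 2, 3, 4], [4, 3, 2, 1])   # perfect negative
```

A value near +1, as reported below, means the poisoning-efficiency ranking almost perfectly predicts which candidate needs the smallest perturbation.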
Observation 3 (Rank correlation between poisoning efficiency and perturbation size)
In Table IV, the mean rank correlation between the poisoning candidates' ranks by poisoning efficiency and by perturbation size is more than 0.9, so the two rankings have a very strong association. Therefore, our proposed method with the poisoning node selection is effective for crafting adversarial attacks with smaller perturbations in many cases.
From these results, our method can successfully choose a poisoning node that needs only small perturbations to deceive the node classifier. The proposed poisoning node selection is a heuristic that considers how much information can be delivered from a candidate to the target. This simple, intuitive, and lightweight node selection helps PoisonProbe achieve high-confidence adversarial perturbations with small noise.
We evaluate how many nodes are infected by the adversarial perturbations. From the adversary's view, a smaller number of infected nodes is better for concealing the malicious activity.
Table V shows statistics on the number of infected nodes when adding perturbations to 1-hop or 2-hop nodes, with and without the penalty introduced in (10). When the penalty is enabled, we set its weight in (11) to a fixed value; the no-penalty setting corresponds to a weight of zero. We evaluate the number of infections on the CORA-ML dataset; we omit CiteSeer here because its results had very few infections.
First we look at the results without the penalty. Adversarial attacks from 1-hop neighbors infect only a few nodes, while attacks from 2-hop neighbors turn many more nodes into wrong results. Turning to PoisonProbe with the penalty, the number of infected nodes decreases for both 1-hop and 2-hop attacks; for 1-hop, the number of zero-infection cases (the zeros in Table V) increases.
Since the perturbations of 2-hop attacks are larger than those of 1-hop attacks, the number of infected nodes also increases. Fortunately, PoisonProbe with the penalty can mitigate the infections, though it has not yet shown a significant benefit; finding a more effective penalty weight needs further study.
Tables VI(a) and VI(b) show the average perturbation size (L2 loss), overall attack success rate, and number of infections for PoisonProbe poisoning different types of nodes. These success rates are higher than those of PoisonProbe with random choices. The perturbation size of top-1 is more than 10 times smaller than that of bottom-1 on each dataset. When we perturb several nodes' features, PoisonProbe has a much greater chance of reducing the perturbation size because it has more freedom to perturb; however, the number of infections increases.
Towards evaluating the adversarial robustness of GCNs, we tackled the question: can we generate effective adversarial perturbations on a node far from the target? We introduced a new attack named PoisonProbe, which poisons a single node's features to cause misclassification of a target more than one hop away. We also introduced an approach to discover the poisoning node yielding smaller perturbations. In our evaluations, the attack success rates of the proposed attack reached 100% from 1-hop neighbors and 92% from 2-hop neighbors on the CORA-ML dataset by poisoning a single randomly selected node. The proposed attack can be used as a benchmark in future defense efforts to develop graph convolutional neural networks robust against indirect adversarial perturbations.
- (2018) Obfuscated gradients give a false sense of security: circumventing defenses to adversarial examples. In ICML.
- (2018) Deep Gaussian embedding of graphs: unsupervised inductive learning via ranking. In ICLR.
- (2017) Towards evaluating the robustness of neural networks. In IEEE Symposium on Security and Privacy.
- (2018) Adversarial attack on graph structured data. In ICML.
- (2017) Robust physical-world attacks on deep learning models. arXiv preprint arXiv:1707.08945.
- (2018) Graph mining-based trust evaluation mechanism with multidimensional features for large-scale heterogeneous threat intelligence. In IEEE BigData, pp. 1272–1277.
- (2014) Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
- (2016) node2vec: scalable feature learning for networks. In KDD, pp. 855–864.
- (2017) Toeplitz inverse covariance-based clustering of multivariate time series data. In KDD, pp. 215–223.
- (2017) Inductive representation learning on large graphs. In NIPS.
- (2019) GlassMasq: adversarial examples masquerading in face identification systems with feature extractor. In PST, pp. 1–7.
- (2014) Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980.
- (2018) Neural relational inference for interacting systems. In ICML.
- (2017) Semi-supervised classification with graph convolutional networks. In ICLR.
- (2016) Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
- (2018) Towards deep learning models resistant to adversarial attacks. In ICLR.
- (2000) Automating the construction of internet portals with machine learning. Information Retrieval 3(2), pp. 127–163.
- (2018) Differentiable abstract interpretation for provably robust neural networks. In ICML, pp. 3575–3583.
- (2016) Distillation as a defense to adversarial perturbations against deep neural networks. In IEEE Symposium on Security and Privacy.
- (2014) DeepWalk: online learning of social representations. In SIGKDD, pp. 701–710.
- (2008) Collective classification in network data. AI Magazine 29(3), pp. 93–106.
- (2016) Accessorize to a crime: real and stealthy attacks on state-of-the-art face recognition. In CCS.
- (1904) The proof and measurement of association between two things. American Journal of Psychology 15(1), pp. 72–101.
- (2014) Intriguing properties of neural networks. In ICLR.
- (2017) AutoCyclone: automatic mining of cyclic online activities with robust tensor factorization. In WWW, pp. 213–221.
- (2019) Indirect adversarial attacks via poisoning neighbors for graph convolutional networks. In IEEE BigData.
- (2018) Lipschitz-margin training: scalable certification of perturbation invariance for deep neural networks. In NeurIPS.
- (2018) Provable defenses against adversarial examples via the convex outer adversarial polytope. In ICML, pp. 5283–5292.
- (2019) Robust audio adversarial example for a physical attack. In IJCAI, pp. 5334–5341.
- (2018) Graph convolutional neural networks for web-scale recommender systems. In KDD, pp. 974–983.
- (2019) Fast and accurate anomaly detection in dynamic graphs with a two-pronged approach. In KDD, pp. 647–657.
- (2019) Robust graph convolutional networks against adversarial attacks. In KDD, pp. 1399–1407.
- (2018) Adversarial attacks on neural networks for graph data. In KDD.
- (2019) Adversarial attacks on graph neural networks via meta learning. In ICLR.