Adversarial Attack on Large Scale Graph
Recent studies have shown that graph neural networks (GNNs) are vulnerable to adversarial perturbations and can therefore be easily fooled. Most existing attacks on GNNs use gradient information to guide the perturbation and achieve strong performance, but their high time and space complexity makes them impractical for large-scale graphs. We argue that the main reason is that these methods must operate on the entire graph, so their time and space costs grow with the size of the data. In this work, we propose an efficient Simplified Gradient-based Attack (SGA) framework to bridge this gap. SGA causes GNNs to misclassify specific target nodes through a multi-stage optimized attack that operates on only a much smaller subgraph. In addition, we present a practical metric, Degree Assortativity Change (DAC), for measuring the impact of adversarial attacks on graph data. We evaluate our attack on four real-world datasets by attacking several commonly used GNNs. The experimental results show that SGA achieves significant improvements in time and memory efficiency while maintaining attack performance comparable to other state-of-the-art attack methods.
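The abstract highlights two concrete ideas: restricting the attack to a small subgraph around the target node, and the DAC metric. Below is a minimal sketch, not the authors' implementation, illustrating both. It assumes the subgraph is a k-hop neighborhood of the target and that DAC is the (absolute) change in the standard degree assortativity coefficient between the clean and perturbed graphs; the exact multi-stage procedure and DAC definition are in the full paper.

```python
# Sketch only: the k-hop subgraph and the DAC semantics are assumptions,
# not the paper's exact definitions.
import networkx as nx

def khop_subgraph(G: nx.Graph, target, k: int = 2) -> nx.Graph:
    """Extract the k-hop neighborhood of `target` -- the small subgraph
    on which a gradient-based attack like SGA would operate, instead of
    differentiating through the entire graph."""
    return nx.ego_graph(G, target, radius=k)

def degree_assortativity_change(G_clean: nx.Graph, G_attacked: nx.Graph) -> float:
    """Degree Assortativity Change (DAC), assumed here to be the shift in
    the degree assortativity coefficient caused by the adversarial edits."""
    r_clean = nx.degree_assortativity_coefficient(G_clean)
    r_attacked = nx.degree_assortativity_coefficient(G_attacked)
    return abs(r_attacked - r_clean)

# Usage sketch: perturb one edge and measure the structural impact.
G = nx.karate_club_graph()
sub = khop_subgraph(G, target=0, k=2)  # attack surface: far fewer nodes
G_adv = G.copy()
G_adv.add_edge(0, 25)                  # a hypothetical adversarial edge
print(sub.number_of_nodes(), degree_assortativity_change(G, G_adv))
```

A lower DAC for a given attack success rate would indicate a stealthier attack, which is why a structural-impact metric of this kind is useful alongside accuracy drop.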