
EDoG: Adversarial Edge Detection For Graph Neural Networks

12/27/2022
by Xiaojun Xu, et al.
Georgia Institute of Technology

Graph Neural Networks (GNNs) have been widely applied to tasks in bioinformatics, drug design, and social network analysis. However, recent studies have shown that GNNs are vulnerable to adversarial attacks, which aim to mislead node or subgraph classification predictions by adding subtle perturbations. Detecting these attacks is challenging due to the small magnitude of the perturbations and the discrete nature of graph data. In this paper, we propose EDoG, a general adversarial edge detection pipeline based on graph generation that requires no knowledge of the attack strategy. Specifically, we propose a novel graph generation approach combined with link prediction to detect suspicious adversarial edges. To effectively train the graph generative model, we sample several sub-graphs from the given graph data. Since the number of adversarial edges is usually small in practice, we show by the union bound that the sampled sub-graphs contain adversarial edges only with low probability. In addition, to handle strong attacks that perturb a large number of edges, we propose a set of novel features to perform outlier detection as a preprocessing step for our detector. Extensive experiments on three real-world graph datasets, including a private transaction rule dataset from a major company, and on two types of synthetic graphs with controlled properties show that EDoG achieves above 0.8 AUC against four state-of-the-art unseen attack strategies without any knowledge of the attack type, and around 0.85 AUC when the attack type is known. EDoG significantly outperforms traditional malicious edge detection baselines. We also show that it is difficult even for an adaptive attack with full knowledge of our detection pipeline to bypass it.
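The core idea, scoring each edge with a link predictor trained away from the (rare) adversarial edges and flagging low-probability edges, can be sketched in a few lines. The sketch below is illustrative, not the paper's pipeline: it substitutes a simple Jaccard-coefficient link predictor for EDoG's graph generative model, and the sampling fraction, threshold, and toy graph are all assumptions.

```python
# Illustrative sketch of the EDoG idea: sample sub-graphs (which, by the
# union bound, rarely contain the few adversarial edges), score every edge
# with a link predictor, and flag low-probability edges as suspicious.
# Jaccard similarity stands in for the paper's graph generative model.
import random
import networkx as nx

def sample_subgraph(g, frac=0.5, rng=None):
    """Induced sub-graph on a random fraction of nodes. With few
    adversarial edges, most samples are likely to be clean."""
    rng = rng or random.Random(0)
    nodes = rng.sample(list(g.nodes), int(frac * g.number_of_nodes()))
    return g.subgraph(nodes)

def edge_scores(g, edges):
    """Score edges by Jaccard similarity of the endpoints' neighborhoods.
    Adversarial edges tend to join dissimilar nodes, so they score low."""
    scores = {}
    for u, v in edges:
        nu, nv = set(g[u]) - {v}, set(g[v]) - {u}
        union = nu | nv
        scores[(u, v)] = len(nu & nv) / len(union) if union else 0.0
    return scores

# Toy example: a dense clique plus a long-range "adversarial" edge.
g = nx.complete_graph(6)              # homophilous block, nodes 0..5
g.add_edge(0, 10)                     # node 10 attached by a single edge
g.add_edge(10, 11)
g.add_edge(3, 11)                     # injected cross-community edge
scores = edge_scores(g, g.edges)
suspicious = [e for e, s in scores.items() if s < 0.1]
print(suspicious)                     # includes the injected edge (3, 11)
```

Note that a neighborhood-overlap score also flags legitimate degree-one attachments, which is one reason the paper combines the generative model with outlier-detection features rather than relying on a single heuristic.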

Related papers:
- Sparse Vicious Attacks on Graph Neural Networks (09/20/2022)
- Robust Graph Neural Network Against Poisoning Attacks via Transfer Learning (08/20/2019)
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections (10/23/2022)
- SSR-GNNs: Stroke-based Sketch Representation with Graph Neural Networks (04/27/2022)
- Adversarial Attack on Large Scale Graph (09/08/2020)
- GraphAttacker: A General Multi-Task GraphAttack Framework (01/18/2021)
- Motif-Backdoor: Rethinking the Backdoor Attack on Graph Neural Networks via Motifs (10/25/2022)