Robust Graph Neural Network Against Poisoning Attacks via Transfer Learning

08/20/2019
by   Xianfeng Tang, et al.

Graph neural networks (GNNs) are widely used in many applications, yet their robustness against adversarial attacks has been questioned. Prior studies show that unnoticeable modifications to graph topology or node features can significantly degrade the performance of GNNs. Designing graph neural networks that are robust to poisoning attacks is challenging, and several efforts have been made. Existing work aims to reduce the negative impact of adversarial edges using only the poisoned graph, which is sub-optimal because it cannot discriminate adversarial edges from normal ones. On the other hand, clean graphs from domains similar to the target poisoned graph are usually available in the real world. By perturbing these clean graphs, we obtain supervision for learning to detect adversarial edges, which in turn improves the robustness of GNNs. However, this potential of clean graphs is neglected by existing work. To this end, we investigate a novel problem: improving the robustness of GNNs against poisoning attacks by exploiting clean graphs. Specifically, we propose PA-GNN, which relies on a penalized aggregation mechanism that directly restricts the negative impact of adversarial edges by assigning them lower attention coefficients. To optimize PA-GNN for a poisoned graph, we design a meta-optimization algorithm that trains PA-GNN to penalize perturbations using clean graphs and their adversarial counterparts, and transfers this ability to improve robustness on the poisoned graph. Experimental results on four real-world datasets demonstrate the robustness of PA-GNN against poisoning attacks on graphs.
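The core of the penalized aggregation mechanism can be illustrated as a hinge-style regularizer that pushes attention scores on known (synthetically injected) adversarial edges below those on clean edges by some margin. Below is a minimal numerical sketch of this idea; the function name `penalized_attention_loss` and the margin parameter `eta` are illustrative placeholders, not taken from the paper's code:

```python
import numpy as np

def penalized_attention_loss(scores, perturbed_mask, eta=1.0):
    """Hinge penalty on attention scores: positive when the mean score
    on (known) adversarial edges is within `eta` of the mean score on
    clean edges; zero once they are separated by at least the margin."""
    clean_mean = scores[~perturbed_mask].mean()
    adv_mean = scores[perturbed_mask].mean()
    return max(0.0, eta - (clean_mean - adv_mean))

# Attention scores for four edges; the last two were injected as
# perturbations on a clean training graph, so their labels are known.
scores = np.array([2.0, 2.0, 0.1, 0.1])
mask = np.array([False, False, True, True])

print(penalized_attention_loss(scores, mask))      # → 0.0 (well separated)
print(penalized_attention_loss(np.ones(4), mask))  # → 1.0 (indistinguishable)
```

Because the perturbed edges are created by the authors themselves on clean graphs, their labels are available for free, which is what makes this supervised penalty (and its later transfer to the unlabeled poisoned graph via meta-optimization) possible.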


