GUARD: Graph Universal Adversarial Defense

04/20/2022
by Jintang Li, et al.

Recently, graph convolutional networks (GCNs) have been shown to be vulnerable to small adversarial perturbations, a severe threat that largely limits their application in security-critical scenarios. To mitigate this threat, considerable research effort has been devoted to increasing the robustness of GCNs against adversarial attacks. However, current defense approaches are typically designed for the whole graph and optimize global performance, making it difficult to protect important local nodes from stronger targeted adversarial attacks. In this work, we present a simple yet effective method named Graph Universal AdveRsarial Defense (GUARD). Unlike previous works, GUARD protects each individual node from attacks with a universal defensive patch, which is generated once and can be applied to any node (node-agnostic) in the graph. Extensive experiments on four benchmark datasets demonstrate that GUARD significantly improves the robustness of several established GCNs against multiple adversarial attacks and outperforms existing adversarial defense methods by large margins. Our code is publicly available at https://github.com/EdisonLeeeee/GUARD.
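To make the node-agnostic idea concrete, here is a minimal sketch of how a universal defensive patch might be applied at inference time. It assumes (hypothetically; the abstract does not specify the patch construction) that the patch is simply a fixed set of node indices, computed once offline, whose edges to the protected node are severed before the GCN aggregates its neighborhood:

```python
import numpy as np

def apply_universal_patch(adj, target, patch_nodes):
    """Apply a node-agnostic defensive patch to one node.

    adj         : dense (n x n) adjacency matrix of the graph
    target      : index of the node to protect
    patch_nodes : precomputed universal patch, modeled here as a
                  fixed list of node indices to disconnect from
                  the target (a simplifying assumption, not the
                  paper's exact formulation)

    Returns a copy of `adj` with edges between `target` and every
    node in `patch_nodes` removed; the same patch can be reused
    for any target node, which is what makes it "universal".
    """
    patched = adj.copy()
    patched[target, patch_nodes] = 0
    patched[patch_nodes, target] = 0
    return patched

# Toy undirected 4-node graph: 0-1, 0-2, 1-3, 2-3
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]])

# Protect node 0 with a (hypothetical) patch {2}: edge 0-2 is cut.
A_def = apply_universal_patch(A, target=0, patch_nodes=[2])
```

Because the patch is generated once and is independent of the target, defending a new node costs only this constant-time edge masking rather than a fresh per-node optimization.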


Related research

- 06/16/2020: DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder
- 07/16/2021: EGC2: Enhanced Graph Classification with Easy Graph Compression
- 05/31/2021: Adaptive Feature Alignment for Adversarial Training
- 02/19/2020: Indirect Adversarial Attacks via Poisoning Neighbors for Graph Convolutional Networks
- 10/13/2021: Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning
- 11/08/2021: Graph Robustness Benchmark: Benchmarking the Adversarial Robustness of Graph Machine Learning
- 03/16/2022: Provable Adversarial Robustness for Fractional Lp Threat Models
