Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure

02/20/2019
by   Fuli Feng, et al.
Recent efforts show that neural networks are vulnerable to small but intentional perturbations on input features in visual classification tasks. Due to the additional consideration of connections between examples (e.g., articles with a citation link tend to be in the same class), graph neural networks could be even more sensitive to such perturbations, since perturbations from connected examples exacerbate the impact on a target example. Adversarial Training (AT), a dynamic regularization technique, can resist the worst-case perturbations on input features and is a promising choice for improving model robustness and generalization. However, existing AT methods focus on standard classification and are less effective when training models on graphs, since they do not model the impact from connected examples. In this work, we explore adversarial training on graphs, aiming to improve the robustness and generalization of models learned on graphs. We propose Graph Adversarial Training (GAT), which takes the impact from connected examples into account when learning to construct and resist perturbations. We give a general formulation of GAT, which can be seen as a dynamic regularization scheme based on the graph structure. To demonstrate the utility of GAT, we employ it on a state-of-the-art graph neural network model --- Graph Convolutional Network (GCN). We conduct experiments on two citation graphs (Citeseer and Cora) and a knowledge graph (NELL), verifying the effectiveness of GAT, which outperforms normal training on GCN by 4.51% in node classification accuracy. Code will be released upon acceptance.
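The graph adversarial idea described above can be sketched in a few lines: for each node, find the small feature perturbation that most increases the divergence between its prediction and those of its neighbors, then regularize training against that perturbation. The toy linear-softmax model, the graph, and the `eps` step size below are all illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def neighbor_divergence(X, W, edges):
    """Sum of squared prediction differences over connected node pairs."""
    P = softmax(X @ W)
    return sum(0.5 * np.sum((P[i] - P[j]) ** 2) for i, j in edges)

def graph_adversarial_perturbation(X, W, edges, eps=0.05):
    """First-order (FGSM-style) perturbation per node in the direction
    that most increases neighbor divergence -- a sketch of GAT's
    'graph adversarial' direction, not the authors' exact method."""
    P = softmax(X @ W)
    grad = np.zeros_like(X)
    for i, j in edges:
        diff = P[i] - P[j]                         # dL/dP[i] for 0.5*||P[i]-P[j]||^2
        Ji = np.diag(P[i]) - np.outer(P[i], P[i])  # softmax Jacobian at node i
        Jj = np.diag(P[j]) - np.outer(P[j], P[j])  # softmax Jacobian at node j
        grad[i] += W @ (Ji @ diff)                 # chain rule through z = x W
        grad[j] += W @ (Jj @ -diff)
    norms = np.linalg.norm(grad, axis=1, keepdims=True) + 1e-12
    return eps * grad / norms                      # L2-normalized per-node step

# toy citation-style graph: 6 nodes, 4 features, 3 classes (made-up data)
X = rng.normal(size=(6, 4))
W = rng.normal(size=(4, 3))
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

r = graph_adversarial_perturbation(X, W, edges)
clean = neighbor_divergence(X, W, edges)
attacked = neighbor_divergence(X + r, W, edges)
# Training would then minimize: task_loss + beta * neighbor_divergence(X + r, W, edges),
# i.e., a dynamic regularizer that changes with the model's current predictions.
```

Because the perturbation follows the gradient of the neighbor divergence, `attacked` exceeds `clean` for a small `eps`, which is exactly the worst-case behavior the regularizer then trains the model to resist.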

Related research

- Spectral Adversarial Training for Robust Graph Neural Network (11/20/2022)
- Adversarial Training for Graph Neural Networks (06/27/2023)
- MaxUp: A Simple Way to Improve Generalization of Neural Network Training (02/20/2020)
- Batch Virtual Adversarial Training for Graph Convolutional Networks (02/25/2019)
- Certifiable Robustness to Graph Perturbations (10/31/2019)
- Adversarial Attacks on Graph Neural Networks via Meta Learning (02/22/2019)
- Edge Dithering for Robust Adaptive Graph Convolutional Networks (10/21/2019)
