CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph

09/12/2021
by   Xugang Wu, et al.

Graph neural networks (GNNs) exhibit remarkable performance in graph data analysis. However, the robustness of GNN models remains a challenge, so they are not yet reliable enough to be deployed in critical applications. Recent studies demonstrate that GNNs can be easily fooled by adversarial perturbations, especially structural perturbations. This vulnerability is attributed to excessive dependence on structure information for making predictions. To achieve better robustness, it is desirable to build GNN predictions on more comprehensive features. In most cases, graph data offers two views of information: structure information and feature information. In this paper, we propose CoG, a simple yet effective co-training framework that combines these two views to improve robustness. CoG trains sub-models on the feature view and the structure view independently and lets them distill knowledge from each other by adding their most confident unlabeled data to the training set. The orthogonality of the two views diversifies the sub-models, thus enhancing the robustness of their ensemble. We evaluate our framework on three popular datasets, and the results show that CoG significantly improves the robustness of graph models against adversarial attacks without sacrificing their performance on clean data. We also show that CoG retains good robustness when both node features and graph structures are perturbed.
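The co-training loop described in the abstract can be sketched as follows. This is a minimal illustration under assumptions, not the paper's implementation: the toy nearest-centroid sub-model stands in for the actual feature-view and structure-view models, and all names (`CentroidClassifier`, `co_train`, the per-round budget `k`) are hypothetical.

```python
import numpy as np


class CentroidClassifier:
    """Toy stand-in for a per-view sub-model: nearest class centroid."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict_proba(self, X):
        # Softmax over negative distances to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        e = np.exp(-d + d.min(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def predict(self, X):
        return self.classes_[self.predict_proba(X).argmax(axis=1)]


def co_train(X_feat, X_struct, y, labeled, unlabeled, rounds=3, k=2):
    """Two-view co-training: each view pseudo-labels its most confident
    unlabeled nodes, which then join the shared training set."""
    labeled = list(labeled)
    unlabeled = set(unlabeled)
    y = y.copy()  # pseudo-labels are written into unlabeled slots

    for _ in range(rounds):
        m_feat = CentroidClassifier().fit(X_feat[labeled], y[labeled])
        m_struct = CentroidClassifier().fit(X_struct[labeled], y[labeled])
        if not unlabeled:
            break
        idx = np.array(sorted(unlabeled))
        for model, X in ((m_feat, X_feat), (m_struct, X_struct)):
            conf = model.predict_proba(X[idx]).max(axis=1)
            for i in idx[np.argsort(-conf)[:k]]:  # top-k most confident
                if i in unlabeled:  # may have been claimed by the other view
                    y[i] = model.predict(X[i : i + 1])[0]
                    unlabeled.discard(i)
                    labeled.append(i)

    def predict(query_idx):
        # Ensemble: average the two views' class probabilities.
        p = m_feat.predict_proba(X_feat[query_idx])
        p = p + m_struct.predict_proba(X_struct[query_idx])
        return m_feat.classes_[p.argmax(axis=1)]

    return predict
```

Because the two views are trained on different inputs, an attacker perturbing only the graph structure degrades at most one sub-model, and the ensemble can still fall back on the feature view.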


Related research:

- 11/20/2022: Spectral Adversarial Training for Robust Graph Neural Network. "Recent studies demonstrate that Graph Neural Networks (GNNs) are vulnera..."
- 05/20/2020: Graph Structure Learning for Robust Graph Neural Networks. "Graph Neural Networks (GNNs) are powerful tools in representation learni..."
- 06/16/2020: DefenseVGAE: Defending against Adversarial Attacks on Graph Data via a Variational Graph Autoencoder. "Graph neural networks (GNNs) achieve remarkable performance for tasks on..."
- 06/24/2023: Similarity Preserving Adversarial Graph Contrastive Learning. "Recent works demonstrate that GNN models are vulnerable to adversarial a..."
- 06/14/2021: Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs. "Recent studies have exposed that many graph neural networks (GNNs) are s..."
- 10/25/2022: FocusedCleaner: Sanitizing Poisoned Graphs for Robust GNN-based Node Classification. "Recently, a lot of research attention has been devoted to exploring Web ..."
- 01/14/2022: Compact Graph Structure Learning via Mutual Information Compression. "Graph Structure Learning (GSL) recently has attracted considerable atten..."
