Improving Robustness of Graph Neural Networks with Heterophily-Inspired Designs

06/14/2021
by Jiong Zhu, et al.

Recent studies have shown that many graph neural networks (GNNs) are sensitive to adversarial attacks and can suffer from performance loss if the graph structure is intentionally perturbed. A different line of research has shown that many GNN architectures implicitly assume that the underlying graph displays homophily, i.e., connected nodes are more likely to have similar features and class labels, and perform poorly when this assumption does not hold. In this work, we formalize the relation between these two seemingly different issues. We theoretically show that in the standard scenario in which node features exhibit homophily, impactful structural attacks always lead to increased levels of heterophily. Then, inspired by GNN architectures that target heterophily, we present two designs – (i) separate aggregators for ego- and neighbor-embeddings, and (ii) a reduced scope of aggregation – that can significantly improve the robustness of GNNs. Our extensive empirical evaluations show that GNNs adopting merely these two designs can achieve significantly improved robustness compared to the best-performing unvaccinated model, with a 24.99% gain in performance under attacks, while having smaller computational overhead than existing defense mechanisms. Furthermore, these designs can be readily combined with explicit defense mechanisms to yield state-of-the-art robustness, with up to an 18.33% gain in performance under attacks compared to the best-performing vaccinated model.
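To make design (i) concrete, below is a minimal NumPy sketch of a single message-passing layer that keeps each node's own (ego) embedding and the aggregated neighbor embedding in separate channels rather than mixing them. The function and parameter names (heterophily_aware_layer, W_ego, W_neigh) are illustrative assumptions, not the paper's implementation, and restricting aggregation to the 1-hop neighborhood is only a loose stand-in for design (ii), the reduced scope of aggregation.

```python
import numpy as np

def heterophily_aware_layer(X, A, W_ego, W_neigh):
    """Illustrative layer with separate ego- and neighbor-aggregators.

    X       : (n, d) node feature/embedding matrix
    A       : (n, n) binary adjacency matrix without self-loops
    W_ego   : (d, h) weights applied to each node's own embedding
    W_neigh : (d, h) weights applied to the aggregated neighbor embedding
    """
    # Mean-aggregate over 1-hop neighbors only (no self-loops), so the ego
    # embedding is never averaged together with potentially perturbed edges.
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    neigh_mean = (A @ X) / deg

    # Keep the two embeddings in separate channels via concatenation rather
    # than summing them, which is the core of design (i).
    return np.concatenate([X @ W_ego, neigh_mean @ W_neigh], axis=1)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, h = 5, 8, 4
    A = (rng.random((n, n)) < 0.3).astype(float)
    np.fill_diagonal(A, 0)          # no self-loops: ego stays separate
    A = np.maximum(A, A.T)          # symmetrize for an undirected graph
    X = rng.standard_normal((n, d))
    out = heterophily_aware_layer(X, A, rng.standard_normal((d, h)),
                                  rng.standard_normal((d, h)))
    print(out.shape)                # (5, 8): ego and neighbor channels concatenated
```

The intuition follows the abstract's argument: since impactful structural attacks tend to insert heterophilous edges, keeping the unperturbed ego embedding in its own channel limits how much those adversarial edges can corrupt a node's representation.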


Related research

01/30/2022
GARNET: Reduced-Rank Topology Learning for Robust and Scalable Graph Neural Networks
Graph neural networks (GNNs) have been increasingly deployed in various ...

06/15/2020
GNNGuard: Defending Graph Neural Networks against Adversarial Attacks
Deep learning methods for graphs achieve remarkable performance on many ...

11/17/2020
Design Space for Graph Neural Networks
The rapid evolution of Graph Neural Networks (GNNs) has led to a growing...

10/26/2021
Robustness of Graph Neural Networks at Scale
Graph Neural Networks (GNNs) are increasingly important given their popu...

09/12/2021
CoG: a Two-View Co-training Framework for Defending Adversarial Attacks on Graph
Graph neural networks exhibit remarkable performance in graph data analy...

03/23/2021
Spatio-Temporal Sparsification for General Robust Graph Convolution Networks
Graph Neural Networks (GNNs) have attracted increasing attention due to ...

05/27/2022
EvenNet: Ignoring Odd-Hop Neighbors Improves Robustness of Graph Neural Networks
Graph Neural Networks (GNNs) have received extensive research attention ...