
Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks

09/13/2022
by Hussain Hussain, et al.

We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness. These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender. We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions. For example, an attacker can compromise the fairness of GNN-based node classification by injecting adversarial links between nodes belonging to opposite subgroups and opposite class labels. Our experiments on empirical datasets demonstrate that adversarial fairness attacks can significantly degrade the fairness of GNN predictions (attacks are effective) with a low perturbation rate (attacks are efficient) and without a significant drop in accuracy (attacks are deceptive). This work demonstrates the vulnerability of GNN models to adversarial fairness attacks. We hope our findings raise awareness about this issue in our community and lay a foundation for the future development of GNN models that are more robust to such attacks.
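The inter-group, inter-class link injection heuristic described in the abstract can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' implementation: the function name inject_inter_group_links, the plain NumPy adjacency representation, and the uniform sampling of candidate pairs are assumptions made for clarity.

import numpy as np

def inject_inter_group_links(adj, labels, sensitive, budget, seed=0):
    """Add `budget` undirected edges between nodes that differ in both
    sensitive attribute and class label, and return the perturbed copy.

    adj       : (n, n) binary adjacency matrix (NumPy array)
    labels    : length-n array of class labels
    sensitive : length-n array of sensitive-attribute values (e.g., 0/1)
    budget    : number of adversarial edges to inject (perturbation budget)
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    perturbed = adj.copy()

    # Candidate pairs: opposite subgroup AND opposite class label,
    # restricted to pairs that are not already connected.
    candidates = [
        (i, j)
        for i in range(n)
        for j in range(i + 1, n)
        if sensitive[i] != sensitive[j]
        and labels[i] != labels[j]
        and perturbed[i, j] == 0
    ]

    # Pick `budget` candidate pairs at random (a real attack would rank them).
    for k in rng.permutation(len(candidates))[:budget]:
        i, j = candidates[k]
        perturbed[i, j] = perturbed[j, i] = 1
    return perturbed

In practice an attacker would likely rank candidate pairs by their estimated effect on a fairness metric such as statistical parity difference rather than sampling them uniformly; the sketch only captures which node pairs the attack targets and how few edges (the perturbation budget) it needs.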

06/12/2021

TDGIA: Effective Injection Attacks on Graph Neural Networks

Graph Neural Networks (GNNs) have achieved promising performance in vari...
07/10/2022

On Graph Neural Network Fairness in the Presence of Heterophilous Neighborhoods

We study the task of node classification for graph neural networks (GNNs...
08/19/2021

EqGNN: Equalized Node Opportunity in Graphs

Graph neural networks (GNNs) have been widely used for supervised learni...
07/23/2021

Structack: Structure-based Adversarial Attacks on Graph Neural Networks

Recent work has shown that graph neural networks (GNNs) are vulnerable t...
09/28/2020

Graph Adversarial Networks: Protecting Information against Adversarial Attacks

We study the problem of protecting information when learning with graph ...
04/08/2021

Explainability-based Backdoor Attacks Against Graph Neural Networks

Backdoor attacks represent a serious threat to neural network models. A ...
10/24/2020

Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization

Graph neural networks (GNNs) have been widely used to analyze the graph-...