Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning

10/13/2021
by Jinyin Chen, et al.

Graph neural network (GNN) models have achieved great success in graph representation learning. However, when large-scale private data are collected on the user side, a GNN model may fail to reach its full performance without rich features and complete adjacency relationships. To address this problem, vertical federated learning (VFL) has been proposed to protect local data while collaboratively training a global model. For graph-structured data, it is therefore natural to build a VFL framework with GNN models. However, GNN models are known to be vulnerable to adversarial attacks, and whether this vulnerability carries over to VFL has not been studied. In this paper, we study the security of GNN-based VFL (GVFL), i.e., its robustness against adversarial attacks. Further, we propose an adversarial attack method named Graph-Fraudster. It generates adversarial perturbations based on noise-added global node embeddings, obtained via GVFL's privacy leakage, and the gradient of pairwise nodes. First, it steals the global node embeddings and sets up a shadow server model as the attack generator. Second, noise is added to the node embeddings to confuse the shadow server model. Finally, the gradient of pairwise nodes is used to generate attacks under the guidance of the noise-added node embeddings. To the best of our knowledge, this is the first study of adversarial attacks on GVFL. Extensive experiments on five benchmark datasets demonstrate that Graph-Fraudster outperforms three possible baselines in GVFL. Furthermore, Graph-Fraudster remains a threat to GVFL even when two possible defense mechanisms are applied. This paper reveals that GVFL is vulnerable to adversarial attacks in much the same way as centralized GNN models.
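As a concrete illustration of the three stages described above, below is a minimal PyTorch sketch of the attack loop. It assumes a single malicious participant, an already-fitted shadow server model, and already-stolen global node embeddings (i.e., Stage 1 is done before the call); all identifiers (graph_fraudster_attack, shadow_server, local_gnn) and the KL-based pairwise guidance loss are illustrative choices, not the paper's published implementation.

```python
import torch
import torch.nn.functional as F

def graph_fraudster_attack(stolen_emb, shadow_server, local_gnn, adj,
                           x_local, noise_scale=0.1, step=0.01, iters=20):
    """Illustrative sketch of Graph-Fraudster's stages (hypothetical API).

    stolen_emb    -- global node embeddings stolen via GVFL's privacy leakage
    shadow_server -- shadow model already trained to mimic the real server
    local_gnn     -- the malicious participant's local GNN model
    adj           -- adjacency matrix of the attacker's local graph view
    x_local       -- the malicious participant's local node features
    """
    # Stage 2: add noise to the stolen embeddings to confuse the shadow
    # server model; its outputs on these serve as the attack's guidance.
    noisy_emb = stolen_emb + noise_scale * torch.randn_like(stolen_emb)
    with torch.no_grad():
        guidance = F.softmax(shadow_server(noisy_emb), dim=-1)

    # Stage 3: iteratively perturb the local features so that the uploaded
    # embedding drives the (shadow) server toward the noisy guidance,
    # following the sign of the node-wise gradient.
    x_adv = x_local.clone().detach().requires_grad_(True)
    for _ in range(iters):
        emb = local_gnn(x_adv, adj)          # embedding the attacker uploads
        log_pred = F.log_softmax(shadow_server(emb), dim=-1)
        loss = F.kl_div(log_pred, guidance, reduction="batchmean")
        (grad,) = torch.autograd.grad(loss, x_adv)
        x_adv = (x_adv - step * grad.sign()).detach().requires_grad_(True)
    return x_adv.detach()
```

In the full GVFL setting the server combines embeddings from several participants; the single-participant case above is only meant to keep the sketch short.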

Related research

05/09/2019 · Adversarial Defense Framework for Graph Neural Network
Graph neural network (GNN), as a powerful representation learning model ...

05/20/2020 · Graph Structure Learning for Robust Graph Neural Networks
Graph Neural Networks (GNNs) are powerful tools in representation learni...

08/03/2022 · Robust Graph Neural Networks using Weighted Graph Laplacian
Graph neural network (GNN) is achieving remarkable performances in a var...

04/20/2022 · GUARD: Graph Universal Adversarial Defense
Recently, graph convolutional networks (GCNs) have shown to be vulnerabl...

02/24/2023 · HyperAttack: Multi-Gradient-Guided White-box Adversarial Structure Attack of Hypergraph Neural Networks
Hypergraph neural networks (HGNN) have shown superior performance in var...

09/04/2018 · Adversarial Attacks on Node Embeddings
The goal of network representation learning is to learn low-dimensional ...

07/14/2023 · On the Sensitivity of Deep Load Disaggregation to Adversarial Attacks
Non-intrusive Load Monitoring (NILM) algorithms, commonly referred to as...
