Bkd-FedGNN: A Benchmark for Classification Backdoor Attacks on Federated Graph Neural Network

06/17/2023
by   Fan Liu, et al.

Federated Graph Neural Network (FedGNN) has recently emerged as a rapidly growing research topic, as it integrates the strengths of graph neural networks and federated learning to enable advanced machine learning applications without direct access to sensitive data. Despite its advantages, the distributed nature of FedGNN introduces additional vulnerabilities, particularly backdoor attacks stemming from malicious participants. Although graph backdoor attacks have been explored, the compounded complexity introduced by the combination of GNNs and federated learning has hindered a comprehensive understanding of these attacks, as existing research lacks extensive benchmark coverage and in-depth analysis of critical factors. To address these limitations, we propose Bkd-FedGNN, a benchmark for backdoor attacks on FedGNN. Specifically, Bkd-FedGNN decomposes the graph backdoor attack into trigger generation and trigger injection steps, and extends the attack to the node-level federated setting, resulting in a unified framework that covers both node-level and graph-level classification tasks. Moreover, we thoroughly investigate the impact of multiple critical factors in backdoor attacks on FedGNN. These factors are categorized into global-level and local-level factors, including data distribution, the number of malicious attackers, attack time, overlapping rate, trigger size, trigger type, trigger position, and poisoning rate. Finally, we conduct comprehensive evaluations on 13 benchmark datasets and 13 critical factors, comprising 1,725 experimental configurations for node-level and graph-level tasks from six domains. These experiments encompass over 8,000 individual tests, allowing us to provide a thorough evaluation and insightful observations that advance our understanding of backdoor attacks on FedGNN. The Bkd-FedGNN benchmark is publicly available at https://github.com/usail-hkust/BkdFedGCN.
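The two-step decomposition described above (trigger generation, then trigger injection) can be sketched as follows. This is a minimal illustrative example, not the benchmark's actual implementation: it assumes graphs are represented as dense NumPy adjacency matrices, uses a fixed fully connected subgraph as the trigger, and the function names `generate_trigger` and `inject_trigger` are hypothetical.

```python
import numpy as np

def generate_trigger(size: int) -> np.ndarray:
    # Step 1 (trigger generation): build a fixed, fully connected
    # subgraph ("clique" trigger) over `size` nodes.
    # Hypothetical helper; real trigger types also include random
    # or learned subgraph patterns.
    trigger = np.ones((size, size), dtype=int)
    np.fill_diagonal(trigger, 0)  # no self-loops
    return trigger

def inject_trigger(adj, labels, trigger, target_label, poison_idx):
    # Step 2 (trigger injection): overwrite the edges among the
    # chosen victim nodes with the trigger pattern, and relabel
    # those nodes with the attacker's target class.
    adj = adj.copy()
    labels = labels.copy()
    ix = np.ix_(poison_idx, poison_idx)
    adj[ix] = trigger
    labels[poison_idx] = target_label
    return adj, labels

# Toy usage: poison 3 of 6 nodes in a random undirected graph.
rng = np.random.default_rng(0)
adj = np.triu((rng.random((6, 6)) < 0.3).astype(int), 1)
adj = adj + adj.T
labels = np.zeros(6, dtype=int)
poisoned_adj, poisoned_labels = inject_trigger(
    adj, labels, generate_trigger(3), target_label=1, poison_idx=[0, 2, 4]
)
```

In a federated node-level setting, a malicious client would apply this injection only to its own local subgraph before training, which is what makes the attack hard to detect from the server side.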


