Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario

Federated learning (FL) allows participants to collaboratively train machine and deep learning models while protecting data privacy. However, the FL paradigm still presents drawbacks affecting its trustworthiness, since malicious participants could launch adversarial attacks against the training process. Related work has studied the robustness of horizontal FL scenarios under different attacks, but there is a lack of work evaluating the robustness of decentralized vertical FL and comparing it with horizontal FL architectures affected by adversarial attacks. Thus, this work proposes three decentralized FL architectures, one for horizontal and two for vertical scenarios, namely HoriChain, VertiChain, and VertiComb. These architectures present different neural networks and training protocols suitable for horizontal and vertical scenarios. Then, a decentralized, privacy-preserving, and federated use case with non-IID data to classify handwritten digits is deployed to evaluate the performance of the three architectures. Finally, a set of experiments computes and compares the robustness of the proposed architectures under two adversarial attacks: data poisoning based on image watermarks, and gradient poisoning. The experiments show that, even though particular configurations of both attacks can destroy the classification performance of the architectures, HoriChain is the most robust one.
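To illustrate the kind of watermark-based data poisoning evaluated in the experiments, the sketch below stamps a small pixel pattern onto a fraction of training images and relabels them to an attacker-chosen class. This is a minimal, hypothetical sketch: the function names, watermark shape, and poisoning fraction are illustrative assumptions, not the paper's exact attack configuration.

```python
import numpy as np

def watermark_poison(image, target_label, size=4, value=1.0):
    """Stamp a small square watermark in the bottom-right corner of a
    single image and relabel the sample to the attacker's target class.
    (Illustrative attack primitive, not the paper's exact watermark.)"""
    poisoned = image.copy()
    poisoned[-size:, -size:] = value  # overwrite corner pixels with the watermark
    return poisoned, target_label

def poison_dataset(images, labels, target_label, fraction=0.1, seed=0):
    """Apply the watermark to a random fraction of a local digit dataset,
    as a malicious FL participant might before training."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(fraction * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i], labels[i] = watermark_poison(images[i], target_label)
    return images, labels
```

A participant running this before each local training round biases the shared model toward associating the watermark pattern with the target class, which is the mechanism the robustness experiments probe.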


