Robust Decentralized Learning for Neural Networks

09/18/2020
by Yao Zhou, et al.

In decentralized learning, data is distributed among local clients that collaboratively train a shared prediction model using secure aggregation. To preserve the privacy of the clients, modern decentralized learning paradigms require each client to maintain a private local training data set and to upload only its summarized model updates to the server. However, this can quickly lead to a degenerate model and a collapse in performance when corrupted updates (e.g., adversarial manipulations) are aggregated at the server. In this work, we present a robust decentralized learning framework, Decent_BVA, which uses bias-variance based adversarial training via asymmetrical communication between each client and the server. The experiments are conducted on neural networks with cross-entropy loss. Nevertheless, the proposed framework allows the use of various classification loss functions (e.g., cross-entropy loss, mean squared error loss) for which the gradients of the bias and variance are tractable to estimate from the local clients' models. Any gradient-based adversarial training strategy can then be applied by taking the bias-variance oriented adversarial examples into consideration, e.g., the bias-variance based FGSM and PGD proposed in this paper. Experiments show that Decent_BVA is robust to classical adversarial attacks even when the level of corruption is high, while remaining competitive with conventional decentralized learning in terms of model accuracy and efficiency.
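The abstract names bias-variance based variants of FGSM and PGD but does not spell out the attack steps. As a rough illustration only, the sketch below shows the standard FGSM and PGD perturbations for a toy linear softmax classifier with cross-entropy loss; in Decent_BVA the plain loss gradient would presumably be replaced by the gradients of the bias and variance estimated from the clients' models. The function names (`fgsm`, `pgd`) and the toy model are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a logit vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def fgsm(x, y_onehot, W, eps):
    """One FGSM step for a linear softmax classifier.

    For logits W @ x with cross-entropy loss, the gradient of the
    loss w.r.t. the input x is W.T @ (p - y). Decent_BVA would swap
    this plain loss gradient for bias/variance gradients estimated
    from the local clients' models (not reproduced here).
    """
    p = softmax(W @ x)
    grad_x = W.T @ (p - y_onehot)
    return x + eps * np.sign(grad_x)

def pgd(x0, y_onehot, W, eps, alpha, steps):
    """Multi-step PGD: repeated signed-gradient ascent steps,
    projected back into the L-infinity ball of radius eps around x0."""
    x = x0.copy()
    for _ in range(steps):
        p = softmax(W @ x)
        grad_x = W.T @ (p - y_onehot)
        x = x + alpha * np.sign(grad_x)
        x = np.clip(x, x0 - eps, x0 + eps)  # projection step
    return x
```

For this convex toy model, both attacks are guaranteed to increase the cross-entropy loss of the perturbed input, which is the property adversarial training exploits when it retrains on such examples.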

Related research

Federated Adversarial Learning: A Framework with Convergence Analysis (08/07/2022)
Vulnerability Under Adversarial Machine Learning: Bias or Variance? (08/01/2020)
Shielding Collaborative Learning: Mitigating Poisoning Attacks through Client-Side Detection (10/29/2019)
Fully Privacy-Preserving Federated Representation Learning via Secure Embedding Aggregation (06/18/2022)
Certified Federated Adversarial Training (12/20/2021)
Federated Adversarial Training with Transformers (06/05/2022)
