FADE: Enabling Large-Scale Federated Adversarial Training on Resource-Constrained Edge Devices

09/08/2022
by Minxue Tang, et al.

Adversarial Training (AT) has proven to be an effective method for introducing strong adversarial robustness into deep neural networks. However, the high computational cost of AT prohibits deploying large-scale AT on resource-constrained edge devices (e.g., devices with limited computing power and small memory footprints) in Federated Learning (FL) applications. Few previous studies have attempted to tackle both constraints in FL simultaneously. In this paper, we propose a new framework named Federated Adversarial Decoupled Learning (FADE) to enable AT on resource-constrained edge devices in FL. FADE reduces computation and memory usage by applying Decoupled Greedy Learning (DGL) to federated adversarial training, so that each client only needs to perform AT on a small module of the entire model in each communication round. In addition, we improve vanilla DGL by adding an auxiliary weight decay to alleviate objective inconsistency and achieve better performance. FADE offers theoretical guarantees on adversarial robustness and convergence. The experimental results also show that FADE can significantly reduce the computing resources consumed by AT while maintaining almost the same accuracy and robustness as fully joint training.
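To make the decoupled client update concrete, below is a minimal PyTorch-style sketch of how one communication round could look from a client's perspective: the client adversarially trains only a single module of the model together with a small auxiliary head, and the abstract's auxiliary weight decay is represented here as an extra L2 penalty on that head. The function names (fade_client_update, pgd_attack), the hyperparameters, and the choice to train the first module on raw inputs are illustrative assumptions, not the authors' implementation.

# A minimal sketch of one FADE-like client update (illustrative only).
# For simplicity the client trains the first module, whose input is the raw image;
# deeper modules would instead consume features produced by earlier modules.

import torch
import torch.nn.functional as F


def pgd_attack(module, aux_head, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft PGD adversarial examples against the local (module + auxiliary head) objective."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(aux_head(module(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
    return x_adv


def fade_client_update(module, aux_head, loader, lr=0.01, aux_weight_decay=5e-4):
    """One local round: adversarial training of a single module through its auxiliary head.

    The extra weight decay on the auxiliary head stands in for the abstract's
    'auxiliary weight decay' that alleviates objective inconsistency.
    """
    optimizer = torch.optim.SGD(
        [
            {"params": module.parameters(), "weight_decay": 0.0},
            {"params": aux_head.parameters(), "weight_decay": aux_weight_decay},
        ],
        lr=lr,
        momentum=0.9,
    )
    module.train()
    aux_head.train()
    for x, y in loader:
        x_adv = pgd_attack(module, aux_head, x, y)           # adversarial examples for this module
        optimizer.zero_grad()
        loss = F.cross_entropy(aux_head(module(x_adv)), y)   # local greedy objective
        loss.backward()
        optimizer.step()
    # Only this module's (and its auxiliary head's) parameters are sent back for aggregation.
    return module.state_dict(), aux_head.state_dict()

In the full framework, the server would aggregate these module updates across clients and coordinate which module each client trains in a given round; that orchestration is omitted from this sketch.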


Related research

11/08/2022 · Centaur: Federated Learning for Constrained Edge Devices
Federated learning (FL) on deep neural networks facilitates new applicat...

10/11/2021 · Partial Variable Training for Efficient On-Device Federated Learning
This paper aims to address the major challenges of Federated Learning (F...

06/18/2021 · Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning
Federated learning (FL) emerges as a popular distributed learning schema...

11/20/2022 · FedDCT: Federated Learning of Large Convolutional Neural Networks on Resource Constrained Devices using Divide and Co-Training
We introduce FedDCT, a novel distributed learning paradigm that enables ...

05/26/2023 · Aggregating Capacity in FL through Successive Layer Training for Computationally-Constrained Devices
Federated learning (FL) is usually performed on resource-constrained edg...

07/28/2020 · Group Knowledge Transfer: Collaborative Training of Large CNNs on the Edge
Scaling up the convolutional neural network (CNN) size (e.g., width, dep...

06/02/2023 · Resource-Efficient Federated Hyperdimensional Computing
In conventional federated hyperdimensional computing (HDC), training lar...
