Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning

06/18/2021
by   Junyuan Hong, et al.

Federated learning (FL) has emerged as a popular distributed learning scheme that learns a model from a set of participating users without requiring raw data to be shared. One major challenge of FL comes from heterogeneity in users, who may have distributionally different (or non-iid) data and varying computation resources. Just as in centralized learning, FL users also desire model robustness against malicious attackers at test time. Whereas adversarial training (AT) provides a sound solution for centralized learning, extending its usage to FL users imposes significant challenges, as many users have very limited training data and tight computational budgets, making the data-hungry and costly AT unaffordable. In this paper, we study a novel learning setting that propagates adversarial robustness from high-resource users that can afford AT to low-resource users that cannot, during the FL process. We show that existing FL techniques cannot effectively propagate adversarial robustness among non-iid users, and we propose a simple yet effective propagation approach that transfers robustness through carefully designed batch-normalization statistics. We demonstrate the rationality and effectiveness of our method through extensive experiments. In particular, the proposed method is shown to grant FL remarkable robustness even when only a small portion of users can afford AT during learning. Code will be published upon acceptance.
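To make the core idea concrete, the sketch below shows one plausible way robustness statistics could be propagated during server-side aggregation: the global model's batch-normalization running statistics are averaged only over clients that performed adversarial training, while other parameters are aggregated as usual. This is a minimal illustration under our own assumptions (the function name `propagate_bn_stats` and the plain-dict model representation are hypothetical); the paper's "carefully designed" BN statistics are not specified in the abstract.

```python
def propagate_bn_stats(global_state, at_client_states):
    """Hypothetical sketch of robustness propagation via BN statistics.

    global_state: dict mapping parameter names to lists of floats
                  (a toy stand-in for a model state dict).
    at_client_states: state dicts from clients that ran adversarial
                  training (AT); only their BN running statistics
                  are averaged into the global model.
    """
    merged = dict(global_state)
    for key in merged:
        # BatchNorm running statistics carry the activation distribution
        # shaped by adversarial examples; propagate them from AT clients.
        if "running_mean" in key or "running_var" in key:
            vals = [client[key] for client in at_client_states]
            merged[key] = [sum(col) / len(vals) for col in zip(*vals)]
    return merged


# Toy usage: two AT clients contribute their BN statistics.
global_state = {
    "conv.weight": [1.0],
    "bn.running_mean": [0.0, 0.0],
    "bn.running_var": [1.0, 1.0],
}
at_clients = [
    {"bn.running_mean": [1.0, 3.0], "bn.running_var": [2.0, 4.0]},
    {"bn.running_mean": [3.0, 1.0], "bn.running_var": [4.0, 2.0]},
]
new_state = propagate_bn_stats(global_state, at_clients)
print(new_state["bn.running_mean"])  # [2.0, 2.0]
```

In a real FL round this step would sit alongside standard weight aggregation (e.g. FedAvg over all clients), with only the BN statistics drawn from the AT subset.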

