FedCom: A Byzantine-Robust Local Model Aggregation Rule Using Data Commitment for Federated Learning

04/16/2021
by   Bo Zhao, et al.

Federated learning (FL) is a promising privacy-preserving distributed machine learning methodology that allows multiple clients (i.e., workers) to collaboratively train statistical models without disclosing their private training data. Because data remain localized and the on-device training process is uninspected, Byzantine workers may launch data poisoning and model poisoning attacks, which can seriously degrade model performance or prevent the model from converging. Most existing Byzantine-robust FL schemes are either ineffective against several advanced poisoning attacks or require centralizing a public validation dataset, which is intractable in FL. Moreover, to the best of our knowledge, none of the existing Byzantine-robust distributed learning methods performs well when data are Non-Independent and Identically Distributed (Non-IID) across clients. To address these issues, we propose FedCom, a novel Byzantine-robust federated learning framework that incorporates the idea of commitment from cryptography and tolerates both data poisoning and model poisoning under practical Non-IID data partitions. Specifically, in FedCom, each client first makes a commitment to its local training data distribution. We then identify poisoned datasets by comparing the Wasserstein distances among the commitments submitted by different clients. Furthermore, we distinguish abnormal local model updates from benign ones by testing each local model's behavior on its corresponding data commitment. We conduct an extensive performance evaluation of FedCom. The results demonstrate its effectiveness and its superior performance, compared to state-of-the-art Byzantine-robust schemes, in defending against typical data poisoning and model poisoning attacks under practical Non-IID data distributions.
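The screening step described above can be illustrated with a minimal sketch. The abstract states that poisoned datasets are identified by comparing Wasserstein distances among clients' commitments; the sketch below models each commitment as a 1-D sample of values and flags clients whose average distance to the others is far above the median. The commitment format, the scoring rule, and the `ratio` cutoff are all assumptions for illustration, not the paper's actual construction.

```python
import numpy as np

def w1_distance(u, v):
    """Wasserstein-1 distance between two equal-size 1-D empirical samples
    (for sorted equal-size samples, W1 is the mean absolute difference)."""
    return float(np.mean(np.abs(np.sort(u) - np.sort(v))))

def flag_suspicious(commitments, ratio=2.0):
    """Flag clients whose mean pairwise W1 distance to all other commitments
    exceeds ratio * median score (ratio is a hypothetical tuning knob)."""
    n = len(commitments)
    scores = [
        np.mean([w1_distance(commitments[i], commitments[j])
                 for j in range(n) if j != i])
        for i in range(n)
    ]
    cutoff = ratio * np.median(scores)
    return [i for i, s in enumerate(scores) if s > cutoff]

# Four benign clients drawn from the same distribution, one poisoned outlier.
rng = np.random.default_rng(0)
commitments = [rng.normal(0.0, 1.0, 200) for _ in range(4)]
commitments.append(rng.normal(5.0, 1.0, 200))  # poisoned client
print(flag_suspicious(commitments))  # prints [4]
```

Because benign clients' empirical distributions stay mutually close while a poisoned one is far from all of them, its mean distance score stands out even without a validation dataset, which matches the abstract's motivation for avoiding centralized validation data.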


Related research

12/13/2022 · AFLGuard: Byzantine-robust Asynchronous Federated Learning
Federated learning (FL) is an emerging machine learning paradigm, in whi...

10/29/2022 · Security-Preserving Federated Learning via Byzantine-Sensitive Triplet Distance
While being an effective framework of learning a shared model across mul...

09/07/2023 · Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy
Federated learning (FL) is designed to preserve data privacy during mode...

08/07/2023 · A Four-Pronged Defense Against Byzantine Attacks in Federated Learning
Federated learning (FL) is a nascent distributed learning paradigm to tr...

12/29/2021 · Challenges and Approaches for Mitigating Byzantine Attacks in Federated Learning
Recently emerged federated learning (FL) is an attractive distributed le...

03/09/2023 · FedREP: A Byzantine-Robust, Communication-Efficient and Privacy-Preserving Framework for Federated Learning
Federated learning (FL) has recently become a hot research topic, in whi...

08/24/2023 · A Huber Loss Minimization Approach to Byzantine Robust Federated Learning
Federated learning systems are susceptible to adversarial attacks. To co...
