FedVal: Different good or different bad in federated learning

06/06/2023
by   Viktor Valadi, et al.

Federated learning (FL) systems are susceptible to attacks from malicious actors who may attempt to corrupt the trained model through various poisoning attacks. FL also poses new challenges in addressing group bias, such as ensuring fair performance for different demographic groups. Traditional methods for addressing such biases require centralized access to the data, which FL systems do not have. In this paper, we present a novel approach, FedVal, for both robustness and fairness that does not require any additional information from clients that could raise privacy concerns and consequently compromise the integrity of the FL system. To this end, we propose an innovative score function based on a server-side validation method that assesses client updates and determines the optimal aggregation balance between locally trained models. Our research shows that this approach not only provides solid protection against poisoning attacks but can also be used to reduce group bias and subsequently promote fairness, while maintaining the system's capability for differential privacy. Extensive experiments on the CIFAR-10, FEMNIST, and PUMS ACSIncome datasets in different configurations demonstrate the effectiveness of our method, resulting in state-of-the-art performance. We have proven robustness in situations where 80% of participating clients are malicious. Additionally, we have shown a significant increase in accuracy for underrepresented labels (from a 32% baseline) and for underrepresented features (from a 19% baseline).
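The core idea, server-side validation-based aggregation, can be sketched as follows. This is a minimal illustration, not the paper's exact score function: it assumes client models are flat parameter vectors, a server-held validation callable that returns a scalar score (higher is better), and a softmax over scores as the weighting rule, which is an illustrative choice of "aggregation balance".

```python
import numpy as np

def fedval_aggregate(client_models, validate, temperature=1.0):
    """Aggregate client updates weighted by server-side validation scores.

    client_models: list of 1-D parameter arrays of equal length.
    validate: callable mapping a model to a scalar quality score
              (e.g. validation accuracy); higher is better.
    temperature: softness of the weighting; lower values concentrate
                 weight on the best-scoring clients.

    Poorly scoring (possibly poisoned or biased) updates receive
    exponentially less influence on the aggregated model.
    """
    scores = np.array([validate(m) for m in client_models])
    logits = scores / temperature
    logits -= logits.max()            # shift for numerical stability
    weights = np.exp(logits)
    weights /= weights.sum()          # normalize to a convex combination
    return weights @ np.stack(client_models)
```

As a toy usage example, two honest clients near a target model and one grossly poisoned client: the poisoned update's validation score is far worse, so its aggregation weight is negligible and the result stays close to the honest average.

```python
target = np.zeros(3)
honest = [np.array([0.1, 0.0, 0.0]), np.array([0.0, 0.1, 0.0])]
poisoned = np.array([10.0, 10.0, 10.0])
score = lambda m: -np.linalg.norm(m - target)   # hypothetical validation score
agg = fedval_aggregate(honest + [poisoned], score)
```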


Related research

05/23/2022  PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning
09/08/2022  Uncovering the Connection Between Differential Privacy and Certified Robustness of Federated Learning against Poisoning Attacks
09/01/2023  Advancing Personalized Federated Learning: Group Privacy, Fairness, and Beyond
10/22/2021  PRECAD: Privacy-Preserving and Robust Federated Learning via Crypto-Aided Differential Privacy
01/24/2022  Towards Multi-Objective Statistically Fair Federated Learning
09/06/2021  Fair Federated Learning for Heterogeneous Face Data
06/18/2021  Federated Robustness Propagation: Sharing Adversarial Robustness in Federated Learning
