Mitigating Group Bias in Federated Learning for Heterogeneous Devices

09/13/2023
by Khotso Selialia, et al.

Federated learning (FL) is emerging as a privacy-preserving approach to model training in distributed edge applications. However, most edge deployments are heterogeneous: their sensing capabilities and environments vary across deployments. This edge heterogeneity violates the assumption that local data across clients are independent and identically distributed (IID) and produces biased global models, i.e., models that contribute to unfair decision-making and discrimination against particular communities or groups. Existing bias-mitigation techniques focus only on bias arising from label heterogeneity in non-IID data, without accounting for domain variations due to feature heterogeneity, and do not address the global group-fairness property. Our work proposes a group-fair FL framework that minimizes group bias while preserving privacy and adding no resource-utilization overhead. Our main idea is to leverage average conditional probabilities to compute cross-domain group importance weights from heterogeneous training data, and to use these weights in a modified multiplicative weights update method that optimizes the performance of the worst-performing group. Additionally, we propose regularization techniques to minimize the difference between the worst- and best-performing groups, with a thresholding mechanism that strikes a balance between bias reduction and group-performance degradation. Our evaluation on human emotion recognition and image classification benchmarks assesses the fair decision-making of our framework in real-world heterogeneous settings.
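The reweighting idea above can be sketched as a multiplicative-weights update over per-group importance weights, gated by a fairness-gap threshold. This is a minimal illustration under stated assumptions, not the authors' implementation: the function name `mwu_group_weights`, the step size `eta`, the `gap_threshold` value, and the toy per-group losses are all hypothetical choices for demonstration.

```python
import numpy as np

def mwu_group_weights(weights, group_losses, eta=0.1, gap_threshold=0.05):
    """One round of a multiplicative-weights-style update over group weights.

    If the gap between the worst- and best-performing groups exceeds
    gap_threshold, groups with higher loss receive exponentially larger
    weight, so the next training round emphasizes the worst-performing
    group; otherwise the weights are left unchanged (the thresholding
    step, which avoids degrading well-performing groups unnecessarily).
    """
    if group_losses.max() - group_losses.min() <= gap_threshold:
        return weights  # fairness gap is small enough: no reweighting
    new_w = weights * np.exp(eta * group_losses)
    return new_w / new_w.sum()  # renormalize to a distribution

# Toy run with three demographic groups; group 1 performs worst,
# so its importance weight grows over the rounds.
weights = np.full(3, 1.0 / 3.0)
losses = np.array([0.2, 0.9, 0.4])
for _ in range(10):
    weights = mwu_group_weights(weights, losses)
```

In a federated setting, the server would apply such an update between rounds using aggregated per-group losses, so no raw client data is shared; the sketch above keeps only that server-side step.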


Related research:

- 05/23/2022, PrivFairFL: Privacy-Preserving Group Fairness in Federated Learning. "Group fairness ensures that the outcome of machine learning (ML) based d..."
- 04/09/2022, Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning. "Federated learning (FL) is a promising strategy for performing privacy-p..."
- 05/17/2023, Mitigating Group Bias in Federated Learning: Beyond Local Fairness. "The issue of group fairness in machine learning models, where certain su..."
- 01/31/2022, Heterogeneous Federated Learning via Grouped Sequential-to-Parallel Training. "Federated learning (FL) is a rapidly growing privacy-preserving collabor..."
- 09/06/2021, Fair Federated Learning for Heterogeneous Face Data. "We consider the problem of achieving fair classification in Federated Le..."
- 09/11/2019, HHHFL: Hierarchical Heterogeneous Horizontal Federated Learning for Electroencephalography. "Electroencephalography (EEG) classification techniques have been widely ..."
- 05/13/2023, A Federated Learning-based Industrial Health Prognostics for Heterogeneous Edge Devices using Matched Feature Extraction. "Data-driven industrial health prognostics require rich training data to ..."
