Provably Fair Federated Learning via Bounded Group Loss

03/18/2022
by Shengyuan Hu, et al.

In federated learning, fair prediction across protected groups (e.g., gender, race) is an important constraint for many applications. Unfortunately, prior work studying group-fair federated learning lacks formal convergence or fairness guarantees. Our work provides a new definition of group fairness in federated learning based on the notion of Bounded Group Loss (BGL), which applies readily to common federated learning objectives. Based on this definition, we propose a scalable algorithm that optimizes the empirical risk subject to global fairness constraints, and we evaluate it across common fairness and federated learning benchmarks. Our resulting method and analysis are, to our knowledge, the first to provide formal theoretical guarantees for training a fair federated learning model.
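The BGL idea can be made concrete with a small sketch: require every protected group's average loss to stay below a bound ζ, and fold violations into the training objective as a Lagrangian-style penalty. The code below is an illustrative toy, not the paper's algorithm; the function names, the penalty form, and the multiplier `lam` are our own assumptions for exposition.

```python
import numpy as np

def group_losses(losses, groups):
    """Average per-example loss within each protected group."""
    return {g: losses[groups == g].mean() for g in np.unique(groups)}

def bgl_satisfied(losses, groups, zeta):
    """Bounded Group Loss: every group's average loss must be at most zeta."""
    return all(l <= zeta for l in group_losses(losses, groups).values())

def bgl_penalized_objective(losses, groups, zeta, lam):
    """Empirical risk plus a Lagrangian-style penalty on each group whose
    average loss exceeds the BGL bound zeta (toy version, not the paper's solver)."""
    risk = losses.mean()
    penalty = sum(max(0.0, l - zeta)
                  for l in group_losses(losses, groups).values())
    return risk + lam * penalty

# Example: group 1's average loss (0.85) violates zeta = 0.5,
# so the objective pays a penalty on top of the empirical risk.
losses = np.array([0.2, 0.3, 0.8, 0.9])
groups = np.array([0, 0, 1, 1])
obj = bgl_penalized_objective(losses, groups, zeta=0.5, lam=1.0)
```

In a federated setting, each client would report its local per-group losses and the server would aggregate them before checking the bound; the penalty keeps the constrained problem amenable to standard first-order optimization.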


Related research:

- 10/05/2021: "Federating for Learning Group Fair Models". Federated learning is an increasingly popular paradigm that enables a la...
- 10/02/2021: "FairFed: Enabling Group Fairness in Federated Learning". As machine learning becomes increasingly incorporated in crucial decisio...
- 05/25/2019: "Fair Resource Allocation in Federated Learning". Federated learning involves training statistical models in massive, hete...
- 01/20/2022: "Minimax Demographic Group Fairness in Federated Learning". Federated learning is an increasingly popular paradigm that enables a la...
- 09/05/2023: "Bias Propagation in Federated Learning". We show that participating in federated learning can be detrimental to g...
- 10/29/2021: "Improving Fairness via Federated Learning". Recently, lots of algorithms have been proposed for learning a fair clas...
- 09/06/2021: "Fair Federated Learning for Heterogeneous Face Data". We consider the problem of achieving fair classification in Federated Le...
