Hierarchically Fair Federated Learning

04/22/2020
by   Jingfeng Zhang, et al.

Federated learning facilitates collaboration among self-interested agents. However, agents participate only if they are fairly rewarded. To encourage participation, this paper introduces a new fairness notion based on the proportionality principle: more contribution should lead to more reward. To achieve this, we propose a novel hierarchically fair federated learning (HFFL) framework. Under this framework, agents are rewarded in proportion to their contributions, which properly incentivizes collaboration. HFFL+ extends this to incorporate heterogeneous models. Theoretical analysis and empirical evaluation on several datasets confirm the efficacy of our frameworks in upholding fairness.
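The proportionality principle can be illustrated with a minimal sketch. The function name and scalar contribution scores below are illustrative assumptions, not the paper's API; HFFL itself rewards agents at the level of model updates, but the allocation rule is the same proportional split:

```python
def proportional_rewards(contributions, total_reward):
    """Split total_reward among agents in proportion to their contribution scores."""
    total = sum(contributions.values())
    if total <= 0:
        raise ValueError("at least one agent must make a positive contribution")
    return {agent: total_reward * c / total for agent, c in contributions.items()}

# Example: agent "b" contributes twice as much as "a" and "c",
# so it receives twice the reward.
rewards = proportional_rewards({"a": 1.0, "b": 2.0, "c": 1.0}, total_reward=100.0)
# rewards == {"a": 25.0, "b": 50.0, "c": 25.0}
```

In the hierarchical setting, the same rule can be applied per contribution level, so agents on higher levels receive proportionally larger shares.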


Related research

09/19/2021 - Improving Fairness for Data Valuation in Federated Learning
Federated learning is an emerging decentralized machine learning scheme ...

03/04/2021 - One for One, or All for All: Equilibria and Optimality of Collaboration in Federated Learning
In recent years, federated learning has been embraced as an approach for...

12/01/2021 - Models of fairness in federated learning
In many real-world situations, data is distributed across multiple locat...

11/03/2022 - Fairness in Federated Learning via Core-Stability
Federated learning provides an effective paradigm to jointly optimize a ...

10/29/2021 - Improving Fairness via Federated Learning
Recently, lots of algorithms have been proposed for learning a fair clas...

07/25/2023 - Scaff-PD: Communication Efficient Fair and Robust Federated Learning
We present Scaff-PD, a fast and communication-efficient algorithm for di...

12/08/2020 - Federated Multi-Task Learning for Competing Constraints
In addition to accuracy, fairness and robustness are two critical concer...
