Federated Learning Meets Fairness and Differential Privacy

08/23/2021
by Manisha Padala, et al.

Deep learning's unprecedented success raises several ethical concerns, ranging from biased predictions to data privacy. Researchers have tackled these issues separately, by introducing fairness metrics, federated learning, or differential privacy. This work is the first to present an ethical federated learning model that incorporates all three measures simultaneously. Experiments on the Adult, Bank, and Dutch datasets highlight the resulting "empirical interplay" between accuracy, fairness, and privacy.
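The abstract names three ingredients (fairness constraints, federated training, and differential privacy) without spelling out how they fit together. Below is a minimal, hypothetical sketch of one way they can be combined in a FedAvg-style loop: each client adds a demographic-parity penalty to its local loss, and the server clips and noises client updates before averaging. The model (logistic regression), the penalty weight lam, the clipping norm, and the noise multiplier are illustrative assumptions, not the authors' formulation.

```python
# Hypothetical sketch: FedAvg with a fairness penalty in the local loss and
# update-level differential privacy (clipping + Gaussian noise). Not the
# paper's actual method; all hyperparameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def local_update(w_global, X, y, s, lam=1.0, lr=0.1, epochs=5):
    """Local training on one client; returns the model update (delta).

    Local loss = binary cross-entropy + lam * (demographic-parity gap)^2,
    where the gap is the mean prediction for s=1 minus that for s=0.
    """
    w = w_global.copy()
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / len(y)                      # cross-entropy gradient
        gap = p[s == 1].mean() - p[s == 0].mean()          # fairness gap
        dgap = (X[s == 1].T @ (p[s == 1] * (1 - p[s == 1])) / (s == 1).sum()
                - X[s == 0].T @ (p[s == 0] * (1 - p[s == 0])) / (s == 0).sum())
        grad += lam * 2.0 * gap * dgap                     # gradient of gap^2
        w -= lr * grad
    return w - w_global


def dp_aggregate(deltas, clip=1.0, noise_mult=1.0):
    """Clip each client update to L2 norm `clip`, average, add Gaussian noise."""
    clipped = [d * min(1.0, clip / (np.linalg.norm(d) + 1e-12)) for d in deltas]
    avg = np.mean(clipped, axis=0)
    return avg + rng.normal(0.0, noise_mult * clip / len(deltas), size=avg.shape)


# Toy federated run on synthetic data (a stand-in for Adult/Bank/Dutch).
dim, num_clients, rounds = 5, 4, 20
clients = []
for _ in range(num_clients):
    X = rng.normal(size=(200, dim))
    s = (rng.random(200) < 0.5).astype(int)                            # sensitive attribute
    y = (X[:, 0] + 0.5 * s + rng.normal(size=200) > 0).astype(float)   # biased labels
    clients.append((X, y, s))

w_global = np.zeros(dim)
for _ in range(rounds):
    deltas = [local_update(w_global, X, y, s) for X, y, s in clients]
    w_global = w_global + dp_aggregate(deltas)

# Evaluate accuracy and the demographic-parity gap on the pooled data.
X_all = np.vstack([c[0] for c in clients])
y_all = np.concatenate([c[1] for c in clients])
s_all = np.concatenate([c[2] for c in clients])
pred = (sigmoid(X_all @ w_global) > 0.5).astype(float)
print(f"accuracy = {(pred == y_all).mean():.3f}, "
      f"demographic-parity gap = {abs(pred[s_all == 1].mean() - pred[s_all == 0].mean()):.3f}")
```

Raising lam tightens the fairness gap at some cost in accuracy, while a larger noise multiplier strengthens privacy but degrades both: the same three-way trade-off the abstract calls the empirical interplay between accuracy, fairness, and privacy.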


Related research

09/17/2021
Enforcing fairness in private federated learning via the modified method of differential multipliers
Federated learning with differential privacy, or private federated learn...

07/24/2022
Federated Graph Contrastive Learning
Graph learning models are critical tools for researchers to explore grap...

07/31/2020
LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy
Training machine learning models on sensitive user data has raised increasi...

03/25/2022
FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations
In this paper we introduce "Federated Learning Utilities and Tools for E...

08/25/2022
On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations
In this work, we consider a federated learning model in a wireless syste...

04/19/2020
Local Differential Privacy based Federated Learning for Internet of Things
Internet of Vehicles (IoV) is a promising branch of the Internet of Thin...

12/01/2021
Models of fairness in federated learning
In many real-world situations, data is distributed across multiple locat...
