A Distributed Fair Machine Learning Framework with Private Demographic Data Protection

09/17/2019
by Hui Hu, et al.

Fair machine learning has become a significant research topic with broad societal impact. However, most fair learning methods require direct access to personal demographic data, whose use is increasingly restricted in order to protect user privacy (e.g., by the EU General Data Protection Regulation). In this paper, we propose a distributed fair learning framework that protects the privacy of demographic data. We assume this data is privately held by a third party, which can communicate with the data center (responsible for model development) without revealing the demographic information. We propose a principled approach to designing fair learning methods under this framework, instantiate it in four methods, and show that they consistently outperform their existing counterparts in both fairness and accuracy across three real-world data sets. We theoretically analyze the framework and prove that it can learn models with high fairness or high accuracy, with the trade-off between the two balanced by a threshold variable.
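The third-party setup described above can be illustrated with a minimal sketch. All names, the synthetic data, and the threshold-based model selection below are illustrative assumptions, not the paper's actual methods: the third party holds the sensitive attribute and answers only scalar fairness queries (here, a demographic-parity gap), while the data center trains on features and labels alone and uses an accuracy tolerance `tau` to trade accuracy for fairness.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic data (hypothetical stand-in for a real data set) ---
n = 2000
a = rng.integers(0, 2, n)                        # sensitive attribute: third party only
x = rng.normal(size=(n, 2)) + a[:, None] * 0.8   # features correlated with a
w_true = np.array([1.0, -0.5])
y = (x @ w_true + rng.normal(scale=0.5, size=n) > 0).astype(int)


class ThirdParty:
    """Holds demographic data privately; answers only scalar fairness queries."""

    def __init__(self, sensitive):
        self._a = sensitive

    def parity_gap(self, y_hat):
        # Demographic-parity gap; only this aggregate scalar leaves the third party.
        return abs(y_hat[self._a == 0].mean() - y_hat[self._a == 1].mean())


def fit_logreg(x, y, lr=0.1, steps=500):
    """Plain logistic regression by gradient descent (no demographics used)."""
    w = np.zeros(x.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
        g = p - y
        w -= lr * (x.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b


# --- Data-center side: train, then select a decision threshold ---
w, b = fit_logreg(x, y)
scores = 1.0 / (1.0 + np.exp(-(x @ w + b)))

tp = ThirdParty(a)
tau = 0.02  # accuracy tolerance: the knob trading accuracy for fairness
grid = np.linspace(0.1, 0.9, 33)
acc = lambda t: (((scores > t).astype(int)) == y).mean()
best_acc = max(acc(t) for t in grid)
# Among near-optimal thresholds, let the third party pick the fairest one.
candidates = [t for t in grid if acc(t) >= best_acc - tau]
t_star = min(candidates, key=lambda t: tp.parity_gap((scores > t).astype(int)))
```

In this sketch only the scalar gap crosses the boundary between the two parties, so per-individual group membership is never exposed to the data center; a larger `tau` admits more candidate models and thus allows a fairer (possibly less accurate) choice.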


