How unfair is private learning?

06/08/2022
by   Amartya Sanyal, et al.

As machine learning algorithms are deployed on sensitive data in critical decision-making processes, it is becoming increasingly important that they are also private and fair. In this paper, we show that, when the data has a long-tailed structure, it is not possible to build learning algorithms that are both accurate and private while also achieving higher accuracy on minority subpopulations. We further show that relaxing overall accuracy can lead to good fairness even under strict privacy requirements. To corroborate our theoretical results in practice, we provide an extensive set of experimental results using a variety of synthetic, vision (including CelebA), and tabular (Law School) datasets and learning algorithms.


research · 09/29/2021
Fairness-Driven Private Collaborative Machine Learning
The performance of machine learning algorithms can be considerably impro...
research · 08/03/2023
Experimental Results regarding multiple Machine Learning via Quaternions
This paper presents an experimental study on the application of quaterni...
research · 05/29/2019
Fair Decision Making using Privacy-Protected Data
Data collected about individuals is regularly used to make decisions tha...
research · 07/18/2022
On Fair Classification with Mostly Private Sensitive Attributes
Machine learning models have demonstrated promising performance in many ...
research · 06/11/2020
A Variational Approach to Privacy and Fairness
In this article, we propose a new variational approach to learn private ...
research · 01/02/2022
Fair Data Representation for Machine Learning at the Pareto Frontier
As machine learning powered decision making is playing an increasingly i...
research · 05/23/2017
Learning to Succeed while Teaching to Fail: Privacy in Closed Machine Learning Systems
Security, privacy, and fairness have become critical in the era of data ...
