Fairness-Aware Learning with Restriction of Universal Dependency using f-Divergences

06/25/2015
by   Kazuto Fukuchi, et al.

Fairness-aware learning is a framework for classification tasks. Like regular empirical risk minimization (ERM), it aims to learn a classifier with a low error rate, while additionally requiring that the classifier's predictions be independent of sensitive features such as gender, religion, race, and ethnicity. Existing methods can achieve low dependency on the given samples, but this is not guaranteed on unseen samples. Moreover, the existing fairness-aware learning algorithms employ different dependency measures, and each algorithm is specifically designed for a particular one; this diversity makes them difficult to analyze and compare theoretically. In this paper, we propose a general framework for fairness-aware learning based on f-divergences that covers most of the dependency measures employed by existing methods. We introduce an estimator of f-divergences that admits a unified analysis of the upper bound on the estimation error; this bound is tighter than that of existing convergence-rate analyses of divergence estimation. Using this divergence estimate, we propose a fairness-aware learning algorithm and analyze its generalization error. Our analysis reveals that, under mild assumptions and even with fairness enforced, the generalization error of our method is O(√(1/n)), the same rate as that of regular ERM. In addition, and more importantly, we show that for any f-divergence, the upper bound on the estimation error of the divergence is also O(√(1/n)). This implies that our fairness-aware learning algorithm guarantees low dependency on unseen samples for any dependency measure representable as an f-divergence.
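To make the f-divergence dependency measure concrete, here is a minimal sketch (not the paper's actual estimator; function names and the simple plug-in estimate are illustrative assumptions) of an empirical dependency measure between a classifier's predictions and a discrete sensitive feature. Choosing the generator f(t) = t log t recovers the KL divergence, in which case the measure equals the empirical mutual information between the predictions and the sensitive feature.

```python
import numpy as np

def _kl_generator(t):
    # Generator for KL divergence: f(t) = t * log(t), with f(0) = 0.
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    pos = t > 0
    out[pos] = t[pos] * np.log(t[pos])
    return out

def f_divergence(p, q, f=_kl_generator):
    # Plug-in estimate of D_f(P || Q) = sum_x q(x) * f(p(x) / q(x))
    # over a finite outcome space.  Other convex generators f give other
    # divergences, e.g. f(t) = (t - 1)**2 yields the chi-squared divergence.
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = q > 0
    return float(np.sum(q[mask] * f(p[mask] / q[mask])))

def dependency(y_pred, s):
    # Weighted f-divergence between the prediction distribution inside each
    # sensitive group and the overall prediction distribution.  With the KL
    # generator this equals the empirical mutual information I(y_pred; s);
    # it is zero iff predictions are empirically independent of s.
    labels = np.unique(y_pred)
    overall = np.array([(y_pred == c).mean() for c in labels])
    total = 0.0
    for g in np.unique(s):
        group = y_pred[s == g]
        p = np.array([(group == c).mean() for c in labels])
        total += (s == g).mean() * f_divergence(p, overall)
    return total
```

A fairness-aware objective in the spirit of the paper would then minimize the empirical risk plus a multiple of `dependency(y_pred, s)`; the paper's contribution is showing that, for any f-divergence, the estimation error of such a dependency term vanishes at rate O(√(1/n)).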


Related research:

- 12/19/2018, Max-Diversity Distributed Learning: Theory and Algorithms: We study the risk performance of distributed learning for the regulariza...
- 02/24/2021, FERMI: Fair Empirical Risk Minimization via Exponential Rényi Mutual Information: In this paper, we propose a new notion of fairness violation, called Exp...
- 08/25/2021, Social Norm Bias: Residual Harms of Fairness-Aware Algorithms: Many modern learning algorithms mitigate bias by enforcing fairness acro...
- 07/04/2022, How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts: Increasing concerns have been raised on deep learning fairness in recent...
- 05/06/2022, Fast Rate Generalization Error Bounds: Variations on a Theme: A recent line of works, initiated by Russo and Xu, has shown that the ge...
- 05/25/2019, Average Individual Fairness: Algorithms, Generalization and Experiments: We propose a new family of fairness definitions for classification probl...
- 11/12/2019, Fairness-Aware Neural Rényi Minimization for Continuous Features: The past few years have seen a dramatic rise of academic and societal in...
