DeepAI

fairlib: A Unified Framework for Assessing and Improving Classification Fairness

05/04/2022
by   Xudong Han, et al.

This paper presents fairlib, an open-source framework for assessing and improving classification fairness. It provides a systematic framework for quickly reproducing existing baseline models, developing new methods, evaluating models with different metrics, and visualizing the results. Its modularity and extensibility allow it to be used with diverse input types, including natural language, images, and audio. Specifically, fairlib implements 14 debiasing methods, spanning pre-processing, at-training-time, and post-processing approaches. The built-in metrics cover the most commonly used fairness criteria and can be further generalized and customized for fairness evaluation.
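To illustrate the kind of group-fairness criterion such built-in metrics capture, the sketch below computes a true-positive-rate (TPR) gap across protected groups, in the spirit of equal opportunity. This is a generic, self-contained example, not fairlib's actual API; the function name and signature are assumptions for illustration.

```python
import numpy as np

def tpr_gap(y_true, y_pred, groups):
    """Gap in true-positive rate between protected groups.

    Illustrative group-fairness metric (equal-opportunity style);
    fairlib's built-in metrics cover criteria like this, but its
    exact interfaces may differ.
    """
    tprs = []
    for g in np.unique(groups):
        # Positive-class instances belonging to group g.
        mask = (groups == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Toy example: binary predictions over two demographic groups.
y_true = np.array([1, 1, 0, 1, 1, 0, 1, 1])
y_pred = np.array([1, 1, 0, 0, 1, 0, 1, 1])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print(tpr_gap(y_true, y_pred, groups))  # group 0 TPR = 2/3, group 1 TPR = 1
```

A metric of 0 indicates equal TPR across groups; debiasing methods (whether pre-processing, at-training-time, or post-processing) aim to shrink such gaps with minimal loss of overall accuracy.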

