Aequitas: A Bias and Fairness Audit Toolkit

11/14/2018
by Pedro Saleiro, et al.

Recent work has raised concerns about the risk of unintended bias in the algorithmic decision making systems now in use, bias that can affect individuals unfairly based on race, gender or religion, among other characteristics. While many bias metrics and fairness definitions have been proposed in recent years, there is no consensus on which metric or definition should be used, and there are very few resources available to operationalize them. Therefore, despite growing awareness, auditing for bias and fairness when developing and deploying algorithmic decision making systems is not yet standard practice. We present Aequitas, an open source bias and fairness audit toolkit that is an intuitive and easy-to-use addition to the machine learning workflow, enabling users to seamlessly test models for several bias and fairness metrics with respect to multiple population sub-groups. We believe Aequitas will facilitate informed and equitable decisions around developing and deploying algorithmic decision making systems for data scientists, machine learning researchers, and policymakers alike.
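To make the described workflow concrete, below is a minimal sketch of how an audit with the Aequitas Python package is typically wired up: a pandas DataFrame holding model scores, ground-truth labels, and group attributes is cross-tabulated per sub-group, disparities are computed against reference groups, and fairness criteria are then evaluated. The column names ("score", "label_value") and the Group/Bias/Fairness calls follow the project's public examples; treat them as assumptions and verify against the installed version.

```python
# Minimal Aequitas audit sketch (method and column names assumed from the
# project's public examples; verify against your installed version).
import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias
from aequitas.fairness import Fairness

# Toy input: one row per scored individual. Aequitas expects the binary model
# output in 'score', the ground truth in 'label_value', plus attribute columns.
df = pd.DataFrame({
    "score":       [1, 0, 1, 1, 0, 1, 0, 0],
    "label_value": [1, 0, 0, 1, 0, 1, 1, 0],
    "race":        ["white", "white", "black", "black",
                    "white", "black", "white", "black"],
    "sex":         ["M", "F", "F", "M", "M", "F", "M", "F"],
})

# 1. Group metrics: confusion-matrix counts and rates (FPR, FNR, precision, ...)
#    computed separately for every sub-group of every attribute.
group = Group()
xtab, _ = group.get_crosstabs(df)

# 2. Bias: disparities, i.e. each group's metric divided by the metric of a
#    chosen reference group for that attribute.
bias = Bias()
bias_df = bias.get_disparity_predefined_groups(
    xtab,
    original_df=df,
    ref_groups_dict={"race": "white", "sex": "M"},
)

# 3. Fairness: flag which disparities fall outside a tolerated range
#    (parities such as false positive rate parity, impact parity, ...).
fairness = Fairness()
fairness_df = fairness.get_group_value_fairness(bias_df)

print(fairness_df.head())
```

The design mirrors the paper's framing: group metrics, group disparities, and fairness determinations are separate steps, so a user can stop at whichever level of the audit they need.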

research

09/09/2021 · A Systematic Approach to Group Fairness in Automated Decision Making
While the field of algorithmic fairness has brought forth many ways to m...

10/03/2018 · AI Fairness 360: An Extensible Toolkit for Detecting, Understanding, and Mitigating Unwanted Algorithmic Bias
Fairness is an increasingly important concern as machine learning models...

09/24/2018 · Evaluating Fairness Metrics in the Presence of Dataset Bias
Data-driven algorithms play a large role in decision making across a var...

12/03/2017 · Always Lurking: Understanding and Mitigating Bias in Online Human Trafficking Detection
Web-based human trafficking activity has increased in recent years but i...

05/02/2022 · A Novel Approach to Fairness in Automated Decision-Making using Affective Normalization
Any decision, such as one about who to hire, involves two components. Fi...

10/03/2022 · An intersectional framework for counterfactual fairness in risk prediction
Along with the increasing availability of data in many sectors has come ...

03/10/2022 · Assessing Phenotype Definitions for Algorithmic Fairness
Disease identification is a core, routine activity in observational heal...
