FAIROD: Fairness-aware Outlier Detection

12/05/2020
by Shubhranshu Shekhar, et al.

Fairness and Outlier Detection (OD) are closely related, as it is exactly the goal of OD to spot rare, minority samples in a given population. When being a minority (as defined by protected variables, e.g. race/ethnicity/sex/age) does not reflect positive-class membership (e.g. criminal/fraud), however, OD produces unjust outcomes. Surprisingly, fairness-aware OD has been almost untouched in prior work, as the fair machine learning literature mainly focuses on supervised settings. Our work aims to bridge this gap. Specifically, we develop desiderata capturing well-motivated fairness criteria for OD, and systematically formalize the fair OD problem. Further, guided by our desiderata, we propose FairOD, a fairness-aware outlier detector with the following desirable properties: FairOD (1) does not employ disparate treatment at test time, (2) aims to flag equal proportions of samples from all groups (i.e., it obtains group fairness via statistical parity), and (3) strives to flag the truly high-risk fraction of samples within each group. Extensive experiments on a diverse set of synthetic and real-world datasets show that FairOD produces outcomes that are fair with respect to protected variables, while performing comparably to (and in some cases, even better than) fairness-agnostic detectors in terms of detection performance.
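The statistical parity criterion mentioned in the abstract can be made concrete with a short sketch. The snippet below (an illustration, not the authors' implementation) scores a synthetic population with a shifted distribution for one group, flags the top fraction as outliers the way a fairness-agnostic detector would, and measures the gap in per-group flag rates that FairOD aims to drive toward zero. The function names, the 10% flag budget, and the synthetic data are all assumptions for illustration.

```python
import numpy as np

def flag_rates(scores, protected, flag_budget=0.1):
    """Flag the top `flag_budget` fraction by outlier score, then
    report the flag rate within each protected group."""
    threshold = np.quantile(scores, 1.0 - flag_budget)
    flags = scores > threshold
    return {g: flags[protected == g].mean() for g in np.unique(protected)}

def statistical_parity_gap(rates):
    """Absolute gap between the largest and smallest group flag rates;
    0 means equal proportions are flagged from every group."""
    vals = list(rates.values())
    return max(vals) - min(vals)

# Synthetic example: group 1's scores are shifted upward, so a
# fairness-agnostic top-k rule over-flags that group.
rng = np.random.default_rng(0)
protected = rng.integers(0, 2, size=2000)
scores = rng.normal(0.0, 1.0, size=2000) + 0.8 * protected
rates = flag_rates(scores, protected)
gap = statistical_parity_gap(rates)
print(rates, gap)
```

A fairness-agnostic detector run on such data exhibits a large parity gap; a detector satisfying statistical parity would bring the two group flag rates close together while, per property (3) above, still ranking the highest-risk samples first within each group.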

