Towards Auditing Unsupervised Learning Algorithms and Human Processes For Fairness

09/20/2022
by   Ian Davidson, et al.
Existing work on fairness typically focuses on making known machine learning algorithms fairer: fair variants of classification, clustering, outlier detection, and other styles of algorithms exist. However, auditing an algorithm's output to determine fairness remains understudied. Prior work has explored the two-group classification problem for binary protected-status variables under standard definitions of statistical parity. Here we build upon that work by exploring the multi-group setting under more complex definitions of fairness.
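For concreteness, statistical parity in the two-group case asks whether the favorable-outcome rate is equal across the two groups; one natural multi-group extension audits the maximum pairwise gap in those rates. Below is a minimal Python sketch of such an audit. The function name, the max-gap criterion, and the toy data are illustrative assumptions, not the paper's method, which considers more complex fairness definitions.

```python
import numpy as np

def statistical_parity_gap(outcomes, groups):
    """Maximum pairwise gap in favorable-outcome rates across groups.

    outcomes: binary array (1 = favorable outcome, e.g., membership in a
              desirable cluster); groups: array of protected-group labels,
              possibly more than two. A gap of 0 means exact statistical
              parity; larger values indicate greater disparity.
    """
    outcomes = np.asarray(outcomes)
    groups = np.asarray(groups)
    rates = [outcomes[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical audit of a clustering output: treat assignment to
# cluster 0 as the favorable outcome, with three protected groups.
labels = np.array([0, 0, 1, 0, 1, 1, 0, 1, 0])              # cluster IDs
prot = np.array(["a", "a", "a", "b", "b", "b", "c", "c", "c"])
favorable = (labels == 0).astype(int)
print(statistical_parity_gap(favorable, prot))  # ~0.33 for this toy data
```

In this sketch an auditor needs only the algorithm's outputs and the protected-group labels, not the algorithm itself, which is what distinguishes auditing from building fair variants of the algorithms.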

