OmniFair: A Declarative System for Model-Agnostic Group Fairness in Machine Learning

03/13/2021
by Hantian Zhang, et al.

Machine learning (ML) is increasingly being used to make decisions in our society. ML models, however, can be unfair to certain demographic groups (e.g., African Americans or females) according to various fairness metrics. Existing techniques for producing fair ML models are either limited in the types of fairness constraints they can handle (e.g., preprocessing) or require nontrivial modifications to downstream ML training algorithms (e.g., in-processing). We propose OmniFair, a declarative system for supporting group fairness in ML. OmniFair features a declarative interface for users to specify desired group fairness constraints and supports all commonly used group fairness notions, including statistical parity, equalized odds, and predictive parity. OmniFair is model-agnostic in the sense that it does not require modifications to a chosen ML algorithm, and it supports enforcing multiple user-declared fairness constraints simultaneously, which most previous techniques cannot. The algorithms in OmniFair maximize model accuracy while meeting the specified fairness constraints, and their efficiency is optimized by exploiting a theoretically provable monotonicity property of the accuracy-fairness trade-off that is unique to our system. We conduct experiments on datasets commonly used in the fairness literature that exhibit bias against minority groups. We show that OmniFair is more versatile than existing algorithmic fairness approaches in terms of both supported fairness constraints and downstream ML models. OmniFair reduces accuracy loss by up to 94.8% compared with the second-best method, achieves running times similar to preprocessing methods, and is up to 270× faster than in-processing methods.
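
As background for the group fairness notions named in the abstract, the following minimal Python sketch shows how two of them, statistical parity and equalized odds, are commonly measured for a binary classifier. The function names, the absolute-difference formulation, and the toy data are illustrative assumptions and do not reflect OmniFair's actual declarative interface.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between group 1 and group 0."""
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap, across true labels, in positive-prediction rates
    between the two groups (covers both the TPR gap and the FPR gap)."""
    gaps = []
    for label in (0, 1):  # label == 1 compares TPRs, label == 0 compares FPRs
        mask = y_true == label
        rate_g1 = y_pred[mask & (group == 1)].mean()
        rate_g0 = y_pred[mask & (group == 0)].mean()
        gaps.append(abs(rate_g1 - rate_g0))
    return max(gaps)

# Toy example: predictions for 8 individuals split into two groups.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(statistical_parity_difference(y_pred, group))      # 0.0
print(equalized_odds_difference(y_true, y_pred, group))  # 0.333...
```

A system enforcing a group fairness constraint would typically require such a disparity measure to stay below a user-chosen threshold (e.g., 0.05) while maximizing accuracy; OmniFair lets users declare such constraints without modifying the downstream ML training algorithm.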


