Certifying Fairness of Probabilistic Circuits

12/05/2022
by Nikil Roashan Selvam, et al.

With the increased use of machine learning systems for decision making, questions about the fairness properties of such systems are starting to take center stage. Most existing work on algorithmic fairness assumes complete observation of features at prediction time, as is the case for popular notions like statistical parity and equal opportunity. However, this is not sufficient for models that can make predictions from partial observations, as we could miss patterns of bias and incorrectly certify a model to be fair. To address this, a recently introduced notion of fairness asks whether the model exhibits any discrimination pattern, in which an individual characterized by (partial) feature observations receives vastly different decisions merely by disclosing one or more sensitive attributes such as gender or race. By explicitly accounting for partial observations, this provides a much more fine-grained notion of fairness. In this paper, we propose an algorithm to search for discrimination patterns in a general class of probabilistic models, namely probabilistic circuits. Previously, such algorithms were limited to naive Bayes classifiers, which make strong independence assumptions; by contrast, probabilistic circuits provide a unifying framework for a wide range of tractable probabilistic models and can even be compiled from certain classes of Bayesian networks and probabilistic programs, making our method much more broadly applicable. Furthermore, for an unfair model, it may be useful to quickly find discrimination patterns and distill them for better interpretability. As such, we also propose a sampling-based approach to more efficiently mine discrimination patterns, and introduce new classes of patterns such as minimal, maximal, and Pareto optimal patterns that can effectively summarize exponentially many discrimination patterns.
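To make the notion of a discrimination pattern concrete, here is a minimal sketch (not the paper's algorithm, and using a fully enumerated toy joint distribution rather than a probabilistic circuit): it checks whether disclosing a sensitive attribute shifts the decision probability P(D=1 | x, s) away from P(D=1 | x) by more than a threshold. The variable names, the toy distribution, and the threshold `delta` are illustrative assumptions.

```python
# Toy binary variables: decision D, sensitive attribute S, non-sensitive feature X.
# joint[(d, s, x)] = P(D=d, S=s, X=x); any normalized table (or tractable model) would do.
joint = {
    (1, 0, 0): 0.10, (1, 0, 1): 0.20, (1, 1, 0): 0.05, (1, 1, 1): 0.10,
    (0, 0, 0): 0.05, (0, 0, 1): 0.10, (0, 1, 0): 0.25, (0, 1, 1): 0.15,
}

def prob(**evidence):
    """Marginal probability of a partial assignment over {D, S, X}."""
    return sum(p for (d, s, x), p in joint.items()
               if all({'D': d, 'S': s, 'X': x}[k] == v for k, v in evidence.items()))

def p_decision(**evidence):
    """P(D=1 | evidence): probability of the favorable decision given the evidence."""
    return prob(D=1, **evidence) / prob(**evidence)

def discrimination_score(x_evidence, s_evidence):
    """How much disclosing the sensitive attribute(s) shifts the decision:
    P(D=1 | x, s) - P(D=1 | x)."""
    return p_decision(**x_evidence, **s_evidence) - p_decision(**x_evidence)

delta = 0.1  # fairness threshold: (x, s) is a discrimination pattern if |score| > delta
score = discrimination_score({'X': 1}, {'S': 1})
print(f"score = {score:.3f}, discrimination pattern: {abs(score) > delta}")
```

The paper's contribution is to search for such patterns over all (exponentially many) partial assignments, exploiting the tractable marginal inference that probabilistic circuits provide instead of the brute-force enumeration shown here.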
