Certifying Robustness to Programmable Data Bias in Decision Trees

10/08/2021
by Anna P. Meyer, et al.

Datasets can be biased due to societal inequities, human biases, under-representation of minorities, etc. Our goal is to certify that models produced by a learning algorithm are pointwise-robust to potential dataset biases. This is a challenging problem: it entails learning models for a large, or even infinite, number of datasets, ensuring that they all produce the same prediction. We focus on decision-tree learning due to the interpretable nature of the models. Our approach allows programmatically specifying bias models across a variety of dimensions (e.g., missing data for minorities), composing types of bias, and targeting bias towards a specific group. To certify robustness, we use a novel symbolic technique to evaluate a decision-tree learner on a large, or infinite, number of datasets, certifying that every such dataset produces the same prediction for a specific test point. We evaluate our approach on datasets that are commonly used in the fairness literature, and demonstrate our approach's viability on a range of bias models.
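As a concrete illustration of the certification problem (not the paper's symbolic technique), the following hypothetical Python sketch brute-forces one simple bias model: it retrains a depth-1 decision-tree learner on every dataset obtainable by flipping at most k training labels and checks whether the prediction at a test point stays the same. All names here (learn_stump, certify_label_bias) are illustrative, not from the paper.

from itertools import combinations

def learn_stump(X, y):
    # Learn a depth-1 threshold tree (stump) minimizing training error.
    best = None  # (errors, feature, threshold, left_label, right_label)
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            for ll in (0, 1):
                for rl in (0, 1):
                    errs = sum(v != ll for v in left) + sum(v != rl for v in right)
                    if best is None or errs < best[0]:
                        best = (errs, f, t, ll, rl)
    return best

def predict(stump, x):
    _, f, t, ll, rl = stump
    return ll if x[f] <= t else rl

def certify_label_bias(X, y, x_test, k):
    # Certify pointwise robustness by brute force: retrain on every
    # dataset reachable by flipping at most k labels and compare the
    # predictions at x_test. Returns the prediction if it is invariant
    # across all biased datasets, or None if some flip changes it.
    base = predict(learn_stump(X, y), x_test)
    for m in range(1, k + 1):
        for idxs in combinations(range(len(y)), m):
            y_biased = list(y)
            for i in idxs:
                y_biased[i] = 1 - y_biased[i]
            if predict(learn_stump(X, y_biased), x_test) != base:
                return None
    return base

# Toy example: six 1-feature points; certify against one flipped label.
X = [[0.1], [0.15], [0.2], [0.8], [0.85], [0.9]]
y = [0, 0, 0, 1, 1, 1]
print(certify_label_bias(X, y, [0.85], k=1))  # prints 1: robust here

Enumeration like this blows up combinatorially in k and cannot handle infinite bias models at all; the point of the paper's symbolic evaluation is to cover such sets of datasets without enumerating them.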

Related research

09/07/2023 · Trinary Decision Trees for missing value handling
This paper introduces the Trinary decision tree, an algorithm designed t...

12/02/2019 · Proving Data-Poisoning Robustness in Decision Trees
Machine learning models are brittle, and small changes in the training d...

06/07/2022 · Certifying Data-Bias Robustness in Linear Regression
Datasets typically contain inaccuracies due to human error and societal ...

04/28/2022 · Learning to Split for Automatic Bias Detection
Classifiers are biased when trained on biased datasets. As a remedy, we ...

11/03/2020 · (Un)fairness in Post-operative Complication Prediction Models
With the current ongoing debate about fairness, explainability and trans...

06/11/2020 · How Interpretable and Trustworthy are GAMs?
Generalized additive models (GAMs) have become a leading model class for...

05/05/2023 · Mining bias-target Alignment from Voronoi Cells
Despite significant research efforts, deep neural networks are still vul...
