Perfectly Parallel Fairness Certification of Neural Networks

12/05/2019
by Caterina Urban, et al.

There is growing concern that machine-learning models, which currently assist or even automate decision making, reproduce, and in the worst case reinforce, biases present in their training data. The development of tools and techniques for certifying the fairness of these models, or for describing their biased behavior, is therefore critical. In this paper, we propose a perfectly parallel static analysis for certifying causal fairness of feed-forward neural networks used for classification tasks. When certification succeeds, our approach provides definite guarantees; otherwise, it describes and quantifies the biased behavior. We design the analysis to be sound, in practice also exact, and configurable in terms of scalability and precision, thereby enabling pay-as-you-go certification. We implement our approach in an open-source tool and demonstrate its effectiveness on models trained with popular datasets.
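
To make the certified property concrete, here is a minimal, self-contained Python sketch of causal fairness: a classifier is causally fair with respect to a sensitive input feature if changing that feature alone can never change the predicted class. This is not the authors' tool, which performs a sound static analysis rather than testing; the toy network, the feature index, and all function names below are illustrative assumptions.

```python
# Minimal sketch (not the paper's analysis): causal fairness here means that
# changing ONLY a sensitive input feature never changes the predicted class.
# This brute-force check samples inputs and compares predictions across all
# values of the sensitive attribute. All names are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward classifier: one hidden ReLU layer, random weights.
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)

def predict(x):
    h = np.maximum(0.0, x @ W1 + b1)    # ReLU hidden layer
    return int(np.argmax(h @ W2 + b2))  # predicted class label

def causal_fairness_counterexamples(n_samples=10_000, sensitive_idx=0,
                                    sensitive_values=(0.0, 1.0)):
    """Empirically search for inputs whose prediction depends on the
    sensitive feature alone. Returns the counterexamples found."""
    counterexamples = []
    for _ in range(n_samples):
        x = rng.uniform(0.0, 1.0, size=4)
        preds = set()
        for v in sensitive_values:
            x_v = x.copy()
            x_v[sensitive_idx] = v      # vary only the sensitive feature
            preds.add(predict(x_v))
        if len(preds) > 1:              # prediction flipped: bias witness
            counterexamples.append(x)
    return counterexamples

cex = causal_fairness_counterexamples()
print(f"{len(cex)} biased inputs found out of 10000 samples")
```

Note the gap this sketch leaves: sampling can only ever find counterexamples, never prove their absence. The paper's static analysis, by contrast, reasons over the entire input space, which is what allows a successful run to yield the definite guarantees mentioned in the abstract.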

Related research

07/18/2021  Probabilistic Verification of Neural Networks Against Group Fairness
05/24/2021  Robust Fairness-aware Learning Under Sample Selection Bias
07/12/2020  The Impossibility Theorem of Machine Fairness – A Causal Perspective
07/02/2018  Automated Directed Fairness Testing
02/05/2021  Removing biased data to improve fairness and accuracy
07/30/2020  Fairness-Aware Online Personalization