Fairify: Fairness Verification of Neural Networks

12/08/2022
by Sumon Biswas, et al.

Fairness of machine learning (ML) software has become a major concern in recent years. Although recent research on testing and improving fairness has demonstrated impact on real-world software, providing fairness guarantees in practice is still lacking. Certifying ML models is challenging because of their complex decision-making processes. In this paper, we propose Fairify, an SMT-based approach to verify the individual fairness property of neural network (NN) models. Individual fairness ensures that any two similar individuals receive similar treatment irrespective of their protected attributes, e.g., race, sex, or age. Verifying this property is hard because it requires global checking over the input domain and reasoning about the non-linear computation nodes in the NN. We propose a sound approach that makes individual fairness verification tractable for developers. The key insight is that many neurons in an NN remain inactive when only a smaller part of the input domain is considered. Fairify therefore leverages whitebox access to the models in production and applies formal-analysis-based pruning: it partitions the input domain and then prunes the NN for each partition to provide a fairness certification or a counterexample. We use interval arithmetic and activation heuristics of the neurons to perform the pruning as necessary. We evaluated Fairify on 25 real-world neural networks collected from four different sources and demonstrated its effectiveness, scalability, and performance over baselines and closely related work. Fairify is also configurable based on the domain and size of the NN. Our novel formulation of the problem can answer targeted verification queries with relaxations and counterexamples, which have practical implications.
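The pruning idea described above, using interval arithmetic to prove that certain ReLU neurons can never activate on a given input partition, can be sketched as follows. This is an illustrative NumPy sketch under our own assumptions (single dense layer, box-shaped partitions, the function names are ours), not Fairify's actual implementation:

```python
import numpy as np

def interval_forward(W, b, lo, hi):
    """Propagate an input box [lo, hi] through one affine layer (W x + b)
    using interval arithmetic. Splitting W by sign ensures each output
    bound is attained at a corner of the input box."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return out_lo, out_hi

def inactive_relu_mask(W, b, lo, hi):
    """A ReLU neuron whose pre-activation upper bound is <= 0 over the
    whole partition is provably inactive there, so the verifier can
    prune it (treat its output as the constant 0) for that partition."""
    _, out_hi = interval_forward(W, b, lo, hi)
    return out_hi <= 0.0

# Hypothetical 2-input, 2-neuron layer and input partition [0, 1]^2.
W = np.array([[1.0, 0.0], [-1.0, -1.0]])
b = np.array([-5.0, 1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
print(inactive_relu_mask(W, b, lo, hi))  # neuron 0 is provably inactive
```

Pruning inactive neurons this way is sound: the reduced network is exactly equivalent to the original on that partition, so any certificate or counterexample found for the pruned network carries over.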

Related Research

05/20/2022 · CertiFair: A Framework for Certified Global Fairness of Neural Networks
We consider the problem of whether a Neural Network (NN) model satisfies...

07/18/2021 · Probabilistic Verification of Neural Networks Against Group Fairness
Fairness is crucial for neural networks which are used in applications w...

05/11/2022 · Individual Fairness Guarantees for Neural Networks
We consider the problem of certifying the individual fairness (IF) of fe...

05/19/2022 · What Is Fairness? Implications For FairML
A growing body of literature in fairness-aware ML (fairML) aspires to mi...

06/01/2022 · FETA: Fairness Enforced Verifying, Training, and Predicting Algorithms for Neural Networks
Algorithmic decision making driven by neural networks has become very pr...

10/10/2022 · fAux: Testing Individual Fairness via Gradient Alignment
Machine learning models are vulnerable to biases that result in unfair t...

05/31/2021 · BiasRV: Uncovering Biased Sentiment Predictions at Runtime
Sentiment analysis (SA) systems, though widely applied in many domains, ...
