Astraea: Grammar-based Fairness Testing

10/06/2020
by Ezekiel Soremekun, et al.

Software often produces biased outputs. In particular, machine-learning (ML) based software is known to produce erroneous predictions when processing discriminatory inputs. Such unfair program behavior can be caused by societal bias. In the last few years, Amazon, Microsoft, and Google have provided software services that produce unfair outputs, mostly due to societal bias (e.g., gender or race). In such events, developers are saddled with the task of conducting fairness testing. Fairness testing is challenging: developers must generate discriminatory inputs that reveal and explain biases. We propose a grammar-based fairness testing approach (called ASTRAEA) that leverages context-free grammars to generate discriminatory inputs that reveal fairness violations in software systems. Using probabilistic grammars, ASTRAEA also provides fault diagnosis by isolating the cause of observed software bias; these diagnoses facilitate the improvement of ML fairness. ASTRAEA was evaluated on 18 software systems that provide three major natural language processing (NLP) services. In our evaluation, ASTRAEA generated fairness violations at a rate of about 18% (i.e., roughly 18% of the generated test cases were discriminatory) and found over 102K fairness violations. Furthermore, ASTRAEA improves software fairness by about 76% via model retraining.
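To make the approach concrete, below is a minimal Python sketch of the kind of grammar-based fairness test the abstract describes: a small probabilistic context-free grammar generates sentence pairs that differ only in a protected-attribute term, and a prediction mismatch within a pair counts as a fairness violation. The grammar, the attribute pairs, and the classify stub are illustrative assumptions, not the paper's artifacts.

```python
import random

# Toy probabilistic CFG: each nonterminal maps to weighted productions.
# "<attribute>" is deliberately left without a production so it survives
# expansion as a placeholder to be filled with protected-attribute terms.
GRAMMAR = {
    "<sentence>":  [(["<subject>", " is ", "<adjective>", "."], 1.0)],
    "<subject>":   [(["The ", "<attribute>", " engineer"], 0.5),
                    (["My ", "<attribute>", " neighbor"], 0.5)],
    "<adjective>": [(["brilliant"], 0.5), (["unreliable"], 0.5)],
}

# Protected-attribute terms swapped between the two inputs of a test pair.
ATTRIBUTE_PAIRS = [("male", "female"), ("young", "old")]

def expand(symbol):
    """Expand a symbol by sampling a weighted production; symbols
    without productions (terminals and the placeholder) pass through."""
    if symbol not in GRAMMAR:
        return symbol
    rules, weights = zip(*GRAMMAR[symbol])
    rule = random.choices(rules, weights=weights)[0]
    return "".join(expand(s) for s in rule)

def classify(text):
    """Attribute-blind dummy standing in for the NLP service under
    test (e.g., sentiment analysis); swap in the real model here."""
    return "negative" if "unreliable" in text else "positive"

def fairness_test(trials=100):
    """Generate input pairs that differ only in a protected attribute
    and report pairs on which the service's predictions disagree."""
    violations = []
    for _ in range(trials):
        template = expand("<sentence>")  # still contains "<attribute>"
        for a, b in ATTRIBUTE_PAIRS:
            pair = (template.replace("<attribute>", a),
                    template.replace("<attribute>", b))
            if classify(pair[0]) != classify(pair[1]):
                violations.append(pair)
    return violations

if __name__ == "__main__":
    print(f"{len(fairness_test())} fairness violations found")
```

This sketch covers only the generation-and-detection step. Per the abstract, ASTRAEA's fault diagnosis additionally exploits the probabilistic grammar, isolating which input features (grammar elements) are most associated with the observed violations; the weighting scheme above is merely a placeholder for that mechanism.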


Related research

- Software Fairness: An Analysis and Survey (05/18/2022): In the last decade, researchers have studied fairness as a software prop...
- Distribution-aware Fairness Test Generation (05/08/2023): This work addresses how to validate group fairness in image recognition ...
- Fairness Testing: A Comprehensive Survey and Analysis of Trends (07/20/2022): Software systems are vulnerable to fairness bugs and frequently exhibit ...
- BiasRV: Uncovering Biased Sentiment Predictions at Runtime (05/31/2021): Sentiment analysis (SA) systems, though widely applied in many domains, ...
- Towards Fair Machine Learning Software: Understanding and Addressing Model Bias Through Counterfactual Thinking (02/16/2023): The increasing use of Machine Learning (ML) software can lead to unfair ...
- Amazon SageMaker Clarify: Machine Learning Bias Detection and Explainability in the Cloud (09/07/2021): Understanding the predictions made by machine learning (ML) models and t...
- Thinking Beyond Distributions in Testing Machine Learned Models (12/06/2021): Testing practices within the machine learning (ML) community have center...
