Automated Test Generation to Detect Individual Discrimination in AI Models

09/10/2018
by Aniya Agarwal et al.

Dependability of AI models is of utmost importance for the full acceptance of AI systems. One key aspect of a dependable AI system is that all of its decisions are fair and not biased against any individual. In this paper, we address the problem of detecting whether a model exhibits individual discrimination. Such discrimination exists when two individuals who differ only in the values of their protected attributes (such as gender or race), while the values of their non-protected attributes are exactly the same, receive different decisions. Measuring individual discrimination requires exhaustive testing, which is infeasible for a non-trivial system. In this paper, we present an automated technique to generate test inputs geared towards finding individual discrimination. Our technique combines the well-known technique of symbolic execution with local explainability to generate effective test cases. Our experimental results demonstrate that our technique produces 3.72 times more successful test cases than the existing state-of-the-art across all our chosen benchmarks.
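
To make the notion concrete, below is a minimal sketch of how individual discrimination can be checked for a scikit-learn-style classifier: each candidate input is duplicated, only the protected attribute is overwritten, and a change in the predicted decision is a successful (discriminatory) test case. This is not the paper's symbolic-execution-based generator; the function name, the protected_idx/protected_values parameters, and the use of pre-sampled candidate inputs are illustrative assumptions.

```python
# Minimal sketch of an individual-discrimination check (illustrative only;
# not the paper's symbolic-execution + local-explainability generator).
# Assumes a scikit-learn-style model exposing .predict().
import numpy as np

def find_discriminatory_inputs(model, X, protected_idx, protected_values, max_reports=10):
    """Report inputs whose decision flips when only the protected attribute changes.

    model            -- any object with a .predict(X) -> labels method
    X                -- candidate test inputs, shape (n_samples, n_features)
    protected_idx    -- column index of the protected attribute (e.g. gender)
    protected_values -- values the protected attribute may take
    """
    found = []
    base_pred = model.predict(X)
    for value in protected_values:
        # Copy every candidate and overwrite only the protected attribute.
        X_flipped = X.copy()
        X_flipped[:, protected_idx] = value
        flipped_pred = model.predict(X_flipped)
        # A changed prediction is, by definition, individual discrimination:
        # the paired inputs agree on every non-protected attribute.
        changed = np.where(base_pred != flipped_pred)[0]
        for i in changed[: max_reports - len(found)]:
            found.append((X[i], X_flipped[i]))
        if len(found) >= max_reports:
            break
    return found
```

For a trained classifier clf and candidate inputs X_cand, calling find_discriminatory_inputs(clf, X_cand, protected_idx=3, protected_values=[0, 1]) returns input pairs that witness individual discrimination. In this sketch the candidates are assumed to be given (e.g. randomly sampled); the paper's contribution is to generate such candidates far more effectively by combining symbolic execution over the model's decision paths with local explanations.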


Related research

09/17/2022 · Enhanced Fairness Testing via Generating Effective Initial Individual Discriminatory Instances
Fairness testing aims at mitigating unintended discrimination in the dec...

05/19/2020 · Combining Dynamic Symbolic Execution, Machine Learning and Search-Based Testing to Automatically Generate Test Cases for Classes
This article discusses a new technique to automatically generate test ca...

08/07/2019 · Paired-Consistency: An Example-Based Model-Agnostic Approach to Fairness Regularization in Machine Learning
As AI systems develop in complexity it is becoming increasingly hard to ...

11/20/2018 · State of the Art in Fair ML: From Moral Philosophy and Legislation to Fair Classifiers
Machine learning is becoming an ever present part in our lives as many d...

05/12/2020 · Why Fairness Cannot Be Automated: Bridging the Gap Between EU Non-Discrimination Law and AI
This article identifies a critical incompatibility between European noti...

07/06/2022 · A multi-task network approach for calculating discrimination-free insurance prices
In applications of predictive modeling, such as insurance pricing, indir...

07/15/2019 · Audits as Evidence: Experiments, Ensembles, and Enforcement
We develop tools for utilizing correspondence experiments to detect ille...
