
Revealing Unfair Models by Mining Interpretable Evidence

07/12/2022
by Mohit Bajaj, et al.
HUAWEI Technologies Co., Ltd.
Simon Fraser University
Tianjin University
The University of British Columbia
McMaster University

The popularity of machine learning has increased the risk of unfair models being deployed in high-stakes applications, such as the justice system, drug/vaccine design, and medical diagnosis. Although there are effective methods to train fair models from scratch, automatically revealing and explaining the unfairness of a trained model remains a challenging task. Revealing the unfairness of machine learning models in an interpretable fashion is a critical step towards fair and trustworthy AI. In this paper, we systematically tackle the novel task of revealing unfair models by mining interpretable evidence (RUMIE). The key idea is to find solid evidence in the form of a group of data instances that are discriminated most by the model. To make the evidence interpretable, we also find a set of human-understandable key attributes and decision rules that characterize the discriminated data instances and distinguish them from the other, non-discriminated data. As demonstrated by extensive experiments on many real-world data sets, our method finds highly interpretable and solid evidence that effectively reveals the unfairness of trained models. Moreover, it is much more scalable than all of the baseline methods.
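To make the abstract's key idea concrete, the sketch below illustrates one plausible way to mine such evidence. Note that this is not the RUMIE algorithm itself: the discrimination score (flipping a binary protected attribute and measuring the change in predicted probability), the top-k selection, and the shallow decision tree used to extract rules are all assumptions chosen for illustration.

```python
# Illustrative sketch: (1) score each instance by how much the model's output
# changes when a binary protected attribute is flipped, (2) take the top-k
# most-discriminated instances as the evidence group, (3) fit a shallow tree
# on the remaining attributes to obtain human-readable rules that separate
# the evidence group from the rest of the data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text


def discrimination_scores(model, X, protected_col):
    """Per-instance discrimination score: |p(y=1|x) - p(y=1|x with protected flipped)|."""
    X_flipped = X.copy()
    X_flipped[:, protected_col] = 1 - X_flipped[:, protected_col]
    return np.abs(model.predict_proba(X)[:, 1]
                  - model.predict_proba(X_flipped)[:, 1])


def mine_evidence(model, X, protected_col, top_k=100, max_depth=3):
    """Return indices of the most-discriminated instances and readable rules
    (over the non-protected attributes) that characterize them."""
    scores = discrimination_scores(model, X, protected_col)
    evidence_idx = np.argsort(scores)[-top_k:]      # most-discriminated group
    labels = np.zeros(len(X), dtype=int)
    labels[evidence_idx] = 1                        # 1 = discriminated evidence

    feature_cols = [c for c in range(X.shape[1]) if c != protected_col]
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X[:, feature_cols], labels)
    rules = export_text(tree, feature_names=[f"x{c}" for c in feature_cols])
    return evidence_idx, rules
```

The shallow tree is used here only because its paths read directly as decision rules; any rule learner that distinguishes the discriminated group from the remaining data would serve the same illustrative purpose.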

