
Revealing Unfair Models by Mining Interpretable Evidence

by Mohit Bajaj, et al.
HUAWEI Technologies Co., Ltd.
Simon Fraser University
Tianjin University
The University of British Columbia
McMaster University

The popularity of machine learning has increased the risk of unfair models being deployed in high-stakes applications such as justice systems, drug and vaccine design, and medical diagnosis. Although there are effective methods for training fair models from scratch, automatically revealing and explaining the unfairness of an already-trained model remains a challenging task. Revealing the unfairness of machine learning models in an interpretable fashion is a critical step toward fair and trustworthy AI. In this paper, we systematically tackle the novel task of revealing unfair models by mining interpretable evidence (RUMIE). The key idea is to find solid evidence in the form of a group of data instances that are discriminated against most by the model. To make the evidence interpretable, we also find a set of human-understandable key attributes and decision rules that characterize the discriminated instances and distinguish them from the non-discriminated data. As demonstrated by extensive experiments on many real-world data sets, our method finds highly interpretable and solid evidence that effectively reveals the unfairness of trained models. Moreover, it is much more scalable than all of the baseline methods.
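The abstract does not spell out RUMIE's actual mining procedure, but the general recipe it describes (score each instance by how much the model discriminates against it, take the most-discriminated group as evidence, then summarize that group with a simple human-readable rule) can be illustrated with a toy sketch. Everything below is an assumption for illustration only: the black-box model, the counterfactual-gap score, and the one-feature rule format are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: one feature x and a binary protected attribute a.
n = 200
x = rng.normal(size=n)
a = rng.integers(0, 2, size=n)

def predict_proba(x, a):
    # Hypothetical trained model standing in for any black box;
    # it (unfairly) penalizes instances with a == 1 when x > 0.
    return 1.0 / (1.0 + np.exp(-(1.5 * x - 2.0 * a * (x > 0))))

# Discrimination score: the counterfactual gap obtained by flipping
# the protected attribute. Negative gap = the instance is worse off
# with its actual protected-attribute value.
gap = predict_proba(x, a) - predict_proba(x, 1 - a)

# Evidence: the k instances the model discriminates against most.
k = 20
top = np.argsort(gap)[:k]

# Interpretable summary: a simple conjunctive rule that covers the
# discriminated group (here read off directly from the toy group).
rule = f"a == 1 and x >= {x[top].min():.2f}"
print(rule)
```

In this toy setup the mined group consists entirely of `a == 1` instances with `x > 0`, so the extracted rule both characterizes the discriminated instances and distinguishes them from the rest, which is the role decision rules play in the evidence described above.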



