Please Stop Explaining Black Box Models for High Stakes Decisions

11/26/2018
by Cynthia Rudin

There are black box models now being used for high stakes decision-making throughout society. The practice of trying to explain black box models, rather than creating models that are interpretable in the first place, is likely to perpetuate bad practices and can potentially cause catastrophic harm to society. There is a way forward -- it is to design models that are inherently interpretable.
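
To make the distinction concrete, here is a minimal sketch (not from the paper; it assumes scikit-learn and its bundled breast cancer dataset purely for illustration). It fits a random forest, which would need a post-hoc explainer to justify its predictions, alongside a sparse L1-regularized logistic regression whose few nonzero coefficients can be read and audited directly, i.e. a model that is interpretable in the first place.

```python
# Minimal sketch (not from the paper): a black box that would require post-hoc
# explanation vs. an inherently interpretable sparse linear model.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black box: its decision logic is not directly readable.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Inherently interpretable alternative: L1 regularization keeps few features,
# and the surviving coefficients can be inspected and audited directly.
interpretable = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", C=0.1, solver="liblinear"),
)
interpretable.fit(X_train, y_train)

print("black box accuracy:     ", black_box.score(X_test, y_test))
print("interpretable accuracy: ", interpretable.score(X_test, y_test))
nonzero = (interpretable[-1].coef_[0] != 0).sum()
print(f"interpretable model uses {nonzero} of {X.shape[1]} features")
```

In line with the paper's argument, the point is not that a sparse model always matches black-box accuracy, but that on many high-stakes tabular problems an interpretable model performs comparably while remaining fully auditable.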

Related research

Global Aggregations of Local Explanations for Black Box models (07/05/2019)
The decision-making process of many state-of-the-art machine learning mo...

TimeSHAP: Explaining Recurrent Models through Sequence Perturbations (11/30/2020)
Recurrent neural networks are a standard building block in numerous mach...

Locally Interpretable Predictions of Parkinson's Disease Progression (03/20/2020)
In precision medicine, machine learning techniques have been commonly pr...

Patch Shortcuts: Interpretable Proxy Models Efficiently Find Black-Box Vulnerabilities (04/22/2021)
An important pillar for safe machine learning (ML) is the systematic mit...

Language in a Bottle: Language Model Guided Concept Bottlenecks for Interpretable Image Classification (11/21/2022)
Concept Bottleneck Models (CBM) are inherently interpretable models that...

Outcome-Explorer: A Causality Guided Interactive Visual Interface for Interpretable Algorithmic Decision Making (01/03/2021)
The widespread adoption of algorithmic decision-making systems has broug...

Promises and Pitfalls of Black-Box Concept Learning Models (06/24/2021)
Machine learning models that incorporate concept learning as an intermed...