
ML Interpretability: Simple Isn't Easy

11/24/2022
by Tim Räz, et al.

The interpretability of ML models is important, but it is not clear what it amounts to. So far, most philosophers have discussed the lack of interpretability of black-box models such as neural networks, and methods such as explainable AI that aim to make these models more transparent. The goal of this paper is to clarify the nature of interpretability by focusing on the other end of the 'interpretability spectrum': it examines why some models, such as linear models and decision trees, are highly interpretable, and how more general models, such as MARS and GAMs, retain some degree of interpretability. I find that while there is heterogeneity in how we gain interpretability, what interpretability is in particular cases can be explicated in a clear manner.
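As a rough illustration (not taken from the paper), the Python sketch below shows what "highly interpretable" means for the two simple model classes the abstract names: a fitted linear model is fully summarized by its coefficients, and a fitted decision tree is a finite set of if-then rules that can be printed and read directly. The dataset and all variable names are illustrative, and scikit-learn is assumed.

# Minimal sketch, assuming scikit-learn; toy synthetic data, illustrative names.
from sklearn.datasets import make_regression
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = make_regression(n_samples=200, n_features=3, noise=0.1, random_state=0)

# Linear model: each coefficient gives the change in the prediction per
# unit change in one feature, holding the others fixed.
lin = LinearRegression().fit(X, y)
print("coefficients:", lin.coef_, "intercept:", lin.intercept_)

# Decision tree: the fitted model itself is a short list of readable rules.
tree = DecisionTreeRegressor(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["x0", "x1", "x2"]))

A GAM retains part of this transparency by generalizing the linear form additively, g(E[y|x]) = b0 + f1(x1) + ... + fp(xp), so the effect of each feature can still be inspected one function at a time; MARS builds such per-feature terms from piecewise-linear basis functions.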

Related research

08/17/2022

Quality Diversity Evolutionary Learning of Decision Trees

Addressing the need for explainable Machine Learning has emerged as one ...
11/27/2018

What is Interpretable? Using Machine Learning to Design Interpretable Decision-Support Systems

Recent efforts in Machine Learning (ML) interpretability have focused on...
10/05/2021

Foundations of Symbolic Languages for Model Interpretability

Several queries and scores have recently been proposed to explain indivi...
06/02/2020

Local Interpretability of Calibrated Prediction Models: A Case of Type 2 Diabetes Mellitus Screening Test

Machine Learning (ML) models are often complex and difficult to interpre...
08/31/2021

The five Is: Key principles for interpretable and safe conversational AI

In this position paper, we present five key principles, namely interpret...
08/05/2021

Mixture of Linear Models Co-supervised by Deep Neural Networks

Deep neural network (DNN) models have achieved phenomenal success for ap...
04/30/2020

Hide-and-Seek: A Template for Explainable AI

Lack of transparency has been the Achilles heel of Neural Networks and t...