A Survey on Neural Network Interpretability

12/28/2020
by Yu Zhang, et al.

Along with the great success of deep neural networks, there is growing concern about their black-box nature. The interpretability issue affects people's trust in deep learning systems. It is also related to many ethical problems, e.g., algorithmic discrimination. Moreover, interpretability is a desired property for deep networks to become powerful tools in other research fields, e.g., drug discovery and genomics. In this survey, we conduct a comprehensive review of neural network interpretability research. We first clarify the definition of interpretability, as it has been used in many different contexts. Then we elaborate on the importance of interpretability and propose a novel taxonomy organized along three dimensions: the type of engagement (passive vs. active interpretation approaches), the type of explanation, and the focus (from local to global interpretability). This taxonomy provides a meaningful 3D view of the distribution of papers in the relevant literature, as two of the dimensions are not simply categorical but allow ordinal subcategories. Finally, we summarize the existing interpretability evaluation methods and suggest possible research directions inspired by our new taxonomy.
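
As a rough illustration of two of these dimensions, the sketch below shows a passive (post-hoc), local explanation: plain gradient saliency for a single input, computed with PyTorch. The small classifier and the gradient_saliency helper are hypothetical stand-ins for illustration, not code from the survey.

    import torch
    import torch.nn as nn

    # Hypothetical small classifier; any differentiable model would do.
    model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 64), nn.ReLU(), nn.Linear(64, 10))
    model.eval()

    def gradient_saliency(model, x, target_class):
        # |d score_target / d x|: per-pixel importance for a single input (a local explanation)
        x = x.clone().requires_grad_(True)       # track gradients w.r.t. the input, not the weights
        score = model(x)[0, target_class]        # logit of the class being explained
        score.backward()                         # backpropagate that single logit to the input
        return x.grad.detach().abs().squeeze(0)  # saliency map with the input's spatial shape

    x = torch.rand(1, 1, 28, 28)                 # one dummy 28x28 "image"
    pred = model(x).argmax(dim=1).item()         # explain the model's own top prediction
    saliency = gradient_saliency(model, x, pred)
    print(saliency.shape)                        # torch.Size([1, 28, 28])

Saliency maps of this kind are among the simplest passive, local attribution explanations covered by such a taxonomy; the network itself is left untouched. Active approaches, by contrast, would change the model or its training to make it more interpretable.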

Related research

03/19/2021
Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond
Deep neural networks have been well-known for their superb performance i...

02/02/2018
Visual Interpretability for Deep Learning: a Survey
This paper reviews recent studies in emerging directions of understandin...

09/07/2022
A Survey of Neural Trees
Neural networks (NNs) and decision trees (DTs) are both popular models o...

01/08/2020
On Interpretability of Artificial Neural Networks
Deep learning has achieved great successes in many important areas to de...

04/07/2020
Towards Faithfully Interpretable NLP Systems: How should we define and evaluate faithfulness?
With the growing popularity of deep-learning based NLP models, comes a n...

01/21/2020
Implementations in Machine Ethics: A Survey
Increasingly complex and autonomous systems require machine ethics to ma...

09/25/2019
Switched linear projections and inactive state sensitivity for deep neural network interpretability
We introduce switched linear projections for expressing the activity of ...
