On the Effectiveness of Interpretable Feedforward Neural Network

11/03/2021
by Miles Q. Li, et al.

Deep learning models have achieved state-of-the-art performance in many classification tasks, but most of them cannot explain their classification results. Interpretable machine learning models are usually linear or piecewise linear and yield inferior performance, while non-linear models achieve much better classification performance but are hard to interpret. This trade-off may have been broken by a recently proposed interpretable feedforward neural network (IFFNN) that achieves both high classification performance and interpretability for malware detection. If the IFFNN can perform well in a more flexible and general form on other classification tasks while providing meaningful interpretations, it may be of great interest to the applied machine learning community. In this paper, we propose a way to generalize the IFFNN to multi-class classification scenarios and to any type of feedforward neural network, and we evaluate its classification performance and interpretability on intrinsically interpretable datasets. We find that the generalized IFFNNs achieve classification performance comparable to that of their normal feedforward counterparts while providing meaningful interpretations. This kind of neural network architecture therefore has great practical value.
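The abstract does not spell out the interpretation mechanism, but the general IFFNN idea is that a feedforward network computes input-dependent linear coefficients, so each class logit decomposes into additive per-feature contributions. Below is a minimal PyTorch sketch under that assumption; the class GeneralizedIFFNN, its MLP backbone, and its layer sizes are illustrative choices of ours, not the authors' implementation.

import torch
import torch.nn as nn

class GeneralizedIFFNN(nn.Module):
    """Sketch of a generalized IFFNN for multi-class classification.

    A feedforward network maps the input x to an input-dependent
    coefficient matrix W(x); the logits are the linear form
    W(x) @ x + b, so each class logit decomposes into per-feature
    contributions W(x)[c, i] * x[i], which serve as the interpretation.
    """

    def __init__(self, input_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        self.input_dim = input_dim
        self.num_classes = num_classes
        # Any feedforward backbone could be used here; a two-layer MLP
        # is used purely for illustration.
        self.coeff_net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes * input_dim),
        )
        self.bias = nn.Parameter(torch.zeros(num_classes))

    def forward(self, x: torch.Tensor):
        # W(x): one coefficient per (class, feature) pair, computed from x.
        coeffs = self.coeff_net(x).view(-1, self.num_classes, self.input_dim)
        # Per-feature contributions to each class logit.
        contributions = coeffs * x.unsqueeze(1)
        logits = contributions.sum(dim=-1) + self.bias
        return logits, contributions

model = GeneralizedIFFNN(input_dim=20, hidden_dim=64, num_classes=3)
x = torch.randn(8, 20)
logits, contributions = model(x)
# contributions[n, c, i] is feature i's additive share of sample n's
# logit for class c, so each prediction comes with its own explanation.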
