
Enhancing Explainability of Neural Networks through Architecture Constraints

01/12/2019
by Zebin Yang, et al.

Prediction accuracy and model explainability are the two most important objectives when developing machine learning algorithms to solve real-world problems. Neural networks are known to possess good prediction performance but to lack sufficient model explainability. In this paper, we propose to enhance the explainability of neural networks through the following architecture constraints: a) sparse additive subnetworks; b) orthogonal projection pursuit; and c) smooth function approximation. Together, these lead to a sparse, orthogonal and smooth explainable neural network (SOSxNN). The multiple parameters in the SOSxNN model are estimated simultaneously by a modified mini-batch gradient descent algorithm, which uses backpropagation to calculate the derivatives and the Cayley transform to preserve the orthogonality of the projection. The hyperparameters controlling the sparsity and smoothness constraints are optimized by grid search. Through simulation studies, we compare the SOSxNN method with several benchmark methods, including the least absolute shrinkage and selection operator (LASSO), support vector machine, random forest, and multi-layer perceptron. It is shown that the proposed model retains the flexibility to pursue prediction accuracy while attaining improved interpretability, and can therefore serve as a promising surrogate model for approximating complex models. Finally, a real-data example from the Lending Club is employed to showcase the SOSxNN application.
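To make the described architecture concrete, below is a minimal, hypothetical sketch (in PyTorch, not the authors' implementation) of two ingredients highlighted in the abstract: an additive model of small subnetworks acting on projections x ↦ w_j^T x, and a Cayley-transform retraction that keeps the projection matrix column-orthonormal during mini-batch gradient descent. All names, network sizes, learning rates, and the synthetic data are illustrative assumptions; the sparsity and smoothness penalties are omitted for brevity.

```python
import torch
import torch.nn as nn


class AdditiveProjectionNet(nn.Module):
    """Additive model of ridge-function subnetworks on orthogonal projections."""

    def __init__(self, d_in, n_subnets, hidden=16):
        super().__init__()
        # Projection matrix with orthonormal columns (one direction per subnetwork).
        self.W = nn.Parameter(torch.linalg.qr(torch.randn(d_in, n_subnets))[0])
        # One small univariate subnetwork per projection.
        self.subnets = nn.ModuleList(
            [nn.Sequential(nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1))
             for _ in range(n_subnets)]
        )
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        z = x @ self.W                                    # projections, shape (n, k)
        parts = [net(z[:, [j]]) for j, net in enumerate(self.subnets)]
        return torch.cat(parts, dim=1).sum(dim=1, keepdim=True) + self.bias


def cayley_step(W, grad, lr):
    """Move W along the negative gradient while preserving W^T W = I."""
    A = grad @ W.T - W @ grad.T                           # skew-symmetric matrix
    I = torch.eye(W.shape[0], device=W.device)
    return torch.linalg.solve(I + 0.5 * lr * A, (I - 0.5 * lr * A) @ W)


# Illustrative training loop on synthetic data.
torch.manual_seed(0)
X = torch.randn(256, 5)
y = torch.sin(X[:, [0]]) + 0.5 * X[:, [1]] ** 2
model = AdditiveProjectionNet(d_in=5, n_subnets=3)
opt = torch.optim.Adam(
    [p for name, p in model.named_parameters() if name != "W"], lr=1e-2
)
lr_W = 1e-2

for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()
    opt.step()                                # ordinary update for the subnetworks
    with torch.no_grad():                     # orthogonality-preserving update for W
        model.W.copy_(cayley_step(model.W, model.W.grad, lr_W))
        model.W.grad = None
```

Because the Cayley transform of a skew-symmetric matrix is orthogonal, the updated W keeps orthonormal columns at every step. In the paper's formulation, sparsity and smoothness constraints on the subnetworks would also enter the objective; they are left out of this sketch.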

