Neural Generators of Sparse Local Linear Models for Achieving both Accuracy and Interpretability

03/13/2020
by Yuya Yoshikawa, et al.

For reliability, it is important that predictions made by machine learning methods be interpretable by humans. Deep neural networks (DNNs) generally provide accurate predictions, but it is difficult to interpret why a DNN produced a given prediction. Linear models, on the other hand, are easy to interpret, but their predictive performance is often low because real-world data is intrinsically non-linear. To combine the high predictive performance of DNNs with the high interpretability of linear models in a single model, we propose neural generators of sparse local linear models (NGSLLs). Sparse local linear models are highly flexible, as they can approximate non-linear functions. The NGSLL generates sparse linear weights for each sample using DNNs that take the original representation of the sample (e.g., a word sequence) and its simplified representation (e.g., bag-of-words) as input. By extracting features from the original representation, the weights can encode rich information and thereby achieve high predictive performance. At the same time, the prediction remains interpretable because it is obtained as the inner product of the simplified representation and the sparse weights, where only a small number of weights are selected by the gate module in the NGSLL. In experiments on real-world image and text classification tasks, we demonstrate the effectiveness of the NGSLL quantitatively and qualitatively by evaluating predictive performance and visualizing the generated weights.
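The prediction path described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the two-layer generator network, the hard top-k gate, and all dimensions are assumptions made for the sketch (the paper's gate module is trained end-to-end within the model).

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_generator(x_original, W1, W2):
    """Toy DNN: maps an original representation to one dense
    linear weight per simplified feature (hypothetical architecture)."""
    h = np.maximum(0.0, x_original @ W1)  # ReLU hidden layer
    return h @ W2

def topk_gate(weights, k):
    """Keep only the k largest-magnitude weights, zeroing the rest.
    Stands in for the paper's (learned) gate module."""
    gated = np.zeros_like(weights)
    idx = np.argsort(np.abs(weights))[-k:]
    gated[idx] = weights[idx]
    return gated

# Hypothetical sizes: original features d, simplified features m.
d, m, hidden, k = 32, 10, 16, 3
W1 = rng.normal(scale=0.1, size=(d, hidden))
W2 = rng.normal(scale=0.1, size=(hidden, m))

x_original = rng.normal(size=d)   # e.g. an embedded word sequence
x_simplified = rng.random(m)      # e.g. bag-of-words counts

w_dense = weight_generator(x_original, W1, W2)
w_sparse = topk_gate(w_dense, k)

# Interpretable prediction: inner product of the simplified
# representation and the sample-specific sparse weights.
y_hat = float(x_simplified @ w_sparse)
print(np.count_nonzero(w_sparse), y_hat)
```

Because only k weights are non-zero, a user can read off which simplified features (words, superpixels, etc.) drove the prediction for this particular sample, while the generator network still exploits the rich original representation.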


