It's FLAN time! Summing feature-wise latent representations for interpretability

06/18/2021
by   An-phi Nguyen, et al.

Interpretability has become a necessary feature for machine learning models deployed in critical scenarios, e.g. legal systems and healthcare. In these settings, algorithmic decisions may have (potentially negative) long-lasting effects on the end-users they affect. In many cases, the representational power of deep learning models is not needed, and simple, interpretable models (e.g. linear models) should be preferred. However, in high-dimensional and/or complex domains (e.g. computer vision), the universal approximation capabilities of neural networks are required. Inspired by linear models and the Kolmogorov-Arnold representation theorem, we propose a novel class of structurally-constrained neural networks, which we call FLANs (Feature-wise Latent Additive Networks). Crucially, FLANs process each input feature separately, computing for each of them a representation in a common latent space. These feature-wise latent representations are then simply summed, and the aggregated representation is used for prediction. These constraints (which are at the core of the interpretability of linear models) allow a user to estimate the effect of each individual feature independently of the others, enhancing interpretability. In a set of experiments across different domains, we show that, without excessively compromising test performance, the structural constraints proposed in FLANs indeed increase the interpretability of deep learning models.
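The abstract describes the core structural constraint: each feature is mapped independently into a shared latent space, the per-feature latents are summed, and the aggregate is fed to a prediction head. The sketch below illustrates this idea in PyTorch; it is only a minimal interpretation of the abstract, and the per-feature encoders (small MLPs), the layer sizes, and the linear head are assumptions rather than the architecture used in the paper.

```python
# Minimal sketch of a Feature-wise Latent Additive Network (FLAN), as described
# in the abstract. Encoder widths and the linear prediction head are assumptions.
import torch
import torch.nn as nn


class FLAN(nn.Module):
    def __init__(self, num_features: int, latent_dim: int = 16, num_outputs: int = 1):
        super().__init__()
        # One independent encoder per input feature, each mapping a scalar
        # to the shared latent space of dimension `latent_dim`.
        self.encoders = nn.ModuleList([
            nn.Sequential(
                nn.Linear(1, 32),
                nn.ReLU(),
                nn.Linear(32, latent_dim),
            )
            for _ in range(num_features)
        ])
        # Prediction head applied to the summed (aggregated) latent representation.
        self.head = nn.Linear(latent_dim, num_outputs)

    def feature_contributions(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_features) -> (batch, num_features, latent_dim).
        # Each feature's latent representation can be inspected on its own,
        # which is what makes the additive structure interpretable.
        return torch.stack(
            [enc(x[:, i : i + 1]) for i, enc in enumerate(self.encoders)], dim=1
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.feature_contributions(x).sum(dim=1)  # additive aggregation
        return self.head(z)


# Usage: a forward pass on random tabular data with 8 features.
model = FLAN(num_features=8, latent_dim=16, num_outputs=1)
y = model(torch.randn(4, 8))  # shape (4, 1)
```

Because the latents combine only by summation, the effect of perturbing a single feature on the aggregated representation (and hence the prediction) can be estimated without touching the other features, mirroring how coefficients are read off a linear model.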


