Neural Basis Models for Interpretability

05/27/2022
by Filip Radenovic, et al.

Due to the widespread use of complex machine learning models in real-world applications, it is becoming critical to explain model predictions. However, these models are typically black-box deep neural networks, explained post-hoc via methods with known faithfulness limitations. Generalized Additive Models (GAMs) are an inherently interpretable class of models that address this limitation by learning a non-linear shape function for each feature separately, followed by a linear model on top. However, these models are typically hard to train, require numerous parameters, and are difficult to scale. We propose an entirely new subfamily of GAMs that utilizes basis decomposition of shape functions. A small number of basis functions are shared among all features and are learned jointly for a given task, thus making our model scale much better to large-scale data with high-dimensional features, especially when features are sparse. We propose an architecture denoted as the Neural Basis Model (NBM), which uses a single neural network to learn these bases. On a variety of tabular and image datasets, we demonstrate that for interpretable machine learning, NBMs are the state of the art in accuracy, model size, and throughput, and can easily model all higher-order feature interactions.
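The basis decomposition described above can be sketched in a few lines: a single small network maps each scalar feature to K shared basis values, and each feature combines those bases with its own linear coefficients. The sketch below is a minimal, untrained NumPy illustration of that structure; the layer sizes, parameter names, and the choice of a one-hidden-layer ReLU basis network are illustrative assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralBasisModel:
    """Sketch of a Neural Basis Model (NBM): one shared MLP maps each
    scalar feature x_i to K basis values b_1(x_i), ..., b_K(x_i);
    each feature i has its own linear coefficients a_{ik} over the
    shared bases. All sizes here are illustrative."""

    def __init__(self, n_features, n_bases=8, hidden=16):
        # Shared basis network parameters (1 -> hidden -> n_bases).
        self.W1 = rng.normal(size=(1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(size=(hidden, n_bases))
        self.b2 = np.zeros(n_bases)
        # Per-feature coefficients a_{ik} and a global bias.
        self.coef = rng.normal(size=(n_features, n_bases))
        self.bias = 0.0

    def bases(self, x):
        # x: (batch, n_features) -> basis values (batch, n_features, n_bases).
        # The same network is applied elementwise to every scalar feature.
        h = np.maximum(x[..., None] @ self.W1 + self.b1, 0.0)  # ReLU
        return h @ self.W2 + self.b2

    def predict(self, x):
        # Shape function for feature i: f_i(x_i) = sum_k a_{ik} * b_k(x_i);
        # the GAM output is the sum over features plus a bias.
        f = np.einsum("bik,ik->bi", self.bases(x), self.coef)
        return f.sum(axis=1) + self.bias

nbm = NeuralBasisModel(n_features=5)
x = rng.normal(size=(4, 5))
y = nbm.predict(x)
print(y.shape)  # (4,)
```

Because only the per-feature coefficient matrix grows with the number of features, the parameter count scales as O(D·K) rather than O(D) separate shape networks, which is the scaling advantage the abstract describes.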


