Interpretability with full complexity by constraining feature information

11/30/2022
by Kieran A. Murphy, et al.

Interpretability is a pressing issue for machine learning. Common approaches to interpretable machine learning constrain interactions between features of the input, rendering the effects of those features on a model's output comprehensible but at the expense of model complexity. We approach interpretability from a new angle: constrain the information about the features without restricting the complexity of the model. Borrowing from information theory, we use the Distributed Information Bottleneck to find optimal compressions of each feature that maximally preserve information about the output. The learned information allocation, by feature and by feature value, provides rich opportunities for interpretation, particularly in problems with many features and complex feature interactions. The central object of analysis is not a single trained model, but rather a spectrum of models serving as approximations that leverage variable amounts of information about the inputs. Information is allocated to features by their relevance to the output, thereby solving the problem of feature selection by constructing a learned continuum of feature inclusion-to-exclusion. The optimal compression of each feature – at every stage of approximation – allows fine-grained inspection of the distinctions among feature values that are most impactful for prediction. We develop a framework for extracting insight from the spectrum of approximate models and demonstrate its utility on a range of tabular datasets.
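The abstract's central idea, compressing each feature separately while preserving predictive information, can be made concrete with a variational form of the Distributed Information Bottleneck. The sketch below is a minimal PyTorch implementation of that idea as suggested by the abstract, not the authors' code: one stochastic encoder per feature, a KL penalty that upper-bounds the per-feature information I(U_k; X_k), and a decoder that predicts the output from the compressed variables. All class names, layer sizes, and the beta value are illustrative assumptions.

# Minimal sketch of a variational Distributed Information Bottleneck.
# Assumes PyTorch; architecture choices are illustrative, not the paper's settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistributedIB(nn.Module):
    def __init__(self, n_features, latent_dim=2, n_classes=2):
        super().__init__()
        # One stochastic encoder per input feature: x_k -> (mu_k, logvar_k).
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 2 * latent_dim))
            for _ in range(n_features)
        ])
        # The decoder sees only the compressed per-feature variables U_1..U_K.
        self.decoder = nn.Sequential(
            nn.Linear(n_features * latent_dim, 64), nn.ReLU(), nn.Linear(64, n_classes)
        )

    def forward(self, x):
        # x: (batch, n_features)
        zs, kls = [], []
        for k, enc in enumerate(self.encoders):
            mu, logvar = enc(x[:, k:k + 1]).chunk(2, dim=-1)
            z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
            # KL(q(u_k | x_k) || N(0, I)) upper-bounds the information I(U_k; X_k).
            kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1)
            zs.append(z)
            kls.append(kl)
        logits = self.decoder(torch.cat(zs, dim=-1))
        return logits, torch.stack(kls, dim=-1)  # per-feature information costs

def dib_loss(logits, y, kls, beta):
    # Cross-entropy (a variational surrogate for maximizing information about Y)
    # plus beta times the total per-feature compression cost.
    return F.cross_entropy(logits, y) + beta * kls.sum(dim=-1).mean()

Sweeping beta from large to small values would trace out the spectrum of approximate models the abstract describes, and the per-feature KL terms give the learned allocation of information across features.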


Related research

07/17/2023
Cross Feature Selection to Eliminate Spurious Interactions and Single Feature Dominance in Explainable Boosting Machines
Interpretability is a crucial aspect of machine learning models that ena...

04/15/2022
The Distributed Information Bottleneck reveals the explanatory structure of complex systems
The fruits of science are relationships made comprehensible, often by wa...

10/25/2022
Characterizing information loss in a chaotic double pendulum with the Information Bottleneck
A hallmark of chaotic dynamics is the loss of information with time. Alt...

06/08/2015
Interpretable Selection and Visualization of Features and Interactions Using Bayesian Forests
It is becoming increasingly important for machine learning methods to ma...

09/30/2019
MonoNet: Towards Interpretable Models by Learning Monotonic Features
Being able to interpret, or explain, the predictions made by a machine l...

03/07/2017
Regularising Non-linear Models Using Feature Side-information
Very often features come with their own vectorial descriptions which pro...

06/30/2018
Game-Theoretic Interpretability for Temporal Modeling
Interpretability has arisen as a key desideratum of machine learning mod...
