Interpretable Neural Networks based classifiers for categorical inputs

02/05/2021
by   Stefano Zamuner, et al.

Because of the pervasive use of Neural Networks in human-sensitive applications, their interpretability is becoming an increasingly important topic in machine learning. In this work we introduce a simple way to interpret the output function of a neural network classifier that takes categorical variables as input. By exploiting a mapping between a neural network classifier and a physical energy model, we show that in these cases each layer of the network, and the logits layer in particular, can be expanded as a sum of terms that account for the contribution of each input pattern to the classification. For instance, at first order the expansion considers only the linear relation between input features and output, while at second order pairwise dependencies between input features are also accounted for. The analysis of the contributions of each pattern, after an appropriate gauge transformation, is presented in two cases where the effectiveness of the method can be appreciated.
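The expansion described above can be pictured concretely: for a classifier over categorical inputs, the logit of a given class can be probed around a reference configuration to extract per-feature fields (first order) and pairwise couplings (second order), which are then centered by a gauge choice. The snippet below is a minimal, illustrative sketch of such a discrete expansion; the names (`expand_logit`, `toy_logit`), the reference-configuration choice, and the simplified centering step are assumptions made here for illustration, not the construction used in the paper.

```python
import itertools
import numpy as np

def expand_logit(logit_fn, n_sites, n_states, reference):
    """Sketch of a discrete first- and second-order expansion of a scalar
    logit over categorical inputs, taken around a reference configuration.
    Illustrative only; not the paper's exact mapping to an energy model."""
    ref = np.asarray(reference)
    f0 = logit_fn(ref)  # zeroth-order (reference) term

    # First order: change one site at a time and record the logit shift.
    h = np.zeros((n_sites, n_states))
    for i in range(n_sites):
        for a in range(n_states):
            x = ref.copy()
            x[i] = a
            h[i, a] = logit_fn(x) - f0

    # Second order: change two sites and subtract the first-order parts,
    # leaving the pairwise contribution.
    J = np.zeros((n_sites, n_sites, n_states, n_states))
    for i, j in itertools.combinations(range(n_sites), 2):
        for a in range(n_states):
            for b in range(n_states):
                x = ref.copy()
                x[i], x[j] = a, b
                J[i, j, a, b] = logit_fn(x) - f0 - h[i, a] - h[j, b]

    # Crude centering in the spirit of a zero-sum gauge; the full gauge
    # transformation would also shift row/column means of J into h and f0.
    h -= h.mean(axis=1, keepdims=True)
    J -= J.mean(axis=(2, 3), keepdims=True)
    return f0, h, J

# Toy usage: a hand-written logit with a known pairwise interaction.
def toy_logit(x):
    return 0.5 * x[0] - 0.3 * x[1] + 1.2 * (x[0] == x[2])

f0, h, J = expand_logit(toy_logit, n_sites=3, n_states=2,
                        reference=np.zeros(3, dtype=int))
print("pairwise term between sites 0 and 2:\n", J[0, 2])
```

In this sketch the second-order terms pick up the interaction between sites 0 and 2 that is built into the toy logit; in practice `logit_fn` would wrap the logit of one class of a trained classifier evaluated on one-hot-encoded categorical inputs.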


Related research

03/23/2023 · Take 5: Interpretable Image Classification with a Handful of Features
Deep Neural Networks use thousands of mostly incomprehensible features t...

01/24/2019 · Recovering Pairwise Interactions Using Neural Networks
Recovering pairwise interactions, i.e. pairs of input features whose joi...

12/02/2018 · On variation of gradients of deep neural networks
We provide a theoretical explanation of the role of the number of nodes ...

11/23/2022 · Interpretability of an Interaction Network for identifying H → bb̅ jets
Multivariate techniques and machine learning models have found numerous ...

06/05/2018 · Explainable Neural Networks based on Additive Index Models
Machine Learning algorithms are increasingly being used in recent years ...

09/27/2021 · Optimising for Interpretability: Convolutional Dynamic Alignment Networks
We introduce a new family of neural network models called Convolutional ...

09/14/2020 · Into the unknown: Active monitoring of neural networks
Machine-learning techniques achieve excellent performance in modern appl...
