Machines Explaining Linear Programs

06/14/2022
by David Steinmann, et al.

There has been a recent push to make machine learning models more interpretable so that their predictions can be trusted. Although successful, these efforts have focused mostly on deep learning models, while fundamental optimization methods in machine learning, such as linear programs (LPs), have been left out. Even though LPs can be considered whitebox or clearbox models, they are not easy to understand in terms of the relationships between their inputs and outputs. Since a linear program only provides the optimal solution to an optimization problem, further explanations are often helpful. In this work, we extend attribution methods, originally developed for explaining neural networks, to linear programs. These methods explain a model by assigning relevance scores to its inputs, showing the influence of each input on the output. Alongside classical gradient-based attribution methods, we also propose a way to adapt perturbation-based attribution methods to LPs. Our evaluations on several linear and integer programming problems show that attribution methods can generate useful explanations for linear programs. However, we also demonstrate that directly applying a neural attribution method can have drawbacks, as the properties these methods exhibit on neural networks do not necessarily transfer to linear programs. The methods can also struggle when a linear program has more than one optimal solution, as a solver simply returns one of them. We hope our results serve as a good starting point for further research in this direction.
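To make the perturbation-based idea concrete, the following is a minimal sketch, not the authors' implementation: each objective coefficient of an LP is nudged by a small amount, the LP is re-solved with SciPy's linprog, and the change in the optimal value serves as that input's relevance score. The function name perturbation_attribution and the toy problem are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linprog

    def perturbation_attribution(c, A_ub, b_ub, eps=1e-2):
        # Hypothetical sketch: the relevance of each objective coefficient
        # is the finite-difference sensitivity of the optimal value to a
        # small perturbation of that coefficient.
        base = linprog(c, A_ub=A_ub, b_ub=b_ub)  # minimizes c @ x, x >= 0 by default
        scores = np.zeros(len(c))
        for i in range(len(c)):
            c_pert = np.asarray(c, dtype=float).copy()
            c_pert[i] += eps
            pert = linprog(c_pert, A_ub=A_ub, b_ub=b_ub)
            if base.success and pert.success:
                scores[i] = (pert.fun - base.fun) / eps
        return scores

    # Toy problem: maximize x0 + 2*x1  s.t.  x0 + x1 <= 4,  x0 + 3*x1 <= 6
    c = [-1.0, -2.0]                 # linprog minimizes, so negate for a max problem
    A_ub = [[1.0, 1.0], [1.0, 3.0]]
    b_ub = [4.0, 6.0]
    print(perturbation_attribution(c, A_ub, b_ub))

For objective coefficients this finite difference recovers the optimal solution values (here approximately [3, 1]), consistent with classical LP sensitivity analysis, where the derivative of the optimal value with respect to c_i equals x_i* while the optimal basis stays unchanged. It also exposes the ambiguity noted in the abstract: if the LP has multiple optimal solutions, re-solving after a perturbation may jump between equivalent optima and distort the scores.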
