Sampling Based On Natural Image Statistics Improves Local Surrogate Explainers

08/08/2022
by   Ricardo Kleinlein, et al.

Many problems in computer vision have recently been tackled using models whose predictions cannot be easily interpreted, most commonly deep neural networks. Surrogate explainers are a popular post-hoc interpretability method for further understanding how a model arrives at a particular prediction. By training a simple, more interpretable model to locally approximate the decision boundary of a non-interpretable system, we can estimate the relative importance of the input features for the prediction. Focusing on images, surrogate explainers such as LIME generate a local neighbourhood around a query image by sampling in an interpretable domain. However, these interpretable domains have traditionally been derived exclusively from the intrinsic features of the query image, without taking into account the manifold of the data the non-interpretable model was exposed to during training (or, more generally, the manifold of real images). This leads to suboptimal surrogates trained on potentially low-probability images. We address this limitation by aligning the local neighbourhood on which the surrogate is trained with the original training data distribution, even when this distribution is not accessible. We propose two approaches to do so: (1) altering the method for sampling the local neighbourhood, and (2) using perceptual metrics to convey some of the properties of the distribution of natural images.
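As a concrete illustration of the kind of pipeline the abstract describes, the sketch below fits a LIME-style local surrogate in which perturbed samples are weighted by a perceptual metric, so that implausible, heavily occluded (low-probability) images contribute less to the fit. This is a minimal sketch under stated assumptions, not the paper's exact method: the superpixel segmentation (SLIC), the grey-out baseline, SSIM as the perceptual metric, the ridge surrogate, and the names `perceptual_surrogate` and `predict_fn` are all illustrative choices.

```python
# Minimal LIME-style surrogate with perceptual sample weighting.
# Assumptions (not from the paper): SLIC superpixels, SSIM as the
# perceptual metric, a mean-colour grey-out baseline, and a ridge
# regression surrogate.

import numpy as np
from skimage.segmentation import slic
from skimage.metrics import structural_similarity
from sklearn.linear_model import Ridge


def perceptual_surrogate(image, predict_fn, n_samples=200, n_segments=50, seed=0):
    """Fit a linear surrogate around `image`, weighting each perturbed
    sample by its SSIM to the query so that low-probability images
    contribute less. Returns per-superpixel importance weights.

    image:      float array in [0, 1] of shape (H, W, 3).
    predict_fn: black-box model mapping a batch (N, H, W, 3) to scores (N,).
    """
    rng = np.random.default_rng(seed)
    segments = slic(image, n_segments=n_segments, start_label=0)
    n_feats = segments.max() + 1

    # Binary interpretable representation: which superpixels are kept.
    Z = rng.integers(0, 2, size=(n_samples, n_feats))
    Z[0] = 1  # include the unperturbed query itself

    baseline = image.mean(axis=(0, 1))  # fill value for removed segments
    perturbed = np.empty((n_samples,) + image.shape, dtype=image.dtype)
    for i, z in enumerate(Z):
        keep = z[segments].astype(bool)  # pixel-level keep mask
        perturbed[i] = np.where(keep[..., None], image, baseline)

    y = predict_fn(perturbed)

    # Perceptual weights: samples that look closer to the query count more.
    w = np.array([
        structural_similarity(image, p, channel_axis=-1, data_range=1.0)
        for p in perturbed
    ])
    w = np.clip(w, 0.0, None)

    surrogate = Ridge(alpha=1.0)
    surrogate.fit(Z, y, sample_weight=w)
    return surrogate.coef_  # importance of each superpixel


if __name__ == "__main__":
    # Toy usage with a dummy "model" scoring mean red-channel intensity.
    img = np.random.default_rng(1).random((64, 64, 3))
    coefs = perceptual_surrogate(img, lambda b: b[..., 0].mean(axis=(1, 2)))
    print(coefs)
```

A plain LIME implementation would instead weight samples by an exponential kernel over cosine distance in the binary domain; swapping in a perceptual image-space metric is one way to realise the second approach sketched in the abstract.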


