On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box

08/18/2023
by Yi Cai, et al.

Attribution methods shed light on the explainability of data-driven approaches such as deep learning models by revealing the features that contribute most to a decision. A widely accepted way of deriving feature attributions is to analyze the gradients of the target function with respect to the input features. Gradient analysis requires full access to the target system, so solutions of this kind treat the target as a white-box. However, the white-box assumption may be untenable due to security and safety concerns, which limits the practical applicability of such methods. To address this limitation, this paper presents GEEX (gradient-estimation-based explanation), a method that delivers gradient-like explanations under a black-box setting. We further integrate GEEX with a path method; the resulting approach, iGEEX (integrated GEEX), satisfies the four fundamental axioms of attribution methods: sensitivity, insensitivity, implementation invariance, and linearity. Extensive experiments on image data empirically show that the proposed methods outperform state-of-the-art black-box explanation methods and achieve performance competitive with methods that have full access to the target model.
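The abstract describes estimating gradients from forward queries alone and accumulating them along a path, in the spirit of Integrated Gradients. The sketch below illustrates that general idea only; the Gaussian-smoothing zeroth-order estimator, the straight-line path, and all function names and parameters are illustrative assumptions, not the paper's actual implementation.

import numpy as np

def estimate_gradient(f, x, sigma=0.1, n_samples=64, rng=None):
    """Zeroth-order gradient estimate of a scalar-valued black-box f at x.

    Gaussian smoothing: E[(f(x + sigma*u) - f(x)) * u] / sigma approximates
    the gradient of the smoothed function using only forward queries;
    subtracting f(x) acts as a control variate to reduce variance.
    """
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal((n_samples,) + x.shape)        # perturbation directions
    queries = np.array([f(x + sigma * ui) for ui in u])    # black-box evaluations
    fx = f(x)
    return np.tensordot(queries - fx, u, axes=(0, 0)) / (n_samples * sigma)

def integrated_estimated_gradients(f, x, baseline=None, steps=20, **kw):
    """Accumulate estimated gradients along the straight path baseline -> x,
    analogous to Integrated Gradients but without access to true gradients."""
    baseline = np.zeros_like(x) if baseline is None else baseline
    total = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        point = baseline + alpha * (x - baseline)
        total += estimate_gradient(f, point, **kw)
    return (x - baseline) * total / steps                  # Riemann-sum attribution

# Usage: f is any black-box scoring function, e.g. the class probability a
# deployed model returns for a flattened image. Here a toy linear model is
# used so the attribution should approach (x - baseline) * w.
if __name__ == "__main__":
    w = np.array([0.5, -2.0, 1.0])
    f = lambda z: float(np.dot(w, z))
    x = np.array([1.0, 1.0, 1.0])
    print(integrated_estimated_gradients(f, x, steps=10, n_samples=256))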
