MFPP: Morphological Fragmental Perturbation Pyramid for Black-Box Model Explanations

06/04/2020
by   Qing Yang, et al.
With their increasing popularity, deep neural networks (DNNs) have recently been applied to many advanced and diverse tasks, such as medical diagnosis and autonomous driving. However, the lack of transparency of deep models raises serious concerns about the widespread deployment of ML/DL technologies. In this work, we address the explainable AI problem for black-box classifiers that take images as input and output class probabilities. We propose a novel technique, the Morphological Fragmental Perturbation Pyramid (MFPP), which segments the input image into fragments at multiple scales and randomly masks them as perturbations, generating an importance map that indicates how salient each pixel is to the prediction of the black-box DNN. Compared to existing input-sampling perturbation methods, this pyramid-structured fragmentation proves more efficient and better exploits the morphological information of the input image to match its semantic content, while requiring no access to any values inside the model. We qualitatively and quantitatively demonstrate that MFPP matches or exceeds the performance of state-of-the-art black-box explanation methods on multiple models and datasets.
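
The sampling loop described above is simple enough to sketch. Below is a minimal, illustrative Python sketch of an MFPP-style perturbation pass, assuming SLIC superpixels as the morphological fragments and a black-box `predict` callable that returns class probabilities; the fragment counts, masking probability, and score aggregation are assumptions for illustration, not the authors' exact implementation.

```python
# Illustrative MFPP-style saliency sketch (not the authors' code).
# Assumptions: `predict(batch)` is the black-box model returning class
# probabilities, `image` is an HxWx3 float array, and SLIC superpixels
# stand in for the morphological fragments.
import numpy as np
from skimage.segmentation import slic

def mfpp_saliency(image, predict, target_class,
                  pyramid=(50, 150, 400),  # fragment counts per pyramid level (assumed)
                  n_masks=500, keep_prob=0.5, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    saliency = np.zeros((h, w), dtype=np.float64)
    total_weight = 0.0
    for n_segments in pyramid:
        # Morphological fragments: superpixels at one pyramid scale.
        segments = slic(image, n_segments=n_segments, compactness=10, start_label=0)
        n_labels = segments.max() + 1
        for _ in range(n_masks):
            # Randomly keep a subset of fragments to form a binary mask.
            keep = rng.random(n_labels) < keep_prob
            mask = keep[segments].astype(image.dtype)
            perturbed = image * mask[..., None]
            # Query the black box: probability of the target class on the masked input.
            score = float(predict(perturbed[None])[0, target_class])
            # Pixels kept in high-scoring masks accumulate more importance.
            saliency += score * mask
            total_weight += score
    return saliency / max(total_weight, 1e-12)
```

Weighting each retained fragment by the resulting class probability follows the same spirit as randomized input sampling (RISE); the distinguishing idea sketched here is that the masked units are multi-scale, shape-aware fragments rather than fixed grid cells.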
