Analysis of Explainers of Black Box Deep Neural Networks for Computer Vision: A Survey

11/27/2019
by   Vanessa Buhrmester, et al.

Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. Because of their multilayer nonlinear structure, Deep Neural Networks are often criticized as black boxes whose predictions are non-transparent and not traceable by humans. Furthermore, the models learn from artificial datasets that are often biased or contaminated with discriminating content. As decision-making algorithms become more widely deployed, they can contribute to promoting prejudice and unfairness, which is hard to notice due to this lack of transparency. Hence, scientists have developed several so-called explanators or explainers, which try to point out the connection between input and output and thereby represent, in a simplified way, the inner structure of machine learning black boxes. In this survey we distinguish the mechanisms and properties of explaining systems for Deep Neural Networks applied to Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We work out the drawbacks and gaps and summarize further research ideas.
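One common family of such explainers probes the input-output connection directly, without inspecting the model's internals. A minimal sketch of occlusion sensitivity, one such perturbation-based method (the toy model, patch size, and baseline value here are illustrative assumptions, not taken from the survey):

```python
import numpy as np

def occlusion_saliency(model, image, patch=8, baseline=0.0):
    """Slide an occluding patch over the image and record how much the
    model's score drops; large drops mark regions the model relies on."""
    h, w = image.shape[:2]
    base_score = model(image)
    saliency = np.zeros((h, w))
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = baseline
            # Score drop caused by hiding this region.
            saliency[y:y + patch, x:x + patch] = base_score - model(occluded)
    return saliency

# Hypothetical "model": scores the mean brightness of the top-left
# quadrant, standing in for a trained classifier's class probability.
def toy_model(img):
    return float(img[:8, :8].mean())

img = np.ones((16, 16))
sal = occlusion_saliency(toy_model, img, patch=8)
# Only the top-left quadrant affects the model, so only it gets saliency.
assert sal[:8, :8].mean() > sal[8:, 8:].mean()
```

Real explainers apply the same idea to a trained network, producing a heatmap over the input pixels that approximates which regions drove the prediction.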


