LioNets: Local Interpretation of Neural Networks through Penultimate Layer Decoding

06/15/2019
by Ioannis Mollas, et al.

Technological breakthroughs in smart homes, self-driving cars, health care and robotic assistants, together with stricter legal regulations, have strongly influenced academic research on explainable machine learning. Many researchers have proposed ways to explain any black-box classification model in a model-agnostic manner. A drawback of building such agnostic explainers is that the neighbourhood generation process is universal and consequently does not guarantee true adjacency between the generated neighbours and the instance. This paper explores a methodology for providing local explanations of a neural network's decisions through a process that actively takes the network's architecture into account when creating an instance's neighbourhood, thereby assuring adjacency between the generated neighbours and the instance.
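To make the idea concrete, the following is a minimal sketch of such an architecture-aware local explanation pipeline, not the authors' exact implementation. It assumes three hypothetical callables: `encoder` (maps an input to its penultimate-layer representation), `decoder` (maps that representation back to input space), and `predictor` (the neural network's output). The parameter names (`n_neighbours`, `noise_scale`, `kernel_width`) and the ridge surrogate are illustrative assumptions.

```python
# Sketch: generate a neighbourhood in the penultimate-layer (latent) space,
# decode it back to input space, and fit a weighted local linear surrogate.
import numpy as np
from sklearn.linear_model import Ridge


def explain_locally(instance, encoder, decoder, predictor,
                    n_neighbours=200, noise_scale=0.1, kernel_width=0.75,
                    seed=None):
    """Return per-feature local importance weights for a single instance."""
    rng = np.random.default_rng(seed)

    # Encode the instance into its penultimate-layer representation.
    z = encoder(instance[None, :])[0]

    # Perturb the latent vector so neighbours stay close to the instance's
    # encoding, i.e. the neighbourhood respects the network's architecture.
    noise = rng.normal(0.0, noise_scale, size=(n_neighbours, z.shape[0]))
    z_neighbours = z[None, :] + noise

    # Decode the latent neighbours back to input space and query the network.
    x_neighbours = decoder(z_neighbours)
    y_neighbours = predictor(x_neighbours)

    # Weight neighbours by proximity to the original instance in input space.
    distances = np.linalg.norm(x_neighbours - instance[None, :], axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

    # Fit an interpretable linear surrogate on the weighted neighbourhood.
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(x_neighbours, y_neighbours, sample_weight=weights)
    return surrogate.coef_
```

Because the perturbation happens in the penultimate-layer space rather than directly on the raw features, the decoded neighbours are, by construction, close to the instance as the network itself represents it, which is the adjacency guarantee the abstract refers to.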


Related research

- How to Explain Individual Classification Decisions (12/06/2009)
- Local Interpretation Methods to Machine Learning Using the Domain of the Feature Space (07/31/2019)
- SUBPLEX: Towards a Better Understanding of Black Box Model Explanations at the Subpopulation Level (07/21/2020)
- CLIMAX: An exploration of Classifier-Based Contrastive Explanations (07/02/2023)
- KS-GNNExplainer: Global Model Interpretation Through Instance Explanations On Histopathology Images (04/14/2023)
- Interpreting RNN behaviour via excitable network attractors (07/27/2018)
- Generating Contrastive Explanations with Monotonic Attribute Functions (05/29/2019)
