A Formal Approach to Explainability

01/15/2020
by Lior Wolf, et al.

We regard explanations as a blending of the input sample and the model's output and offer a few definitions that capture various desired properties of the function that generates these explanations. We study the links between these properties, and between explanation-generating functions and intermediate representations of learned models, and are able to show, for example, that if the activations of a given layer are consistent with an explanation, then so are the activations of all subsequent layers. In addition, we study the intersection and union of explanations as a way to construct new explanations.
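
To make the layer-consistency claim concrete, here is one minimal reading of it; the notation (layers f_k, activations a_k, explanation e) is assumed for this sketch and is not taken from the paper's own definitions:

\[
f = f_L \circ \cdots \circ f_1, \qquad a_k(x) = (f_k \circ \cdots \circ f_1)(x),
\]
\[
a_k(e) = a_k(x) \;\Longrightarrow\; a_{k+1}(e) = f_{k+1}\big(a_k(e)\big) = f_{k+1}\big(a_k(x)\big) = a_{k+1}(x).
\]

Under this reading, once an explanation reproduces the input's activations at some layer, every subsequent layer, being a fixed function of those activations, agrees as well; the paper's formal notion of consistency may of course be broader than this.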

Related research

09/08/2021
Diagnostics-Guided Explanation Generation
Explanations shed light on a machine learning model's rationales and can...

04/25/2022
Generating and Visualizing Trace Link Explanations
Recent breakthroughs in deep-learning (DL) approaches have resulted in t...

01/06/2022
Topological Representations of Local Explanations
Local explainability methods – those which seek to generate an explanati...

12/10/2020
Influence-Driven Explanations for Bayesian Network Classifiers
One of the most pressing issues in AI in recent years has been the need ...

06/22/2020
Fanoos: Multi-Resolution, Multi-Strength, Interactive Explanations for Learned Systems
Machine learning becomes increasingly important to tune or even synthesi...

11/01/2019
What Gets Echoed? Understanding the "Pointers" in Explanations of Persuasive Arguments
Explanations are central to everyday life, and are a topic of growing in...

05/29/2017
Contextual Explanation Networks
We introduce contextual explanation networks (CENs)---a class of models ...