Bayesian Interpolants as Explanations for Neural Inferences

04/08/2020
by Kenneth L. McMillan, et al.

The notion of a Craig interpolant, used as a form of explanation in automated reasoning, is adapted from logical inference to statistical inference and used to explain inferences made by neural networks. The method produces explanations that are at once concise, understandable, and precise.
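For background, the classical (logical) notion referenced here can be checked mechanically: given A ⊨ B, a Craig interpolant I satisfies A ⊨ I, I ⊨ B, and mentions only the vocabulary shared by A and B. The sketch below verifies these conditions for a toy propositional example by truth-table enumeration; it illustrates the standard definition only, not the paper's Bayesian construction, and the formulas A, B, I are invented for illustration.

```python
from itertools import product

def models(varnames):
    """Yield all truth assignments over the given variable names."""
    for bits in product([False, True], repeat=len(varnames)):
        yield dict(zip(varnames, bits))

def entails(f, g, varnames):
    """Check f |= g by enumerating assignments (propositional case)."""
    return all(g(m) for m in models(varnames) if f(m))

# Toy formulas over variables p, q, r (chosen for illustration).
A = lambda m: m["p"] and m["q"]   # A = p & q
B = lambda m: m["q"] or m["r"]    # B = q | r
I = lambda m: m["q"]              # candidate interpolant I = q

V = ["p", "q", "r"]
assert entails(A, B, V)   # premise: A |= B
assert entails(A, I, V)   # A |= I
assert entails(I, B, V)   # I |= B
# I mentions only q, the shared vocabulary of A ({p,q}) and B ({q,r}).
```

The interpolant acts as a compact "explanation" of why A entails B, stated only in terms both formulas understand; the paper's contribution is a statistical analogue of this idea.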

