
Theory-Based Inductive Learning: An Integration of Symbolic and Quantitative Methods
The objective of this paper is to propose a method that will generate a causal explanation of observed events in an uncertain world and then make decisions based on that explanation. Feedback can cause the explanation and decisions to be modified. I call the method Theory-Based Inductive Learning (TBIL). TBIL integrates deductive learning, based on a technique called Explanation-Based Generalization (EBG) from the field of machine learning, with inductive learning methods from Bayesian decision theory. TBIL takes as inputs (1) a decision problem involving a sequence of related decisions over time, (2) a training example of a solution to the decision problem in one period, and (3) the domain theory relevant to the decision problem. TBIL uses these inputs to construct a probabilistic explanation of why the training example is an instance of a solution to one stage of the sequential decision problem. This explanation is then generalized to cover a more general class of instances and is used as the basis for making the next-stage decisions. As the outcomes of each decision are observed, the explanation is revised, which in turn affects the subsequent decisions. A detailed example is presented that uses TBIL to solve a very general stochastic adaptive control problem for an autonomous mobile robot.
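The decide-observe-revise loop the abstract describes can be sketched schematically. The sketch below is a toy illustration, not the paper's method: it assumes a Beta-Bernoulli posterior stands in for the "probabilistic explanation", and all class and method names (`TBILSketch`, `belief`, `decide`, `observe`) are hypothetical; the paper's actual EBG-derived explanation structures are far richer.

```python
class TBILSketch:
    """Toy stand-in for the TBIL loop: a Beta posterior over whether the
    current (generalized) explanation correctly predicts decision outcomes."""

    def __init__(self, prior_success=1.0, prior_failure=1.0):
        # Beta(a, b) prior -- plays the role of the probabilistic explanation.
        self.a = prior_success
        self.b = prior_failure

    def belief(self):
        # Posterior mean probability that acting on the explanation succeeds.
        return self.a / (self.a + self.b)

    def decide(self, threshold=0.5):
        # Next-stage decision: act on the explanation only when it is more
        # likely than not to succeed (an expected-utility rule with 0/1 utilities).
        return "act" if self.belief() > threshold else "explore"

    def observe(self, outcome_succeeded):
        # Feedback revises the explanation (Bayesian conditioning), which in
        # turn changes subsequent decisions -- the revision step of the loop.
        if outcome_succeeded:
            self.a += 1.0
        else:
            self.b += 1.0


# One pass through the sequential decision problem on simulated outcomes.
agent = TBILSketch()
for outcome in [True, True, False, True]:
    decision = agent.decide()
    agent.observe(outcome)
```

The design point the sketch preserves is the circular dependency in the abstract: the explanation drives the next-stage decision, and each observed outcome revises the explanation before the next decision is made.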