Theory-Based Inductive Learning: An Integration of Symbolic and Quantitative Methods

03/27/2013
by Spencer Star, et al.

The objective of this paper is to propose a method that will generate a causal explanation of observed events in an uncertain world and then make decisions based on that explanation. Feedback can cause the explanation and decisions to be modified. I call the method Theory-Based Inductive Learning (T-BIL). T-BIL integrates deductive learning, based on a technique called Explanation-Based Generalization (EBG) from the field of machine learning, with inductive learning methods from Bayesian decision theory. T-BIL takes as inputs (1) a decision problem involving a sequence of related decisions over time, (2) a training example of a solution to the decision problem in one period, and (3) the domain theory relevant to the decision problem. T-BIL uses these inputs to construct a probabilistic explanation of why the training example is an instance of a solution to one stage of the sequential decision problem. This explanation is then generalized to cover a broader class of instances and is used as the basis for making the next-stage decisions. As the outcomes of each decision are observed, the explanation is revised, which in turn affects the subsequent decisions. A detailed example is presented that uses T-BIL to solve a very general stochastic adaptive control problem for an autonomous mobile robot.
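A minimal sketch of the feedback loop the abstract describes, assuming a simple Beta-Bernoulli belief model: a generalized explanation supplies a prior over action outcomes, each period the agent picks the action with the highest expected utility, observes the result, and revises its belief, which changes later decisions. This is an illustrative stand-in, not the paper's T-BIL implementation, and the names (`Belief`, `choose_action`, the two-action problem) are hypothetical.

```python
import random

class Belief:
    """Beta-Bernoulli belief over an action's success probability."""
    def __init__(self, alpha=1.0, beta=1.0):
        self.alpha, self.beta = alpha, beta   # prior pseudo-counts

    def mean(self):
        return self.alpha / (self.alpha + self.beta)

    def update(self, success):
        # Bayesian revision of the belief after observing one outcome
        if success:
            self.alpha += 1
        else:
            self.beta += 1

def choose_action(beliefs, utilities):
    """Pick the action with the highest expected utility under current beliefs."""
    return max(beliefs, key=lambda a: beliefs[a].mean() * utilities[a])

# Hypothetical two-action sequential decision problem.
beliefs = {"go_left": Belief(), "go_right": Belief()}
utilities = {"go_left": 1.0, "go_right": 1.5}
true_success = {"go_left": 0.8, "go_right": 0.4}   # unknown to the agent

for period in range(20):
    action = choose_action(beliefs, utilities)
    outcome = random.random() < true_success[action]  # observe the world
    beliefs[action].update(outcome)                   # revise, affecting later choices
```

In the paper's framing, the prior here plays the role of the generalized probabilistic explanation, and the update step corresponds to revising that explanation as decision outcomes are observed.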
