
Inductive Inference and the Representation of Uncertainty

by Norman C. Dalkey et al.

The form and justification of inductive inference rules depend strongly on the representation of uncertainty. This paper examines one generic representation, namely, incomplete information. The notion can be formalized by presuming that the relevant probabilities in a decision problem are known only to the extent that they belong to a class K of probability distributions. The concept is a generalization of a frequent suggestion that uncertainty be represented by intervals or ranges on probabilities. To make the representation useful for decision making, an inductive rule can be formulated which determines, in a well-defined manner, a best approximation to the unknown probability, given the set K. In addition, the knowledge set notion entails a natural procedure for updating -- modifying the set K given new evidence. Several non-intuitive consequences of updating emphasize the differences between inference with complete and inference with incomplete information.
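The abstract's core idea can be made concrete with a small sketch. Here the knowledge set K is a finite set of discrete distributions; the "best approximation" is chosen by maximum entropy (used purely as an illustrative selection rule, since the paper's own inductive rule is not stated here); and updating conditions each member of K on observed evidence, discarding members that assign the evidence zero probability. All names and the three-outcome example are hypothetical.

```python
import math

def entropy(p):
    """Shannon entropy (base 2) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def best_approximation(K):
    """Pick one member of K as the working probability estimate.

    Maximum entropy is just one plausible selection rule, used here
    for illustration; the paper's actual rule may differ.
    """
    return max(K, key=entropy)

def update(K, event):
    """Condition every member of K on an observed event.

    `event` is a set of outcome indices. Members that give the event
    zero probability are removed; the rest are renormalized.
    """
    updated = []
    for p in K:
        mass = sum(p[i] for i in event)
        if mass > 0:
            updated.append(tuple(
                (p[i] / mass if i in event else 0.0)
                for i in range(len(p))
            ))
    return updated

# A knowledge set over three outcomes: the true distribution is known
# only to lie somewhere in K (a generalization of interval bounds).
K = [(0.5, 0.3, 0.2), (0.4, 0.4, 0.2), (0.2, 0.2, 0.6)]

# Best single approximation before any evidence arrives.
p_star = best_approximation(K)

# New evidence rules out outcome 2: condition each member on {0, 1}.
K_updated = update(K, {0, 1})
```

Note that updating acts on the whole set, not on a single distribution: each member of K is conditioned separately, so the updated set can behave quite differently from a single Bayesian posterior, which is one source of the non-intuitive consequences the abstract mentions.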


