Evaluation of Uncertain Inference Models I: PROSPECTOR

03/27/2013
by Robert M. Yadrick, et al.

This paper examines the accuracy of the PROSPECTOR model for uncertain reasoning. PROSPECTOR's solutions for a large number of computer-generated inference networks were compared to those obtained from probability theory and minimum cross-entropy calculations. PROSPECTOR's answers were generally accurate for a restricted subset of problems that are consistent with its assumptions. However, even within this subset, we identified conditions under which PROSPECTOR's performance deteriorates.
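For context on the model being evaluated: PROSPECTOR propagates uncertainty with an odds-likelihood form of Bayes' rule, scaling prior odds by a sufficiency factor LS when evidence is present and a necessity factor LN when it is absent, and interpolating piecewise-linearly when the evidence itself is uncertain. The sketch below is an illustrative reconstruction of that update, not code from the paper; the function name and parameter names are placeholders chosen for clarity.

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1.0 - p)

def prob(o):
    """Convert odds back to a probability."""
    return o / (1.0 + o)

def prospector_update(p_h, ls, ln, p_e_posterior, p_e_prior):
    """Sketch of a PROSPECTOR-style update of P(H) given uncertain evidence E.

    p_h            -- prior probability of the hypothesis H
    ls, ln         -- sufficiency (LS) and necessity (LN) likelihood ratios
    p_e_posterior  -- P(E | observations), the updated belief in the evidence
    p_e_prior      -- P(E), the prior belief in the evidence
    """
    # Posterior if E were known true / known false (odds-likelihood Bayes rule).
    p_h_given_e = prob(ls * odds(p_h))
    p_h_given_not_e = prob(ln * odds(p_h))

    # Piecewise-linear interpolation through the prior point (P(E), P(H)):
    # belief in H moves toward P(H|E) as P(E|obs) rises above P(E),
    # and toward P(H|not E) as it falls below.
    if p_e_posterior >= p_e_prior:
        return p_h + (p_h_given_e - p_h) * \
            (p_e_posterior - p_e_prior) / (1.0 - p_e_prior)
    return p_h_given_not_e + (p_h - p_h_given_not_e) * \
        p_e_posterior / p_e_prior
```

At the extremes the interpolation reduces to the exact Bayesian update (evidence certainly true or certainly false), and when P(E|obs) equals the prior P(E) the hypothesis probability is unchanged; the deviations the paper measures arise in the intermediate region and when the model's independence assumptions are violated.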
