
Possibilistic Networks: Parameters Learning from Imprecise Data and Evaluation strategy

07/13/2016
by Maroua Haddad, et al.
University of Nantes

There has been ever-increasing interest in multidisciplinary research on representing and reasoning with imperfect data. Possibilistic networks are one of the powerful frameworks for representing uncertain and imprecise information. This paper addresses the problem of learning their parameters from imprecise datasets, i.e., datasets containing multi-valued data. In the first part of this paper, we propose a sampling process for possibilistic networks. In the second part, we propose a likelihood function that exploits the link between random set theory and possibility theory. This function is then used to estimate the parameters of possibilistic networks.
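To make the model behind the paper concrete, here is a minimal Python sketch of a possibilistic network (not the authors' implementation; the two-node structure, variable names, and possibility degrees are illustrative assumptions). It encodes a network A → B and combines the local tables with the standard qualitative (min-based) chain rule, pi(a, b) = min(pi(a), pi(b | a)).

```python
from itertools import product

# Prior possibility distribution over A (normalized: maximum degree is 1).
pi_A = {"a1": 1.0, "a2": 0.4}

# Conditional possibility distribution pi(B | A), one normalized column per value of A.
pi_B_given_A = {
    ("b1", "a1"): 1.0, ("b2", "a1"): 0.3,
    ("b1", "a2"): 0.7, ("b2", "a2"): 1.0,
}

def joint_possibility(a, b):
    """Min-based chain rule: pi(a, b) = min(pi(a), pi(b | a))."""
    return min(pi_A[a], pi_B_given_A[(b, a)])

for a, b in product(pi_A, ("b1", "b2")):
    print(f"pi({a}, {b}) = {joint_possibility(a, b)}")
```

Learning the parameters, as addressed in the paper, amounts to estimating these conditional possibility degrees from data; the difficulty tackled here is that each observation may be imprecise, i.e., a set of candidate values rather than a single one.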


Related research

D numbers theory: a generalization of Dempster-Shafer theory (02/14/2014). Dempster-Shafer theory is widely applied to uncertainty modelling and kn...

Representing Heuristic Knowledge in D-S Theory (03/13/2013). The Dempster-Shafer theory of evidence has been used intensively to deal...

Using Dempster-Shafer Theory in Knowledge Representation (03/27/2013). In this paper, we suggest marrying Dempster-Shafer (DS) theory with Know...

Compiling Possibilistic Networks: Alternative Approaches to Possibilistic Inference (03/15/2012). Qualitative possibilistic networks, also known as min-based possibilisti...

Cluster-based Specification Techniques in Dempster-Shafer Theory (05/16/2003). When reasoning with uncertainty there are many situations where evidence...

Identification and Interpretation of Belief Structure in Dempster-Shafer Theory (07/12/2017). Mathematical Theory of Evidence called also Dempster-Shafer Theory (DST)...

A global approach for learning sparse Ising models (06/26/2019). We consider the problem of learning the link parameters as well as the s...