
Possibilistic Networks: Parameters Learning from Imprecise Data and Evaluation strategy

by Maroua Haddad, et al.
University of Nantes

There has been ever-increasing interest in multidisciplinary research on representing and reasoning with imperfect data. Possibilistic networks are one of the most powerful frameworks for representing uncertain and imprecise information. This paper addresses the problem of learning their parameters from imprecise datasets, i.e., datasets containing multi-valued data. In the first part of this paper, we propose a sampling process for possibilistic networks. In the second part, we propose a likelihood function that exploits the link between random set theory and possibility theory. This function is then used to parametrize possibilistic networks.
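To make the underlying model concrete: a possibilistic network attaches a (conditional) possibility distribution to each node, and in the common min-based semantics the joint possibility of a configuration is the minimum of the local values along the chain rule. The sketch below is purely illustrative background, not the paper's learning or sampling algorithm; the network A → B, its tables, and the variable names are invented for the example.

```python
# Illustrative sketch (not the paper's method): a tiny min-based
# possibilistic network A -> B, with the joint possibility given by
# the min-based chain rule  pi(a, b) = min(pi(a), pi(b | a)).

# Prior possibility distribution over A (normalized: max value is 1).
pi_A = {"a1": 1.0, "a2": 0.4}

# Conditional possibility tables pi(B | A); each row has max 1.
pi_B_given_A = {
    "a1": {"b1": 1.0, "b2": 0.7},
    "a2": {"b1": 0.2, "b2": 1.0},
}

def joint_possibility(a, b):
    """Min-based chain rule: pi(a, b) = min(pi(a), pi(b | a))."""
    return min(pi_A[a], pi_B_given_A[a][b])

def marginal_B(b):
    """Marginalization in possibility theory maximizes over the
    eliminated variable instead of summing."""
    return max(joint_possibility(a, b) for a in pi_A)

print(joint_possibility("a2", "b2"))  # min(0.4, 1.0) = 0.4
print(marginal_B("b1"))               # max(min(1.0, 1.0), min(0.4, 0.2)) = 1.0
```

Note the contrast with probabilistic networks: the product is replaced by min and summation by max, which is what makes these models well suited to qualitative, imprecise information.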



