Learning from What's Right and Learning from What's Wrong

12/28/2021
by Bart Jacobs, et al.

The concept of updating (or conditioning or revising) a probability distribution is fundamental in (machine) learning and in predictive coding theory. The two main approaches for doing so are called Pearl's rule and Jeffrey's rule. Here we make, for the first time, mathematically precise what distinguishes them: Pearl's rule increases validity (expected value) and Jeffrey's rule decreases (Kullback-Leibler) divergence. This forms an instance of a more general distinction between learning from what's right and learning from what's wrong. The difference between these two approaches is illustrated in a mock cognitive scenario.
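To make the contrast concrete, here is a minimal sketch of both rules on a discrete example; it is not from the paper, and the two-state prior, the channel, and all numbers are illustrative assumptions. Pearl's rule reweights the prior by the likelihood of a fuzzy predicate q along the channel and renormalises; Jeffrey's rule Bayes-inverts the channel per observation and mixes the resulting posteriors with a target marginal tau. The script also checks the abstract's two claims numerically: the validity (expected value) of q rises under Pearl's update, and the Kullback-Leibler divergence from tau to the predicted observation distribution falls under Jeffrey's update.

```python
import numpy as np

# Illustrative prior over two hidden states and a channel P(obs | state).
prior = np.array([0.6, 0.4])            # P(state)
channel = np.array([[0.8, 0.2],         # P(obs | state 0)
                    [0.3, 0.7]])        # P(obs | state 1)

# Soft evidence in two guises (numerically identical here):
q = np.array([0.9, 0.1])    # fuzzy predicate on observations (Pearl)
tau = np.array([0.9, 0.1])  # target marginal on observations (Jeffrey)

# Pearl's rule: weight each state by the expected value of the
# predicate along the channel, then renormalise.
pearl = prior * (channel @ q)
pearl /= pearl.sum()

# Jeffrey's rule: Bayes-invert the channel per observation, then
# mix the posteriors P(state | obs) with the target marginal tau.
joint = prior[:, None] * channel        # P(state, obs)
posterior = joint / joint.sum(axis=0)   # columns are P(state | obs)
jeffrey = posterior @ tau

def validity(dist):
    # Expected value of q when observations are predicted from dist.
    return dist @ channel @ q

def kl(p, r):
    # Kullback-Leibler divergence KL(p || r), for strictly positive r.
    return float(np.sum(p * np.log(p / r)))

print("Pearl posterior:  ", pearl)
print("  validity:", validity(prior), "->", validity(pearl))
print("Jeffrey posterior:", jeffrey)
print("  KL(tau || prediction):",
      kl(tau, prior @ channel), "->", kl(tau, jeffrey @ channel))
```

On these assumed numbers the validity grows from 0.58 to about 0.65 under Pearl's rule, while the divergence drops from about 0.23 to about 0.14 under Jeffrey's rule, matching the abstract's characterisation of the two modes of learning.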



Related research

- Jeffrey's rule of conditioning generalized to belief functions (03/06/2013). Jeffrey's rule of conditioning has been proposed in order to revise a pr...
- Automatic Pill Reminder for Easy Supervision (11/17/2017). In this paper we present a working model of an automatic pill reminder a...
- About Updating (03/20/2013). Survey of several forms of updating, with a practical illustrative examp...
- Measuring justice in machine learning (09/21/2020). How can we build more just machine learning systems? To answer this ques...
- Pearl's and Jeffrey's Update as Modes of Learning in Probabilistic Programming (09/13/2023). The concept of updating a probability distribution in the light of new e...
- Lower and Upper Conditioning in Quantum Bayesian Theory (10/04/2018). Updating a probability distribution in the light of new evidence is a ve...
- Judicious Judgment Meets Unsettling Updating: Dilation, Sure Loss, and Simpson's Paradox (12/24/2017). Statistical learning using imprecise probabilities is gaining more atten...
