
Minimum Encoding Approaches for Predictive Modeling

by Peter D. Grünwald et al.

We analyze differences between two information-theoretically motivated approaches to statistical inference and model selection: the Minimum Description Length (MDL) principle, and the Minimum Message Length (MML) principle. Based on this analysis, we present two revised versions of MML: a pointwise estimator which gives the MML-optimal single parameter model, and a volumewise estimator which gives the MML-optimal region in the parameter space. Our empirical results suggest that with small data sets, the MDL approach yields more accurate predictions than the MML estimators. The empirical results also demonstrate that the revised MML estimators introduced here perform better than the original MML estimator suggested by Wallace and Freeman.
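To make the comparison concrete, the following is a minimal sketch of the two code-length criteria for the simplest possible case, a Bernoulli model. It is not the paper's exact estimators: the MDL side uses the normalized maximum likelihood (NML) code length, and the MML side uses the standard Wallace-Freeman (MML87) message-length approximation with an assumed uniform prior on the success probability.

```python
import math

def bernoulli_loglik(s, n, theta):
    """Log-likelihood (in nats) of a binary sequence with s ones out of n."""
    if theta <= 0.0 or theta >= 1.0:
        # Boundary cases: probability 1 only if the data agree exactly.
        return 0.0 if (s == 0 and theta == 0.0) or (s == n and theta == 1.0) else -math.inf
    return s * math.log(theta) + (n - s) * math.log(1.0 - theta)

def mdl_nml_codelength(s, n):
    """MDL (NML) code length: -log p(x | ML estimate) + log COMP(n),
    where COMP(n) sums the maximized likelihood over all possible sequences."""
    comp = sum(math.comb(n, k) * math.exp(bernoulli_loglik(k, n, k / n))
               for k in range(n + 1))
    return -bernoulli_loglik(s, n, s / n) + math.log(comp)

def mml87_codelength(s, n):
    """Wallace-Freeman (MML87) message length, uniform prior on theta.
    Fisher information of n Bernoulli trials: F(theta) = n / (theta (1 - theta))."""
    kappa1 = 1.0 / 12.0            # optimal 1-D lattice quantization constant
    theta = (s + 0.5) / (n + 1.0)  # MML87 point estimate under the uniform prior
    fisher = n / (theta * (1.0 - theta))
    return (0.5 * math.log(fisher)           # cost of stating theta to optimal precision
            - bernoulli_loglik(s, n, theta)  # cost of the data given theta
            + 0.5 * (1.0 + math.log(kappa1)))
```

Both functions return a total code length in nats for the same data, so model-selection behavior can be compared directly; the difference between the two quantities on small samples is exactly the kind of gap the paper's experiments probe.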


