
Statistical Inference of Minimally Complex Models

by Clélia de Mulatier, et al.

Finding the best model to describe a high-dimensional dataset is a daunting task. For binary data, we show that this becomes feasible if the search is restricted to simple models. These models, which we call Minimally Complex Models (MCMs), are simple because they are composed of independent components of minimal complexity in terms of description length. Simple models are easy to infer and to sample from. In addition, model selection within the class of MCMs is invariant with respect to changes in the representation of the data. MCMs portray the structure of dependencies among variables in a simple way and provide robust predictions on dependencies and symmetries, as illustrated in several examples. MCMs may contain interactions between variables of any order; our approach therefore reveals, for example, whether a dataset is appropriately described by a pairwise interaction model.
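To fix ideas, models with interactions of arbitrary order on binary (spin) variables are typically written with an energy of the form E(s) = -Σ_μ g_μ Π_{i∈μ} s_i, where each subset μ of variables carries a coupling g_μ. The sketch below is illustrative only (the couplings and configuration are hypothetical values, not taken from the paper) and shows how a third-order term goes beyond a pairwise model:

```python
def energy(spins, couplings):
    """Energy of a spin configuration (values in {-1, +1}) under
    interactions of arbitrary order:
        E(s) = -sum_mu g_mu * prod_{i in mu} s_i
    `couplings` maps a tuple of variable indices to its coupling g_mu."""
    total = 0.0
    for subset, g in couplings.items():
        prod = 1
        for i in subset:
            prod *= spins[i]
        total -= g * prod
    return total

# Hypothetical example: one pairwise coupling and one third-order interaction.
couplings = {(0, 1): 1.0, (0, 1, 2): 0.5}
s = [1, -1, 1]
print(energy(s, couplings))  # -(1.0 * 1 * -1) - (0.5 * 1 * -1 * 1) = 1.5
```

A purely pairwise model would keep only the two-index couplings; whether such a restriction is adequate for a given dataset is exactly the kind of question the MCM framework addresses.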
