Finite-time optimality of Bayesian predictors

12/20/2018
by Daniil Ryabko, et al.

The problem of sequential probability forecasting is considered in the most general setting: a model set C is given, and it is required to predict as well as possible when any of the measures (environments) in C may be chosen to generate the data. No assumptions whatsoever are made on the model class C: in particular, no independence or mixing assumptions are imposed; C need not be measurable; there may be no predictor whose loss is sublinear; and so on. It is shown that the cumulative loss of any possible predictor can be matched by that of a Bayesian predictor whose prior is discrete and concentrated on C, up to an additive term of order log n, where n is the time step. The bound holds for every n and every measure in C. This is the first non-asymptotic result of this kind. In addition, a non-matching lower bound is established: it goes to infinity with n, but may do so arbitrarily slowly.
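
The sketch below is not taken from the paper; it only illustrates the basic mechanism behind Bayesian mixture predictors over a discrete model class. Here C is assumed, purely for illustration, to be a small finite set of i.i.d. Bernoulli measures with a uniform prior; for such a class the mixture's cumulative log loss exceeds that of the data-generating measure by at most log(1/prior weight), an additive term independent of n. The paper's actual theorem is far more general (arbitrary model classes, arbitrary competing predictors) and is not captured by this toy example.

```python
import math
import random

# Hypothetical model class C: i.i.d. Bernoulli(p) measures with a uniform discrete prior.
C = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [1.0 / len(C)] * len(C)

def predict_mixture(posterior):
    """Probability the Bayesian mixture assigns to the next bit being 1."""
    return sum(w * p for w, p in zip(posterior, C))

def update_posterior(posterior, bit):
    """Bayes update of the discrete posterior after observing one bit."""
    likelihoods = [p if bit == 1 else 1.0 - p for p in C]
    unnorm = [w * l for w, l in zip(posterior, likelihoods)]
    total = sum(unnorm)
    return [u / total for u in unnorm]

def log_loss(prob_one, bit):
    """Log loss of a single probabilistic prediction."""
    return -math.log(prob_one if bit == 1 else 1.0 - prob_one)

random.seed(0)
true_p = 0.7          # data-generating measure; it belongs to C
n = 10000
posterior = list(prior)
loss_bayes = 0.0      # cumulative log loss of the Bayesian mixture predictor
loss_true = 0.0       # cumulative log loss of the (unknown) true measure

for _ in range(n):
    bit = 1 if random.random() < true_p else 0
    loss_bayes += log_loss(predict_mixture(posterior), bit)
    loss_true += log_loss(true_p, bit)
    posterior = update_posterior(posterior, bit)

# For this toy class the gap is at most log(1/prior weight) = log(len(C)),
# an additive term independent of the horizon n.
print(f"cumulative-loss gap: {loss_bayes - loss_true:.3f} "
      f"(bound: {math.log(len(C)):.3f})")
```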

Related research:

Things Bayes can't do (10/26/2016)
The problem of forecasting conditional probabilities of the next event g...

Characterizing predictable classes of processes (08/09/2014)
The problem is sequence prediction in the following setting. A sequence ...

On Finding Predictors for Arbitrary Families of Processes (12/24/2009)
The problem is sequence prediction in the following setting. A sequence ...

Learning The MMSE Channel Predictor (11/17/2019)
We present a neural network based predictor which is derived by starting...

Stabilization Time in Minority Processes (07/03/2019)
We analyze the stabilization time of minority processes in graphs. A min...

Data Integration with High Dimensionality (10/03/2016)
We consider a problem of data integration. Consider determining which ge...
