# Optimal Learning from the Doob-Dynkin lemma

The Doob-Dynkin Lemma gives conditions on two functions X and Y that ensure the existence of a function ϕ so that X = ϕ∘Y. This communication proves different versions of the Doob-Dynkin Lemma and shows how it is related to optimal statistical learning algorithms.

Keywords and phrases: Improper prior, Descriptive set theory, Conditional Monte Carlo, Fiducial, Machine learning, Complex data.


## 1 Introduction

This note is motivated by the commutative diagram

 \begin{tikzcd}
 \Omega \arrow[r, "Y"] \arrow[dr, "X"'] & \Omega_Y \arrow[d, "\phi"] \\
 & \Omega_X
 \end{tikzcd}  (1)

which is of fundamental importance in probability, statistics, and data science. If Y is the data in an experiment, then X is also data by definition if X = ϕ∘Y. If Y and ϕ are measurable, then it follows as a consequence that the composition X = ϕ∘Y is measurable. The Doob-Dynkin lemma (Doob, 1953, p.603) (Kallenberg, 2002, p.7) gives conditions on X and Y that ensure existence of a ϕ such that X = ϕ∘Y. In the next section we prove different versions of the Doob-Dynkin lemma, and in the final section we briefly discuss the role of the Doob-Dynkin lemma in statistics. The lemma provides in particular existence and uniqueness of optimal data learning algorithms.
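For a finite sample space the content of the factorization X = ϕ∘Y can be sketched directly in code: a ϕ exists precisely when X is constant on every level set of Y. The following is a minimal illustration; the helper name `factorize` and the toy functions are not from the paper.

```python
# Doob-Dynkin on a finite sample space: X factors through Y as X = phi∘Y
# exactly when X is constant on each level set {Y = y}.

def factorize(omega, X, Y):
    """Return a dict phi with X(w) == phi[Y(w)] for all w, else None."""
    phi = {}
    for w in omega:
        y, x = Y(w), X(w)
        if y in phi and phi[y] != x:
            return None  # two points with equal Y but different X: no phi
        phi[y] = x
    return phi

omega = range(8)
Y = lambda w: w % 4          # Y takes the values 0, 1, 2, 3
X = lambda w: (w % 4) ** 2   # X is a function of Y
phi = factorize(omega, X, Y)
assert all(phi[Y(w)] == X(w) for w in omega)     # X = phi∘Y holds
assert factorize(omega, lambda w: w, Y) is None  # identity does not factor
```

Note that the dictionary `phi` is only defined on the image of Y, mirroring the topological and measurable lemmas below.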

## 2 The Doob-Dynkin lemma

Consider first the case where X and Y are continuous functions between topological spaces (Kuratowski, 1966; Kelley, 1955). The following Lemma is probably known in some context, but I have no reference for it. A similar comment holds for many other results presented in the following.

###### Lemma 1 (Topological Doob-Dynkin).

If the image X(Ω) is a T₀ space, and X is continuous with respect to the initial topology of Y, then there exists a unique continuous ϕ : Y(Ω) → Ω_X such that X = ϕ∘Y.

###### Proof.

Let ω ∈ Ω, y = Y(ω), x = X(ω), and define ϕ(y) = x. It must be demonstrated that this gives a well-defined continuous ϕ : Y(Ω) → Ω_X. Assume that Y(ω₁) = Y(ω₂). It must be proved that X(ω₁) = X(ω₂). Assume for contradiction that X(ω₁) ≠ X(ω₂). From T₀-separation there exists an open U separating X(ω₁) and X(ω₂). Assume without loss of generality that X(ω₁) ∈ U and X(ω₂) ∉ U. It follows that ω₁ ∈ X⁻¹(U) = Y⁻¹(V) so X(ω₂) ∈ U, which contradicts X(ω₂) ∉ U from the assumption Y(ω₁) = Y(ω₂). Existence of an open V such that X⁻¹(U) = Y⁻¹(V) follows since X is continuous with respect to the initial topology of Y. This also gives ϕ⁻¹(U) = V ∩ Y(Ω), which proves continuity of ϕ. ∎

The previous result can also be proved for more general cases, including in particular spaces equipped with the family of co-zero sets from suitable families of real valued functions (Taraldsen, 2017). This includes the case where X and Y are measurable functions between measurable spaces (Halmos, 1950; Dunford and Schwartz, 1988; Rudin, 1987). The simplicity of the following result - and the fact that it seems to be missing from the standard presentations linked to conditional expectation (Halmos, 1950; Doob, 1953; Loeve, 1977; Kallenberg, 2002; Rao and Swift, 2006) - was part of the original motivation for writing this note. It should be noted that the function ϕ obtained from the Lemma is only defined on the image Y(Ω), and not on the whole set Ω_Y. The reward is a more general statement - and a simpler proof.

###### Lemma 2 (Measurable Doob-Dynkin).

If the image X(Ω) is T₀ and X is measurable with respect to the initial σ-field of Y, then there exists a unique measurable ϕ : Y(Ω) → Ω_X such that X = ϕ∘Y.

###### Proof.

The proof is identical with the topological version, but the sets X⁻¹(U) and Y⁻¹(V) in the identity X⁻¹(U) = Y⁻¹(V) are measurable. Furthermore, T₀ and T₁ separation are equivalent since the complement of a measurable set is measurable. ∎

The separation assumption seems to be the natural general assumption for the proof presented here. Consideration of the trivial topology on Ω_X gives as a result that every X is continuous with respect to any given function Y, and in particular with respect to a constant function Y. If X takes at least two values, then it is impossible to find a ϕ such that X = ϕ∘Y. A continuation of this argument gives separation assumptions that are not only sufficient, but also necessary. The argument in the example holds also for the case of measurable spaces.

Consider next the case where (Ω, E) is a measurable space equipped with a σ-finite measure P: There are measurable Ω₁, Ω₂, … with Ω = ∪ᵢ Ωᵢ and P(Ωᵢ) < ∞. This includes the common case of a probability space P(Ω) = 1, but includes also the case of a Renyi space (Renyi, 1970) as needed in a theory of statistics that includes improper priors (Taraldsen and Lindqvist, 2010, 2016; Taraldsen, Tufto and Lindqvist, 2017).

The space Ω_Y is assumed to be a measurable space, and it becomes a measure space when equipped with the law P^Y of a measurable Y : Ω → Ω_Y. The law is defined by P^Y(B) = P(Y⁻¹(B)) for measurable B, and Y is said to be σ-finite if P^Y is σ-finite.

The initial σ-field E_Y of Y is given by E_Y = {Y⁻¹(B) ∣ B measurable}. The following result can be used as a substitute for the use of the Doob-Dynkin Lemma in the context of conditional expectation, and represents the second main motivation for writing this note. It gives an alternative approach to the one usually followed in standard texts (Halmos, 1950; Doob, 1953; Rao and Swift, 2006), and the proof is again much shorter. It should in particular be observed that the resulting measurable ϕ is defined not only on the image Y(Ω) as in Lemma 2, but on the whole space Ω_Y. The space Ω_X from Lemma 2 is in the below replaced by the extended real interval [0, ∞] equipped with the Borel sets generated by the open intervals.

###### Lemma 3 (Conditional expectation Doob-Dynkin).

Let Y : Ω → Ω_Y and X : Ω → [0, ∞] be measurable. If Y is σ-finite and X is measurable with respect to the initial σ-field E_Y, then there exists a unique (a.e.) measurable ϕ : Ω_Y → [0, ∞] such that X = ϕ∘Y.

###### Proof.

The Radon-Nikodym theorem (Rudin, 1987, p.121) gives a unique (a.e.) ϕ such that ∫_{Y⁻¹(B)} X dP = ∫_B ϕ dP^Y for all measurable B since the left-hand side defines a measure which is absolutely continuous with respect to P^Y, which is assumed to be σ-finite. The general change-of-variables theorem gives then ∫_{Y⁻¹(B)} X dP = ∫_{Y⁻¹(B)} ϕ(Y) dP, which gives the claim X = ϕ∘Y (a.e.) since both X and ϕ∘Y are E_Y-measurable. ∎

It should be noted that Kolmogorov (1933, p.53) defines the conditional expectation E(X ∣ Y = y) directly as the ϕ given by the above proof. Later writers, such as Doob (1953, p.17-18), define first a conditional expectation E(X ∣ E_Y), and then E(X ∣ Y = y) as a special case. The Doob-Dynkin Lemma is then needed to finally define E(X ∣ Y = y). The advantage of the original approach of Kolmogorov is that, as in the above proof, existence and uniqueness (a.e.) of E(X ∣ Y = y) is proved directly without having to refer to a Doob-Dynkin type Lemma.
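For a discrete Y the Radon-Nikodym construction in the proof becomes completely explicit: the density ϕ of the measure B ↦ E(X 1{Y ∈ B}) with respect to the law of Y is ϕ(y) = E(X 1{Y = y})/P(Y = y), the average of X over the level set {Y = y}. The following numerical sketch illustrates this under an assumed toy sampling model; the helper `cond_exp` is illustrative only.

```python
# Empirical version of phi = d(mu)/d(P^Y) for discrete Y, where
# mu(B) = E(X 1{Y in B}): phi(y) is the average of X over {Y = y}.
import random
random.seed(0)

def cond_exp(pairs):
    """pairs: iterable of (x, y); return {y: average of x over {Y = y}}."""
    num, den = {}, {}
    for x, y in pairs:
        num[y] = num.get(y, 0.0) + x
        den[y] = den.get(y, 0) + 1
    return {y: num[y] / den[y] for y in num}

# Here X = Y**2 exactly, so X is E_Y-measurable and the construction
# recovers phi(y) = y**2 without Monte Carlo error.
pairs = [(y * y, y) for y in (random.choice([0, 1, 2, 3]) for _ in range(10000))]
phi = cond_exp(pairs)
```

When X is not E_Y-measurable the same construction still produces ϕ(y) = E(X ∣ Y = y), which is the point of Kolmogorov's direct definition.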

The result in Lemma 3 generalizes directly to the setting where [0, ∞] is replaced by a separable Banach space, by duality and by the decomposition of a complex number into four unique components in [0, ∞). An even more general version follows also as a consequence by an alternative argument.

###### Lemma 4 (a.e. Doob-Dynkin).

Let (Ω, E, P) be a σ-finite measure space. If Ω_X is contained in a standard Borel space and X is measurable with respect to the initial σ-field of a σ-finite Y, then there exists a unique (a.e.) measurable ϕ : Ω_Y → Ω_X such that X = ϕ∘Y.

###### Proof.

The characterization theorem of standard Borel spaces (Kechris, 1995, p.90) shows that it is sufficient to consider Ω_X ⊂ [0, 1]. The assumptions and Lemma 3 give X = ϕ∘Y (a.e.). ∎

A standard Borel space is a space equipped with the Borel σ-field of a separable complete metric space. A measurable set of a standard Borel space is also a standard Borel space (Kechris, 1995, p.75). The previous Lemma can be generalized by an alternative proof which is essentially the proof given by Doob (1953, p.603) for a less general statement.

###### Lemma 5 (Standard Doob-Dynkin).

If Ω_X is contained in a standard Borel space and X is measurable with respect to the initial σ-field of a measurable Y, then there exists a measurable ϕ : Ω_Y → Ω_X such that X = ϕ∘Y.

###### Proof.

Assume first that X takes countably many values x₁, x₂, …. The sets Aᵢ = X⁻¹({xᵢ}) give a countable partition of Ω with each Aᵢ in the initial σ-field E_Y. The measurability of X with respect to E_Y gives then measurable Bᵢ ⊂ Ω_Y such that Aᵢ = Y⁻¹(Bᵢ). The partition {Aᵢ} corresponds to a partition of Y(Ω). Each Bᵢ can be replaced by Bᵢ ∖ (B₁ ∪ ⋯ ∪ B_{i−1}) so it can be assumed that the Bᵢ are disjoint, and then that ∪ᵢ Bᵢ = Ω_Y by adjoining the complement (∪ᵢ Bᵢ)ᶜ to B₁. The required ϕ can finally be defined by ϕ(y) = xᵢ for y ∈ Bᵢ.

The characterization theorem of standard Borel spaces (Kechris, 1995, p.90) shows that it is sufficient to consider Ω_X ⊂ [0, 1] for the general case. For this case it follows then that X = limₙ Xₙ for a monotone increasing sequence of simple functions Xₙ that are all E_Y-measurable. The above argument gives ϕₙ such that Xₙ = ϕₙ∘Y, and ϕ = lim supₙ ϕₙ gives the claim. ∎

Lemma 2 does not provide a measurable ϕ : Ω_Y → Ω_X since the image Y(Ω) may fail to be measurable. If, however, it is assumed that Ω_X is contained in a standard Borel space, then the Kuratowski extension theorem (Kechris, 1995, p.73) ensures that there exists a measurable extension of ϕ to Ω_Y. It follows hence that Lemma 2 combined with the Kuratowski extension theorem gives an alternative proof of Lemma 5. Alternatively, Lemma 5 can be used to obtain a proof of the Kuratowski extension theorem.

A natural question next: Is it possible to generalize Lemma 4 by relaxing the conditions on Ω_X? This would then also give an alternative to the Kuratowski extension theorem for the case where (Ω, E, P) is a σ-finite measure space. Lemma 4 provides a ϕ defined on all of Ω_Y that extends the unique ϕ from Lemma 2 in the sense that the two coincide almost everywhere on Y(Ω). This is a weaker result than the Kuratowski extension theorem. The following argument gives, unfortunately, only an alternative proof of Lemma 4.

Assume that Ω_X is contained in a space that contains a family of indicator functions that separates points and generates the σ-field. Lemma 3 applied to each indicator in the family gives corresponding functions on Ω_Y, and these determine a candidate measurable ϕ uniquely (a.e.) from the requirement that the indicators composed with ϕ reproduce the given functions. Unfortunately, completeness is here needed to ensure existence of ϕ, and the result is hence only an alternative proof of Lemma 4.

An alternative attempt is to consider a generalization of the Kuratowski extension theorem via an extension of Lemma 3 to the case of a possibly non-separable Hilbert space. It gives a measurable ϕ, but the problem is that the good domain of ϕ will depend on X. It is only separability that ensures existence of a countable family of vectors that can determine ϕ on a common good domain D, where D is the intersection of the countably many good domains determined by these vectors. The conclusion is that neither the completeness nor the separability assumptions are easily removed even when relaxing the requirements in the Kuratowski extension theorem into an almost everywhere statement. It is possible that a version can be obtained by completing the σ-field E_Y, but we leave this question open.

## 3 Optimal learning from data

A statistical model is given by the structure (Taraldsen, Tufto and Lindqvist, 2017)

 \begin{tikzcd}[row sep=normal, ampersand replacement=\&]
 \& \Omega_\Theta \arrow[r, "\psi"] \& \Omega_\Gamma \\
 (\Omega, \mathcal{E}, P) \arrow[ur, "\Theta"] \arrow[urr, "\Gamma"' near end] \arrow[dr, "Y"'] \arrow[drr, "X" near end] \& \& \\
 \& \Omega_Y \arrow[r, "\phi"'] \& \Omega_X
 \end{tikzcd}  (2)

The uncertainty is modeled by the law P on the space (Ω, E) from which an unknown ω has been drawn. The model data Y = Y(ω) is observed, and the aim is to determine a ϕ such that X = ϕ(Y) gives optimal learning about the focus parameter Γ = ψ(Θ), where Θ is the unknown model parameter.

Bayesian analysis is given by assuming that the prior law P^Θ and the conditional data distribution P^{Y ∣ Θ} are specified, or more generally that the joint law P^{(Θ, Y)} is specified. Optimal learning in the sense of estimating Γ = ψ(Θ) can be defined by attempting to find an optimal action ϕ that minimizes the Bayes risk (Berger, 1985, p.11)

 r = E∥Γ − X∥² = E∥ψ(Θ) − ϕ(Y)∥² (3)

where it is assumed that Ω_Γ = Ω_X is a separable Hilbert space. The assumption X = ϕ(Y) means in particular that X is measurable with respect to the initial σ-field E_Y of Y. If Y is σ-finite, then it follows that L²(E_Y) is a closed subspace of L²(E), and the projection

 X = E(Γ ∣ E_Y) = ϕ(Y) (4)

is the minimizer of the Bayes risk. Existence of the required ϕ follows from Lemma 2, but the other versions of the Doob-Dynkin Lemma can also be used. It should be observed here that the argument is more general than usual since the probability space of Kolmogorov has been replaced by a Renyi space (Ω, E, P).
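The projection property in equation (4) can be illustrated numerically: among all functions of Y, the conditional mean minimizes the quadratic Bayes risk. A minimal sketch under an assumed normal prior with a discretized observation (the model and all names are illustrative):

```python
# Conditional mean as L2 projection: the group-wise mean of Theta given Y
# minimizes the empirical quadratic risk over all functions of Y.
import random
random.seed(1)

draws = []
for _ in range(20000):
    theta = random.gauss(0.0, 1.0)             # Theta ~ N(0, 1) prior
    y = round(theta + random.gauss(0.0, 1.0))  # discretized noisy observation
    draws.append((theta, y))

num, den = {}, {}
for theta, y in draws:
    num[y] = num.get(y, 0.0) + theta
    den[y] = den.get(y, 0) + 1
phi = {y: num[y] / den[y] for y in num}        # empirical E(Theta | Y = y)

n = len(draws)
risk_proj = sum((t - phi[y]) ** 2 for t, y in draws) / n
risk_off = sum((t - 0.9 * phi[y]) ** 2 for t, y in draws) / n
assert risk_proj < risk_off  # perturbing the projection increases the risk
```

Any other measurable function of Y, here the perturbed rule 0.9·ϕ(Y), gives a strictly larger empirical risk, which is the orthogonal projection property in finite-sample form.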

The previous includes also the case of the Kalman-Bucy filter as described in more detail by Øksendal (2014, p.81-108). The unknown parameter is then the state Θ_t at a given time t, and the data is the observations Y_s for all s ≤ t of a stochastic process which is a filtered and noisy version of the state. The optimal solution is again given by equation (4) (Øksendal, 2014, p.83, Theorem 6.1.2). The actual calculation for the 1-dimensional Kalman-Bucy filter involves solving a nonlinear ordinary differential equation which gives the coefficients of a stochastic differential equation that determines the solution based on the observations (Øksendal, 2014, p.96, Theorem 6.2.8).

The main reason for mentioning the Kalman-Bucy filter is that it corresponds to a case where both the model parameter space and the data space are infinite dimensional. They can both in this application be identified with the set of continuous paths indexed by a time parameter (Øksendal, 2014, p.22), but for some applications it is more appropriate to use a space of tempered distributions. The concept of a random tempered distribution can be further generalized using the ideas of Skorohod (1984) for strong linear random operators, which generalize the concept of a random operator.
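The flavor of the Kalman-Bucy computation can be conveyed by its discrete-time scalar analogue, which computes the conditional expectation of the latent state given all observations so far by a two-step recursion. The model constants below are illustrative assumptions, not taken from Øksendal (2014):

```python
# Discrete-time scalar Kalman filter: m and p track the conditional mean
# and variance of the latent state given the observations so far.
import random
random.seed(2)

a, q, r = 0.9, 0.5, 1.0   # state transition, state noise and obs noise variances
theta = 0.0               # true latent state
m, p = 0.0, 1.0           # filter mean and variance
for _ in range(200):
    theta = a * theta + random.gauss(0.0, q ** 0.5)  # latent state evolves
    y = theta + random.gauss(0.0, r ** 0.5)          # noisy observation
    m, p = a * m, a * a * p + q                      # predict step
    k = p / (p + r)                                  # Kalman gain
    m, p = m + k * (y - m), (1 - k) * p              # update step
```

The variance recursion p ↦ (a²p + q)r/(a²p + q + r) is the discrete analogue of the nonlinear (Riccati) ordinary differential equation mentioned above; it converges to a steady state that does not depend on the data.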

Consider again the Bayes risk in equation (3). If it is assumed that Y is σ-finite, then the following decomposition holds

 r = E(E(∥ψ(Θ) − ϕ(Y)∥² ∣ Y)) = ∫ r_y P^Y(dy) (5)

It follows that the Bayes risk is minimized if the Bayes posterior risk

 r_y = E(∥ψ(Θ) − ϕ(y)∥² ∣ Y = y) (6)

is minimized for each y. This gives the explicit solution

 ϕ(y) = E(ψ(Θ) ∣ Y = y) (7)

It should be observed that the Bayes posterior risk can be minimized and uniformly finite even in cases where the Bayes risk in equation (3) is infinite. Minimization of the Bayes posterior loss is hence a more generally applicable procedure for determining a decision rule that gives optimal learning.

A simple example is given by Y = Θ + Z where Z is drawn from a standard one dimensional normal distribution. If the prior for Θ is Lebesgue measure on the real line, then the posterior for Θ equals the fiducial distribution: The posterior equals a N(y, 1) distribution. Consider the simplest case where Γ = Θ, which gives ϕ(y) = y from equation (7). The Bayes posterior risk is then r_y = 1 from equation (6), and the Bayes risk r = ∞. The latter follows since the marginal law of Y is also Lebesgue measure on the real line.
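The claimed posterior risk in this example can be checked by simulation: with posterior Θ ∣ Y = y distributed as N(y, 1) and ϕ(y) = y, the Bayes posterior risk (6) is the posterior variance, which is 1. A quick Monte Carlo sketch, with an arbitrary observed value y:

```python
# Monte Carlo check: E((Theta - y)^2 | Y = y) = 1 when Theta | Y=y ~ N(y, 1).
import random
random.seed(3)

y = 2.5                                  # any observed value
n = 200000
post = (random.gauss(y, 1.0) for _ in range(n))
risk_y = sum((t - y) ** 2 for t in post) / n
assert abs(risk_y - 1.0) < 0.05          # posterior risk is 1
```

The same simulation with any other choice of y gives the same answer, which is the uniform finiteness of the posterior risk noted below despite the infinite Bayes risk.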

This example can be generalized to a general location problem, including general linear regression, and even more general kinds of group models. In the case of an infinite dimensional Hilbert space the invariant measure does not exist, but for this case the fiducial posterior loss can be used as a substitute for the Bayes posterior loss, and gives optimal frequentist inference (Taraldsen and Lindqvist, 2013).

Optimal frequentist inference can be defined as given by a ϕ that minimizes the frequentist risk

 r_θ = E_θ(∥ϕ(Y) − γ∥²) = E(∥ψ(θ) − ϕ(Y)∥² ∣ Θ = θ) (8)

uniformly for each model parameter θ. The quadratic loss function is here used for simplicity, and many alternatives exist depending on the kind of inference in particular problems. Restrictions on the class of allowable functions ϕ are commonly given by demanding unbiasedness or equivariance with respect to a group action (Taraldsen and Lindqvist, 2013). It follows that an optimal frequentist action ϕ, if it exists, will also minimize the Bayes risk since a σ-finite Θ ensures

 r = ∫ r_θ P^Θ(dθ) (9)

In many cases, however, there exists no optimal frequentist action. A good alternative is often given by the optimal Bayesian posterior action as can be inferred from the previous arguments. The prior is then chosen not based on prior knowledge, but so that it gives appropriate weight to regions in the model parameter space that are considered important.
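For the normal location example above, the frequentist risk (8) of the rule ϕ(y) = y can also be checked directly: r_θ = E_θ(Y − θ)² = 1 uniformly in θ, so the Bayes posterior solution here also has constant frequentist risk. A hedged Monte Carlo sketch:

```python
# r_theta = E_theta (phi(Y) - theta)^2 with phi(y) = y and Y = theta + N(0,1):
# the risk is 1 for every theta (location invariance).
import random
random.seed(4)

def freq_risk(theta, n=100000):
    total = 0.0
    for _ in range(n):
        y = theta + random.gauss(0.0, 1.0)  # one observation under theta
        total += (y - theta) ** 2           # loss of the rule phi(y) = y
    return total / n

risks = [freq_risk(th) for th in (-3.0, 0.0, 7.0)]
assert all(abs(r - 1.0) < 0.05 for r in risks)  # uniform risk near 1
```

The constant risk reflects the equivariance of the rule under translations, which is the group-model structure referred to above.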

## References

• Berger, J. O. (1985). Statistical Decision Theory and Bayesian Analysis, 2nd ed. Springer.
• Doob, J. L. (1953). Stochastic Processes. Wiley Classics Library Edition (1990). Wiley.
• Dunford, N. and Schwartz, J. T. (1988). Linear Operators, Part I-III. Wiley Classics Library. Wiley-Interscience.
• Halmos, P. R. (1950). Measure Theory. Van Nostrand Reinhold.
• Kallenberg, O. (2002). Foundations of Modern Probability, 2nd ed. Springer.
• Kechris, A. S. (1995). Classical Descriptive Set Theory. Springer.
• Kelley, J. L. (1955). General Topology. The University Series in Higher Mathematics. Van Nostrand Reinhold.
• Kolmogorov, A. (1933). Foundations of the Theory of Probability, 2nd ed. Chelsea edition (1956).
• Kuratowski, K. (1966). Topology I-II. Academic Press.
• Loeve, M. (1977). Probability Theory I-II, 4th ed. Springer.
• Øksendal, B. (2014). Stochastic Differential Equations: An Introduction with Applications, 6th ed. Springer.
• Rao, M. M. and Swift, R. J. (2006). Probability Theory with Applications. Springer.
• Renyi, A. (1970). Foundations of Probability. Holden-Day.
• Rudin, W. (1987). Real and Complex Analysis. McGraw-Hill.
• Skorohod, A. V. (1984). Random Linear Operators. Springer.
• Taraldsen, G. (2017). Nonlinear probability. A theory with incompatible stochastic variables. arXiv:1706.06770.
• Taraldsen, G. and Lindqvist, B. H. (2010). Improper priors are not improper. The American Statistician 64, 154-158. 10.1198/tast.2010.09116
• Taraldsen, G. and Lindqvist, B. H. (2013). Fiducial theory and optimal inference. Annals of Statistics 41, 323-341.
• Taraldsen, G. and Lindqvist, B. H. (2016). Conditional probability and improper priors. Communications in Statistics: Theory and Methods 45, 5007-5016.
• Taraldsen, G., Tufto, J. and Lindqvist, B. H. (2017). Improper posteriors are not improper. arXiv:1710.08933.