Conditional mean embeddings as regressors - supplementary

21 May 2012 · Steffen Grünewälder et al.

We demonstrate an equivalence between reproducing kernel Hilbert space (RKHS) embeddings of conditional distributions and vector-valued regressors. This connection introduces a natural regularized loss function which the RKHS embeddings minimise, providing an intuitive understanding of the embeddings and a justification for their use. Furthermore, the equivalence allows the application of vector-valued regression methods and results to the problem of learning conditional distributions. Using this link we derive a sparse version of the embedding by considering alternative formulations. Further, by applying convergence results for vector-valued regression to the embedding problem we derive minimax convergence rates which are O(log(n)/n) -- compared to current state of the art rates of O(n^-1/4) -- and are valid under milder and more intuitive assumptions. These minimax upper rates coincide with lower rates up to a logarithmic factor, showing that the embedding method achieves nearly optimal rates. We study our sparse embedding algorithm in a reinforcement learning task where the algorithm shows significant improvement in sparsity over an incomplete Cholesky decomposition.


1 Introduction/Motivation

In recent years a framework for embedding probability distributions into reproducing kernel Hilbert spaces (RKHS) has become increasingly popular (Smola et al., 2007). One example of this theme has been the representation of conditional expectation operators as RKHS functions, known as conditional mean embeddings (Song et al., 2009). Conditional expectations appear naturally in many machine learning tasks, and the RKHS representation of such expectations has two important advantages: first, conditional mean embeddings do not require solving difficult intermediate problems such as density estimation and numerical integration; and second, these embeddings may be used to compute conditional expectations directly on the basis of observed samples. Conditional mean embeddings have been successfully applied to inference in graphical models, reinforcement learning, subspace selection, and conditional independence testing (Fukumizu et al., 2008, 2009; Song et al., 2009, 2010; Grünewälder et al., 2012).

The main motivation for conditional means in Hilbert spaces has been to generalize the notion of conditional expectations from finite cases (multivariate Gaussians, conditional probability tables, and so on). Results have been established for the convergence of these embeddings in RKHS norm (Song et al., 2009, 2010), which show that conditional mean embeddings behave in the way we would hope (i.e., they may be used in obtaining conditional expectations as inner products in feature space, and these estimates are consistent under smoothness conditions). Despite these valuable results, the characterization of conditional mean embeddings remains incomplete, since these embeddings have not been defined in terms of the optimizer of a given loss function. This makes it difficult to extend these results, and has hindered the use of standard techniques like cross-validation for parameter estimation.

In this paper, we demonstrate that the conditional mean embedding is the solution of a vector-valued regression problem with a natural loss, resembling the standard Tikhonov regularized least-squares problem in multiple dimensions. Through this link, it is possible to access the rich theory of vector-valued regression (Micchelli & Pontil, 2005; Carmeli et al., 2006; Caponnetto & De Vito, 2007; Caponnetto et al., 2008). We demonstrate the utility of this connection by providing novel characterizations of conditional mean embeddings, with important theoretical and practical implications. On the theoretical side, we establish novel convergence results for RKHS embeddings, giving a significant improvement over the O(n^-1/4) rate due to Song et al. (2009, 2010). We derive a faster O(log(n)/n) rate which holds over large classes of probability measures, and requires milder and more intuitive assumptions. We also show our rates are optimal up to a logarithmic term, following the analysis of Caponnetto & De Vito (2007). On the practical side, we derive an alternative sparse version of the embeddings which resembles the Lasso method, and provide a cross-validation scheme for parameter selection.

2 Background

In this section, we recall some background results concerning RKHS embeddings and vector-valued RKHSs. For an introduction to scalar-valued RKHSs we refer the reader to Berlinet & Thomas-Agnan (2004).

2.1 Conditional mean embeddings

Given sets X and Y, with a distribution P over random variables (X, Y) from X × Y, we consider the problem of learning expectation operators corresponding to the conditional distributions P(Y | X = x) on Y after conditioning on x ∈ X. Specifically, we begin with a kernel L : Y × Y → R, with corresponding RKHS H_L, and study the problem of learning, for every x ∈ X, the conditional expectation mapping H_L ∋ h ↦ E[h(Y) | X = x]. Each such map can be represented as

    E[h(Y) | X = x] = ⟨h, μ(x)⟩_{H_L},

where the element μ(x) ∈ H_L is called the (conditional) mean embedding of P(Y | X = x). Note that, for every x ∈ X, μ(x) is a function on Y. It is thus apparent that μ is a mapping from X to H_L, a point which we will expand upon shortly.

We are interested in the problem of estimating the embeddings μ(x) given an i.i.d. sample {(x_i, y_i)}_{i=1}^n drawn from P. Following Song et al. (2009, 2010), we define a second kernel K : X × X → R with associated RKHS H_K, and consider the estimate

    μ̂(x) = Σ_{i=1}^n α_i(x) L(y_i, ·),    (1)

where α(x) = (K + λI)^{-1} k_x, and where K is the Gram matrix with entries K_{ij} = K(x_i, x_j), (k_x)_i = K(x, x_i), and λ > 0 is a chosen regularization parameter. This expression suggests that the conditional mean embedding is the solution to an underlying regression problem: we will formalize this link in Section 3. In the remainder of the present section, we introduce the necessary terminology and theory for vector-valued regression in RKHSs.
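As a concrete numerical sketch of estimate (1): by the reproducing property, applying the estimated embedding to a function h reduces to the weighted sum Σ_i α_i(x) h(y_i). The code below assumes Gaussian kernels on both spaces and the (K + λI)^{-1} coefficient convention above; the function names are ours.

```python
import numpy as np

def gauss_kernel(A, B, sigma=1.0):
    # Gram matrix of a Gaussian kernel between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def embedding_coeffs(X, lam):
    # Returns x -> alpha(x) = (K + lam*I)^{-1} k_x, the coefficients of
    # the estimate mu_hat(x) = sum_i alpha_i(x) L(y_i, .) in eq. (1).
    n = len(X)
    Kreg = gauss_kernel(X, X) + lam * np.eye(n)
    return lambda x: np.linalg.solve(
        Kreg, gauss_kernel(X, np.atleast_2d(x)).ravel())

def conditional_expectation(h_at_train_y, alpha):
    # <h, mu_hat(x)> = sum_i alpha_i(x) h(y_i) by the reproducing property.
    return alpha @ h_at_train_y
```

For instance, with training pairs where y ≈ x, the weights α(x) evaluated at a test point recover E[Y | X = x] approximately, exactly as in kernel ridge regression.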

2.2 Vector-valued regression and RKHSs

We recall some background on learning vector-valued functions using kernel methods (see Micchelli & Pontil, 2005, for more detail). We are given a sample {(x_i, v_i)}_{i=1}^n drawn i.i.d. from some distribution ρ over X × V, where X is a non-empty set and V is a Hilbert space. Our goal is to find a function f : X → V with low error, as measured by

    E_{(x,v)∼ρ} ‖f(x) − v‖²_V.    (2)

This is the vector-valued regression problem (square loss).

One approach to the vector-valued regression problem is to model the regression function as being in a vector-valued RKHS of functions taking values in V, which can be defined by analogy with the scalar-valued case.

  • A Hilbert space H of functions f : X → V is an RKHS if, for all x ∈ X and v ∈ V, the linear functional f ↦ ⟨v, f(x)⟩_V is continuous.

The reproducing property for vector-valued RKHSs follows from this definition (see Micchelli & Pontil, 2005, Sect. 2). By the Riesz representation theorem, for each x ∈ X and v ∈ V, there exists a linear operator Γ_x from V to H, written Γ_x v ∈ H, such that for all f ∈ H,

    ⟨v, f(x)⟩_V = ⟨Γ_x v, f⟩_H.

It is instructive to compare to the scalar-valued RKHS H_K, for which the linear evaluation operator mapping f ∈ H_K to f(x) ∈ R is continuous: then Riesz implies there exists a K_x ∈ H_K such that f(x) = ⟨K_x, f⟩_{H_K}.

We next introduce the vector-valued reproducing kernel, and show its relation to the operators Γ_x. Writing L(V) for the space of bounded linear operators from V to V, the reproducing kernel Γ(x, x') ∈ L(V) is defined as

    Γ(x, x') = Γ_x* Γ_{x'}.

From this definition and the reproducing property, the following holds (Micchelli & Pontil, 2005, Prop. 2.1).

Proposition 2.1.

A function Γ : X × X → L(V) is a kernel if it satisfies: (i) Γ(x, x') = Γ(x', x)*, (ii) for all n ∈ N, x_1, …, x_n ∈ X and v_1, …, v_n ∈ V we have that Σ_{i,j=1}^n ⟨v_i, Γ(x_i, x_j) v_j⟩_V ≥ 0.

It is again helpful to consider the scalar case: here, L(R) = R, and to every positive definite kernel K there corresponds a unique (up to isometry) RKHS for which K is the reproducing kernel. Similarly, if Γ is a kernel in the sense of Proposition 2.1, there exists a unique (up to isometry) RKHS with Γ as its reproducing kernel (Micchelli & Pontil, 2005, Th. 2.1). Furthermore, this RKHS H_Γ can be described as the limit of finite sums; that is, H_Γ is, up to isometry, equal to the closure of the linear span of the set {Γ_x v : x ∈ X, v ∈ V}, w.r.t. the RKHS norm ‖·‖_{H_Γ}.

Importantly, it is possible to perform regression in this setting. One approach to the vector-valued regression problem is to replace the unknown true error (2) with a sample-based estimate Σ_{i=1}^n ‖v_i − f(x_i)‖²_V, restricting f to be an element of an RKHS H_Γ (of vector-valued functions), and regularizing w.r.t. the H_Γ norm to prevent overfitting. We thus arrive at the following regularized empirical risk,

    Ê_λ(f) = Σ_{i=1}^n ‖v_i − f(x_i)‖²_V + λ‖f‖²_{H_Γ}.    (3)
Theorem 2.2.

(Micchelli & Pontil, 2005, Th. 4) If f̂ minimises Ê_λ in H_Γ then it is unique and has the form

    f̂ = Σ_{i=1}^n Γ_{x_i} c_i,

where the coefficients {c_i}_{i=1}^n ⊂ V are the unique solution of the system of linear equations

    Σ_{j=1}^n (Γ(x_i, x_j) + λδ_{ij}) c_j = v_i,  1 ≤ i ≤ n.

In the scalar case we have that |f(x)| = |⟨K_x, f⟩| ≤ K(x, x)^{1/2} ‖f‖_{H_K}. Similarly, it holds that ‖f(x)‖_V ≤ ‖Γ(x, x)‖^{1/2} ‖f‖_{H_Γ}, where ‖·‖ denotes the operator norm (Micchelli & Pontil, 2005, Prop. 1). Hence, if ‖Γ(x, x)‖ ≤ B for all x ∈ X, then

    sup_{x∈X} ‖f(x)‖_V ≤ B^{1/2} ‖f‖_{H_Γ}.    (4)

Finally, we need a result that tells us when all functions in an RKHS are continuous. In the scalar case this is guaranteed if x ↦ K_x is continuous and K is bounded. In our case we have (Carmeli et al., 2006, Prop. 12):

Corollary 2.3.

If X is a Polish space, V a separable Hilbert space, and the mapping x ↦ Γ_x v is continuous for every v ∈ V, then H_Γ is a subset of the set of continuous functions from X to V.

3 Estimating conditional expectations

In this section, we show that the problem of learning conditional mean embeddings can be naturally formalised in the framework of vector-valued regression; in doing so, we derive an equivalence between conditional mean embeddings and a vector-valued regressor.

3.1 The equivalence between conditional mean embeddings and a vector-valued regressor

Conditional expectations are linear in the function argument, so that, when we consider h ∈ H_L, the Riesz representation theorem implies the existence of an element μ(x) ∈ H_L such that E[h(Y) | X = x] = ⟨h, μ(x)⟩_{H_L} for all h ∈ H_L. That being said, the dependence of μ(x) on x may be complicated. A natural optimisation problem associated to this approximation problem is therefore to find a function μ : X → H_L such that the following objective is small:

    E[μ] = E_X [ sup_{‖h‖_{H_L} ≤ 1} ( E[h(Y) | X] − ⟨h, μ(X)⟩_{H_L} )² ].    (5)

Note that the risk function E cannot be used directly for estimation, because we do not observe the conditional expectations E[h(Y) | X], but rather pairs (x_i, y_i) drawn from P. However, we can bound this risk function with a surrogate risk function that has a sample-based version,

    E[μ] ≤ E_{(X,Y)} ‖L(Y, ·) − μ(X)‖²_{H_L},    (6)

where the first and second bounds follow by Jensen's and Cauchy–Schwarz's inequalities, respectively. Let us denote this surrogate risk function as

    E_s[μ] = E_{(X,Y)} ‖L(Y, ·) − μ(X)‖²_{H_L}.    (7)
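Spelled out in the notation of Section 2.1, the chain of bounds behind (6) can be written as follows (a sketch; for any fixed h with ‖h‖_{H_L} ≤ 1, using E[h(Y) | X] = E[⟨h, L(Y, ·)⟩ | X]):

```latex
\big(\mathbb{E}[h(Y)\mid X] - \langle h, \mu(X)\rangle\big)^2
  = \big(\mathbb{E}[\langle h,\, L(Y,\cdot)-\mu(X)\rangle \mid X]\big)^2
  \le \mathbb{E}\big[\langle h,\, L(Y,\cdot)-\mu(X)\rangle^2 \mid X\big]
  \quad\text{(Jensen)}
  \le \|h\|_{\mathcal{H}_L}^2\,
      \mathbb{E}\big[\|L(Y,\cdot)-\mu(X)\|_{\mathcal{H}_L}^2 \mid X\big].
  \quad\text{(Cauchy--Schwarz)}
```

Taking the supremum over ‖h‖_{H_L} ≤ 1 and then the expectation over X yields E[μ] ≤ E_s[μ].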

The two risk functions E and E_s are closely related, and in Section 3.3 we examine their relation.

We now replace the expectation in (6) with an empirical estimate, to obtain the sample-based loss,

    Ê_s[μ] = Σ_{i=1}^n ‖L(y_i, ·) − μ(x_i)‖²_{H_L}.    (8)

Taking (8) as our empirical loss, then following Section 2.2 we add a regularization term to provide a well-posed problem and prevent overfitting,

    Ê_{s,λ}[μ] = Σ_{i=1}^n ‖L(y_i, ·) − μ(x_i)‖²_{H_L} + λ‖μ‖²_{H_Γ}.    (9)

We denote the minimizer of (9) by μ̂_λ,

    μ̂_λ = argmin_{μ ∈ H_Γ} Ê_{s,λ}[μ].    (10)

Thus, recalling (3), we can see that the problem (10) is posed as a vector-valued regression problem, with the training data now considered as {(x_i, L(y_i, ·))}_{i=1}^n (and we identify H_L with the general Hilbert space V of Section 2.2). From Theorem 2.2, the solution is

    μ̂_λ = Σ_{i=1}^n Γ_{x_i} c_i,    (11)

where the coefficients {c_i}_{i=1}^n ⊂ H_L are the unique solution of the linear equations

    Σ_{j=1}^n (Γ(x_i, x_j) + λδ_{ij}) c_j = L(y_i, ·),  1 ≤ i ≤ n.

It remains to choose the operator-valued kernel Γ. Given a real-valued kernel K on X, a natural choice for the RKHS H_Γ would be a space of functions from X to H_L isomorphic to the tensor product H_K ⊗ H_L, with inner product

    ⟨K(x, ·) h, K(x', ·) h'⟩_{H_Γ} = K(x, x') ⟨h, h'⟩_{H_L}    (12)

for all h, h' ∈ H_L. It is easy to check that this satisfies the conditions to be a vector-valued RKHS; in fact it corresponds to the choice Γ(x, x') = K(x, x') Id, where Id is the identity map on H_L. The solution to (10) with this choice is then given by (11), with

    c_i = Σ_{j=1}^n W_{ij} L(y_j, ·),

where W = (K + λI)^{-1}, which corresponds exactly to the embeddings (1) presented by Song et al. (2009, 2010) (after a rescaling of λ). Thus we have shown that the embeddings of Song et al. are the solution to a regression problem for a particular choice of operator-valued kernel. Further, the loss defined by (7) is a key error functional in this context, since it is the objective which the estimated embeddings attempt to minimise. In Sec. 3.3 we will see that this does not always coincide with (5), which may be a more natural choice. In Sec. 4 we analyze the performance of the embeddings defined by (10) at minimizing the objective (7).
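The reduction can also be checked numerically: with Γ(x, x') = K(x, x')·Id and a finite-dimensional stand-in for H_L, solving the full vector-valued system of Theorem 2.2 coincides with coordinate-wise scalar kernel ridge regression using the same Gram matrix. A small sketch under these assumptions (naming is ours):

```python
import numpy as np

def vv_ridge_coeffs(K, Ymat, lam):
    # Full vector-valued system (Gamma + lam*I) c = y, where Gamma is the
    # block matrix kron(K, I_d), i.e. the kernel Gamma(x, x') = K(x, x') * Id.
    n, d = Ymat.shape
    big = np.kron(K, np.eye(d)) + lam * np.eye(n * d)
    return np.linalg.solve(big, Ymat.ravel()).reshape(n, d)

def scalar_ridge_coeffs(K, Ymat, lam):
    # Coordinate-wise scalar kernel ridge regression with the same K.
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), Ymat)
```

Both routines return identical coefficient matrices, which is the content of the equivalence above in the finite-dimensional case.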

3.2 Some consequences of this equivalence

We derive some immediate benefits from the connection described above. Since the embedding problem has been identified as a regression problem with training set {(x_i, L(y_i, ·))}_{i=1}^n, we can define a cross-validation scheme for parameter selection in the usual way: by holding out a subsample {(x_j, L(y_j, ·))}_{j ∈ J}, we can train embeddings on the remaining data over a grid of kernel or regularization parameters, choosing the parameters achieving the best error Ê_s on the validation set (or over many folds of cross-validation). Another key benefit will be a much improved performance analysis of the embeddings, presented in Section 4.
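A possible held-out validation score follows directly from expanding ‖L(y_j, ·) − μ̂(x_j)‖²_{H_L} via the kernel trick, so no explicit feature-space computation is needed. The sketch below assumes Gaussian kernels and the (K + λI)^{-1} coefficient convention; it scores one λ and would be minimised over a grid:

```python
import numpy as np

def gk(A, B, s=1.0):
    # Gaussian kernel Gram matrix between the rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

def heldout_surrogate_loss(Xtr, Ytr, Xva, Yva, lam):
    # Validation estimate of E ||L(y, .) - mu_hat(x)||^2, expanded as
    # L(y_j, y_j) - 2 alpha' L_tv[:, j] + alpha' L_tt alpha  per point.
    n = len(Xtr)
    A = np.linalg.solve(gk(Xtr, Xtr) + lam * np.eye(n), gk(Xtr, Xva))
    Ltt, Ltv = gk(Ytr, Ytr), gk(Ytr, Yva)
    loss = 0.0
    for j in range(A.shape[1]):
        a = A[:, j]
        loss += 1.0 - 2 * a @ Ltv[:, j] + a @ Ltt @ a  # L(y_j, y_j) = 1 here
    return loss / A.shape[1]
```

A very large λ shrinks μ̂ towards zero, so its validation loss approaches the constant L(y, y); a well-chosen λ scores strictly lower on informative data.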

Input/output space: (i) X is Polish. (ii) H_L is separable (f.b.a.). (iii) There exists B > 0 such that ‖Γ(x, x)‖ ≤ B holds for all x ∈ X.
Space of regressors: (iv) H_Γ is separable. (v) All Γ_x are Hilbert–Schmidt (f.b.a.). (vi) x ↦ ⟨f(x), g(x)⟩_{H_L} is measurable for all f, g ∈ H_Γ.
True distribution: (vii) A Bernstein-type moment condition holds for P. (viii) A minimiser of E_s exists in H_Γ.
Table 1: Assumptions for Corollaries 4.1 and 4.2; "f.b.a." stands for "fulfilled by the assumption that H_L is finite dimensional".

3.3 Relations between the error functionals E and E_s

In Section 3.1 we introduced an alternative risk function E_s for E, which we used to derive an estimation scheme to recover conditional mean embeddings. We now examine the relationship between the two risk functionals. When the true conditional expectation on functions h ∈ H_L can be represented through an element μ* ∈ H_Γ, then μ* minimises both objectives.

Theorem 3.1 (Proof in App. A).

If there exists a μ* ∈ H_Γ such that for any h ∈ H_L: E[h(Y) | X] = ⟨h, μ*(X)⟩_{H_L} P_X-a.s., then μ* is the P_X-a.s. unique minimiser of both objectives E and E_s.

Thus in this case, the embeddings of Song et al. (e.g. 2009) minimise both (5) and (7). More generally, however, this may not be the case. Let us define an element μ̂ ∈ H_Γ that is close, w.r.t. the E_s error, to the minimizer of E_s in H_Γ (this might for instance be the minimizer of the empirical regularized loss for sufficiently many samples). We are interested in finding conditions under which μ̂ is not much worse than a good approximation μ̃ ∈ H_Γ to the conditional expectation. The sense in which μ̃ approximates the conditional expectation is somewhat subtle: μ̃ must closely approximate the conditional expectation of functions h ∈ H_L under the original loss E (note that the loss E was originally defined in terms of functions h ∈ H_L).

Theorem 3.2 (Proof in App. A).

Let be a minimiser of and be an element of with . Define, , then

Apart from the more obvious condition that the error of μ̃ be small, the above theorem suggests that ‖μ̃‖_{H_Γ} should also be made small for the solution μ̂ to have low error E[μ̂]. In other words, even in the infinite sample case, the regularization term in Ê_{s,λ} is important.

4 Better convergence rates for embeddings

The interpretation of the mean embedding as a vector-valued regression problem allows us to apply regression minimax theorems to study convergence rates of the embedding estimator. These rates are considerably better than the current state of the art for the embeddings, and hold under milder and more intuitive assumptions.

We start by comparing the statements which we derive from (Caponnetto & De Vito, 2007, Thm.s 1 and 2) with the known convergence results for the embedding estimator. We follow this up with a discussion of the rates and a comparison of the assumptions.

4.1 Convergence theorems

We address the performance of the embeddings defined by (10) in terms of asymptotic guarantees on the loss defined by (7). Caponnetto & De Vito (2007) study uniform convergence rates for regression. Convergence rates of learning algorithms cannot be uniform on the set of all probability distributions if the output vector space is an infinite-dimensional RKHS (Caponnetto & De Vito, 2007, p. 4). It is therefore necessary to restrict ourselves to a subset of probability measures. This is done by Caponnetto & De Vito (2007) by defining families P(b, c) of probability measures indexed by two parameters, b and c, which we discuss in detail below. The important point at the moment is that b and c affect the optimal schedule for the regulariser λ and the convergence rate. The rate of convergence is better for higher b and c values. Caponnetto & De Vito (2007) provide convergence rates for all choices of b and c. We restrict ourselves to the best case and the worst case (strictly speaking, the worst case is a limit; see supp.).

We recall that the estimated conditional mean embeddings are given by (10), where λ is a chosen regularization parameter. We assume λ is chosen to follow a specific schedule, dependent upon the sample size n: we denote by μ̂_n the embeddings following this schedule. Given this optimal rate of decrease for λ, Thm. 1 of Caponnetto & De Vito (2007) yields the following convergence statements for the estimated embeddings, under assumptions to be discussed in Section 4.2.

Corollary 4.1.

Let then for every there exists a constant such that

Let and then for every there exists a constant such that

The rate for the estimate can be complemented with minimax lower rates for vector-valued regression (Caponnetto & De Vito, 2007, Th. 2) in the case that .

Corollary 4.2.

Let and and let be the set of all learning algorithms working on samples, outputting . Then for every there exists a constant such that

This corollary tells us that there exists no learning algorithm which can achieve better rates than uniformly over , and hence the estimate is optimal up to a logarithmic factor.

State of the art results for the embedding

The current convergence result for the embedding is proved by Song et al. (2010, Th. 1). A crucial assumption, which we discuss in detail below, is that the conditional expectation mapping is in the RKHS of the real-valued kernel, i.e. that for all h ∈ H_L there exists a g ∈ H_K such that

    E[h(Y) | X = ·] = g(·).    (13)

The result of Song et al. implies the following (see App. C): if (13) holds for all h ∈ H_L and the corresponding schedule for λ is used, then for a fixed probability measure P, there exists a constant C such that

(14)

where is the estimate from Song et al. No complementary lower bounds were known until now.

Comparison

The first thing to note is that, under the assumption that the conditional expectation mapping is in the RKHS, the minimisers of E and E_s are a.e. equivalent due to Theorem 3.1: the assumption implies an element μ* ∈ H_Γ exists with E[h(Y) | X] = ⟨h, μ*(X)⟩ for all h ∈ H_L (see App. B.4 for details). Hence, under this assumption, the statements from eq. 14 and Cor. 4.1 ensure we converge to the true conditional expectation, and achieve an error of 0 in the risk E.

In the case that this assumption is not fulfilled and eq. 14 is not applicable, Cor. 4.1 still tells us that we converge to the minimiser of E_s. Coupling this statement with Thm. 3.2 allows us to bound the distance to the minimal error E[μ̃], where μ̃ minimises E.

The other main differences are obviously the rates, and that Cor. 4.1 bounds the error uniformly over a space of probability measures, while eq. 14 provides only a point-wise statement (i.e., for a fixed probability measure ).

4.2 Assumptions

Cor. 4.1 and 4.2

Our main assumption is that H_L is finite dimensional. It is likely that this assumption can be weakened, but this requires a deeper analysis.

The assumptions of Caponnetto & De Vito (2007) are summarized in Table 1, and we provide details in App. B.2. App. B.1 contains simple and complete assumptions that ensure all statements in the table hold. Beside some measure-theoretic issues, the assumptions are fulfilled if, for example, 1) X is a compact subset of R^d, Y is compact, H_L is a finite-dimensional RKHS, and the kernels K and L are continuous; 2) E_s attains its infimum in H_Γ. This last condition is unintuitive, but can be rewritten in the following way:

Theorem 4.3 (Proof in App.).

Let the functions h ∈ H_L be integrable for all conditioning points and let H_L be finite dimensional. Then there exists a μ* ∈ H_Γ with E_s[μ*] = inf_{μ ∈ H_Γ} E_s[μ] iff there exist a constant C and a sequence {μ_n} ⊂ H_Γ with E_s[μ_n] → inf_{μ ∈ H_Γ} E_s[μ] and sup_n ‖μ_n‖_{H_Γ} ≤ C.

The intuition is that the condition is not fulfilled if we need to make μ more and more complex (in the sense of a high RKHS norm) to optimize the risk.

Definition and discussion of P(b, c)

The family of probability measures P(b, c) is characterized through spectral properties of the kernel function Γ. The assumptions correspond to assumptions on the eigenvalues in Mercer's theorem in the real-valued case, i.e. that there are finitely many eigenvalues or that the eigenvalues decrease at a certain rate. In detail, one defines an integral operator T associated with Γ, which can be written in terms of the operators Γ_x (Caponnetto & De Vito, 2007, Rem. 2), where the inner product is taken with respect to the measure on the input space X, and an infinite value is allowed. As in the real-valued case, the eigendecomposition depends on the measure on the space X but is independent of the distribution on Y. The eigendecomposition measures the complexity of the kernel: the lowest complexity is achieved for finitely many non-zero eigenvalues — that is, the case b = ∞ — and the complexity is highest if the eigenvalues decrease at the slowest possible rate admitted by the family, for a fixed constant. Larger values of b correspond to faster decay. In essence, there are no assumptions on the distribution on Y, but only on the complexity of the kernel as measured through T.
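As an illustrative diagnostic (not part of the theory above), the spectral complexity in question can be inspected empirically: the eigenvalues of the normalized Gram matrix K/n are a standard finite-sample proxy for the spectrum of the integral operator T. A sketch, assuming a Gaussian kernel:

```python
import numpy as np

def empirical_spectrum(X, sigma=1.0):
    # Eigenvalues of the normalized Gram matrix K/n, sorted in decreasing
    # order: a finite-sample proxy for the Mercer spectrum of the kernel
    # with respect to the sampling distribution of X.
    n = len(X)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))
    return np.sort(np.linalg.eigvalsh(K / n))[::-1]
```

For a smooth kernel such as the Gaussian, the empirical spectrum decays very quickly, which corresponds to the low-complexity (large b) end of the family.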

Embedding

The results of Song et al. (2010) do not rely on the assumption that H_L is finite dimensional. Other conditions on the distribution are required, however, which are challenging to verify. To describe these conditions, we recall the real-valued RKHS H_K with kernel K, and define the uncentred cross-covariance operator C_{YX} such that ⟨g, C_{YX} f⟩ = E[g(Y) f(X)], with the covariance operator C_{XX} defined by analogy. One of the two main assumptions of Song et al. is that a product of C_{YX} with a negative power of C_{XX} needs to be Hilbert–Schmidt. The covariances C_{YX} and C_{XX} are compact operators, meaning C_{XX} is not invertible when H_K is infinite dimensional (this gives rise to a notational issue, although the "product" operator may still be defined). Whether this operator is Hilbert–Schmidt (or even bounded) will depend on the underlying distribution P and on the kernels K and L. At this point, however, there is no easy way to translate properties of P into guarantees that the assumption holds.

The second main assumption is that the conditional expectation can be represented as an RKHS element (see App. B.4). Even for rich RKHSs (such as universal RKHSs), it can be challenging to determine the associated conditions on the distribution P. For simple finite-dimensional RKHSs, the assumption may fail, as shown below.

Corollary 4.4 (Proof in App. C).

Let be finite dimensional such that a function exists with for all . Furthermore, let and the reproducing kernel for be . Then there exists no measure for which the assumption from eq. (13) can be fulfilled.

5 Sparse embeddings

In many practical situations it is desirable to approximate the conditional mean embedding by a sparse version which involves a smaller number of parameters. For example, in the context of reinforcement learning and planning, the sample size is large and we want to use the embeddings over and over again, possibly on many different tasks and over a long time frame.

Here we present a technique to achieve a sparse approximation of the sample mean embedding μ̂. Recall that this is given by the formula (cf. equation (11))

    μ̂ = Σ_{i,j=1}^n W_{ij} Γ_{x_i} L(y_j, ·),

where W = (K + λI)^{-1}. A natural approach to find a sparse approximation of μ̂ is to look for a function which is close to μ̂ according to the RKHS norm (in App. D we establish a link between this objective and our cost function). In the special case Γ(x, x') = K(x, x') Id, this amounts to solving the optimization problem

    min_M { D(M) + γ Σ_{i,j=1}^n |M_{ij}| },    (15)

where γ is a positive parameter, and

    D(M) = ‖ Σ_{i,j=1}^n (W_{ij} − M_{ij}) Γ_{x_i} L(y_j, ·) ‖_{H_Γ}.    (16)

Problem (15) is equivalent to a kind of Lasso problem with n² variables: when γ = 0, at the optimum M = W and the approximation error is zero; however, as γ increases, the approximation error increases as well, but the solution obtained becomes sparse (many of the elements of the matrix M are equal to zero).

A direct computation yields that the above optimization problem is equivalent to

(17)

In the experiments in Section 5.1, we solve problem (17) with FISTA (Beck & Teboulle, 2009), an optimal first-order method which requires O(1/√ε) iterations to reach an ε-accuracy of the minimum value in (17), with a cost per iteration of O(n²) in our case. The algorithm is outlined below, where S_t denotes the elementwise soft-thresholding operator: S_t(v) = sign(v)(|v| − t) if |v| > t, and zero otherwise.

  input: , , , output:
  
  for t=1,2,… do
     
     
     
  end for
Algorithm 1 LASSO-like Algorithm
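For concreteness, a generic FISTA solver for a Lasso-type objective of the same shape as (17), min_B ½‖XB − Y‖²_F + γ‖B‖₁, is sketched below; in the paper's setting the roles of X and Y would be played by kernel Gram matrices, so the variable names here are illustrative, not the paper's:

```python
import numpy as np

def soft(V, t):
    # Elementwise soft-thresholding: sign(v) * max(|v| - t, 0).
    return np.sign(V) * np.maximum(np.abs(V) - t, 0.0)

def fista_lasso(X, Y, gamma, iters=500):
    # min_B 0.5 * ||X B - Y||_F^2 + gamma * ||B||_1 via FISTA
    # (Beck & Teboulle, 2009): proximal gradient steps with momentum.
    Lip = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    B = np.zeros((X.shape[1], Y.shape[1]))
    Z, t = B.copy(), 1.0
    for _ in range(iters):
        G = X.T @ (X @ Z - Y)                # gradient at the extrapolated point
        B_new = soft(Z - G / Lip, gamma / Lip)
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        Z = B_new + ((t - 1) / t_new) * (B_new - B)
        B, t = B_new, t_new
    return B
```

As in the discussion of (15), γ controls the trade-off: large γ drives the solution to the zero matrix, while moderate γ yields a sparse B whose objective is no worse than the trivial all-zero solution.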

Other sparsity methods could also be employed. For example, we may replace the entrywise ℓ₁ norm by a block norm: the mixed ℓ₁/ℓ₂ norm, which is the sum of the ℓ₂ norms of the rows of M. This penalty function encourages sparse approximations which use few input points but all the outputs. Similarly, the analogous penalty on the columns will sparsify over the outputs. Finally, if we wish to remove many pairs of examples we may use a more sophisticated combined penalty.

Figure 1: Comparison between our sparse algorithm and an incomplete Cholesky decomposition. The x-axis shows the level of sparsity, where on the right side the original solution is recovered. The y-axis shows the distance to the dense solution and the test error relative to the dense solution.

5.1 Experiment

We demonstrate here that the link between vector-valued regression and the mean embeddings can be leveraged to develop useful embedding alternatives that exploit properties of the regression formulation: we apply the sparse algorithm to a challenging reinforcement learning task. The sparse algorithm makes use of the labels, while algorithms that sparsify the embeddings without our regression interpretation cannot make use of these. In particular, a popular method to sparsify is the incomplete Cholesky decomposition (Shawe-Taylor & Cristianini, 2004), which sparsifies based on the distribution of the input space only. We compare to this method in the experiments.

The reinforcement learning task is the under-actuated pendulum swing-up problem from Deisenroth et al. (2009). We generate a discrete-time approximation of the continuous-time pendulum dynamics as done in Deisenroth et al. (2009). Starting from an arbitrary state, the goal is to swing the pendulum up and balance it in the inverted position. The applied torque is limited and is not sufficient for a direct swing up. The state space is defined by the angle and the angular velocity, and the reward is given by a state-dependent reward function. The learning algorithm is a kernel method which uses the mean embedding estimator to perform policy iteration (Grünewälder et al., 2012). Sparse solutions are very useful in this task, as policy iteration applies the mean embedding many times to perform updates. The input space has 4 dimensions (sine and cosine of the angle, angular velocity and applied torque), while the output has 3 (sine and cosine of the angle and angular velocity). We sample a training set uniformly and learn the conditional mean embedding using the direct method (1). We then compare the sparse approximation obtained by Algorithm 1 using different values of the parameter γ to the approximation obtained via an incomplete Cholesky decomposition at different levels of sparsity. We assess the approximations using the test error and (16), which is an upper bound on the generalization error (see App. D), and report the results in Figure 1.

6 Outlook

We have established a link between vector-valued regression and conditional mean embeddings. On the basis of this link, we derived a sparse embedding algorithm, showed how cross-validation can be performed, established better convergence rates under milder assumptions, and complemented these upper rates with lower rates, showing that the embedding estimator achieves near optimal rates.

There are a number of interesting questions and problems which follow from our framework. It may be valuable to employ other operator-valued kernels in place of the kernel Γ(x, x') = K(x, x') Id that leads to the mean embedding, so as to exploit knowledge about the data generating process. As a related observation, for this kernel, Γ_x is not a Hilbert–Schmidt operator if H_L is infinite dimensional, as its Hilbert–Schmidt norm is then infinite; however, the convergence results from Caponnetto & De Vito (2007) assume the Γ_x to be Hilbert–Schmidt. While this might simply be a result of the technique used in Caponnetto & De Vito (2007), it might also indicate a deeper problem with the standard embedding estimator, namely that if H_L is infinite dimensional then the rates degrade. The latter case would have a natural interpretation as an over-fitting effect, as Γ does not "smooth" the output element.

Our sparsity approach can potentially be equipped with other regularisers that cut down on rows and columns of the matrix in parallel. Certainly, ours is not the only sparse regression approach, and other sparse regularizers might yield good performance on appropriate problems.

Acknowledgements

The authors thank the EPSRC (grant #EP/H017402/1, CARDyAL) and the European Union (grant #FP7-ICT-270327, Complacs) for their support.

References

  • Beck & Teboulle (2009) Beck, A. and Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal of Imaging Sciences, 2(1):183–202, 2009.
  • Berlinet & Thomas-Agnan (2004) Berlinet, A. and Thomas-Agnan, C. Reproducing kernel Hilbert spaces in Probability and Statistics. Kluwer, 2004.
  • Caponnetto & De Vito (2007) Caponnetto, A. and De Vito, E. Optimal rates for the regularized least-squares algorithm. Foundations of Computational Mathematics, 7(3):331–368, 2007.
  • Caponnetto et al. (2008) Caponnetto, A., Micchelli, C.A., Pontil, M., and Ying, Y. Universal multi-task kernels. JMLR, 9, 2008.
  • Carmeli et al. (2006) Carmeli, C., De Vito, E., and Toigo, A. Vector valued reproducing kernel Hilbert spaces of integrable functions and Mercer theorem. Analysis and Applications, 4(4):377–408, 2006.
  • Deisenroth et al. (2009) Deisenroth, M. P., Rasmussen, C. E., and Peters, J. Gaussian process dynamic programming. Neurocomputing, 72(7-9), 2009.
  • Fremlin (2000) Fremlin, D.H. Measure Theory Volume 1: The Irreducible Minimum. Torres Fremlin, 2000.
  • Fremlin (2003) Fremlin, D.H. Measure Theory Volume 4: Topological Measure Spaces. Torres Fremlin, 2003.
  • Fukumizu et al. (2008) Fukumizu, K., Gretton, A., Sun, X., and Schölkopf, B. Kernel measures of conditional dependence. In NIPS 20, 2008.
  • Fukumizu et al. (2009) Fukumizu, K., Bach, F., and Jordan, M. Kernel dimension reduction in regression. The Annals of Statistics, 37(4), 2009.
  • Fukumizu et al. (2011) Fukumizu, K., Song, L., and Gretton, A. Kernel Bayes' rule. In NIPS, pp. 1737–1745, 2011.
  • Grünewälder et al. (2012) Grünewälder, S., Lever, G., Baldassarre, L., Pontil, M., and Gretton, A. Modelling transition dynamics in MDPs with RKHS embeddings. In ICML, 2012.
  • Kallenberg (2001) Kallenberg, O. Foundations of Modern Probability. Springer, 2nd edition, 2001.
  • Micchelli & Pontil (2005) Micchelli, C. A. and Pontil, M. On learning vector-valued functions. Neural Computation, 17(1):177–204, 2005.
  • Shawe-Taylor & Cristianini (2004) Shawe-Taylor, J. and Cristianini, N. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
  • Smola et al. (2007) Smola, A., Gretton, A., Song, L., and Schölkopf, B. A Hilbert space embedding for distributions. In ALT. Springer, 2007.
  • Song et al. (2009) Song, L., Huang, J., Smola, A. J., and Fukumizu, K. Hilbert space embeddings of conditional distributions with applications to dynamical systems. In ICML, pp. 121, 2009.
  • Song et al. (2010) Song, L., Gretton, A., and Guestrin, C. Nonparametric tree graphical models. AISTATS, 9, 2010.
  • Steinwart & Christmann (2008) Steinwart, I. and Christmann, A. Support Vector Machines. Springer, 2008.
  • Werner (2002) Werner, D. Funktionalanalysis. Springer, 4th edition, 2002.

Appendix A Similarity of minimisers

We assume that all are integrable with respect to the regular conditional probability and all are integrable with respect to . In particular, these are fulfilled if our general assumptions from Section B.1 are fulfilled.

Lemma A.1.

If there exists such that for any : -a.s., then for any :

Proof.

(i) follows from (ii) by a suitable choice of the test function. (ii) can be derived as follows:

where we used Fubini’s theorem for the last equality (Kallenberg, 2001, Thm. 1.27). ∎

Theorem A.2 (Thm 3.1).

If there exists a such that for any : -a.s., then

Furthermore, the minimiser is -a.s. unique.

Proof.

We start by showing that the right side is minimised by . Let be any element in then we have

Hence, is a minimiser. The minimiser is furthermore -a.s. unique: assume there is a second minimiser; then the above calculation shows that . Hence, -a.s. (Fremlin, 2000, 122Rc), i.e. a measurable set with exists such that holds for all . As is a norm we have that -a.s..

To show the second equality we note that, for every , by assumption. Hence, the supremum over is also 0 and the minimum is attained at . Uniqueness can be seen in the following way: assume there is a second minimiser; then for all we have . Hence, -a.s. (Fremlin, 2000, 122Rc), i.e. a measurable set with exists such that holds for all . Assume that there exists a such that then pick and we have