Optimizing variational representations of divergences and accelerating their statistical estimation

June 15, 2020 · by Jeremiah Birrell, et al.

Variational representations of distances and divergences between high-dimensional probability distributions offer significant theoretical insights and practical advantages in numerous research areas. Recently, they have gained popularity in machine learning as a tractable and scalable approach for training probabilistic models and for statistically differentiating between data distributions. Their advantages include: 1) They can be estimated from data. 2) Such representations can leverage the ability of neural networks to efficiently approximate optimal solutions in function spaces. However, a systematic and practical approach to improving the tightness of such variational formulas, and accordingly accelerating statistical learning and estimation from data, is currently lacking. Here we develop a systematic methodology for building new, tighter variational representations of divergences. Our approach relies on improved objective functionals constructed via an auxiliary optimization problem. Furthermore, the calculation of the functional Hessian of the objective functionals unveils the local curvature differences around the common optimal variational solution; this allows us to quantify and order the relative tightness gains between different variational representations. Finally, numerical simulations utilizing neural-network optimization demonstrate that tighter representations can result in significantly faster learning and more accurate estimation of divergences in both synthetic and real datasets (of more than 700 dimensions), often accelerated by nearly an order of magnitude.


Introduction

Divergences and distances between multivariate probability distributions play a central role in many mathematical, engineering, and scientific fields, ranging from statistical physics, large deviations theory, and uncertainty quantification to information theory, statistics, and machine learning. Variational representation formulas for divergences, also referred to as dual formulations, convert divergence calculation into an optimization problem over a function space and offer a valuable mathematical tool to build, train, and analyze probabilistic models and to measure similarity between data collections. Typical examples of variational representations are, among others, the Legendre transformation (LT) of an $f$-divergence [1, 2], the Donsker-Varadhan (DV) formula for the Kullback-Leibler (KL) divergence [3, 4] and the Kantorovich-Rubinstein duality formula for the Wasserstein distance [5]. Variational representations have been used in statistical mechanics and interacting particle systems [6], large deviations [4], divergence estimation [7, 8, 9], determining independence through mutual information estimation [10], adversarial learning of generative models [11, 12, 13], uncertainty quantification (UQ) of stochastic processes [14, 15], bounding risk in probably approximately correct (PAC) learning [16, 17, 18], as well as in parameter estimation [19].

There are two main mathematical ingredients needed for the construction of a variational formula: first, the function space where the optimal solution will be searched for and, second, the representation expression, called here the 'objective functional', whose optimization yields the value of the divergence. Crucial practical advantages of variational formulas in statistics and machine learning include: a) they do not require an explicit form of the probability distributions (or their likelihood ratio); the related probabilistic quantities can be approximated by statistical estimators over available data; b) they can exploit the capacity of rich regression models such as neural networks to efficiently search the function space for optimal solutions; the optimal solution is typically related to the likelihood ratio.

A single divergence can be derived from several different objective functionals. The key contribution of this paper is a systematic methodology that uses families of transformations (e.g., shifts, affine, and powers) to build new, tighter variational representations for divergences by creating improved objective functionals, as described in our main Theorem 1. This idea is both simple and powerful; it provides a general framework that unifies many of the previous variational formulas in the literature, reveals new connections between them, drives the derivation of new variational formulas, and has practical implications in terms of accelerated statistical training, learning, and estimation from data.

Striking consequences of the proposed framework include: (i) the connection between the LT-based representation of the KL divergence, the DV representation formula, and a new, improved DV-type formula, (ii) a concrete representation of the abstract objective functional in [9], and (iii) the derivation of new representation formulas for the $\alpha$-divergences and connections with a recently derived, DV-type variational representation of Rényi divergences.

The improved objective functionals constructed via our framework have the same optimal solution, but they are tighter in the sense that the same approximation of the optimum will provide a better approximation of the divergence, i.e., they are flatter around the optimal solution. We propose to employ (functional) Hessians of the objective functionals to quantify and order relative tightness gains between different variational representations of divergences, in terms of the local curvature around the optimal solution.

Finally, we demonstrate that these tighter representation formulas can accelerate numerical optimization and estimation of divergences in a series of synthetic and real examples, such as the statistical estimation of f-divergences and mutual information, including cases with high-dimensional, real data (in excess of 700 dimensions). Similarly to [10], we parameterize the function space using neural networks, hence the (parameter) optimization is efficiently performed with a back-propagation algorithm. We find that the improved, tighter representation formulas converge several times faster than the initial representation formula, often by nearly an order of magnitude in high-dimensional problems.

1 Tightening the Variational Representation of f-Divergences

Background

For $0\le a<1<b\le\infty$, define $\mathcal{F}_1(a,b)$ to be the set of all convex functions $f:(a,b)\to\mathbb{R}$ with $f(1)=0$. If $a$ (resp. $b$) is finite, we extend $f$ to $a$ (resp. $b$) by continuity and set $f=+\infty$ for $x\notin[a,b]$. Such functions are appropriate for defining $f$-divergences, $D_f(Q\|P)$, between probability measures $Q$ and $P$, which have the variational characterization

$D_f(Q\|P)=\sup_{g\in\mathcal{M}_b(\Omega)}\left\{E_Q[g]-E_P[f^*(g)]\right\},$   (1)

where $\mathcal{M}_b(\Omega)$ denotes the set of bounded measurable functions on $\Omega$ and $f^*$ is the Legendre transform of $f$ [20, 8]. Under appropriate assumptions (see Theorem 4.4 in [20]), the maximum is achieved at

$g^*=f'\!\left(\frac{dQ}{dP}\right).$   (2)
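As a concrete example, in the KL case $f(x)=x\log x$ the stationarity condition $\frac{d}{dx}(xy-x\log x)=y-\log x-1=0$ gives

$f^*(y)=\sup_{x>0}\{xy-x\log x\}=e^{y-1},\qquad g^*=f'\!\left(\frac{dQ}{dP}\right)=\log\frac{dQ}{dP}+1.$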

There are cases, such as the $\alpha$-divergences with $\alpha\in(0,1)$, where $D_f(Q\|P)$ can be given a meaningful finite value even if $Q\not\ll P$ (see [21]). However, the right-hand side of (1) is always $+\infty$ if $Q\not\ll P$, and so here we use the convention that $D_f(Q\|P)=+\infty$ when $Q\not\ll P$. The special case of the Kullback-Leibler (KL) divergence (i.e., $f(x)=x\log x$) has a well-known alternative variational representation, the Donsker-Varadhan (DV) variational formula

$D_{KL}(Q\|P)=\sup_{g\in\mathcal{M}_b(\Omega)}\left\{E_Q[g]-\log E_P\!\left[e^{g}\right]\right\}.$   (3)

It is known that the objective functional in (3) is tighter than that of (1), in the sense that $E_Q[g]-\log E_P[e^g]\ge E_Q[g]-E_P[e^{g-1}]$ for all $g$ [9]. In this paper, we present a general procedure for obtaining tighter variational representations of any $f$-divergence, for which the transition from (1) to (3) is just one special case and where we can derive even tighter representations than (3).
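To see the tightness gap concretely, the following minimal sketch (ours; it assumes NumPy and a Gaussian toy problem, not the paper's experiments) evaluates the LT objective (1) and the DV objective (3) at the same suboptimal test function; both are lower bounds on the divergence, with the DV bound closer to the true value:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x_q = rng.normal(1.0, 1.0, n)  # samples from Q = N(1, 1)
x_p = rng.normal(0.0, 1.0, n)  # samples from P = N(0, 1); KL(Q||P) = 0.5

def lt_objective(g_q, g_p):
    # LT objective (1) for KL: E_Q[g] - E_P[f*(g)] with f*(y) = e^{y-1}
    return g_q.mean() - np.exp(g_p - 1.0).mean()

def dv_objective(g_q, g_p):
    # DV objective (3): E_Q[g] - log E_P[e^g]
    return g_q.mean() - np.log(np.exp(g_p).mean())

g = lambda x: 0.8 * x  # suboptimal test function (a DV optimizer is x - 1/2)
print("exact KL :", 0.5)
print("LT bound :", lt_objective(g(x_q), g(x_p)))  # ~0.29, looser
print("DV bound :", dv_objective(g(x_q), g(x_p)))  # ~0.48, tighter
```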

Theoretical Results

Our method for deriving tighter variational representations for -divergences is described in the following Theorem; a proof can be found in Section 4.

Theorem 1.

Let $f\in\mathcal{F}_1(a,b)$ and suppose $D_f(Q\|P)<\infty$. With $g^*$ defined by Eq. (2) (when it exists), let $\Gamma$ be a family of functions (the test functions) with

$\mathcal{M}_b(\Omega)\cup\{g^*\}\subset\Gamma\subset\mathcal{M}(\Omega).$   (4)

Consider any family of transformations

$T_\theta:\Gamma\to\Gamma,\qquad\theta\in\Theta,$   (5)

that includes the identity map. Then

$D_f(Q\|P)=\sup_{g\in\Gamma}H_f^{\mathcal{T}}[g;Q,P],$   (6)
$H_f^{\mathcal{T}}[g;Q,P]:=\sup_{\theta\in\Theta}\left\{E_Q[T_\theta(g)]-E_P\!\left[f^*(T_\theta(g))\right]\right\},$   (7)

and the maximum in (6) is achieved at $g^*$. Furthermore, the objective functional, $H_f^{\mathcal{T}}$, in the variational representation (6) is tighter than the objective functional in (1), in the sense that

$H_f^{\mathcal{T}}[g;Q,P]\ge E_Q[g]-E_P[f^*(g)]$   (8)

for all $g\in\Gamma$.

(i) The main new insight and primary mathematical tool in this paper is formula (6), which allows the objective functional to be 'improved/tightened' using any appropriate family of transformations $\mathcal{T}=\{T_\theta\}_{\theta\in\Theta}$; examples of such families are discussed in the next subsection. Eq. (6) is a simple but far-reaching idea that reveals connections between many known variational representations and also leads to the derivation of new ones (see below, starting with Eq. (12)). (ii) The extension of Eq. (1) from $\mathcal{M}_b(\Omega)$ to $\Gamma$ is useful because the exact optimizer, $g^*$, is generally unbounded; various versions of this extension can be found in the literature [20, 8]. This extension is needed to justify the computation of variational derivatives around the optimum, see Section 2. It also implies that one does not need to impose a boundedness condition via a cutoff function when employing neural-based statistical estimation. (iii) The generalization of Eq. (1) to a family, $\Gamma$, that contains the optimizer is the natural next step; again, see [20, 8]. It provides a great deal of flexibility in adapting the proposed variational representation (6) to different $f$-divergences and informs the algorithmic implementation. We use this idea several times to restrict the optimization to, e.g., positive functions for the $\alpha$-divergences and finite-dimensional submanifolds for exponential families, see Section 3. (iv) If $\mathcal{T}$ is a group under composition then the objective functional (7) is invariant under the family of transformations, i.e., $H_f^{\mathcal{T}}[T_\theta(g);Q,P]=H_f^{\mathcal{T}}[g;Q,P]$ for all $\theta\in\Theta$. (v) If $\Omega$ is a metric space with the Borel $\sigma$-algebra then one can replace $\mathcal{M}_b(\Omega)$ with $C_b(\Omega)$ (bounded continuous functions) and $\mathcal{M}(\Omega)$ with $C(\Omega)$ (continuous functions) in Theorem 1. (vi) The auxiliary optimization problem in Eq. (7) can often be computed analytically; alternatively, due to its low dimensionality, the corresponding optimization can easily fit, without significant additional computational cost, within gradient-descent algorithms seeking the optimal solution in $\Gamma$.

Families of Transformations

Next, we present several useful families of transformations that, in conjunction with Theorem 1, yield tighter variational formulas.

  1. Identity: $T(g)=g$ leads to what we call the standard (or LT) $f$-divergence variational formula given by (1).

  2. Shifts: $T_\nu(g)=g+\nu$, $\nu\in\mathbb{R}$, lead to what we call the shift or $\nu$-improved variational formula

    $H_f^{\nu}[g;Q,P]=\sup_{\nu\in\mathbb{R}}\left\{E_Q[g]+\nu-E_P\!\left[f^*(g+\nu)\right]\right\}.$   (9)

    This result was first obtained in [22], and in this sense Theorem 1 is a broad and systematic generalization of Theorem 4.4 in [22], where only shift transformations were considered.

  3. Scaling Transformations: $T_\lambda(g)=\lambda g$ (with $\lambda$ ranging over $\mathbb{R}$ or a restricted range), which lead to the scaling or $\lambda$-improved variational formula

    $H_f^{\lambda}[g;Q,P]=\sup_{\lambda}\left\{\lambda E_Q[g]-E_P\!\left[f^*(\lambda g)\right]\right\}.$   (10)

  4. Affine Transformations: The above two cases can be combined into the two-parameter family $T_{\lambda,\nu}(g)=\lambda g+\nu$.

  5. Power Transformations: $T_\beta(g)=g^\beta$, $\beta>0$ (acting on positive functions $g$), which lead to the power or $\beta$-improved variational formula

    $H_f^{\beta}[g;Q,P]=\sup_{\beta>0}\left\{E_Q\!\left[g^\beta\right]-E_P\!\left[f^*(g^\beta)\right]\right\}.$   (11)

    As with the affine transformations, the power transformations can be combined with the shift and/or scaling transformations to form a multiparameter family. These are related to the well-known Box-Cox transformation [23], used in statistics to transform non-normal data sets to approximately normal.
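To make remark (vi) above concrete, the auxiliary scalar optimization in (7) can be carried out numerically for each fixed $g$. A minimal sketch (ours, assuming NumPy/SciPy; the function names are illustrative) for a one-parameter transformation family:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def improved_objective(g_q, g_p, f_star, transform, bounds):
    """Improved objective (7): optimize over the scalar transformation
    parameter theta for fixed evaluations of g on samples from Q and P."""
    def neg_obj(theta):
        tq, tp = transform(g_q, theta), transform(g_p, theta)
        return -(tq.mean() - f_star(tp).mean())
    res = minimize_scalar(neg_obj, bounds=bounds, method="bounded")
    return -res.fun

# Example: the shift-improved KL objective should match the DV value (12)
rng = np.random.default_rng(1)
x_q, x_p = rng.normal(1.0, 1.0, 10**5), rng.normal(0.0, 1.0, 10**5)
g = lambda x: 0.8 * x
f_star = lambda y: np.exp(y - 1.0)   # f* for f(x) = x log x
shift = lambda vals, nu: vals + nu   # T_nu(g) = g + nu
val = improved_objective(g(x_q), g(x_p), f_star, shift, bounds=(-10, 10))
dv = g(x_q).mean() - np.log(np.exp(g(x_p)).mean())
print(val, dv)  # agreement up to optimizer tolerance
```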

Deriving New and Existing Variational Representations

Here we explore several specific cases of the above framework, focusing primarily on examples where the optimization over the shift and/or scaling parameter can be done analytically. In doing so, we produce new variational representations, as well as uncover connections with previously known variational formulas. We do not claim these examples are exhaustive. Nevertheless, they cover many important cases and illustrate the power and flexibility of Theorem 1.

  1. Donsker-Varadhan formula (KL-divergence with shift transformations): KL-divergence is the $f$-divergence corresponding to $f(x)=x\log x$, which has Legendre transform $f^*(y)=e^{y-1}$. The maximum over the shift transformations in (7) occurs at $\nu=1-\log E_P[e^g]$, and hence

    $H_{KL}^{\nu}[g;Q,P]=\sup_{\nu\in\mathbb{R}}\left\{E_Q[g]+\nu-E_P\!\left[e^{g+\nu-1}\right]\right\}=E_Q[g]-\log E_P\!\left[e^{g}\right].$   (12)

    The result is the objective functional in the well-known Donsker-Varadhan variational formula (3) and so this framework provides the connection between (3) and (1). We also note that in [24] a connection is derived between (3) and (1) by using a logarithmic change of variables in the function space, based on (2) for the KL case.

  2. Improved Donsker-Varadhan (KL-divergence with affine transformations): Introducing a scaling parameter into Eq. (12), i.e., optimizing over all affine transformations in (6), one finds the new KL variational representation

    $D_{KL}(Q\|P)=\sup_{g\in\Gamma}\sup_{\lambda\in\mathbb{R}}\left\{\lambda E_Q[g]-\log E_P\!\left[e^{\lambda g}\right]\right\}.$   (13)

    The inclusion of the shift family within the affine family implies that (13) is tighter than DV (3). Calculations and numerical results that quantify this improved tightness are found in Section 2. The optimization over $\lambda$ in Eq. (13) cannot be evaluated analytically in general, but it can be done numerically (as discussed in Section 3) and can also be approximated analytically, leading to an alternative variational characterization of the KL divergence; see Appendix B for details.

  3. Connection with the results of [9]: In Theorem 1 of [9] the following improved variational formula was derived:

    (14)

    This is another special case of our framework, as can be seen by first rewriting the Legendre-Fenchel transform and then using Theorem 4.2 in [22]:

    (15)

    Hence the variational formula (14) is in fact the same as the $\nu$-improved variational formula (9).

  4. Exponential Families: If $Q$ and $P$ are members of a parametric family then the set of test functions, $\Gamma$, in (6) can be reduced to a finite-dimensional manifold. For instance, if $Q$ and $P$ are members of the same exponential family with vector of sufficient statistics $T(x)$, then the explicit optimizer $g^*=f'(dQ/dP)$ lies on a finite-dimensional manifold of functions, parameterized by linear combinations of the sufficient statistics and constants, and computation of the $f$-divergence reduces to the following finite-dimensional optimization problem:

    (16)

    This variational representation can be further combined with any appropriate family of transformations $\mathcal{T}$; see Appendix D for details.

  5. $\alpha$-Divergences (scaling transformations): The $\alpha$-divergences are the family of $f$-divergences corresponding to $f_\alpha(x)=\frac{x^\alpha-1}{\alpha(\alpha-1)}$, $\alpha\neq0,1$. See [21] for properties, related families, and further references. The family includes the KL, Hellinger and $\chi^2$ divergences as special or limiting cases, see [25], is closely related to the Tsallis entropies [26], and appears also in the context of information geometry, see [27]. By optimizing (6) over the family of scaling transformations, $T_\lambda(g)=\lambda g$ (restricted to a single sign of $\lambda$), we obtain a new variational representation of the $\alpha$-divergences:

    $D_{f_\alpha}(Q\|P)=\sup_{g\in\Gamma,\,g>0}\frac{1}{\alpha(\alpha-1)}\left(E_Q[g]^{\alpha}\,E_P\!\left[g^{\frac{\alpha}{\alpha-1}}\right]^{1-\alpha}-1\right),$   (17)

    where $\lambda>0$ if $\alpha>1$ and $\lambda<0$ if $\alpha\in(0,1)$. Eq. (17) has the exact optimizers $g^*=c\,(dQ/dP)^{\alpha-1}$, $c>0$. Theorem 1 guarantees that the objective functionals in the new variational representations (17) are tighter than the standard $\alpha$-divergence objective functional from (1). See Appendix A for details on the calculations that lead to Eq. (17), as well as for connections to the KL divergence in the limits $\alpha\to0,1$.

  6. Variational representations of Rényi divergences: Eq. (17) leads to a variational characterization of Rényi divergences. Using the known connection between the $\alpha$- and Rényi divergences, along with Eq. (17) and the change of variables $g\to e^{(\alpha-1)g}$ (see Appendix A for details), one obtains

    $R_\alpha(Q\|P)=\sup_{g\in\Gamma}\left\{\frac{\alpha}{\alpha-1}\log E_Q\!\left[e^{(\alpha-1)g}\right]-\log E_P\!\left[e^{\alpha g}\right]\right\}.$   (18)

    This constitutes an independent derivation of the Rényi variational formula derived in [28, 24], while in the limit $\alpha\to1$ one recovers (3). The Rényi variational formula (18) was also used in [29] to construct cumulant-based generative adversarial networks.

  7. $\chi^2$-Divergence: We use Theorem 1 to provide a new tight variational perspective on the classical Hammersley-Chapman-Robbins bound for the $\chi^2$-divergence. The $\chi^2$-divergence is the special case of the $\alpha$-divergences with $\alpha=2$ (up to normalization; here $f(x)=(x-1)^2$). Here we optimize (6) over the affine family $T_{\lambda,\nu}(g)=\lambda g+\nu$ to obtain

    $\chi^2(Q\|P)=\sup_{g\in\Gamma}\frac{\left(E_Q[g]-E_P[g]\right)^2}{\mathrm{Var}_P[g]},$   (19)

    where $\mathrm{Var}_P[g]:=E_P[g^2]-E_P[g]^2$ and the maximum is achieved at $g^*=dQ/dP$ (or any affine transformation thereof). Eq. (19) implies the Hammersley-Chapman-Robbins bound for the $\chi^2$ divergence (see, e.g., Eq. 4.13 in [30]) and shows tightness over the set $\Gamma$. The objective functional in (19) was proposed as a loss function for $\chi^2$-GANs [31]; thus, (19) provides a complete and rigorous justification for this choice. Finally, if we instead optimize (6) over the scaling transformations alone, we obtain the objective functional for $\chi^2$ derived in [24], which is thus less tight than (19); see Appendix A.3.

  8. Connections with Uncertainty Quantification: The improved DV representation (13) provides an alternative and arguably more general derivation of model uncertainty bounds derived recently in [14, 15]. These results quantify the effects of model uncertainty by bounding expectations of an observable $h$ under an alternative model, $Q$, in terms of the behavior under a baseline model $P$ and the model discrepancy, measured by $D_{KL}(Q\|P)$. Specifically, (13) implies after straightforward manipulation that $E_Q[h]\le\inf_{\lambda>0}\left\{\frac{1}{\lambda}\log E_P\!\left[e^{\lambda h}\right]+\frac{1}{\lambda}D_{KL}(Q\|P)\right\}$ [14]. More generally, we can obtain similar UQ bounds when model discrepancy is measured by an $f$-divergence by using the affine family $\mathcal{T}=\{\lambda h+\nu\}$ in (6) and performing the analogous manipulations:

    $E_Q[h]\le\inf_{\lambda>0,\,\nu\in\mathbb{R}}\left\{\frac{1}{\lambda}\left(E_P\!\left[f^*(\lambda h+\nu)\right]-\nu+D_f(Q\|P)\right)\right\}.$   (20)

    The Hammersley-Chapman-Robbins bound can also be viewed as a special case of (20) in this UQ context. Similarly, UQ bounds for risk-sensitive functionals in terms of Rényi divergences, which were obtained recently in [32], readily follow from (18) after appropriate manipulations and an optimization over the remaining scalar parameter.

  9. $\alpha$-Divergences (scaling and power transformations): For the $\alpha$-divergences, combining the scaling and power families of transformations yields

    $D_{f_\alpha}(Q\|P)=\sup_{g\in\Gamma,\,g>0}\,\sup_{\beta>0}\,\frac{1}{\alpha(\alpha-1)}\left(E_Q\!\left[g^\beta\right]^{\alpha}E_P\!\left[g^{\frac{\beta\alpha}{\alpha-1}}\right]^{1-\alpha}-1\right).$   (21)

    The optimization over scalings was evaluated as in Eq. (17), but the optimization over the power transformations, $\beta$, cannot be done analytically. In practice, $\beta$ can be included as an additional parameter in a numerical optimization procedure; see Section 3 for further discussion.
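As a concrete numerical check (ours, not from the paper), the scaling-improved objective (17), with the optional power parameter of (21), can be implemented in a few lines; here we verify the $\alpha=2$ case, where $D_{f_2}=\chi^2/2$ in the $f_\alpha$ normalization used above:

```python
import numpy as np

def alpha_div_objective(g_q, g_p, alpha, beta=1.0):
    """Scaling-improved alpha-divergence objective, Eq. (17), with an
    optional power transformation parameter beta as in Eq. (21).
    g_q, g_p : values of a positive test function g on samples from Q, P."""
    h_q, h_p = g_q**beta, g_p**beta               # power transformation
    a = h_q.mean()                                # E_Q[g^beta]
    b = (h_p**(alpha / (alpha - 1.0))).mean()     # E_P[g^{beta*alpha/(alpha-1)}]
    return (a**alpha * b**(1.0 - alpha) - 1.0) / (alpha * (alpha - 1.0))

# Check at the exact optimizer g* = (dQ/dP)^(alpha-1) for Q = N(0.5, 1),
# P = N(0, 1) and alpha = 2, where D_{f_2} = chi^2/2 = (e^{0.25} - 1)/2.
rng = np.random.default_rng(2)
x_q, x_p = rng.normal(0.5, 1.0, 10**6), rng.normal(0.0, 1.0, 10**6)
ratio = lambda x: np.exp(0.5 * x - 0.125)         # dQ/dP
alpha = 2.0
g_star = lambda x: ratio(x)**(alpha - 1.0)
print(alpha_div_objective(g_star(x_q), g_star(x_p), alpha))
print((np.exp(0.25) - 1.0) / 2.0)                 # exact value
```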

2 Variational Derivatives and Tightness Gains

In Theorem 1, we established the general methodology for building tighter variational representations of $f$-divergences by constructing suitable objective functionals $H_f^{\mathcal{T}}$. Here we quantify the relative tightness gains corresponding to different transformation families $\mathcal{T}$: for all such families the maximizer in (6) is always given by (2). Therefore, our approach relies on building quadratic variational approximations of each objective functional (7) around the common maximizer $g^*$, and subsequently comparing the corresponding (variational) Hessians; see Figure 1 for a demonstration. Specifically, using that the maximum occurs at $g^*$, an asymptotic expansion yields

$H_f^{\mathcal{T}}[g^*+\epsilon\phi;Q,P]=D_f(Q\|P)+\frac{\epsilon^2}{2}\,\delta^2H_f^{\mathcal{T}}[g^*;\phi]+O(\epsilon^3),$   (22)

where we define $\delta^2H_f^{\mathcal{T}}[g^*;\phi]:=\frac{d^2}{d\epsilon^2}H_f^{\mathcal{T}}[g^*+\epsilon\phi;Q,P]\big|_{\epsilon=0}$ and $\phi$ is any functional perturbation of the maximizer $g^*$. The second-order term $\delta^2H_f^{\mathcal{T}}[g^*;\phi]$, i.e., the variational Hessian, is necessarily non-positive and determines the behavior in a neighborhood of the maximizer. By comparing $\delta^2H_f^{\mathcal{T}}$ for different families $\mathcal{T}$, we can quantify the 'tightness gains' provided by different transformation families. All calculations can be made rigorous under appropriate assumptions, but here we choose to operate formally to keep the discussion brief.

We focus our analysis on affine transformations, $T_{\lambda,\nu}(g)=\lambda g+\nu$, but a similar analysis can be performed for any family with a smooth, finite-dimensional parameterization. The standard $f$-divergence variational formula (1) corresponds to $\mathcal{T}$ containing only the identity, and we write the corresponding objective functional as $H_f^{LT}$. Specializing (7) to the affine case, we define the functional

$H_f^{\Lambda}[g;Q,P]:=\sup_{(\lambda,\nu)\in\Lambda}\left\{\lambda E_Q[g]+\nu-E_P\!\left[f^*(\lambda g+\nu)\right]\right\},\qquad\Lambda\subset\mathbb{R}^2,$   (23)

which, for the choices of $\Lambda$ corresponding to the identity, shifts only, scalings only, and the full affine family, leads to four different objective functionals and variational representations of the $f$-divergence,

$D_f(Q\|P)=\sup_{g\in\Gamma}H_f^{LT}[g]=\sup_{g\in\Gamma}H_f^{\nu}[g]=\sup_{g\in\Gamma}H_f^{\lambda}[g]=\sup_{g\in\Gamma}H_f^{\lambda,\nu}[g],$   (24)

and the corresponding Hessians $\delta^2H_f^{LT}$, $\delta^2H_f^{\nu}$, $\delta^2H_f^{\lambda}$, and $\delta^2H_f^{\lambda,\nu}$. Next we compute and compare these variational Hessians for the important case of the KL divergence, where $f(x)=x\log x$ and $f^*(y)=e^{y-1}$. Detailed computations for general $f$-divergences can be found in Appendix C.

Tightness gains for KL divergence:

$\delta^2H_{KL}^{LT}[g^*;\phi]=-E_Q[\phi^2],$   (25)
$\delta^2H_{KL}^{\nu}[g^*;\phi]=-\mathrm{Var}_Q[\phi],$   (26)
$\delta^2H_{KL}^{\lambda,\nu}[g^*;\phi]=-\mathrm{Var}_Q[\phi]+\frac{\mathrm{Cov}_Q(g^*,\phi)^2}{\mathrm{Var}_Q[g^*]},$   (27)

corresponding to (1), (3), and (13), respectively. The gains inherent in the inclusions in Theorem 1 are quantified by comparing the variational curvatures Eq. (25), Eq. (26), and Eq. (27); note that they are progressively smaller in magnitude. Curvature computations demonstrate how one can make rigorous and precisely quantify heuristics such as those presented in Figure 1 of [9]. Our Hessian computations here also quantify, and extend to $f$-divergences, the accuracy gains observed in the neural estimation of mutual information in [10]. Figure 1 shows a simple numerical example using two one-dimensional Gaussians, with perturbations of the optimizer in two different directions (top and bottom panels). Optimizing over all affine transformations (red curves) provides significant curvature gains when compared to optimization over only shifts (blue curves), i.e., the improved DV proposed in (13) compared to the classical DV objective functional (3), and even more so compared to the Legendre-transform case (1) (black curves).
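For instance, the DV curvature (26) follows from a direct second-variation computation: with $M(\epsilon):=E_P\!\left[e^{g^*+\epsilon\phi}\right]$ and $g^*=\log(dQ/dP)$ (so that $M(0)=1$ and $e^{g^*}\,dP=dQ$),

$\delta^2H_{KL}^{\nu}[g^*;\phi]=-\frac{d^2}{d\epsilon^2}\log M(\epsilon)\Big|_{\epsilon=0}=-\left(E_Q[\phi^2]-E_Q[\phi]^2\right)=-\mathrm{Var}_Q[\phi],$

since the linear term $E_Q[g^*+\epsilon\phi]$ contributes no curvature.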

Figure 1: Both plots demonstrate the improvement of the KL divergence objective functional in a neighborhood of the optimizer. Here, $g=g^*+\epsilon\phi$, with two different perturbation directions $\phi$ (top and bottom panels). $Q$ and $P$ are one-dimensional Gaussians. Black curves: standard $f$-divergence objective functional; blue curves: $\nu$-improved (i.e., Donsker-Varadhan); magenta curves: $\lambda$-improved; red curves: $(\lambda,\nu)$-improved. Note that the perturbation direction in the top panel is related to $g^*$ by a shift and scaling, hence the $(\lambda,\nu)$-improved objective functional in the top plot has zero curvature in this direction, a manifestation of its shift and scale invariance.

3 Numerical Examples: Faster Statistical Estimation and Learning

Next we discuss the practical implications of using the tighter variational representations developed in Theorem 1, focusing on accelerating neural-based statistical learning and estimation. In recent works, variational representations such as (1) or (3) were used to estimate $f$-divergences and likelihood ratios based solely on available data [8]. This variational perspective also proved to be a crucial mathematical step in training generative adversarial networks (GANs) [11, 12, 13] and in developing neural-based estimators for mutual information [10], taking advantage of the ability of neural networks to search efficiently through function spaces.

Improved variational formulas for statistical estimation and learning were previously studied in: i) [9], using Eq. (14) and assuming a reproducing kernel Hilbert space (RKHS) as the function space; ii) [10], where the DV and LT formulas for the KL divergence were used to estimate mutual information and improve learning. Both of these implicitly rely on the $\nu$-improved variational formula (see Eq. (12) and Eq. (15)). Our Theorem 1 provides a broad generalization of these ideas to other transformation families, allows for practical implementation of the method in [9] with more general function-space parameterizations (e.g., neural networks), and generalizes the ideas in [10] to other $f$-divergences beyond KL, where it can provide improved mutual information estimators based on (6).

In the following, we employ the outcomes from Sections 1 & 2 and build several variational neural network estimators, in the general spirit of [8, 10]. We demonstrate the performance improvements that result from representations such as Eq. (6). We start with the heuristic observation, illustrated in Figure 1, that tighter representations can improve the accuracy of statistical estimators for $f$-divergences, in the sense that the same approximation of the optimizer $g^*$ will provide a better approximation of the divergence. Moreover, tighter variational formulas can lead to faster convergence of the search algorithm, as we now motivate: suppose one minimizes a convex function $F$ by the simple gradient descent algorithm $x_{k+1}=x_k-\eta\nabla F(x_k)$. If $\nabla F$ is $\beta$-Lipschitz (i.e., the Hessian is bounded by $\beta$) then this algorithm converges if $\eta\le1/\beta$; the analysis suggests the optimal learning rate $\eta=1/\beta$ and leads to an error bound after $k$ steps of order $\beta\|x_0-x^*\|^2/k$ (see, e.g., Theorem 3.3 in [33]). If $\tilde F$ has the same optimizer and optimal value, but has a smaller Hessian bound, $\tilde\beta<\beta$, then the optimal learning rate, $\tilde\eta=1/\tilde\beta$, is larger and the error bound after an equal number of steps is smaller, i.e., the use of $\tilde F$ in place of $F$ can lead to faster convergence.

The above argument is only heuristic; the constant-learning-rate algorithm is far from optimal in most cases, and the above analysis does not capture the complexity of the current setting. Nonetheless, it does provide important insight into the numerical results presented below, which demonstrate that, in practice, the improved variational formulas do generally lead to faster convergence of the estimators, all other factors being equal.
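A toy version of this comparison (our construction, using exact Gaussian expectations rather than samples): maximize the LT and DV objectives for $D_{KL}(N(1,1)\|N(0,1))=1/2$ over the one-parameter family $g_a(x)=ax$ by gradient ascent. The DV objective is shift-invariant, so this restricted family suffices to attain the exact divergence, whereas the LT objective stalls at a biased value:

```python
import numpy as np

def dv_value_and_grad(a):
    # DV objective along g_a(x) = a*x:  J(a) = a - a^2/2
    # (exact expectations: E_Q[a x] = a, log E_P[e^{a x}] = a^2/2)
    return a - 0.5 * a**2, 1.0 - a

def lt_value_and_grad(a):
    # LT objective along g_a:  J(a) = a - exp(a^2/2 - 1)
    e = np.exp(0.5 * a**2 - 1.0)
    return a - e, 1.0 - a * e

eta, steps = 0.1, 200
for name, fn in [("LT", lt_value_and_grad), ("DV", dv_value_and_grad)]:
    a = 0.0
    for _ in range(steps):
        _, grad = fn(a)
        a += eta * grad  # gradient ascent
    print(name, a, fn(a)[0])  # DV reaches 0.5; LT stalls near 0.45
```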

The examples below employ the new variational formulas (17) and (21) for the $\alpha$-divergences. We focus on the well-known Hellinger divergence ($\alpha=1/2$) but consider other cases as well (see also Appendix E). Computations were done in TensorFlow using the AdamOptimizer [34], an adaptive-learning-rate SGD optimizer, with all methods given the same initial learning rate. When working with neural-network-based estimators of $\alpha$-divergences, we enforce positivity of the test functions (see (17)) by composing a neural network family with ReLU activation functions with a positive output transformation. If the optimization over a parameterized family of transformations, $T_\theta$, cannot be done analytically, then we solve the optimization problem (6)-(7) by performing stochastic gradient descent (SGD) on the full collection of parameters, i.e., the network parameters together with $\theta$. In such cases, our method can be thought of as an enhancement of the neural network structure. The nested nature of the optimization over $g$ and $\theta$ also allows for more sophisticated methods (not explored here): e.g., for each update of $g$ one can perform several SGD steps for $\theta$, thus solving the (generally low-dimensional) problem (7) to high accuracy, before performing another SGD step for $g$ in (6); this is reminiscent of multiscale numerical methods [35].
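As an illustration of this joint-SGD implementation, the following minimal TensorFlow sketch (ours; the softplus positivity map, network size, and hyperparameters are illustrative assumptions, not the paper's exact setup) optimizes the scaling-and-power-improved $\alpha$-divergence objective (21) over the network parameters and the power parameter $\beta$ simultaneously:

```python
import tensorflow as tf

alpha = 0.5  # Hellinger case

# Test function g > 0: a ReLU network composed with softplus (our choice
# of positivity map); the network is built on its first call.
net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
log_beta = tf.Variable(0.0)  # power parameter beta = exp(log_beta) > 0
opt = tf.keras.optimizers.Adam(learning_rate=1e-3)

def objective(x_q, x_p):
    # Objective (21): scaling optimized analytically, beta optimized by SGD.
    beta = tf.exp(log_beta)
    g_q = tf.nn.softplus(net(x_q))
    g_p = tf.nn.softplus(net(x_p))
    a = tf.reduce_mean(g_q ** beta)
    b = tf.reduce_mean(g_p ** (beta * alpha / (alpha - 1.0)))
    return (a ** alpha * b ** (1.0 - alpha) - 1.0) / (alpha * (alpha - 1.0))

@tf.function
def train_step(x_q, x_p):
    # x_q, x_p: minibatches of samples from Q and P, shape (batch, d)
    with tf.GradientTape() as tape:
        loss = -objective(x_q, x_p)  # ascend the variational lower bound
    params = net.trainable_variables + [log_beta]
    opt.apply_gradients(zip(tape.gradient(loss, params), params))
    return -loss
```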

Hellinger-MINE

In Figure 2, we present the computation of the Hellinger mutual information (Hellinger-MI), i.e., the Hellinger divergence between the joint distribution and the product of the marginals, via neural network optimization. Typically the divergence in mutual information is chosen to be KL; however, one can consider a whole array of different $f$-divergences for this purpose, see for instance [36]. Here, the two random variables are correlated multivariate Gaussians with component-wise correlation $\rho$. The results demonstrate that, for a given computational budget (i.e., a fixed number of SGD iterations), the improved variational formulas (red and blue) yield more accurate results, i.e., they converge faster than the standard variational formula (1) (black). Moreover, optimizing over both scalings and powers (red) provides a non-trivial improvement over the scaling-improved method (blue). This generalizes the findings in [10], which compared the DV variational formula (3) with (1) for the KL divergence. We emphasize that despite the lack of an analytical formula for the optimization over $\beta$, the inclusion of this single additional parameter in the variational formula leads to a clear performance gain.

Figure 2: Estimation of the Hellinger-based mutual information between correlated multivariate Gaussians with component-wise correlation $\rho$. We use a fully-connected neural network with one hidden layer of 64 nodes, and training is performed with a minibatch size of 100. We show the Hellinger MI as a function of $\rho$ after 1000 steps of SGD, averaged over 50 runs. The inset shows the relative error, as a function of the number of SGD iterations, for a fixed value of $\rho$.

Submanifold Parameterization for Exponential Families

Our method allows for a great deal of flexibility in the choice of function-space parameterization. In a 'small-data' setting, the assumption of an exponential family structure can serve as an effective regularization. We illustrate this in Figure 3, which shows the estimation of an $\alpha$-divergence between multivariate Gaussians using a data set of 5000 samples from each distribution for SGD (minibatch size of 100) and another 5000 samples for Monte Carlo estimation of the value of the objective functional. Using the submanifold estimation formula (16) and its $\lambda$-improved variant (see Appendix D), we obtain the magenta and red curves, respectively. In blue, we show the result from the $\lambda$-improved variational formula (17), and in black the result using the standard $\alpha$-divergence objective functional (1); both use neural network families with one fully connected hidden layer (5 nodes). The number of nodes was chosen so that all methods use approximately the same number of parameters. The neural network parameterization converges faster, but ends up with a larger bias than the submanifold parameterization. The $\lambda$-improved variational formulas lead to faster convergence than the standard variational formula in both cases, as expected from our theory.

Figure 3: Estimation of an $\alpha$-divergence between two multivariate Gaussians with randomly generated variances and one of the means randomly perturbed from zero. We compare the convergence performance of the neural network and submanifold parameterizations. Relative error was averaged over 50 runs.

Figure 4: Estimation of the Hellinger divergence between two distributions obtained by (iid) randomly translating the MNIST handwritten digits dataset [37]: each sample is a random translation of an MNIST image in the horizontal and vertical directions (iid shifts, rounded to the nearest pixel and with periodic boundary conditions). Each step of SGD uses two independent minibatches of 100 such samples (one minibatch for each of the two distributions). Monte Carlo estimation of the value of the Hellinger divergence was done using the corresponding objective functional and with samples coming from two separate datasets of 10000 randomly shifted MNIST images (one collection of images for each distribution). The function space was parameterized via fully-connected neural networks with one hidden layer of 128 nodes. The results were averaged over 50 runs.

MNIST Dataset

As a final example, we illustrate the accelerated convergence on high-dimensional (784-dimensional) realistic data by estimating the Hellinger divergence between two distributions obtained by (iid) randomly translating the MNIST handwritten digits image dataset [37]. This provides an effective test case wherein we know the exact answer: the two distributions are identical in law, so the divergence is zero. Figure 4 shows the error as a function of the number of SGD iterations, and once again demonstrates that the improved variational formulas lead to faster convergence; in this case, nearly one order of magnitude fewer SGD iterations are required to reach a given accuracy when using the tighter objective functionals. In practice, this means that one can more quickly detect whether or not two data streams are in fact coming from the same distribution.
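A sketch of the translated-MNIST pipeline described above (ours; we use uniform pixel shifts as an assumption, and random arrays as stand-ins for the actual MNIST images, whose loading we omit):

```python
import numpy as np

def random_translate(images, rng):
    """Shift each 28x28 image by iid uniform pixel offsets in the
    horizontal and vertical directions, with periodic boundary conditions."""
    out = np.empty_like(images)
    for i, img in enumerate(images):
        dy, dx = rng.integers(0, 28, size=2)
        out[i] = np.roll(img, shift=(dy, dx), axis=(0, 1))
    return out

# Two independent collections of randomly shifted images define the two
# distributions; since they share the same law, the exact divergence is 0.
rng = np.random.default_rng(3)
images = rng.random((200, 28, 28))                    # stand-in for MNIST
x_q = random_translate(images, rng).reshape(200, -1)  # 784-dimensional
x_p = random_translate(images, rng).reshape(200, -1)
```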

In conclusion, although the proposed optimization framework was applied here to statistical learning and estimation, it can be of broader interest, among other areas, in epistemic uncertainty quantification [14], in coarse-graining and model reduction [38, 39, 40], as well as in PAC learning [18] and adversarial learning [11, 13].

4 Proof of Theorem 1

In this section we provide a detailed proof of Theorem 1 from the main text. For the convenience of the reader, we will recall the relevant definitions and notation below.

Let $Q$ and $P$ be probability measures on a measurable space $(\Omega,\mathcal{F})$ and, for any $0\le a<1<b\le\infty$, define $\mathcal{F}_1(a,b)$ to be the set of convex functions $f:(a,b)\to\mathbb{R}$ with $f(1)=0$. If $a$ (resp. $b$) is finite, we extend $f$ to $a$ (resp. $b$) by continuity and set $f=+\infty$ for $x\notin[a,b]$. The result is a convex, lower semicontinuous function, $f:\mathbb{R}\to(-\infty,\infty]$. The $f$-divergence of $Q$ with respect to $P$ is defined by

$D_f(Q\|P):=E_P\!\left[f\!\left(\frac{dQ}{dP}\right)\right]$ if $Q\ll P$, and $D_f(Q\|P):=+\infty$ otherwise.   (28)

Our starting point is the following variational characterization [20, 8]:

$D_f(Q\|P)=\sup_{g\in\mathcal{M}_b(\Omega)}\left\{E_Q[g]-E_P[f^*(g)]\right\},$   (29)

where $\mathcal{M}_b(\Omega)$ denotes the set of bounded measurable functions on $\Omega$ and

$f^*(y):=\sup_{x\in\mathbb{R}}\{xy-f(x)\}$   (30)

is the Legendre transform.

Remark 1.

Note that $f^*(y)\ge y$ (take $x=1$ in (30) and use $f(1)=0$). This implies that $f^*(g)$ is bounded below for $g\in\mathcal{M}_b(\Omega)$, and hence $E_P[f^*(g)]$ is defined.

The technical aspects of the proof of Theorem 1 revolve around ensuring that all of the required expectations and operations are well defined (without requiring any arbitrary convention regarding expressions of the form $\infty-\infty$). Modulo those details, the derivation of Eq. (6) is quite simple. As a first step, we show that Eq. (29) can be extended to certain unbounded $g$. This is similar to results in [20, 8], but we will prove explicit conditions under which the expectations exist. To do this, we will need the following lemmas:

Lemma 1.

Let . Then one of the following holds

  1. is bounded below.

  2. The set is of the form or for some and is non-decreasing.

Proof.

Suppose is not bounded below. Take with . We know and so and hence . is convex so if we let then this implies .

To show is non-decreasing, suppose that we have with . Taking as above, find an such that and . is convex and so, letting , we have

(31)

This is a contradiction, hence is non-decreasing. ∎

Lemma 2.

Let and suppose is bounded below or . Then for all .

Remark 2.

We use the notation $g=g^+-g^-$, $g^\pm\ge0$, for the decomposition of a function into its positive and negative parts.

Proof.

Fix . If is bounded below then is bounded above and the result is trivial, so suppose not. Lemma 1 then implies that is non-decreasing. General properties of Legendre transforms on the real line imply that is convex, lower semicontinuous, and is continuous on . Hence there exists such that on and on (note that ). Define , so that , , and

(32)

Hence .

Now define . is bounded above and so and we can use Eq. (29) to find

(33)

We have pointwise, , and , so we can use the dominated convergence theorem to obtain

(34)

(here it was important that we are in the case where ). We also have , hence (recall we are in the case where is nondecreasing) and for large enough we have for all . is continuous on , hence so

(35)

Therefore the monotone convergence theorem implies , and so

(36)

We therefore conclude that . ∎

We can now prove that Eq. (29) can be extended to the set $\Gamma$.

Theorem 2.

Let and suppose either is bounded below or . Then

$D_f(Q\|P)=\sup_{g\in\Gamma}\left\{E_Q[g]-E_P[f^*(g)]\right\},$   (37)

where the objective functional is valued in $[-\infty,\infty)$.

Proof.

Lemma 2 implies that $E_P[f^*(g)]$ is well defined for all $g\in\Gamma$, and so the objective functional in Eq. (37) is valued in $[-\infty,\infty)$. If we can show $E_Q[g]-E_P[f^*(g)]\le D_f(Q\|P)$ for all $g\in\Gamma$ then the claimed result will follow by using Eq. (29).

Fix . If or then the required bound is trivial, so suppose not. Then -a.s. We are in the case where , and so and -a.s. as well.

In summary, it suffices to show in the case where , , , . To do this, fix and define . and so Eq. (29) gives

(38)

We have pointwise and , therefore the dominated convergence theorem applies.

We have and is continuous on , therefore pointwise. We also have

(39)

hence the dominated convergence theorem implies . Combining these gives

(40)

and so

(41)

This proves the claim. ∎

We now prove Theorem 1 from the main text, which we restate below. For completeness, we also provide a derivation of the formula for the optimizer, $g^*$, which was obtained in [20].

Theorem 3.

Let and suppose either is bounded below or . Then:

  1. Suppose , is , is strictly increasing, and one of the following holds:

    1. and if the value (resp. ) is achieved then