
# Maximum Multiscale Entropy and Neural Network Regularization

A well-known result across information theory, machine learning, and statistical physics shows that the maximum entropy distribution under a mean constraint has an exponential form called the Gibbs-Boltzmann distribution. This is used for instance in density estimation or to achieve excess risk bounds derived from single-scale entropy regularizers (Xu-Raginsky '17). This paper investigates a generalization of these results to a multiscale setting. We present different ways of generalizing the maximum entropy result by incorporating the notion of scale. For different entropies and arbitrary scale transformations, it is shown that the distribution maximizing a multiscale entropy is characterized by a procedure which has an analogy to the renormalization group procedure in statistical physics. For the case of decimation transformation, it is further shown that this distribution is Gaussian whenever the optimal single-scale distribution is Gaussian. This is then applied to neural networks, and it is shown that in a teacher-student scenario, the multiscale Gibbs posterior can achieve a smaller excess risk than the single-scale Gibbs posterior.


## 1 Introduction

Many real-world signals and physical systems have a large variety of length scales in their structures. Such multiscale structures have been studied and exploited in different fields and with different tools, such as statistical mechanics, Gaussian processes, and signal processing, among others. In this paper, we study distributions which maximize uncertainty at different scales simultaneously under a mean constraint. We are in particular motivated by the goal of analyzing neural networks and their generalization error while exploiting their multilevel characteristic.

The central notion studied in this paper, the multiscale entropy, is simply defined by taking a linear mixture of entropies of a system at different length scales. Hence, the multiscale entropy is a generalization of the classical entropy. For instance, if $W_1,\dots,W_d$ denote the layers of a neural network, then for two distributions $P$ and $Q$ defined on the weight parameters, the multiscale relative entropy between $P$ and $Q$ with index $\sigma=(\sigma_1,\dots,\sigma_d)$ is given by:

$$D^{(\sigma)}(P_{W_1\dots W_d}\,\|\,Q_{W_1\dots W_d}) = \sigma_1 D(P_{W_1\dots W_d}\,\|\,Q_{W_1\dots W_d}) + \sigma_2 D(P_{W_1\dots W_{d-1}}\,\|\,Q_{W_1\dots W_{d-1}}) + \cdots + \sigma_d D(P_{W_1}\,\|\,Q_{W_1}). \tag{1}$$

Notice that $D^{(\sigma)}$ is a linear mixture of entropies, each at a different scale of the network, where the scale in this example is coupled to the depth of the network. Also note that the classical relative entropy of the whole system is the special case of $D^{(\sigma)}$ with $\sigma = (1, 0, \dots, 0)$. In this paper, we study the optimization of such entropies under a mean constraint, namely:

$$\operatorname*{arg\,min}_{P_{W_1\dots W_d}:\ \mathbb{E}[f(W_1,\dots,W_d)]=\mu}\ D^{(\sigma)}(P_{W_1\dots W_d}\,\|\,Q_{W_1\dots W_d}). \tag{2}$$

We refer to (2) as the minimum multiscale relative entropy problem, and we sometimes also view it as part of a family of maximum multiscale entropy problems, where the 'entropy' in this case corresponds to $-D^{(\sigma)}$. This generalizes the widely known maximum entropy problem. For example, it is a well-known result in information theory that the minimizing distribution of

$$\mathbb{E}[f(W)] + \lambda\, D(P_W\,\|\,Q_W), \tag{3}$$

is the Gibbs–Boltzmann distribution $P_W(w) \propto \exp(-f(w)/\lambda)\,Q_W(w)$, where $Q_W$ is a fixed distribution, $W$ is distributed according to $P_W$, $f$ is a measurable function called the energy function, and $\lambda > 0$. This fact (note that (3) is the Lagrangian of the problem of minimizing relative entropy under the mean constraint $\mathbb{E}[f(W)]=\mu$) dates back to the work of Jaynes [1] on maximum entropy inference, and was revisited in a broader context by Csiszár through the property that exponential families achieve the I-projection over linear families [3]; it has also been generalized to other entropies such as the Tsallis and Rényi entropies (see [2] and references therein). This property has diverse important applications, such as in the celebrated papers on species distribution modeling by Phillips et al. [4] and natural language processing by Berger et al. [5], as well as in statistical mechanics [1, 6].
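As a small numerical illustration of this classical fact (an illustrative sketch, not taken from the paper; the alphabet size, energy values, and prior are arbitrary choices of ours), the following builds the Gibbs-Boltzmann distribution on a discrete alphabet and checks that it attains a lower value of the functional in (3) than random alternatives:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discrete setup: alphabet size, energy f, prior Q, temperature.
A = 6
f = rng.normal(size=A)            # energy values f(w)
Q = rng.dirichlet(np.ones(A))     # fixed reference distribution Q_W
lam = 0.5                         # lambda > 0

def objective(P):
    """The functional in (3): E_P[f(W)] + lambda * D(P || Q)."""
    return float(np.dot(P, f) + lam * np.sum(P * np.log(P / Q)))

# Gibbs-Boltzmann minimizer: P(w) proportional to exp(-f(w)/lambda) * Q(w).
gibbs = np.exp(-f / lam) * Q
gibbs /= gibbs.sum()

# The Gibbs distribution attains a lower objective than random alternatives.
for _ in range(100):
    P = rng.dirichlet(np.ones(A))
    assert objective(gibbs) <= objective(P) + 1e-12
```

The check works because (3) equals $\lambda\,D(P_W\,\|\,P^{\mathrm{Gibbs}}_W)$ plus a constant, so the Gibbs distribution is the unique global minimizer.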

This result also has a concrete application in the context of statistical learning theory. The empirical risk can be written as $\mathbb{E}[L_S(W)]$, where $W$, the output of the learning algorithm, depends on the sample $S$ with distribution $P_S$, and $L_S$ depends on the loss function. The generalization error of a hypothesis generated under the algorithm/channel $P_{W|S}$ can then be bounded under mild assumptions using the KL-divergence in PAC-Bayes and mutual information-based bounds [7, 8]. Therefore, the minimum single-scale relative entropy problem in (3) becomes precisely an upper bound on the population risk, and its minimizer gives precisely the distribution from which one should sample a hypothesis to minimize this bound [9].

As mentioned, this paper studies multiscale versions of the previous two paragraphs. This is motivated by the multilevel nature of neural networks, where the set of all mappings between the input and each hidden layer is a refinement of the hypothesis set at different scales, each scale corresponding to the depth of the hidden layer. It has recently been shown in [10, 11] that the notion of scale can be employed to further exploit the closeness and similarity among the hypotheses of a learning model to tighten the generalization bounds from [8, 9]. This is achieved by replacing the mutual information bound with a sum of mutual informations that each consider the hypothesis set at different scales, as discussed in Section 5.

In the simplest case of two scales, this approach brings to light the following generalization of (3): finding the minimizing distribution of

$$\mathbb{E}[f(W_1,W_2)] + \lambda_1 D(P_{W_1 W_2}\,\|\,Q_{W_1 W_2}) + \lambda_2 D(P_{W_1}\,\|\,Q_{W_1}), \tag{4}$$

where $\lambda_1, \lambda_2 \ge 0$ and $W = (W_1, W_2)$. Notice how the regularizer has the form of a multiscale entropy as given in (1). Here we assume that $W_1$ is a vector of random variables lying at the coarser scale, and $W_2$ is a vector of the remaining random variables. Therefore, at the finer scale we observe all the variables $(W_1, W_2)$, while at the coarser scale we only observe $W_1$. Notice that this problem reduces to (3) when $\lambda_2 = 0$. However, when $\lambda_2 > 0$, we amplify the uncertainty at the coarser scale by taking it into account in both terms. We will next resolve this type of problem in a general context, relating the maximizing procedure to renormalization group theory from statistical physics [12, 13], and describe applications to neural networks.

### 1.1 Contributions of this paper

1. We characterize the maximum multiscale distributions for arbitrary scale transformations and for entropies (discrete or continuous) in Theorem 1, as well as for arbitrary scale transformations and relative entropies in Theorem 2; we describe in particular how these are obtained from procedures that relate to the renormalization group in statistical physics.

2. We show in Theorem 3 that for the special case of decimation scale transformation, which relates to the multilevel structure of neural networks, the optimal multiscale distribution is a multivariate Gaussian whenever the optimal single-scale distribution is multivariate Gaussian; see Section 4. We then use this fact in our simulations in Section 5.2 (point IV below).

3. We demonstrate in Theorem 5 that the excess risk bound for the multiscale Gibbs posterior is tighter than the classical Gibbs excess risk bound [14], and provide an example in a teacher-student setting (i.e., data generated from a teacher network of smaller depth and learned by a deeper network) in Subsection 5.1.

4. We show in Subsection 5.2 how the multiscale Gibbs posterior encompasses both the classical Gibbs posterior and the random feature training as special cases, and provide a simulation showing how the multiscale version improves on the two special cases in the teacher-student setting.

### 1.2 Further relations with prior work

#### PAC-Bayes generalization bounds.

The generalization bound used in this paper has commonalities with PAC-Bayes bounds, first introduced by [15, 16], in that one first expresses prior knowledge by defining a prior distribution over the hypothesis class without assuming the truth of the prior. Then an information-theoretic 'distance' between the prior distribution and any posterior—with which a hypothesis is randomly chosen—appears in the generalization bound. However, unlike the generic PAC-Bayes bound of [16], our generalization bound is multiscale and uses the multiscale relative entropy rather than a single relative entropy for the whole hypothesis set. The motivation is to exploit the compositional structure of neural networks: instead of controlling the 'complexity' of the whole network at once, we simultaneously control the added complexities of each layer with respect to its previous layers (i.e., the interactions between the scales). As we show in Theorem 5, with this multiscale bound we can guarantee a tighter excess risk than with single-scale bounds. Variants of PAC-Bayes bounds have later been studied in e.g. [17, 18, 19], and have been employed more specifically for neural networks in e.g. [20, 21, 22, 23], but again these bounds are not multiscale. The paper [24] combines PAC-Bayes bounds with generic chaining and obtains multiscale bounds that rely on auxiliary sample sets; an important difference between our generalization bound and [24], however, is that our bound puts forward the multiscale entropic regularization of the empirical risk, for which we can characterize the minimizer exactly.

#### Renormalization group and neural networks.

Connections between the renormalization group and neural networks have been pointed out in the seminal works [25, 26] and later in other papers such as [27, 28, 29, 30, 31]. These works have mostly focused on applying current deep learning techniques, such as various gradient-based algorithms on different neural network architectures, to problems in statistical mechanics. We instead employ renormalization group transformations in the other direction, to help with inference with neural networks. Another important difference between our approach and [26] is that these authors perform renormalization group transformations in the function space and map the neurons to spins, whereas we perform renormalization group transformations in the weight space and map the synapses to spins. As a result, in our approach spin decimation does not mean that we ignore some neurons and waste their information. Rather, it simply means that we replace one layer of synapses between two consecutive hidden layers with a fixed reference mapping (in the terminology of [11, Section 4]), such as the identity mapping for residual networks, as in Section 5.

#### Chaining.

Multiscale entropies implicitly appear in the classical chaining technique of high-dimensional probability. For example, one can rewrite Dudley's inequality [32] variationally to remove the square root over the metric entropies and transform the bound into a linear mixture of metric entropies at multiple scales. This is also the case for the information-theoretic extension of chaining with mutual information [10], on which our generalization bound in Section 5 is based; see [11].

#### Approximate Bayesian inference for neural networks.

The recent paper [33] also studies approximate Bayesian inference for neural networks using Gaussian approximations to the posterior distribution, as we similarly do in Subsection 5.2 based on the Gaussian results of Section 4. However, unlike our approach, their analysis is not multiscale and treats the whole neural network as a single block.

#### Phase-space complexity.

There are conceptual similarities between the definition of multiscale entropy in this paper and what the papers [34, 35] refer to as "phase space complexity" in statistical mechanics. Characterizing maximum phase-space-complexity distributions was left as an open question in [35] and conjectured there to be related to the renormalization group.

## 2 Multiscale entropies

Assume that $W$ is the state of a system or is data; it can be either a random variable or a random vector. We first give the definitions of the different entropies.

###### Definition 1 (Entropy).

(All entropies in this paper are in nats.) The Shannon entropy of a distribution $P$ defined on a set $\mathcal{A}$ is $H(P) \triangleq -\sum_{w\in\mathcal{A}} P(w)\log P(w)$ if $\mathcal{A}$ is discrete. The differential entropy of $P$ is defined as $h(P) \triangleq -\int_{\mathcal{A}} P(w)\log P(w)\,\mathrm{d}w$ if $\mathcal{A}$ is continuous. The relative entropy between distributions $P$ and $Q$ defined on the same set $\mathcal{A}$ is $D(P\,\|\,Q) \triangleq \sum_{w\in\mathcal{A}} P(w)\log\frac{P(w)}{Q(w)}$ if $\mathcal{A}$ is discrete, and $D(P\,\|\,Q) \triangleq \int_{\mathcal{A}} P(w)\log\frac{P(w)}{Q(w)}\,\mathrm{d}w$ if $\mathcal{A}$ is continuous.

Next, we blend the notions of scale and entropy as follows: let $W^{(1)} \triangleq W$, and given a sequence of scale transformations $T_2,\dots,T_d$, assume that $W^{(i)} = T_i(W^{(i-1)})$ for all $2 \le i \le d$. We call $W^{(i)}$ the scale-$i$ version of the random vector $W$. For a vector of non-negative reals $\sigma = (\sigma_1,\dots,\sigma_d)$, let $\sigma_i$ denote the length coefficient at scale $i$.

###### Example 1.

In a wavelet theory context, assume that $W$ represents the vector of pixels of an image, and each transformation $T_i$ takes the average value of all neighboring pixels and, for each group, outputs a single pixel with that average value, thus resulting in a lower resolution image. Hence, here $W^{(2)}, W^{(3)}, \dots$ are successively lower resolution versions of $W$.
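A minimal sketch of such an averaging scale transformation (an illustrative stand-in for the transformation described above; the 4×4 'image' and the helper name `coarsen` are our own):

```python
import numpy as np

def coarsen(img):
    """One scale transformation T_i: average each 2x2 block of pixels
    into a single pixel, halving the resolution."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)   # W^(1): a 4x4 'image'
img2 = coarsen(img)                              # W^(2): 2x2, lower resolution
img3 = coarsen(img2)                             # W^(3): 1x1, coarsest scale
```

Each application of `coarsen` preserves the overall mean while discarding fine-scale detail, which is exactly the coarse-graining role the transformations $T_i$ play.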

For any given $\sigma$, $T \triangleq (T_2,\dots,T_d)$, and $W$, we define the multiscale entropies as follows:

###### Definition 2 (Multiscale entropy).

The multiscale Shannon entropy is defined as

$$H^{(\sigma,T)}(W) \triangleq \sum_{i=1}^{d} \sigma_i H(W^{(i)}).$$

Let the multiscale differential entropy be

$$h^{(\sigma,T)}(W) \triangleq \sum_{i=1}^{d} \sigma_i h(W^{(i)}).$$

We define the multiscale relative entropy between distributions $P_W$ and $Q_W$ as

$$D^{(\sigma,T)}(P_W\,\|\,Q_W) \triangleq \sum_{i=1}^{d} \sigma_i D(P_{W^{(i)}}\,\|\,Q_{W^{(i)}}).$$

Notice that multiscale entropy encompasses classical entropy as a special case: it suffices to choose $\sigma = (1, 0, \dots, 0)$ to get $D^{(\sigma,T)}(P_W\,\|\,Q_W) = D(P_W\,\|\,Q_W)$, and similarly for the Shannon and differential entropies. However, by taking positive values for $\sigma_i$, $i \ge 2$, we emphasize the entropy at coarser scales. Next, we focus on a special case of scale transformations called decimation, which relates to the multilevel structure of neural networks (multiscale relative entropy with the decimation transformation is named "multilevel relative entropy" in [11]): assume that $W = (W_1, \dots, W_d)$ denotes a random vector partitioned into $d$ vectors. For example, $W$ can denote the synaptic weights of a neural network divided into its $d$ layers. For all $1 \le i \le d$, define $W^{(i)} \triangleq (W_1, \dots, W_{d-i+1})$. Notice that $W^{(1)} = W$, and larger $i$ gives more random variables that we stop observing in $W^{(i)}$. Therefore, the scale transformations simply eliminate the layers one by one. In the theoretical physics literature, spin decimation is a type of scale transformation introduced by Kadanoff and Houghton [36] and Migdal [37].
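For concreteness, under decimation the multiscale relative entropy is just a weighted sum of ordinary KL divergences of successive marginals. The sketch below computes it for three binary variables (the alphabet sizes and $\sigma$ values are illustrative assumptions of ours, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def kl(p, q):
    """Ordinary relative entropy D(p || q) between flat distributions."""
    return float(np.sum(p * np.log(p / q)))

# Joint distributions over (W1, W2, W3), each variable binary.
P = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)
Q = rng.dirichlet(np.ones(8)).reshape(2, 2, 2)

def decimate(joint, i):
    """W^(i): marginalize out the last i - 1 variables."""
    return joint.sum(axis=tuple(range(joint.ndim - i + 1, joint.ndim)))

# Multiscale relative entropy with sigma = (1, 0.5, 0.25).
sigma = [1.0, 0.5, 0.25]
D_multi = sum(s * kl(decimate(P, i + 1).ravel(), decimate(Q, i + 1).ravel())
              for i, s in enumerate(sigma))
```

With $\sigma=(1,0,0)$ the sum reduces to the classical relative entropy of the whole system, matching the special case noted above.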

###### Example 2.

The decimation transformation is analogous to what is typically done in maps of cities of a certain region. As one zooms out of the region and views its map at larger scales, the smaller cities are omitted from the resulting maps.

## 3 Maximum multiscale entropy

In this section we derive maximum multiscale entropy distributions for different entropies. The key ingredient of the proofs of all derivations is the chain rule of relative entropy.

### 3.1 Multiscale Shannon and differential entropy maximization

Let $f:\mathcal{A}\to\mathbb{R}$ be an arbitrary measurable function called the energy. Consider the problem of maximizing Shannon entropy under a mean constraint:

$$\operatorname*{arg\,max}_{P_W:\ \mathbb{E}[f(W)]=\mu} H(W). \tag{5}$$

For this, one solves for the maximizing distribution of the Lagrangian $H(W) - \lambda\,\mathbb{E}[f(W)]$, which by a well-known result due to [1] (see Lemma 2 in the Appendix) is given by the Gibbs–Boltzmann distribution $P_W(w) \propto \exp(-\lambda f(w))$. Now, consider the following multiscale generalization of the previous problem: given $\sigma$, $T$, and $\mu$, solve for

$$\operatorname*{arg\,max}_{P_W:\ \mathbb{E}[f(W)]=\mu} H^{(\sigma,T)}(W).$$

Notice that this problem reduces to (5) in the special case $\sigma = (1, 0, \dots, 0)$, while more general choices of $\sigma$ emphasize the uncertainty at the coarser scales. We form the Lagrangian and define the unconstrained maximization problem:

$$P^{*}_W \triangleq \operatorname*{arg\,max}_{P_W}\left\{H^{(\sigma,T)}(W) - \lambda\,\mathbb{E}[f(W)]\right\}. \tag{6}$$

In the following, for any $\lambda \ge 0$, we find the maximizing distribution $P^{*}_W$. First, we require the definition of the scaled distribution, which essentially raises a probability distribution to a power:

###### Definition 3 (Scaled distribution).

(This is also known as the escort distribution in the statistical physics literature; see e.g. [38].) Let $\theta \ge 0$. If $P$ is a distribution on a discrete set $\mathcal{A}$, then for all $a \in \mathcal{A}$, we define the scaled distribution $(P)_\theta$ with

$$(P)_\theta(a) = \frac{(P(a))^{\theta}}{\sum_{x\in\mathcal{A}} (P(x))^{\theta}}.$$

For analog random variables it is defined analogously except by replacing the sum in the denominator with an integral.
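In code, the scaled distribution is a one-liner; a hedged sketch for the discrete case (the function name `scaled` is our own):

```python
import numpy as np

def scaled(P, theta):
    """Scaled (escort) distribution (P)_theta: raise each probability to
    the power theta and renormalize."""
    Pt = np.asarray(P, dtype=float) ** theta
    return Pt / Pt.sum()

P = np.array([0.7, 0.2, 0.1])
# theta = 1 recovers P itself; theta = 0 gives the uniform distribution;
# theta between 0 and 1 flattens P, raising the entropy.
```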

Define the Gibbs distribution at the finest (microscopic) scale as

$$P^{\mathrm{Gibbs}}_W(w) \triangleq \frac{\exp\!\big(-\tfrac{\lambda f(w)}{\sigma_1}\big)\,Q(w)}{\sum_{w}\exp\!\big(-\tfrac{\lambda f(w)}{\sigma_1}\big)\,Q(w)}, \tag{7}$$

which would be the maximizer of (6) had we had $\sigma = (\sigma_1, 0, \dots, 0)$. Algorithm 1 receives the microscopic Gibbs distribution and the values of $\sigma$ as its input, and outputs the desired distribution $P^{*}_W$, the maximizer of (6).

For continuous (analog) random variables, we replace the multiscale Shannon entropy with the multiscale differential entropy and consider the analogous problem. We then form the Lagrangian and define

$$P^{*}_W \triangleq \operatorname*{arg\,max}_{P_W}\left\{h^{(\sigma,T)}(W) - \lambda\,\mathbb{E}[f(W)]\right\}. \tag{8}$$

Similarly, Algorithm 1 outputs $P^{*}_W$, the maximizer of (8), except that now one should define the initial microscopic Gibbs distribution for continuous random variables as

$$P^{\mathrm{Gibbs}}_W(w) \triangleq \frac{\exp\!\big(-\tfrac{\lambda f(w)}{\sigma_1}\big)\,Q(w)}{\int_{w}\exp\!\big(-\tfrac{\lambda f(w)}{\sigma_1}\big)\,Q(w)\,\mathrm{d}w}.$$

We prove the following theorem in the Appendix:

###### Theorem 1.

The solutions to the maximization problems (6) and (8) are unique and are outputs of Algorithm 1.

Notice that Algorithm 1 consists of three phases: (I) the initialization with a Gibbs distribution at line 1; (II) a 'renormalization group' phase at lines 2–4, in which the degrees of freedom are eliminated one by one to obtain the intermediate distributions at all scales in increasing (coarsening) order; and (III) a refinement phase at line 5, in which the desired distribution is obtained by concatenating the intermediate distributions, along the decreasing (refining) order, via conditional distributions. As we shall see in the next subsection, Algorithms 2 and 3 have a similar structure, though the renormalization step will be replaced by Bayesian renormalization. This, in turn, will introduce a Bayesian variant of the renormalization group.

### 3.2 Multiscale relative entropy minimization

Let $f:\mathcal{A}\to\mathbb{R}$ be an arbitrary measurable function called the energy. For any $\lambda > 0$ and a fixed prior distribution $Q_W$, as mentioned in Section 1, if

$$\hat{P}_W \triangleq \operatorname*{arg\,min}_{P_W}\left\{\mathbb{E}[f(W)] + \lambda\, D(P_W\,\|\,Q_W)\right\},$$

then $\hat{P}_W$ is the Gibbs–Boltzmann distribution. Now, consider the following multiscale generalization of the previous problem:

$$P^{\star}_W \triangleq \operatorname*{arg\,min}_{P_W}\left\{\mathbb{E}[f(W)] + \lambda\, D^{(\sigma,T)}(P_W\,\|\,Q_W)\right\}. \tag{9}$$

Notice that this is the Lagrangian of the problem of minimizing the multiscale relative entropy under the mean constraint $\mathbb{E}[f(W)]=\mu$. It was shown in [11] that (9), in the special case of the decimation transformation, also has a unique minimizer, which can be characterized with the proposed Marginalize-Tilt (MT) algorithm, restated here as Algorithm 3. In this paper, we show that for arbitrary scale transformations, the solution to (9) is unique and given by Algorithm 2, a more general version of the MT algorithm. First, the definition of the tilted distribution is required, which is essentially the normalized geometric mean of two distributions:

###### Definition 4 (Tilted distribution).

(This is also known as the generalized escort distribution in the statistical physics literature; see e.g. [38].) Let $0 \le \theta \le 1$. If $P$ and $Q$ are distributions on a discrete set $\mathcal{A}$, then for all $a \in \mathcal{A}$, we define the tilted distribution $(P,Q)_\theta$ with

$$(P,Q)_\theta(a) = \frac{P^{\theta}(a)\,Q^{1-\theta}(a)}{\sum_{x\in\mathcal{A}} P^{\theta}(x)\,Q^{1-\theta}(x)}.$$

For continuous random variables it is defined analogously except by replacing the sum in the denominator with an integral.
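A hedged sketch of the tilted distribution for the discrete case (the function name `tilted` is our own):

```python
import numpy as np

def tilted(P, Q, theta):
    """Tilted distribution (P,Q)_theta: normalized geometric mean
    P^theta * Q^(1-theta)."""
    M = np.asarray(P, float) ** theta * np.asarray(Q, float) ** (1 - theta)
    return M / M.sum()

P = np.array([0.7, 0.2, 0.1])
Q = np.array([0.2, 0.3, 0.5])
# theta = 1 recovers P, theta = 0 recovers Q; intermediate theta
# interpolates between the two distributions geometrically.
```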

Define the Gibbs distribution at the finest (microscopic) scale as

$$P^{\mathrm{Gibbs}}_W(w) \triangleq \frac{\exp\!\big(-\tfrac{f(w)}{\lambda\sigma_1}\big)\,Q(w)}{\int_{w}\exp\!\big(-\tfrac{f(w)}{\lambda\sigma_1}\big)\,Q(w)\,\mathrm{d}w}, \tag{10}$$

which would be the minimizer of (9) had we had $\sigma = (\sigma_1, 0, \dots, 0)$. Algorithm 2 receives the microscopic Gibbs distribution, the prior distribution $Q_W$, and the values of $\sigma$ as its input, and outputs the desired multiscale Gibbs distribution $P^{\star}_W$, the minimizer of (9).

###### Theorem 2.

The solution to the minimization problem (9) is unique and is the output of Algorithm 2. For the special case of the decimation transformation, Algorithm 2 reduces to Algorithm 3.

For a proof, see the Appendix. As per [11], we call $\lambda\sigma = (\lambda\sigma_1, \dots, \lambda\sigma_d)$ the temperature vector of $P^{\star}_W$.
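To make the marginalize-tilt idea concrete, the following sketch works out the simplest two-scale decimation case of (9) on a small discrete alphabet: initialize with the microscopic Gibbs distribution as in (10), marginalize out $W_2$, tilt the coarse marginal toward the prior marginal with index $\sigma_1/(\sigma_1+\sigma_2)$, and concatenate back with the Gibbs conditional. The alphabet sizes, $\sigma$ values, and the brute-force check are illustrative assumptions of ours; this is a sketch of the two-scale case, not the paper's general Algorithm 2/3.

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = 3, 4                 # alphabet sizes of W1 (coarser scale) and W2
lam, s1, s2 = 1.0, 1.0, 0.7   # lambda and scale coefficients sigma_1, sigma_2

f = rng.normal(size=(n1, n2))                         # energy f(w1, w2)
Q = rng.dirichlet(np.ones(n1 * n2)).reshape(n1, n2)   # prior Q_{W1 W2}

def objective(P):
    """E[f] + lam * (s1 * D(P_{12}||Q_{12}) + s2 * D(P_1||Q_1))."""
    P1, Q1 = P.sum(1), Q.sum(1)
    D12 = np.sum(P * np.log(P / Q))
    D1 = np.sum(P1 * np.log(P1 / Q1))
    return float(np.sum(P * f) + lam * (s1 * D12 + s2 * D1))

# (I) Microscopic Gibbs distribution, as in (10).
G = np.exp(-f / (lam * s1)) * Q
G /= G.sum()

# (II) Marginalize to the coarser scale, then tilt toward the prior
#      marginal with index sigma_1 / (sigma_1 + sigma_2).
theta = s1 / (s1 + s2)
P1 = G.sum(1) ** theta * Q.sum(1) ** (1 - theta)
P1 /= P1.sum()

# (III) Refinement: concatenate with the Gibbs conditional of W2 given W1.
P_star = P1[:, None] * (G / G.sum(1, keepdims=True))

# Check: P_star attains a lower objective than random alternatives.
for _ in range(200):
    other = rng.dirichlet(np.ones(n1 * n2)).reshape(n1, n2)
    assert objective(P_star) <= objective(other) + 1e-10
```

The tilting index follows from the chain rule: the objective decomposes into an inner Gibbs problem for $P_{W_2|W_1}$ at temperature $\lambda\sigma_1$ and an outer Gibbs problem for $P_{W_1}$ at temperature $\lambda(\sigma_1+\sigma_2)$, and the ratio of the two temperatures is exactly $\sigma_1/(\sigma_1+\sigma_2)$.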

### 3.3 Multi-objective optimization viewpoint

Notice that when maximizing a multiscale entropy under a mean constraint, for different values of the length coefficients $\sigma$ we find the Pareto optimal points of the multi-objective optimization problem with the entropies at the different scales as the objective functions. Therefore, maximum multiscale entropy can also be interpreted as a linear scalarization of a multi-objective optimization problem (see e.g. [39] for a definition of linear scalarization). Thus, roughly speaking, maximum multiscale entropy distributions maximize the entropies at multiple scales simultaneously.

## 4 Maximum multiscale entropy and multivariate Gaussians

Here, we show that the MT algorithm is closed on the family of multivariate Gaussian distributions. We also show that the same holds for Algorithm 1 in the special case of the decimation transformation.

###### Theorem 3.

Assume that the microscopic Gibbs distribution is multivariate Gaussian. Then for decimation transformation, the output of Algorithm 1 is multivariate Gaussian as well. Furthermore, if the prior is also multivariate Gaussian, then so is the output of the MT algorithm. In these cases, these algorithms simplify to parameter computations of multivariate Gaussians.

For a precise proof, see the Appendix. A proof sketch is as follows: by a well-known property of multivariate Gaussians, marginalizing out some of the random variables keeps the distribution multivariate Gaussian. Also, scaling a Gaussian, or tilting it toward another Gaussian, keeps the resulting distribution Gaussian. Therefore, the renormalization group phase of Algorithms 1 and 3 keeps all the distributions Gaussian; hence, all the intermediate distributions are multivariate Gaussians. The proof is completed by repeatedly applying the following proposition in the refinement phase, which states that concatenating two Gaussians via a conditional distribution results in another Gaussian distribution:

###### Proposition 1 (Gaussian concatenation).

Let $P_X$ be a multivariate Gaussian distribution and let $P_{Y|X}$ be a Gaussian conditional distribution (with mean affine in $x$ and covariance not depending on $x$). Then $P_{XY} = P_{Y|X}\,P_X$ is multivariate Gaussian as well.

Proposition 1 may not be new; however, we were not able to find it in the literature. For a precise form and proof, see the Appendix. Note that when the energy function is a definite quadratic form $f(w) = w^{\top} K w$, where $K$ is a positive definite matrix, and the prior is multivariate Gaussian, then, based on its definition in Subsection 3.2, the microscopic Gibbs distribution is multivariate Gaussian. Hence, based on the previous argument, the multiscale Gibbs distribution is multivariate Gaussian as well.
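As a small sanity check of the Gaussian closure under tilting (a one-dimensional sketch with our own helper names): tilting $\mathcal{N}(m_1, v_1)$ toward $\mathcal{N}(m_2, v_2)$ with index $\theta$ yields a Gaussian whose precision is the $\theta$-mixture of the two precisions and whose mean is the precision-weighted mixture of the means.

```python
import numpy as np

def tilt_gaussians(m1, v1, m2, v2, theta):
    """Tilt N(m1, v1) toward N(m2, v2) with index theta: the result is
    Gaussian with precision theta/v1 + (1-theta)/v2 and a
    precision-weighted mean."""
    t1, t2 = 1.0 / v1, 1.0 / v2
    t = theta * t1 + (1 - theta) * t2
    return (theta * t1 * m1 + (1 - theta) * t2 * m2) / t, 1.0 / t

def npdf(x, m, v):
    """Gaussian density with mean m and variance v."""
    return np.exp(-(x - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)

# Numerical check on a grid: the normalized geometric mean of the two
# densities has the predicted mean.
x = np.linspace(-10.0, 10.0, 20001)
dx = x[1] - x[0]
p = npdf(x, 0.0, 1.0) ** 0.5 * npdf(x, 2.0, 4.0) ** 0.5
p /= p.sum() * dx
m, v = tilt_gaussians(0.0, 1.0, 2.0, 4.0, 0.5)
assert abs(float((p * x).sum() * dx) - m) < 1e-4
```

The same precision-mixing formula, applied to precision matrices, is what reduces the MT algorithm to parameter computations in the multivariate Gaussian case.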

## 5 Multiscale entropic regularization of neural networks

Let $\varphi$ denote the hyperbolic tangent activation function. Consider a $d$-layer feedforward (residual) neural network with parameters $W = (W_1, \dots, W_d)$, where $W_i$ denotes the weight matrix of layer $i$, and given input $x$, the hidden layers satisfy the residual relations $h_0 = x$ and $h_i = h_{i-1} + \varphi(W_i h_{i-1})$ for $1 \le i \le d$. Let $\ell$ be the squared loss; that is, for the network output $h_d(\cdot)$ with parameters $W$ and any example $z = (x, y)$, we have $\ell(W, z) = (y - h_d(x))^2$. The following assumption, named multilevel regularization, is adopted from [11] and similarly used in [40]: the spectral norms $\|W_i\|$ are bounded for all $1 \le i \le d$. Let $S = (Z_1, \dots, Z_n)$ denote the training set in supervised learning. For any $w$, let $L_\mu(w) \triangleq \mathbb{E}_{Z\sim\mu}[\ell(w, Z)]$ denote the statistical (or population) risk of hypothesis $w$, where $\mu$ is the data distribution. For a given training set $S$, the empirical risk of hypothesis $w$ is defined as $L_S(w) \triangleq \frac{1}{n}\sum_{j=1}^{n}\ell(w, Z_j)$. The following lemma controls the difference between consecutive hidden layers of the neural network:

###### Lemma 1.

For any $x$ and all $1 \le i \le d$,

$$|h_i(x) - h_{i-1}(x)|^2 \le \frac{e\,|x|^2}{d}.$$

###### Proof.

Since $|\varphi(u)| \le |u|$ for the hyperbolic tangent, by induction on $i$ and the triangle inequality we have $|h_{i-1}(x)|^2 \le e\,|x|^2$. Therefore

$$|h_i(x) - h_{i-1}(x)|^2 = |\varphi(W_i h_{i-1})|^2 \le |W_i h_{i-1}|^2 \le \|W_i\|^2\,|h_{i-1}|^2 \le \frac{e\,|x|^2}{d}.$$

We assume that the instances have bounded norm. Based on Lemma 1 and a technique similar to [11], we can obtain the following multiscale entropic generalization bound:

###### Theorem 4.

We have the following generalization bound, where $C$ is a constant, $\gamma$ a vector of positive reals, and $Q_W$ a prior distribution:

$$\mathbb{E}[L_\mu(W)] \le \mathbb{E}[L_S(W)] + \frac{C}{d\sqrt{n}}\,\inf_{\gamma, Q_W}\,\sum_{i=1}^{d}\left(\gamma_i\, D\!\left(P_{W_1\dots W_{d-i+1}|S}\,\middle\|\,Q_{W_1\dots W_{d-i+1}}\,\middle|\,P_S\right) + \frac{1}{4\gamma_i}\right).$$

See the Appendix for a proof. For a fixed prior $Q_W$, a fixed $\sigma$, and any $z^n$, let

$$P^{\star}_{W|S=z^n} \triangleq \operatorname*{arg\,min}_{P_W}\left\{\mathbb{E}[L_{z^n}(W)] + \sum_{i=1}^{d} \sigma_i\, D\!\left(P_{W_1\dots W_{d-i+1}}\,\middle\|\,Q_{W_1\dots W_{d-i+1}}\right)\right\}, \tag{11}$$

where $\sigma_i > 0$ for all $1 \le i \le d$. Note that (11) has the same form as (9) with the decimation transformation and $\lambda = 1$; therefore, we can use the MT algorithm to obtain $P^{\star}_{W|S=z^n}$ for any $z^n$. To obtain excess risk bounds from the generalization bound of Theorem 4, we employ a technique from [9]: since $P^{\star}_{W|S}$ minimizes the expression in (11), one can obtain excess risk bounds by plugging in a fixed distribution $\hat{Q}_W$ that is concentrated around a population risk minimizer and independent of $S$. We can now state the main result of this section.

###### Theorem 5.

Define the data processing gain at scale $i$ by

$$\mathrm{DPG}(i) \triangleq \sqrt{D(\hat{Q}_W\,\|\,Q_W)} - \sqrt{D(\hat{Q}_{W^{(i)}}\,\|\,Q_{W^{(i)}})}.$$

Then the difference between the excess risk bounds of the single-scale Gibbs posterior and the multiscale Gibbs posterior, when each is optimized over its hyper-parameter (temperature) values, is equal to $\frac{C}{d\sqrt{n}}\sum_{i=1}^{d}\mathrm{DPG}(i)$ and is positive.

Hence, we can always guarantee a tighter excess risk bound for the multiscale Gibbs posterior than for the single-scale Gibbs posterior. For example, if the weights of the network take discrete values, then we can take $\hat{Q}_W$ to be the Dirac delta measure on a population risk minimizer $\hat{w} = (\hat{w}_1, \dots, \hat{w}_d)$. In this case, for any prior distribution $Q_W$, there exists $\sigma$ such that

$$\mathbb{E}[L_\mu(W)] - \inf_{w\in\mathcal{W}} L_\mu(w) \le \frac{C}{d\sqrt{n}}\sum_{i=1}^{d}\sqrt{\log\frac{1}{Q_{W_1\dots W_i}(\hat{w}_1,\dots,\hat{w}_i)}}.$$

However, the excess risk bound for the single-scale Gibbs distribution, when optimized over its temperature parameter, is

$$\mathbb{E}[L_\mu(W)] - \inf_{w\in\mathcal{W}} L_\mu(w) \le \frac{C}{\sqrt{n}}\sqrt{\log\frac{1}{Q_{W_1\dots W_d}(\hat{w}_1,\dots,\hat{w}_d)}}.$$

The difference between the right sides of these bounds is given by Theorem 5. For a precise proof of Theorem 5 and an example when the synaptic weights take continuous values, see the Appendix.

### 5.1 Teacher-Student example

A teacher-student scenario, first studied in [41], has the advantage of facilitating the evaluation of the data processing gains. Let the data be generated from a teacher residual network of depth $d_t < d$. This is equivalent to a depth-$d$ teacher network with identity mappings at the first $d - d_t$ layers, i.e., with zero weight matrices at those layers. We choose $\hat{Q}_W$ concentrated around the weights of this depth-$d$ teacher network and assume an i.i.d. Gaussian prior centered at zero. We show in the Appendix that the resulting data processing gains are strictly positive, which quantifies the improvement gap.

### 5.2 Experiment

Assume that the temperature vector of the multiscale Gibbs posterior is such that its first component takes arbitrary positive values and the remaining components are determined inductively so that the tilting indices in the MT algorithm are all equal to a common value $\theta$; we can then represent the temperature vector with just two positive parameters $(\lambda, \theta)$. Notice that when $\theta = 1$, the multiscale Gibbs distribution is simply equivalent to the single-scale Gibbs distribution. Moreover, the case $\theta = 0$ corresponds to sampling the first $d-1$ layers randomly from the prior distribution and only training the last layer, a setting similar to random feature learning [42]. In the following experiment, assume that we have a teacher network and a student network. For each value of $\theta$, we optimize the performance of the algorithm over the temperature $\lambda$. We use the Gauss–Newton matrix at the origin to obtain Gaussian approximations to the microscopic Gibbs distribution, and then use Theorem 3. See Figure 1. Notice that there exist intermediate values of $\theta$ for which the population risk is much better than in the extreme cases $\theta = 0$ and $\theta = 1$. For more details, see the Appendix.

## Acknowledgement

Amir Asadi thanks Siddhartha Sarkar for useful discussions on renormalization group theory.

## Appendix A Proofs for Section 3

Here, we present the proofs of maximum multiscale entropy results.

### a.1 Multiscale Shannon and differential entropy maximization

For the proof of Theorem 1 we first require the following lemmas. The first lemma is used for proving the optimality of the Gibbs distribution for maximizing Shannon entropy:

###### Lemma 2.

Let $\lambda \ge 0$ be such that $\sum_{w\in\mathcal{A}}\exp(-\lambda f(w)) < \infty$, where $\mathcal{A}$ is a finite or countably infinite set. Then for any $P_W$ defined on $\mathcal{A}$,

$$H(W) - \lambda\,\mathbb{E}[f(W)] = -D\big(P_W\,\big\|\,P^{\mathrm{Gibbs}}_W\big) + \log\Big(\sum_{w\in\mathcal{A}}\exp(-\lambda f(w))\Big),$$

where

$$P^{\mathrm{Gibbs}}_W(w) \triangleq \frac{\exp(-\lambda f(w))}{\sum_{w\in\mathcal{A}}\exp(-\lambda f(w))},\qquad w\in\mathcal{A},$$

is the Gibbs–Boltzmann distribution.

###### Proof.
$$\begin{aligned} H(W) - \lambda\,\mathbb{E}[f(W)] &= -\sum_{w\in\mathcal{A}} P(w)\log P(w) - \lambda\sum_{w\in\mathcal{A}} f(w)P(w) \\ &= -\sum_{w\in\mathcal{A}} P(w)\log\frac{P(w)}{\exp(-\lambda f(w))} \qquad (12)\\ &= -\sum_{w\in\mathcal{A}} P(w)\log\frac{P(w)}{\frac{\exp(-\lambda f(w))}{\sum_{w'\in\mathcal{A}}\exp(-\lambda f(w'))}} + \log\Big(\sum_{w\in\mathcal{A}}\exp(-\lambda f(w))\Big) \qquad (13)\\ &= -D\big(P_W\,\big\|\,P^{\mathrm{Gibbs}}_W\big) + \log\Big(\sum_{w\in\mathcal{A}}\exp(-\lambda f(w))\Big). \qquad (14) \end{aligned}$$

As a corollary of Lemma 2, the maximizer of $H(W) - \lambda\,\mathbb{E}[f(W)]$ is given by the Gibbs distribution $P^{\mathrm{Gibbs}}_W$. The counterpart of Lemma 2 for continuous random variables and differential entropy is as follows:
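A quick numerical check of the identity in Lemma 2 (the alphabet size, energy values, and $\lambda$ are arbitrary illustrative choices of ours):

```python
import numpy as np

rng = np.random.default_rng(4)
A, lam = 5, 0.8
f = rng.normal(size=A)                      # arbitrary energy values
logZ = float(np.log(np.sum(np.exp(-lam * f))))
gibbs = np.exp(-lam * f - logZ)             # Gibbs-Boltzmann distribution

# The identity holds for every distribution P on the alphabet.
for _ in range(50):
    P = rng.dirichlet(np.ones(A))
    lhs = -np.sum(P * np.log(P)) - lam * np.dot(P, f)   # H(W) - lam * E[f(W)]
    rhs = -np.sum(P * np.log(P / gibbs)) + logZ         # -D(P||Gibbs) + log Z
    assert abs(lhs - rhs) < 1e-10
```

Since the relative entropy term is the only part depending on $P$ and is minimized (at zero) by $P = P^{\mathrm{Gibbs}}_W$, the corollary follows immediately.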

###### Lemma 3.

Let $\lambda \ge 0$ be such that $\int_{w\in\mathcal{A}}\exp(-\lambda f(w))\,\mathrm{d}w < \infty$, where $\mathcal{A}$ is an uncountable set. Then for any $P_W$ defined on $\mathcal{A}$,

$$h(W) - \lambda\,\mathbb{E}[f(W)] = -D\big(P_W\,\big\|\,P^{\mathrm{Gibbs}}_W\big) + \log\Big(\int_{w\in\mathcal{A}}\exp(-\lambda f(w))\,\mathrm{d}w\Big),$$

where

$$P^{\mathrm{Gibbs}}_W(w) \triangleq \frac{\exp(-\lambda f(w))}{\int_{w\in\mathcal{A}}\exp(-\lambda f(w))\,\mathrm{d}w},\qquad w\in\mathcal{A},$$

is the Gibbs–Boltzmann distribution.

###### Proof.
$$\begin{aligned} h(W) - \lambda\,\mathbb{E}[f(W)] &= -\int_{w\in\mathcal{A}} P(w)\log P(w)\,\mathrm{d}w - \lambda\int_{w\in\mathcal{A}} f(w)P(w)\,\mathrm{d}w \\ &= -\int_{w\in\mathcal{A}} P(w)\log\frac{P(w)}{\exp(-\lambda f(w))}\,\mathrm{d}w \qquad (15)\\ &= -\int_{w\in\mathcal{A}} P(w)\log\frac{P(w)}{\frac{\exp(-\lambda f(w))}{\int_{w'\in\mathcal{A}}\exp(-\lambda f(w'))\,\mathrm{d}w'}}\,\mathrm{d}w + \log\Big(\int_{w\in\mathcal{A}}\exp(-\lambda f(w))\,\mathrm{d}w\Big) \qquad (16)\\ &= -D\big(P_W\,\big\|\,P^{\mathrm{Gibbs}}_W\big) + \log\Big(\int_{w\in\mathcal{A}}\exp(-\lambda f(w))\,\mathrm{d}w\Big). \qquad (17) \end{aligned}$$

Let $H_\alpha(P)$ denote the Rényi entropy of order $\alpha$ of a discrete distribution $P$, which for $\alpha \in (0,1)\cup(1,\infty)$ is defined as

$$H_\alpha(P) \triangleq \frac{1}{1-\alpha}\log\sum_{w\in\mathcal{A}} P^{\alpha}(w).$$

Similarly, let $h_\alpha(P)$ denote the Rényi differential entropy of order $\alpha$ of a continuous distribution $P$, which for $\alpha \in (0,1)\cup(1,\infty)$ is defined as

$$h_\alpha(P) \triangleq \frac{1}{1-\alpha}\log\int_{w\in\mathcal{A}} P^{\alpha}(w)\,\mathrm{d}w.$$

The following two lemmas show how to linearly combine an entropy with a relative entropy, using scaled distributions:

###### Lemma 4.

Let $P$ and $Q$ be two discrete distributions and $\theta \ge 0$. We have

$$H(P) - \theta\, D(P\,\|\,Q) = H_{\frac{\theta}{1+\theta}}(Q) - (1+\theta)\, D\!\left(P\,\middle\|\,(Q)_{\frac{\theta}{1+\theta}}\right).$$

###### Proof.
$$\begin{aligned} H(P) - \theta\, D(P\,\|\,Q) &= -\sum_{w} P(w)\log P(w) - \theta\sum_{w} P(w)\log\frac{P(w)}{Q(w)} \\ &= -\sum_{w} P(w)\log\frac{P(w)^{1+\theta}}{Q(w)^{\theta}} \\ &= -(1+\theta)\sum_{w} P(w)\log\frac{P(w)}{Q(w)^{\frac{\theta}{1+\theta}}} \\ &= H_{\frac{\theta}{1+\theta}}(Q) - (1+\theta)\, D\!\left(P\,\middle\|\,(Q)_{\frac{\theta}{1+\theta}}\right). \end{aligned}$$
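The identity of Lemma 4 can also be verified numerically (a sketch with arbitrary illustrative $P$, $Q$, and $\theta$ of our choosing):

```python
import numpy as np

rng = np.random.default_rng(5)
A, theta = 6, 0.6
P = rng.dirichlet(np.ones(A))
Q = rng.dirichlet(np.ones(A))

alpha = theta / (1 + theta)
H_alpha_Q = np.log(np.sum(Q ** alpha)) / (1 - alpha)   # Renyi entropy of Q
Qa = Q ** alpha / np.sum(Q ** alpha)                   # scaled distribution (Q)_alpha

lhs = -np.sum(P * np.log(P)) - theta * np.sum(P * np.log(P / Q))
rhs = H_alpha_Q - (1 + theta) * np.sum(P * np.log(P / Qa))
assert abs(lhs - rhs) < 1e-10
```

Note that with $\alpha = \theta/(1+\theta)$ we have $1-\alpha = 1/(1+\theta)$, so $H_\alpha(Q) = (1+\theta)\log\sum_w Q^\alpha(w)$, which is exactly the normalization constant absorbed in the last step of the proof.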

###### Lemma 5.

Let $P$ and $Q$ be two continuous distributions and $\theta \ge 0$. Then

$$h(P) - \theta\, D(P\,\|\,Q) = h_{\frac{\theta}{1+\theta}}(Q) - (1+\theta)\, D\!\left(P\,\middle\|\,(Q)_{\frac{\theta}{1+\theta}}\right).$$

###### Proof.
$$\begin{aligned} h(P) - \theta\, D(P\,\|\,Q) &= -\int P(w)\log P(w)\,\mathrm{d}w - \theta\int P(w)\log\frac{P(w)}{Q(w)}\,\mathrm{d}w \\ &= -\int P(w)\log\frac{P(w)^{1+\theta}}{Q(w)^{\theta}}\,\mathrm{d}w \\ &= -(1+\theta)\int P(w)\log\frac{P(w)}{Q(w)^{\frac{\theta}{1+\theta}}}\,\mathrm{d}w \\ &= h_{\frac{\theta}{1+\theta}}(Q) - (1+\theta)\, D\!\left(P\,\middle\|\,(Q)_{\frac{\theta}{1+\theta}}\right). \end{aligned}$$

For simplicity of the proofs, we assume that all alphabets are standard Borel spaces, which guarantees the existence of regular conditional probabilities and reverse random transformations. Therefore, as a corollary of the chain rule of relative entropy, we have the following:

Let and