Many real-world signals and physical systems exhibit structure across a wide range of length scales. Such multiscale structures have been studied and exploited in different fields and with different tools, including statistical mechanics, Gaussian processes, and signal processing. In this paper, we study distributions which maximize uncertainty at different scales simultaneously under a mean constraint. We are motivated in particular by the goal of analyzing neural networks and their generalization error while exploiting their multilevel structure.
The central notion studied in this paper, the multiscale entropy, is simply defined by taking a linear mixture of entropies of a system at different length scales. Hence, the multiscale entropy is a generalization of the classical entropy. For instance, if denote the layers of a neural network, and for two distributions and defined on the weight parameters, the multiscale relative entropy between and with index is given by:
Notice that is a linear mixture of entropies, each at a different scale of the network, where in this example the scale is coupled to the depth of the network. Also note that the classical relative entropy corresponding to the whole system is a special case of when . In this paper, we study the optimization of such entropies under a mean constraint, namely:
We refer to (2) as the minimum multiscale relative entropy, and we sometimes also view this as part of a family of maximum multiscale entropy problems, where the ‘entropy’ corresponds in this case to . This is a generalization of the widely known maximum entropy problem. For example, it is a well-known result in information theory that the minimizing distribution of
is the Gibbs–Boltzmann distribution , where is a fixed distribution, is distributed according to , is a measurable function called the energy function, and . (Notice that this is the Lagrangian of the problem of minimizing relative entropy under the mean constraint .) This fact dates back to the work of Jaynes  on maximum entropy inference, and was revisited in a broader context by Csiszár via the property that exponential families achieve the I-projection over linear families . (This was also generalized using different entropies, such as Tsallis entropy, Rényi entropy, and others; see  and references therein.) This property has diverse important applications, such as in the celebrated papers on species distribution modeling by Phillips et al. 
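As a quick numerical sanity check of this classical fact, the sketch below verifies on a small discrete alphabet that the Gibbs–Boltzmann distribution minimizes a free-energy-type objective of the form E_P[E(X)] + (1/β) D(P‖Q). All names (`Q`, `E`, `beta`, `objective`) are illustrative assumptions, not notation from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy discrete setting: 5 states, a reference ("prior") distribution Q,
# an energy function E, and inverse temperature beta.
n, beta = 5, 2.0
Q = rng.dirichlet(np.ones(n))
E = rng.uniform(0.0, 1.0, size=n)

def objective(P):
    """Free-energy-like functional: E_P[E] + (1/beta) * D(P || Q), in nats."""
    return P @ E + (P @ np.log(P / Q)) / beta

# Gibbs-Boltzmann distribution P*(x) proportional to Q(x) * exp(-beta * E(x)).
gibbs = Q * np.exp(-beta * E)
gibbs /= gibbs.sum()

# Sanity check: no randomly drawn distribution on the simplex does better.
best = objective(gibbs)
assert all(objective(rng.dirichlet(np.ones(n))) >= best - 1e-12
           for _ in range(1000))
```

The check is a brute-force search rather than a proof; the exact minimality follows from the Lagrangian argument sketched above.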
and natural language processing by Berger et al., as well as in statistical mechanics [1, 6].
This result also has a concrete application in the context of statistical learning theory. The empirical risk can be written as , where , the output of the learning algorithm, depends on the sample distribution and
depends on the loss function. The generalization error of a hypothesis generated by the algorithm/channel can then be bounded under mild assumptions using the KL-divergence in PAC-Bayes and mutual information-based bounds [7, 8]. Therefore, the minimum single-scale relative entropy problem in (3) is precisely an upper bound on the population risk, and its minimizer gives the distribution under which one should sample from the hypothesis set to minimize this bound .
As mentioned, this paper studies multiscale versions of the previous two paragraphs. This is motivated by the multilevel nature of neural networks, where the set of all mappings between the input and each hidden layer is a refinement of the hypothesis set at different scales, each scale corresponding to the depth of the hidden layer. It has recently been shown in [10, 11] that the notion of scale can be employed to further exploit the closeness and similarity among the hypotheses of a learning model to tighten the generalization bounds from [8, 9]. This is achieved by replacing the mutual information bound with a sum of mutual informations that each consider the hypothesis set at different scales, as discussed in Section 5.
In the simplest case of two scales, this approach brings to light the following generalization of (3): finding the minimizing distribution of
where and . Notice how the regularizer has the form of a multiscale entropy as given in (1). Here we assume that is a vector of the rest of the random variables. Therefore, at the finer scale we observe all the variables , and at the coarser scale we only observe . Notice that this problem reduces to (3) when . However, when , we are amplifying the uncertainty at the coarser scale by taking it into account in both terms. We will next resolve this type of problem in a general context, relating the maximization procedure to renormalization group theory from statistical physics [12, 13], and describe applications to neural networks.
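To make the two-scale regularizer concrete, here is a small discrete sketch. Since several symbols above are unspecified, the names (`P`, `Q`, `E`) and the exact weighting (`beta` for the inverse temperature, `sigma` for the extra coarse-scale weight) are illustrative assumptions:

```python
import numpy as np

# Two-scale toy: joint distributions over (X1, X2), with X1 the coarse
# variable observed at both scales and X2 observed only at the fine scale.
rng = np.random.default_rng(1)
P = rng.dirichlet(np.ones(6)).reshape(2, 3)   # joint over X1 in {0,1}, X2 in {0,1,2}
Q = rng.dirichlet(np.ones(6)).reshape(2, 3)
E = rng.uniform(size=(2, 3))                  # energy / empirical risk per state

def kl(p, q):
    """Relative entropy D(p || q) in nats."""
    return float(np.sum(p * np.log(p / q)))

def two_scale_objective(P, Q, E, beta, sigma):
    """E_P[E] + (1/beta) * [ D(P || Q) + sigma * D(P_X1 || Q_X1) ]."""
    fine = kl(P, Q)
    coarse = kl(P.sum(axis=1), Q.sum(axis=1))   # marginals over X1
    return float(np.sum(P * E)) + (fine + sigma * coarse) / beta

# With sigma = 0 this reduces to the single-scale objective;
# sigma > 0 additionally penalizes divergence from Q at the coarse scale.
single = float(np.sum(P * E)) + kl(P, Q) / 2.0
assert np.isclose(two_scale_objective(P, Q, E, beta=2.0, sigma=0.0), single)
```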
1.1 Contributions of this paper
We characterize the maximum multiscale distributions for arbitrary scale transformations and for entropies (discrete or continuous) in Theorem 1, as well as for arbitrary scale transformations and relative entropies in Theorem 2; we describe in particular how these are obtained from procedures that relate to the renormalization group in statistical physics.
We show in Theorem 3 that for the special case of decimation scale transformation, which relates to the multilevel structure of neural networks, the optimal multiscale distribution is a multivariate Gaussian whenever the optimal single-scale distribution is multivariate Gaussian; see Section 4. We then use this fact in our simulations in Section 5.2 (point IV below).
We demonstrate in Theorem 5 the tightness of the excess risk bound for the multiscale Gibbs posterior over the classical Gibbs excess risk bound , and provide an example in a teacher-student setting (i.e., data generated from a teacher network with smaller depth and learned by a deeper network) in Subsection 5.1.
We show in Subsection 5.2 how the multiscale Gibbs posterior encompasses both the classical Gibbs posterior and the random feature training as special cases, and provide a simulation showing how the multiscale version improves on the two special cases in the teacher-student setting.
1.2 Further relations with prior work
PAC-Bayes generalization bounds.
The generalization bound used in this paper has commonalities with PAC-Bayes bounds, first introduced by [15, 16], in that one first expresses prior knowledge by defining a prior distribution over the hypothesis class without assuming the truth of the prior. Then an information-theoretic ‘distance’ between the prior distribution and any posterior—with which a hypothesis is randomly chosen—appears in the generalization bound. However, unlike the generic PAC-Bayes bound of , our generalization bound is multiscale and uses the multiscale relative entropy rather than a single relative entropy for the whole hypothesis set. The motivation is to exploit the compositional structure of neural networks: instead of controlling the ‘complexity’ of the whole network at once, we simultaneously control the added complexities of each layer with respect to its previous layers (i.e., the interactions between the scales). As we show in Theorem 5, with this multiscale bound we can guarantee a tighter excess risk than with single-scale bounds. Variants of PAC-Bayes bounds have since been studied in e.g. [17, 18, 19], and have been employed more specifically for neural networks in e.g. [20, 21, 22, 23], but again these bounds are not multiscale. The paper  combines PAC-Bayes bounds with generic chaining and obtains multiscale bounds that rely on auxiliary sample sets; however, an important difference between our generalization bound and  is that our bound puts forward the multiscale entropic regularization of the empirical risk, for which we can characterize the minimizer exactly.
Renormalization group and neural networks.
Several works have studied the interplay between the renormalization group and neural networks . These works have mostly focused on applying current deep learning techniques, such as various gradient-based algorithms on various neural network architectures, to problems in statistical mechanics. However, we employ the renormalization group transformation in the opposite direction, to aid inference with neural networks. Another important difference between our approach and  is that these authors perform renormalization group transformations in the function space
and map the neurons to spins, whereas we perform renormalization group transformations in the weight space
and map the synapses to spins. As a result, in our approach spin decimation does not mean that we ignore some neurons and waste their information. Rather, it simply means that we replace one layer of synapses between two consecutive hidden layers with a fixed reference mapping (following the terminology of [11, Section 4]), such as the identity mapping for residual networks, as in Section 5.
Multiscale entropies implicitly appear in the classical chaining technique of high-dimensional probability. For example, one can rewrite Dudley's inequality variationally to remove the square-root function over the metric entropies and transform the bound into a linear mixture of metric entropies at multiple scales. This is also the case for the information-theoretic extension of chaining with mutual information , upon which our generalization bound in Section 5 is based; see .
Approximate Bayesian inference for neural networks.
The recent paper 
also studies approximate Bayesian inference for neural networks using Gaussian approximations to the posterior distribution, as we similarly do in Subsection 5.2 based on the Gaussian results of Section 4. However, unlike our approach, their analysis is not multiscale and treats the whole neural network as a single block.
There are notable similarities between the definition of multiscale entropy in this paper and what the papers [34, 35] refer to as “phase space complexity” in statistical mechanics. Characterizing maximum phase space complexity distributions was left as an open question and conjectured to be related to the renormalization group in .
2 Multiscale entropies
Assume that is the state of a system or is data, which can be either a random variable or a random vector. We first give the definitions of the different entropies used.
Definition 1 (Entropy; all entropies in this paper are in nats).
The Shannon entropy of a distribution defined on a set is if is discrete. The differential entropy of is defined as if is continuous. The relative entropy between distributions and defined on the same set is if is discrete, and if is continuous.
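The discrete cases of Definition 1 can be sketched directly; the function names below are ours, for illustration only, and all quantities are in nats:

```python
import numpy as np

# Discrete versions of Definition 1, measured in nats.
def shannon_entropy(p):
    """Shannon entropy H(p) = -sum p(x) log p(x), with 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    logs = np.log(p, where=p > 0, out=np.zeros_like(p))
    return float(-np.sum(p * logs))

def relative_entropy(p, q):
    """Relative entropy D(p || q) = sum p(x) log(p(x)/q(x))."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

uniform = [0.25, 0.25, 0.25, 0.25]
assert np.isclose(shannon_entropy(uniform), np.log(4))    # log 4 nats
assert np.isclose(relative_entropy(uniform, uniform), 0.0)
```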
Next, we blend the notions of scale and entropy as follows: Let and given a sequence of scale transformations assume that for all . We define to be the scale version of the random vector . For a vector of non-negative reals , let denote the length coefficient at scale .
In a wavelet theory context, assume that represents the vector of pixels of an image, and that each transformation takes the average value of all neighboring pixels in a group and outputs for each group a single pixel with that average value, thus resulting in a lower-resolution image. Hence, here are successively lower-resolution versions of .
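The averaging transformation in this example can be sketched as follows (the 2x2 block size is an illustrative choice):

```python
import numpy as np

# Coarse-graining an "image" by averaging 2x2 blocks of neighboring
# pixels: each application of the scale transformation halves the
# resolution, as in the wavelet example above.
def coarsen(img):
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
x1 = coarsen(x)    # 2x2: one scale coarser
x2 = coarsen(x1)   # 1x1: coarsest scale
# Block-averaging preserves the overall mean of the image.
assert x2.shape == (1, 1) and np.isclose(x2[0, 0], x.mean())
```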
For any given , , and , we define the multiscale entropies as follows:
Definition 2 (Multiscale entropy).
The multiscale Shannon entropy is defined as
Let the multiscale differential entropy be
We define the multiscale relative entropy between distributions and as
Notice that multiscale entropy encompasses classical entropy as a special case: it suffices to choose to get , and similarly for the Shannon and differential entropies. However, by taking positive values for , , we are emphasizing the entropy at coarser scales. Next, we focus on a special case of scale transformations, called decimation, which relates to the multilevel structure of neural networks. (Multiscale relative entropy with the decimation transformation is named “multilevel relative entropy” in .) Assume that and let denote a random vector partitioned into vectors. For example, can denote the synaptic weights of a neural network divided into its layers. For all , define . Notice that , and larger gives more random variables in the vector that we stop observing in . Therefore, the scale transformations simply eliminate the layers one by one. In the theoretical physics literature, spin decimation is a type of scale transformation introduced by Kadanoff and Houghton  and Migdal .
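A minimal sketch of the decimation transformation, with a partitioned weight vector represented as a list of blocks (layer names are illustrative):

```python
# Decimation scale transformation on a vector partitioned into blocks
# (e.g. the weights of a neural network partitioned into its layers):
# the scale-k view drops the last k blocks, eliminating layers one by one.
def decimate(blocks, k):
    """Return the scale-k view: keep all but the last k blocks."""
    return blocks[:len(blocks) - k] if k > 0 else blocks

layers = [["w1"], ["w2a", "w2b"], ["w3"]]
assert decimate(layers, 0) == layers          # finest scale: full network
assert decimate(layers, 2) == [["w1"]]        # coarse scale: first layer only
```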
The decimation transformation is analogous to what is done in maps of a region: as one zooms out and views the map at larger scales, the smaller cities are omitted from the resulting maps.
3 Maximum multiscale entropy
In this section we derive maximum multiscale entropy distributions for different entropies. The key ingredient of the proofs of all derivations is the chain rule of relative entropy.
3.1 Multiscale Shannon and differential entropy maximization
Let be an arbitrary measurable function called the energy. Consider the problem of maximizing Shannon entropy under a mean constraint:
For this, one solves for the maximizing distribution of the Lagrangian , which by a well-known result due to  (see Lemma 2 in the Appendix) is given by the Gibbs–Boltzmann distribution . Now, consider the following multiscale generalization of the previous problem: given , , and , solve for
Notice that this problem reduces to (5) in the special case when . But for more general choices of the values of , the uncertainty at the coarser scales is emphasized. We form the Lagrangian as follows and define the unconstrained maximization problem:
In the following, for any , we find the maximizing distribution . First, we require the definition of the scaled distribution , which is obtained by raising a probability distribution to a power and renormalizing:
Definition 3 (Scaled distribution; also known as the escort distribution in the statistical physics literature, see e.g. ).
Let . If is a distribution on a discrete set , then for all , we define the scaled distribution with
For analog random variables it is defined analogously, with the sum in the denominator replaced by an integral.
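For the discrete case, the scaled (escort) distribution of Definition 3 is a one-liner; the function name is illustrative:

```python
import numpy as np

# Scaled ("escort") distribution of Definition 3: raise a distribution
# to the power alpha, then renormalize.
def scaled(p, alpha):
    p = np.asarray(p, dtype=float) ** alpha
    return p / p.sum()

p = np.array([0.1, 0.2, 0.7])
assert np.allclose(scaled(p, 1.0), p)                # alpha = 1: identity
assert np.allclose(scaled(p, 0.0), np.ones(3) / 3)   # alpha -> 0: flattens
```

Note that alpha > 1 sharpens the distribution toward its modes, while alpha < 1 flattens it, which is the sense in which scaling trades off uncertainty.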
Define the Gibbs distribution at the finest (microscopic) scale as
For continuous (analog) random variables we replace multiscale Shannon entropy with multiscale differential entropy and consider the following problem: We then form the Lagrangian and define
), except now one should define the initial microscopic Gibbs distribution for continuous random variables as
We prove the following theorem in the Appendix:
Notice that Algorithm 1
consists of three phases: (I) The initialization with a Gibbs distribution at line 1. (II) A ‘renormalization group’ phase at lines 2–4, in which the degrees of freedom are eliminated one by one to obtain the intermediate distributions for all scales in an increasing (coarsening) order. (III) A refinement phase at line 5, in which the desired distribution is obtained by concatenating the intermediate distributions along the decreasing (refining) order via conditional distributions. As we shall see in the next subsection, Algorithms 2 and 3 also have a similar structure, though the renormalization step will be replaced by Bayesian renormalization. This, in turn, will introduce a Bayesian variant of the renormalization group.
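The three-phase structure can be sketched for a two-scale discrete decimation example as follows. This is only a structural illustration: the scaling exponent used in phase II is a placeholder, as the actual exponents prescribed by Algorithm 1 depend on the length-scale coefficients and are derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Phase I: initialize with a Gibbs distribution over (x1, x2), where
# x1 is the coarse variable and x2 the fine one.
E = rng.uniform(size=(2, 3))        # energy per state
joint = np.exp(-2.0 * E)
joint /= joint.sum()

# Phase II ("renormalization"): eliminate the fine variable x2 by
# marginalization, then rescale the coarse marginal (placeholder
# exponent 1.5 stands in for the prescribed scaling).
coarse = joint.sum(axis=1) ** 1.5
coarse /= coarse.sum()

# Phase III (refinement): concatenate back along the refining order,
# using the conditional distribution of x2 given x1 at the finer scale.
conditional = joint / joint.sum(axis=1, keepdims=True)
multiscale = coarse[:, None] * conditional
assert np.isclose(multiscale.sum(), 1.0)   # a valid joint distribution
```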
3.2 Multiscale relative entropy minimization
Let be an arbitrary measurable function called the energy. For any , and a fixed prior distribution , as mentioned in Section 1, if
where , then is the Gibbs–Boltzmann distribution. Now, consider the following multiscale generalization of the previous problem:
Notice that this is the Lagrangian of the problem of minimizing multiscale relative entropy under the mean constraint . It was shown in  that (9), in the special case of the decimation transformation, also has a unique minimizer, which can be characterized with the proposed Marginalize-Tilt (MT) algorithm, restated here as Algorithm 3. In this paper, we show that for arbitrary scale transformations, the solution to (9) is unique and given by Algorithm 2, a more general version of the MT algorithm. First, we require the definition of the tilted distribution, which is essentially a normalized geometric mean of two distributions:
Definition 4 (Tilted distribution; also known as the generalized escort distribution in the statistical physics literature, see e.g. ).
Let . If and are distributions on a discrete set , then for all , we define the tilted distribution with
For continuous random variables it is defined analogously, with the sum in the denominator replaced by an integral.
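For the discrete case, the tilted distribution of Definition 4 can be sketched directly (function name illustrative):

```python
import numpy as np

# Tilted distribution of Definition 4: a normalized geometric mean
# p^alpha * q^(1 - alpha) of two distributions on the same discrete set.
def tilted(p, q, alpha):
    r = np.asarray(p, float) ** alpha * np.asarray(q, float) ** (1.0 - alpha)
    return r / r.sum()

p = np.array([0.5, 0.5])
q = np.array([0.9, 0.1])
assert np.allclose(tilted(p, q, 1.0), p)   # alpha = 1 recovers p
assert np.allclose(tilted(p, q, 0.0), q)   # alpha = 0 recovers q
```

Intermediate values of alpha interpolate between the two distributions along the geodesic of geometric mixtures, which is the operation the MT algorithm applies toward the prior.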
Define the Gibbs distribution at the finest (microscopic) scale as
which would be the minimizer of (9) if we had . Algorithm 2 receives the microscopic Gibbs distribution, the prior distribution , and the values of as its input, and outputs the desired multiscale Gibbs distribution , the minimizer of (9).
For a proof, see the Appendix. As per , we call the temperature vector of .
3.3 Multi-objective optimization viewpoint
Notice that when maximizing multiscale entropy under a mean constraint, for different values of length scale coefficients we are finding the set of Pareto optimal points of the multi-objective optimization with the entropies at different scales as the objective functions. Therefore maximum multiscale entropy can also be interpreted as a linear scalarization of a multi-objective optimization problem (see e.g.  for a definition of linear scalarization). Thus, roughly speaking, maximum multiscale entropy distributions maximize entropies at multiple scales simultaneously.
4 Maximum multiscale entropy and multivariate Gaussians
Here, we show that the MT algorithm is closed on the family of multivariate Gaussian distributions. We also show that the same fact holds for Algorithm 1 in the special case of the decimation transformation.
Assume that the microscopic Gibbs distribution is multivariate Gaussian. Then for decimation transformation, the output of Algorithm 1 is multivariate Gaussian as well. Furthermore, if the prior is also multivariate Gaussian, then so is the output of the MT algorithm. In these cases, these algorithms simplify to parameter computations of multivariate Gaussians.
For a precise proof, see the Appendix. A proof sketch is as follows: by a well-known property of multivariate Gaussians, marginalizing out some of the variables of a multivariate Gaussian leaves a multivariate Gaussian. Also, scaling a Gaussian, or tilting it towards another Gaussian, again yields a Gaussian. Therefore, the renormalization group phase of Algorithms 1 and 3 keeps all the intermediate distributions multivariate Gaussian. The proof is completed by repeatedly applying the following proposition in the refinement phase, which states that concatenating two Gaussians via a conditional distribution results in another Gaussian distribution:
Proposition 1 (Gaussian concatenation).
Let and be multivariate Gaussian distributions. Then is multivariate Gaussian as well.
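A numerical illustration of the Gaussian concatenation in Proposition 1, in the standard linear-Gaussian form: if X ~ N(mu, Sx) and Y | X = x ~ N(Ax + b, S), then (X, Y) is jointly Gaussian with the block mean and covariance assembled below. The specific matrices are arbitrary illustrative choices:

```python
import numpy as np

# X ~ N(mu, Sx), and Y given X = x is Gaussian with mean A x + b and
# covariance S (a linear-Gaussian conditional).
mu = np.array([1.0, -1.0])
Sx = np.array([[2.0, 0.5], [0.5, 1.0]])
A = np.array([[1.0, 2.0]])
b = np.array([0.5])
S = np.array([[0.3]])

# Joint parameters of (X, Y), read off from the linear-Gaussian structure.
joint_mean = np.concatenate([mu, A @ mu + b])
joint_cov = np.block([[Sx,          Sx @ A.T],
                      [A @ Sx, A @ Sx @ A.T + S]])

# The joint covariance is symmetric positive definite, so (X, Y) is a
# nondegenerate multivariate Gaussian, as Proposition 1 asserts.
assert np.allclose(joint_cov, joint_cov.T)
assert np.all(np.linalg.eigvalsh(joint_cov) > 0)
```

Positive definiteness of the assembled block matrix follows from the Schur complement of the Sx block being exactly S, which is positive definite.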
Proposition 1 may not be new; however, we were unable to find it in the literature. For a precise statement and proof, see the Appendix. Note that when the energy function is a definite quadratic function , where is a positive definite matrix, and the prior is multivariate Gaussian, then by its definition in Subsection 3.2, the microscopic Gibbs distribution is a multivariate Gaussian distribution. Hence, by the previous argument, the multiscale Gibbs distribution is multivariate Gaussian as well.
5 Multiscale entropic regularization of neural networks
denote the hyperbolic tangent activation function. Consider a layer feedforward (residual) neural network with parameters , where for all , , and given input , the relations between the hidden layers and the output layer are as follows: Let be the squared loss; that is, for the network with parameters and for any example , we have . The following assumption, named multilevel regularization, is adopted from : for all ; it is similarly used in . Let
denote the training set in supervised learning. For any , let denote the statistical (or population) risk of hypothesis , where . For a given training set , the empirical risk of hypothesis is defined as . The following lemma controls the difference between consecutive hidden layers of the neural network:
For any and all ,
Since , based on induction on and the triangle inequality, we have Therefore
Let . We have the following generalization bound, where is a constant, a vector of positive reals, and a prior distribution:
See the Appendix for a proof. For fixed and , and any , let
where for all . Note that (11) has the same form as (9) with the decimation transformation with ; therefore we can use the MT algorithm to obtain for any . To obtain excess risk bounds from the generalization bound of Theorem 4, we employ a technique from : since minimizes the expression in (11), one can obtain excess risk bounds by plugging in a fixed distribution concentrated around a population risk minimizer and independent of . We can now state the main result of this section.
Define the data processing gain at scale by
Then the difference between the excess risk bounds of the single-scale Gibbs posterior and the multiscale Gibbs posterior, when each is optimized over its hyper-parameter (temperature) values, is equal to and is positive.
Hence we can always guarantee a tighter excess risk for the multiscale Gibbs posterior than for the single-scale Gibbs posterior. For example, if the weights of the network take discrete values, then we can take to be the Dirac delta measure on . In this case, for any prior distribution , there exists such that
However, the excess risk bound for the single-scale Gibbs distribution when optimized over its temperature parameter is
5.1 Teacher-Student example
A teacher-student scenario, first studied in , has the advantage of facilitating the evaluation of . Let data be generated from a teacher residual network with depth , where . This is equivalent to a depth teacher network with identity mappings at the first layers. Hence , and we choose as the weights of the teacher network. Assume an i.i.d. Gaussian prior centered at zero. Hence and assume where . We show in the Appendix that which quantifies the improvement gap.
Assume that the temperature vector of the multiscale Gibbs posterior is such that takes arbitrary positive values and the rest of the parameters are determined inductively by the following equations: for all . Hence, the tilting indices in the MT algorithm are all equal to , and we can represent the temperature vector with just two positive parameters . Notice that when , the multiscale Gibbs distribution is simply the single-scale Gibbs distribution. Moreover, the case corresponds to sampling the first layers randomly from the prior distribution and training only the last layer, a setting similar to random feature learning . In the following experiment, assume that we have a teacher network and a student network. For different values of , we optimize the performance of the algorithm over different values of the temperature . We use the Gauss–Newton matrix at the origin to obtain Gaussian approximations to the microscopic Gibbs distribution, and then use Theorem 3. See Figure 1. Notice that there exist intermediate values of for which the population risk is much better than in the extreme cases and . For more details, see the Appendix.
Amir Asadi thanks Siddhartha Sarkar for useful discussions on renormalization group theory.
Appendix A Proofs for Section 3
Here, we present the proofs of maximum multiscale entropy results.
A.1 Multiscale Shannon and differential entropy maximization
For the proof of Theorem 1 we first require the following lemmas. The first lemma is used for proving the optimality of the Gibbs distribution for maximizing Shannon entropy:
Let be such that , where is a finite or countably infinite set. Then for any defined on ,
is the Gibbs–Boltzmann distribution.
Let be such that , where is an uncountable set. Then for any defined on ,
is the Gibbs–Boltzmann distribution.
Let denote the Rényi entropy of order of discrete distribution , which for is defined as
Similarly, let denote the Rényi differential entropy of order of continuous distribution , which for is defined as
The following two lemmas show how to linearly combine an entropy with a relative entropy, using scaled distributions:
Let and be two discrete distributions and . We have
Let and be two continuous distributions and . Then
For simplicity of the proofs, we assume that all alphabets are standard Borel spaces, which guarantees the existence of regular conditional probabilities and reverse random transformations. Therefore, as a corollary of the chain rule of relative entropy, we have the following: