# Concentration without measure

Although the Lebesgue measure does not exist on the ball M of C[0,1] with the p-norm, the average value (expectation) EY and variance DY of certain functionals Y on M can still be defined through a limit procedure from finite to infinite dimension. In particular, the probability densities of the coordinates of points in the ball M exist and are derived explicitly, even though a density of points in M itself does not exist. These densities include the high-order normal distribution and the high-order exponential distribution, so the construction can be regarded as a geometrical origin of these probability distributions. Further, the exact values of a class of infinite-dimensional functional integrals are obtained; in particular, the variance DY is proven to be zero, and nonlinear exchange formulas for the average values of analytic functionals are given. In the absence of a measure, the variance is used to quantify the deviation of a functional from its average value. In the language of probability theory, DY = 0 means that a functional takes its average value on the ball with probability 1, and this is precisely the concentration of measure phenomenon without a measure.


## 1 Introduction

In complexity science and statistical physics in particular, we often need to deal with high-dimensional data and a large number of degrees of freedom. Sometimes these data and degrees of freedom can be considered as infinite-dimensional variables, and some physical quantities can be represented by infinite-dimensional integrals. Therefore, we need to study infinite-dimensional integrals. The computation of integrals of functions with an infinite number of variables is still an important and interesting topic in quantum and statistical physics, and even in finance (see, for example, [1-3]). In the 1920s, in the works of Gâteaux and Lévy [4], infinite-dimensional integrals were considered and computed from the point of view of probability theory. Later, the Wiener integral became an important tool in the theory of stochastic processes [5-16]. Today, there exists a large number of papers devoted to the computation and application of functional integrals. In particular, new algorithms such as multilevel and changing-dimension algorithms or dimension-wise quadrature methods have recently been proposed to approximate such integrals efficiently [17-29]. From the popular viewpoint, the foundation of functional integration such as the Wiener integral should rest on measure theory. However, Feynman's path integral still lacks a satisfactory measure theory. On the other hand, there are interesting problems in infinite-dimensional spaces that force us to consider infinite-dimensional integration in the absence of a Lebesgue measure, while other measures such as the Gauss measure are not suited to our aims. For example, suppose we randomly take a continuous function

$x=x(t)$ on $[0,1]$ with $0\le x(t)\le 1$; what is the average value of its area $Y=\int_0^1 x(t)\,dt$? Here the randomness means that we take the points in $D=\{x\in C[0,1]:\ 0\le x(t)\le 1\}$ with equal possibility, and hence we would need a Lebesgue measure on $D$. But it is well known that no Lebesgue measure exists on $D$ at all, while, conceptually, this problem is rather natural. Without measure theory, we can still use a limit procedure to give a rigorous definition of the average value (see Definition 1 below), and the average value of $Y$ is then easily found. In this paper, we consider a more complex case in which the domain is taken as the ball in $C[0,1]$ with the $p$-norm, that is, $M=\{x\in C[0,1]:\ \|x\|_p\le R\}$.
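As a quick numerical companion to this average-area problem, the limit procedure can be simulated directly: discretize $x$ at $n$ points and draw the values independently and uniformly from $[0,1]$ (a sketch under our own choices of grid size and sample count, not a construction from the paper). The average of the discretized area comes out close to $1/2$, and its small variance already hints at concentration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 1000, 5000          # discretization points, sample "paths"
# Each path is represented by its values x(k/n), taken i.i.d. uniform on [0,1]
X = rng.random((samples, n))
Y = X.mean(axis=1)               # discretized area (1/n) * sum_k x(k/n)
print(round(Y.mean(), 2))        # -> close to 1/2
print(Y.var())                   # ~ 1/(12 n): already nearly concentrated
```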

Although the Lebesgue measure does not exist on $M$, and hence the density of points in $M$ does not exist, we show that the probability densities of the coordinates of points in the ball

$$M=\{x\in C[0,1]:\ \|x\|_p\le R\}$$

do exist and can be derived explicitly, taking the forms of high-order normal and exponential distributions. Further, we define and compute the exact average values (expectations, represented by finite-dimensional integrals) and variances of some functionals. If we formally regard these functionals as infinite-dimensional random variables, the infinite-dimensional integral considered here is just the expectation of such a random variable. We show that the variances are zero, which proves that these functionals satisfy the property of complete concentration of "measure". Indeed, if the measure existed,

$DY=0$ would mean that the functional takes its average value on an infinite-dimensional ball with probability 1. In our case, where no Lebesgue measure exists, we use $DY=0$ to replace the complete concentration of measure in the probabilistic sense. Correspondingly, we give nonlinear exchange formulas for the averages of functionals. The usual concentration of measure is described by inequalities such as the Lévy lemma [30-32], which differs from the complete concentration of "measure" expressed by the variance being zero.

Abstractly, a functional is a function $f(x)$ where $x$ is an element of an infinite-dimensional space such as $C[0,1]$. In general, there are two basic ways to construct functionals. One method is to use the values of $x$ at some points $t_1,\cdots,t_m$, such that

$$f(x)=g(x(t_1),\cdots,x(t_m)),$$

where $g$ is a usual function on $\mathbb{R}^m$. Essentially, such functionals are all finite-dimensional functions. Another method is to use integrals of $x$ over some sets, such that

$$Y=f(x)=\int_{I_1}\cdots\int_{I_m}g(x(t_1),\cdots,x(t_m))\,dt_1\cdots dt_m,$$

where $I_1,\cdots,I_m$ are subsets of the interval $[0,1]$. Such functionals are genuinely infinite-dimensional functions. Therefore, there are two kinds of basic elements, from which many interesting functionals can be constructed by addition, subtraction, multiplication, division, composition and limits.
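The two kinds of functionals can be sketched in code (an illustrative sketch: the helper names and the choices $g(u)=u^2$, $x=\sin$ are ours, not the paper's):

```python
import numpy as np

def pointwise_functional(x, g, ts):
    """First kind: f(x) = g(x(t1), ..., x(tm)) -- essentially finite-dimensional."""
    return g(*[x(t) for t in ts])

def integral_functional(x, g, n=10000):
    """Second kind (m = 1): Y = f(x) = integral_0^1 g(x(t)) dt, via a midpoint sum."""
    t = (np.arange(n) + 0.5) / n
    return np.mean(g(x(t)))

x = np.sin                                           # an example element of C[0,1]
print(pointwise_functional(x, lambda a, b: a * b, [0.25, 0.75]))
print(integral_functional(x, np.square))             # ~ integral_0^1 sin(t)^2 dt
```

The second value approximates $\int_0^1\sin^2 t\,dt=\tfrac12-\tfrac{\sin 2}{4}\approx 0.2727$; the first is genuinely a function of only two coordinates of $x$.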

For the first kind of functionals, the functional integral is just a usual finite-dimensional integral. Thus we only consider the second kind of functionals $Y=f(x)$. If the domain of the functional is $M$, the integral of $f$ on $M$ can be formally written as

$$\int_M f(x)\,D(x), \tag{1}$$

where $D(x)$ formally represents the differential volume element of $M$. But, in general, in the sense of Lebesgue measure, the volume of $M$ is zero or infinity, and the infinite-dimensional integral is then respectively zero or infinite. However, the average value of the functional $f$ on $M$,

$$Ef=\frac{\int_M f(x)\,D(x)}{\int_M D(x)}, \tag{2}$$

may well exist and be finite (or infinite). Firstly, we need a reasonable definition of the average value of a functional. Since in general there is no Lebesgue measure on an infinite-dimensional space, our approach is to use a limit procedure to define the average value of functionals. For example, similar to Gâteaux and Lévy (see [4]), we give the following definition.

Definition 1: For $D=\{x\in C[0,1]:\ 0\le x(t)\le 1\}$ and $Y=f(x)=\int_0^1 g(x(t))\,dt$, where $g$ is a continuous function on $\mathbb{R}$, we can define the average value of $f$ as

$$Ef=\lim_{n\to\infty}\int_0^1\cdots\int_0^1\frac{1}{n}\sum_{k=1}^{n}g(x_k)\,dx_1\cdots dx_n, \tag{3}$$

where $x_k=x(\frac{k}{n})$. If the limit exists and is finite or infinite, we call it the average value of the functional $f$ on $D$.

Remark 1. Since $x(t)$ is continuous on $[0,1]$, the above limit is independent of the choice of the sample points $t_k\in[\frac{k-1}{n},\frac{k}{n}]$. What is important is that the numerator and the denominator must use the same $n$ and the same sample points. From Theorem 7 in Section 4, we can see that the average value depends on the discretization! In addition, for a function $g$ of several variables, the average value of the corresponding functional can be defined similarly. More generally, if $a(t)\le b(t)$ are two fixed continuous functions, we can give the corresponding definition of the average value of $f$ for $x$ with $a(t)\le x(t)\le b(t)$:

$$Ef=\lim_{n\to\infty}\frac{\int_{a_1}^{b_1}\cdots\int_{a_n}^{b_n}\frac{1}{n}\sum_{k=1}^{n}g(x_k)\,dx_1\cdots dx_n}{\int_{a_1}^{b_1}\cdots\int_{a_n}^{b_n}dx_1\cdots dx_n},$$

where $x_k=x(\frac{k}{n})$, $a_k=a(\frac{k}{n})$ and $b_k=b(\frac{k}{n})$.
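Definition 1 can be probed numerically (a sketch with our own choice $g(u)=u^2$ on the cube case $a=0$, $b=1$; there the denominator is 1 and, by symmetry of the coordinates, the ratio equals $\int_0^1 g(u)\,du=\tfrac13$ for every $n$):

```python
import numpy as np

# Monte Carlo evaluation of Definition 1 on D = {x in C[0,1]: 0 <= x(t) <= 1}
# with g(u) = u^2: the n-fold integral of (1/n) sum g(x_k) over [0,1]^n equals
# int_0^1 u^2 du = 1/3 for each n, so the limit Ef is 1/3.
rng = np.random.default_rng(1)
for n in (10, 100, 1000):
    X = rng.random((4000, n))            # uniform points of the cube [0,1]^n
    Ef_n = (X ** 2).mean(axis=1).mean()  # Monte Carlo value of the ratio of integrals
    print(n, round(Ef_n, 3))             # -> stays near 1/3
```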

Here we must point out that the equiprobability (equal possibility) hypothesis is implicit in the definition; that is, the points in $D$ are taken with equal probability. This is a natural assumption. For example, in the aforementioned problem, if we randomly take a continuous function $x$, what is the average value of its area $Y$? In that problem we have implicitly supposed that we take the functions in $D$ with equal probability. However, this is only an intuitive and formal explanation in the infinite-dimensional case, because in general there is no Lebesgue measure on $D$ to serve as the probability measure that gives meaning to equiprobability ([16]). In the finite-dimensional case, this assumption has a strict mathematical foundation, since the Lebesgue measure provides the corresponding probability measure, so that we can speak of equiprobability reasonably. Throughout the paper, equiprobability is understood in exactly this sense. Below we give the formal definition.

Definition 2. Let $B$ be a bounded set in the space of all continuous functions on $[0,1]$ with some norm. If for any finite-dimensional discretization of $B$, the points in the resulting finite-dimensional set are taken with equal probability, we say that the points in $B$ are taken with equiprobability (or equal possibility).

In this paper, since we consider the average values of functionals on infinite-dimensional balls, we need a definition of the average value on a ball. Below we give such definitions.

Definition 3: For $M=\{x\in C[0,1]:\ \|x\|_p\le R\}$, where $C[0,1]$ is equipped with the norm $\|x\|_p=(\int_0^1|x(t)|^p\,dt)^{\frac1p}$, and $Y=f(x)=\int_0^1 g(x(t))\,dt$, where $g$ is a continuous function and $p$ is even with $p\ge 2$, we can define the average value of $Y$ on $M$ as

$$EY=\lim_{n\to\infty}\frac{\int_{M_n}\frac{1}{n}\sum_{k=1}^{n}g(x_k)\,dv_n}{\int_{M_n}dv_n}, \tag{4}$$

where $x_k=x(\frac{k}{n})$, $M_n=\{(x_1,\cdots,x_n):\ \sum_{k=1}^{n}|x_k|^p\le nR^p\}$, and $dv_n=dx_1\cdots dx_n$ is the volume element of $M_n$. If the limit exists and is finite or infinite, we call it the average value of the functional $Y$ on $M$. We often use $EY$ to denote the average value of the functional $Y$.

Definition 4: For $M_+=\{x\in C[0,1]:\ \|x\|_p\le R,\ x(t)\ge 0\}$, where $C[0,1]$ is equipped with the $p$-norm, and $Y=f(x)=\int_0^1 g(x(t))\,dt$, where $g$ is a continuous function, and $p$ is a general positive real number or, specially, $p$ is odd with $p\ge 1$, we can define the average value of $Y$ on $M_+$ as

$$EY=\lim_{n\to\infty}\frac{\int_{M_{+n}}\frac{1}{n}\sum_{k=1}^{n}g(x_k)\,dv_n}{\int_{M_{+n}}dv_n}, \tag{5}$$

where $x_k=x(\frac{k}{n})$, $M_{+n}=\{(x_1,\cdots,x_n):\ \sum_{k=1}^{n}x_k^p\le nR^p,\ x_k\ge 0\}$, and $dv_n=dx_1\cdots dx_n$ is the volume element of $M_{+n}$. If the limit exists and is finite or infinite, we call it the average value of the functional $Y$ on $M_+$.

For a function $g$ of several variables, the average value of the corresponding functional on infinite-dimensional balls can be defined similarly.

In addition, since the Lebesgue measure does not exist on $M$, probability theory based on the Lebesgue measure is not available either. Therefore, in a rigorous mathematical sense, we cannot speak of the probability of the functional $Y$ deviating from its average value $EY$. However, in order to measure the deviation of $Y$ from $EY$, we can still define the variance, because the variance is itself the average value of the functional $(Y-EY)^2$, that is, $DY=E(Y-EY)^2$.
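The mechanism behind $DY=0$ can be sketched in the simplest cube setting (our own illustration, not the ball case treated later): the discretized functional $Y_n=\frac1n\sum_k g(x_k)$ with independent uniform coordinates has variance $\mathrm{Var}(g)/n$, which tends to 0, so $Y$ concentrates at $EY$:

```python
import numpy as np

# Discretized functional Y_n = (1/n) sum g(x_k), coordinates i.i.d. uniform on [0,1].
# Var(Y_n) = Var(g(U))/n = (4/45)/n for g(u) = u^2, which tends to 0: concentration.
rng = np.random.default_rng(2)
g = np.square
for n in (10, 100, 1000, 10000):
    Y = g(rng.random((2000, n))).mean(axis=1)
    print(n, Y.var())                    # decreases like 1/n toward DY = 0
```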

For the purpose of discussing the nonlinear exchange formula, we need the definition of the average value of $h(Y)$, where $h$ is a continuous or analytic function.

Definition 5. Let $M$ denote the previous ball or its "first quadrant" $M_+$, and let $Y=f(x)=\int_0^1 g(x(t))\,dt$, where $g$ is a continuous function. Then, for a continuous function $h$, we can define the average value of $h(Y)$ on $M$ as

$$Eh(Y)=\lim_{n\to\infty}\frac{\int_{M_n}h\big(\frac{1}{n}\sum_{k=1}^{n}g(x_k)\big)\,dv_n}{\int_{M_n}dv_n}, \tag{6}$$

where $x_k=x(\frac{k}{n})$, and $M_n$ is correspondingly the previous $M_n$ or $M_{+n}$. If the limit exists and is finite or infinite, we call it the average value of the functional $h(Y)$ on $M$.

This paper is organized as follows. In Section 2, we derive some probability densities of the coordinates of points in infinite-dimensional balls in two ways, analytic and geometric. In Section 3, we give the exact values of some infinite-dimensional integrals; furthermore, we discuss concentration without measure and obtain the nonlinear exchange formulas for infinite-dimensional integrals. In Section 4, we give some further results and definitions. The last section is a short conclusion.

## 2 The probability densities of the coordinates of points in infinite-dimensional balls

We first derive several interesting probability densities in a geometrical way, based on considerations in infinite-dimensional space. These results also have independent value.

Consider the space of continuous functions $C[0,1]$ with the norm $\|x\|_p=(\int_0^1|x(t)|^p\,dt)^{\frac1p}$ for $p\ge 1$. When $p$ is even and $p\ge 2$, we consider the whole ball $M=\{x\in C[0,1]:\ \|x\|_p\le R\}$, while for $p$ a general positive real number or, specially, $p$ odd with $p\ge 1$, we only consider the "first quadrant" of $M$, that is, $M_+=\{x\in M:\ x(t)\ge 0\}$.

The following lemma is important.

Lemma 1 ([33]). The following generalized Dirichlet formula holds:

$$\int\cdots\int_{B_+}x_1^{p_1-1}x_2^{p_2-1}\cdots x_n^{p_n-1}\,dx_1\cdots dx_n=\frac{1}{2^n}\,\frac{\Gamma(\frac{p_1}{2})\cdots\Gamma(\frac{p_n}{2})}{\Gamma(1+\frac{p_1+\cdots+p_n}{2})}, \tag{7}$$

where $B_+=\{(x_1,\cdots,x_n):\ x_1^2+\cdots+x_n^2\le 1,\ x_k\ge 0\}$ and $p_k>0$ for $k=1,\cdots,n$.

Next we give the following results.

Theorem 1 (analytic version). For the set $M_n=\{(x_1,\cdots,x_n):\ \sum_{k=1}^{n}|x_k|^p\le nR^p\}$, where $R>0$ and $p$ is even with $p\ge 2$, if we suppose that the points in $M_n$ are taken with equal probability, the density of every coordinate $x_k$ as a random variable is given by

$$\rho_n(x_k)=\frac{p\,\Gamma(1+\frac{n}{p})}{2R\,n^{\frac1p}\,\Gamma(\frac1p)\,\Gamma(1+\frac{n-1}{p})}\left(1-\frac{x_k^p}{nR^p}\right)^{\frac{n-1}{p}}. \tag{8}$$

In particular, the limit of $\rho_n$ as $n$ tends to infinity is

$$\rho(x)=\frac{1}{2R\,\Gamma(\frac1p)\,p^{\frac1p-1}}\,e^{-\frac{x^p}{pR^p}},\quad x\in(-\infty,+\infty). \tag{9}$$

In general, for any $k$ distinct coordinates $x_{i_1},\cdots,x_{i_k}$, where $k<n$, their joint density is given by

$$\rho_n(x_{i_1},\cdots,x_{i_k})=\frac{p^k\,\Gamma(1+\frac{n}{p})}{2^kR^k\,n^{\frac{k}{p}}\,\Gamma^k(\frac1p)\,\Gamma(1+\frac{n-k}{p})}\left(1-\frac{x_{i_1}^p+\cdots+x_{i_k}^p}{nR^p}\right)^{\frac{n-k}{p}}, \tag{10}$$

and the limit of $\rho_n$ as $n$ approaches infinity is

$$\rho(x_{i_1},\cdots,x_{i_k})=\frac{1}{2^kR^k\,\Gamma^k(\frac1p)\,p^{\frac{k}{p}-k}}\,e^{-\frac{x_{i_1}^p+\cdots+x_{i_k}^p}{pR^p}},\quad x_{i_j}\in(-\infty,+\infty),\ j=1,\cdots,k, \tag{11}$$

that is,

$$\rho(x_{i_1},\cdots,x_{i_k})=\rho(x_{i_1})\cdots\rho(x_{i_k}), \tag{12}$$

which means that as $n$ tends to infinity, any finite set of coordinates of a point in $M_n$, as random variables, become independent.
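The limiting density (9) can be sanity-checked numerically (our own check, not part of the paper): it integrates to 1 for each $p$, and for $p=2$, $R=1$ it is exactly the standard normal density.

```python
import numpy as np
from math import gamma

# Limiting coordinate density (9): rho(x) = exp(-|x|^p/(p R^p)) / (2 R Gamma(1/p) p^(1/p - 1))
def rho(x, p, R=1.0):
    return np.exp(-np.abs(x) ** p / (p * R ** p)) / (2 * R * gamma(1 / p) * p ** (1 / p - 1))

x = np.linspace(-30.0, 30.0, 600001)
dx = x[1] - x[0]
for p in (2, 4, 6):
    total = float(rho(x, p).sum() * dx)       # Riemann sum of the density
    print(p, round(total, 6))                 # -> 1.0 for each p
print(round(float(rho(np.array([0.0]), 2)[0]), 6))  # -> 0.398942 = 1/sqrt(2*pi)
```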

Proof. By symmetry, we only need to consider the density of $(x_1,\cdots,x_k)$. Denote $M'_n=\{(x_{k+1},\cdots,x_n):\ \sum_{j=k+1}^{n}|x_j|^p\le nR^p-x_1^p-\cdots-x_k^p\}$. According to the assumption of equiprobability, we have

$$\rho_n(x_1,\cdots,x_k)=\frac{\int_{M'_n}dx_{k+1}\cdots dx_n}{\int_{M_n}dx_1\cdots dx_n}. \tag{13}$$

Further, by a suitable change of variables, we obtain from Lemma 1

$$\rho_n(x_1,\cdots,x_k)=\frac{p^k\,\Gamma(1+\frac{n}{p})}{2^kR^k\,\Gamma^k(\frac1p)\,n^{\frac{k}{p}}\,\Gamma(1+\frac{n-k}{p})}\left(1-\frac{x_1^p+\cdots+x_k^p}{nR^p}\right)^{\frac{n-k}{p}}. \tag{14}$$

Taking the limit as $n$ approaches $+\infty$ and using Stirling's asymptotic formula for the Gamma function, $\Gamma(1+z)\sim\sqrt{2\pi z}\,(\frac{z}{e})^{z}$, we get

$$\rho(x_1,\cdots,x_k)=\lim_{n\to+\infty}\frac{p^k\,\sqrt{2\pi}\sqrt{\frac{n}{p}}\,(\frac{n}{p})^{\frac{n}{p}}\,e^{-\frac{n}{p}}}{2^kR^k\,\Gamma^k(\frac1p)\,\sqrt{2\pi}\sqrt{\frac{n-k}{p}}\,(\frac{n-k}{p})^{\frac{n-k}{p}}\,n^{\frac{k}{p}}\,e^{-\frac{n-k}{p}}}\left(1-\frac{x_1^p+\cdots+x_k^p}{nR^p}\right)^{\frac{n-k}{p}}$$
$$=\lim_{n\to+\infty}\frac{p^{k-\frac{k}{p}}}{2^kR^k\,\Gamma^k(\frac1p)}\sqrt{\frac{n}{n-k}}\left(\frac{n}{n-k}\right)^{\frac{n-k}{p}}e^{-\frac{k}{p}}\left(1-\frac{x_1^p+\cdots+x_k^p}{nR^p}\right)^{\frac{n-k}{p}}$$
$$=\frac{1}{2^kR^k\,p^{\frac{k}{p}-k}\,\Gamma^k(\frac1p)}\,e^{-\frac{x_1^p+\cdots+x_k^p}{pR^p}}, \tag{15}$$

since $(\frac{n}{n-k})^{\frac{n-k}{p}}\to e^{\frac{k}{p}}$ and $(1-\frac{x_1^p+\cdots+x_k^p}{nR^p})^{\frac{n-k}{p}}\to e^{-\frac{x_1^p+\cdots+x_k^p}{pR^p}}$.

The proof is completed.
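The convergence of the finite-$n$ density (8) to its limit (9) is fast enough to observe numerically (a sketch we add for illustration; log-Gamma is used to keep the constant stable for large $n$):

```python
import numpy as np
from math import lgamma, exp, gamma

# Finite-n coordinate density (8), evaluated via log-Gamma for numerical stability.
def rho_n(x, n, p=2, R=1.0):
    logc = (np.log(p) + lgamma(1 + n / p)
            - np.log(2 * R) - np.log(n) / p - lgamma(1 / p) - lgamma(1 + (n - 1) / p))
    return exp(logc) * (1 - np.abs(x) ** p / (n * R ** p)) ** ((n - 1) / p)

# Limiting density (9).
def rho(x, p=2, R=1.0):
    return exp(-abs(x) ** p / (p * R ** p)) / (2 * R * gamma(1 / p) * p ** (1 / p - 1))

for n in (10, 100, 10000):
    print(n, round(rho_n(1.0, n), 5), round(rho(1.0), 5))   # columns converge
```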

Now we consider the infinite-dimensional ball $M$ in $C[0,1]$ with the $p$-norm. Although a uniform distribution does not mathematically exist on the ball, because the dimension of $M$ is infinite, the density of the coordinates of points in $M$ does exist! Fortunately, we do not need this uniform distribution to derive our result. All we need is a limit procedure from finite to infinite dimension, which allows us to avoid the difficulty of the nonexistence of a uniform distribution. The following is the geometric version of Theorem 1, in the sense of Definition 2. In other words, this is just a formal probability "explanation".

Theorem 1 (geometric version). For the ball $M=\{x\in C[0,1]:\ \|x\|_p\le R\}$, where $R>0$ and $p$ is even with $p\ge 2$, if we suppose that the points in the ball are taken with equal probability in the sense of Definition 2, the density of $x(t)$ as a random variable for fixed $t$ is given by

$$\rho(x)=\frac{1}{2R\,\Gamma(\frac1p)\,p^{\frac1p-1}}\,e^{-\frac{x^p}{pR^p}},\quad x\in(-\infty,+\infty). \tag{16}$$

In general, for any $k$ distinct coordinates $x_1=x(t_1),\cdots,x_k=x(t_k)$, their joint density is given by

$$\rho(x_1,\cdots,x_k)=\frac{1}{2^kR^k\,\Gamma^k(\frac1p)\,p^{\frac{k}{p}-k}}\,e^{-\frac{x_1^p+\cdots+x_k^p}{pR^p}},\quad x_j\in(-\infty,+\infty),\ j=1,\cdots,k, \tag{17}$$

that is,

$$\rho(x_1,\cdots,x_k)=\rho(x_1)\cdots\rho(x_k), \tag{18}$$

which means that any finite set of coordinates of a point in $M$, as random variables, are independent.

Proof. Firstly, by discretization (which is reasonable by the continuity of $x(t)$), we have $M_n=\{(x_1,\cdots,x_n):\ \sum_{k=1}^{n}|x_k|^p\le nR^p\}$, where $x_k=x(\frac{k}{n})$. We directly compute the density of the coordinates as random variables in the ball $M_n$. According to the assumption of equiprobability and the analytic version of Theorem 1, the theorem is proven.

This result differs essentially from the finite-dimensional case. In a finite-dimensional ball, the coordinates are not independent of each other, since they are constrained to the ball and hence satisfy a definite relation. But in the infinite-dimensional ball, for any finite number of coordinates of a point, we can easily see from the discretization $\sum_{k=1}^{n}|x_k|^p\le nR^p$ that as $n$ tends to infinity, the radius $n^{\frac1p}R$ also tends to infinity; so, for any finite number of coordinates $x_1,\cdots,x_k$, their range becomes the whole $k$-dimensional space $\mathbb{R}^k$. This means that the constraint has disappeared, and hence these coordinates are truly independent. In other words, the ball $M$ essentially contains all finite-dimensional linear spaces $\mathbb{R}^k$ for every positive integer $k$.
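This picture can be checked empirically. As a sketch (our choice of tool, not the paper's), we sample exactly uniformly from the discretized ball $M_n$ of radius $n^{1/p}R$ using the $\ell_p$-ball sampler of Barthe, Guédon, Mendelson and Naor, and verify that for $p=2$, $R=1$ a single coordinate is nearly $N(0,R^2)$ while two coordinates are nearly uncorrelated:

```python
import numpy as np

# Exact uniform sampling on the l_p ball (Barthe-Guedon-Mendelson-Naor):
# take g_i i.i.d. with density ~ exp(-|t|^p), Z ~ Exp(1) independent; then
# G / (sum |g_i|^p + Z)^(1/p) is uniform on the unit l_p ball. Scale by n^(1/p)*R.
rng = np.random.default_rng(3)
n, m, p, R = 400, 20000, 2, 1.0
signs = rng.choice([-1.0, 1.0], size=(m, n))
G = signs * rng.gamma(1.0 / p, 1.0, size=(m, n)) ** (1.0 / p)  # density ~ exp(-|t|^p)
Z = rng.exponential(1.0, size=(m, 1))
X = n ** (1.0 / p) * R * G / ((np.abs(G) ** p).sum(axis=1, keepdims=True) + Z) ** (1.0 / p)
print(round(float(X[:, 0].var()), 2))                        # ~ R^2 = 1, as in (16)
print(round(float(np.corrcoef(X[:, 0], X[:, 1])[0, 1]), 2))  # ~ 0, as in (18)
```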

Similarly, we have the following theorems.

Theorem 2 (analytic version). For the "first quadrant" $M_{+n}=\{(x_1,\cdots,x_n):\ \sum_{k=1}^{n}x_k^p\le nR^p,\ x_k\ge 0\}$, where $p$ is a general positive real number or, specially, $p$ is odd with $p\ge 1$, if we suppose that the points in $M_{+n}$ are taken with equal probability, the density of every coordinate $x_k$ as a random variable is given by

$$\rho_n(x_k)=\frac{p\,\Gamma(1+\frac{n}{p})}{R\,n^{\frac1p}\,\Gamma(\frac1p)\,\Gamma(1+\frac{n-1}{p})}\left(1-\frac{x_k^p}{nR^p}\right)^{\frac{n-1}{p}}. \tag{19}$$

In particular, the limit of $\rho_n$ as $n$ tends to infinity is

$$\rho(x)=\frac{1}{R\,\Gamma(\frac1p)\,p^{\frac1p-1}}\,e^{-\frac{x^p}{pR^p}},\quad x\in[0,+\infty). \tag{20}$$

In general, for any $k$ distinct coordinates $x_{i_1},\cdots,x_{i_k}$, where $k<n$, their joint density is given by

$$\rho_n(x_{i_1},\cdots,x_{i_k})=\frac{p^k\,\Gamma(1+\frac{n}{p})}{R^k\,n^{\frac{k}{p}}\,\Gamma^k(\frac1p)\,\Gamma(1+\frac{n-k}{p})}\left(1-\frac{x_{i_1}^p+\cdots+x_{i_k}^p}{nR^p}\right)^{\frac{n-k}{p}}, \tag{21}$$

and the limit as $n$ approaches infinity is

$$\rho(x_{i_1},\cdots,x_{i_k})=\frac{1}{R^k\,\Gamma^k(\frac1p)\,p^{\frac{k}{p}-k}}\,e^{-\frac{x_{i_1}^p+\cdots+x_{i_k}^p}{pR^p}},\quad x_{i_j}\in[0,+\infty),\ j=1,\cdots,k, \tag{22}$$

that is,

$$\rho(x_{i_1},\cdots,x_{i_k})=\rho(x_{i_1})\cdots\rho(x_{i_k}), \tag{23}$$

which means that as $n$ tends to infinity, any finite set of coordinates of a point in $M_{+n}$, as random variables, become independent.

Proof. According to the assumption of equiprobability, we have from Lemma 1

$$\rho_n(x_1,\cdots,x_k)=\frac{\int\cdots\int_{M'_{+n}}dx_{k+1}\cdots dx_n}{\int_{M_{+n}}dx_1\cdots dx_n}=\frac{p^k\,\Gamma(1+\frac{n}{p})}{R^k\,\Gamma^k(\frac1p)\,n^{\frac{k}{p}}\,\Gamma(1+\frac{n-k}{p})}\left(1-\frac{x_1^p+\cdots+x_k^p}{nR^p}\right)^{\frac{n-k}{p}}, \tag{24}$$

where $M'_{+n}=\{(x_{k+1},\cdots,x_n):\ \sum_{j=k+1}^{n}x_j^p\le nR^p-x_1^p-\cdots-x_k^p,\ x_j\ge 0\}$. Taking the limit as $n$ approaches $+\infty$ and using Stirling's asymptotic formula for the Gamma function as in Theorem 1, we get

$$\rho(x_1,\cdots,x_k)=\frac{1}{R^k\,p^{\frac{k}{p}-k}\,\Gamma^k(\frac1p)}\,e^{-\frac{x_1^p+\cdots+x_k^p}{pR^p}}. \tag{25}$$

The proof is completed.

Similarly to Theorem 1, we have the geometric version of Theorem 2.

Theorem 2 (geometric version). For the "first quadrant" $M_+=\{x\in C[0,1]:\ \|x\|_p\le R,\ x(t)\ge 0\}$, where $p$ is a general positive real number or, specially, $p$ is odd with $p\ge 1$, if we suppose that the points in $M_+$ are taken with equal probability, the density of $x(t)$ as a random variable for fixed $t$ is given by

$$\rho(x)=\frac{1}{R\,p^{\frac1p-1}\,\Gamma(\frac1p)}\,e^{-\frac{x^p}{pR^p}},\quad x\in[0,+\infty). \tag{26}$$

In general, for any $k$ distinct coordinates $x_1=x(t_1),\cdots,x_k=x(t_k)$, their joint density is given by

$$\rho(x_1,\cdots,x_k)=\frac{1}{R^k\,\Gamma^k(\frac1p)\,p^{\frac{k}{p}-k}}\,e^{-\frac{x_1^p+\cdots+x_k^p}{pR^p}},\quad x_j\in[0,+\infty),\ j=1,\cdots,k, \tag{27}$$

that is,

$$\rho(x_1,\cdots,x_k)=\rho(x_1)\cdots\rho(x_k), \tag{28}$$

which means that any finite set of coordinates of a point in $M_+$, as random variables, are independent.

Remark 2. If we take special values of $p$ and suitable variable transformations, we obtain some interesting and important probability distributions. When $p$ is even, the density looks like a normal distribution, and thus we call it a high-order normal distribution, or normal-like distribution. If $R=1$, a simple form is

$$\rho(x)=\frac{1}{2\,\Gamma(\frac1p)\,p^{\frac1p-1}}\,e^{-\frac{x^p}{p}},\quad x\in(-\infty,+\infty). \tag{29}$$

For example, if $p=2$, we get the standard normal distribution

$$\rho(x)=\frac{1}{\sqrt{2\pi}}\,e^{-\frac{x^2}{2}},\quad x\in(-\infty,+\infty), \tag{30}$$

which gives Gâteaux and Lévy's result [4]. If $p=4$, we get a 4th-order normal distribution

$$\rho(x)=\frac{\sqrt{2}}{\Gamma(\frac14)}\,e^{-\frac{x^4}{4}},\quad x\in(-\infty,+\infty). \tag{31}$$

When $p$ is odd, the density looks like an exponential distribution, and thus we call it a high-order exponential distribution, or exponential-like distribution. For example, if $p=1$ and $R=\frac1\lambda$, we get the usual exponential distribution

$$\rho(x)=\lambda\,e^{-\lambda x},\quad x\in[0,+\infty). \tag{32}$$

If $p=3$ and $\lambda=\frac{1}{3R^3}$, we get the 3rd-order exponential distribution

$$\rho(x)=\frac{3\lambda^{\frac13}}{\Gamma(\frac13)}\,e^{-\lambda x^3},\quad x\in[0,+\infty). \tag{33}$$

By a simple transformation, we can obtain the famous Gamma distribution of statistics. Indeed, taking $R=1$ in (20) and the transformation

$$Z=\frac{x^p}{p\beta}, \tag{34}$$

the density of $Z$ is just

$$\rho(z)=\frac{\beta^{\frac1p}}{\Gamma(\frac1p)}\,z^{\frac1p-1}\,e^{-\beta z}. \tag{35}$$

Further, writing $\alpha=\frac1p$ gives the Gamma distribution

$$\rho(z;\alpha,\beta)=\frac{\beta^{\alpha}}{\Gamma(\alpha)}\,z^{\alpha-1}\,e^{-\beta z}. \tag{36}$$

This is a geometrical origin of the Gamma distribution, and we can see that it is a rather natural way to derive it.
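The transformation can be checked by simulation (a sketch with our own parameter choices $p=3$, $\beta=2$): sampling $x$ from the half-line density (20) with $R=1$ via $x=(pG)^{1/p}$, $G\sim\mathrm{Gamma}(1/p,1)$, and mapping $Z=x^p/(p\beta)$ reproduces a $\mathrm{Gamma}(\alpha=1/p,\ \text{rate}=\beta)$ sample:

```python
import numpy as np

# If G ~ Gamma(1/p, 1), then x = (p*G)^(1/p) has density p^(1-1/p)/Gamma(1/p) * exp(-x^p/p),
# i.e. the half-line density (20) with R = 1. The map Z = x^p/(p*beta) = G/beta then
# gives Gamma(shape 1/p, rate beta), with mean 1/(p*beta).
rng = np.random.default_rng(4)
p, beta, m = 3, 2.0, 200000
G = rng.gamma(1.0 / p, 1.0, size=m)
x = (p * G) ** (1.0 / p)
Z = x ** p / (p * beta)
Zdirect = rng.gamma(1.0 / p, 1.0 / beta, size=m)      # direct Gamma(1/p, scale 1/beta)
print(round(Z.mean(), 3), round(Zdirect.mean(), 3))   # both ~ 1/(p*beta) = 1/6
```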

Based on the maximum non-symmetric entropy principle, one can also derive such distributions [34,35], but the geometrical origin above is more natural.

## 3 The average values of some functionals and the concentration without measure

In this section, according to the above results, we study a class of infinite-dimensional functionals of integral form. Our main results are summarized in Theorems 3 and 4. The infinite-dimensional integrals considered here arise in infinite-dimensional probability theory [36]. An elementary and rough introduction to this topic can be found in [36], where we give more examples and another way to compute the exact average values of some functionals.

Lemma 2. If $f(x)$ satisfies one of the following two conditions:

(i). (Differential condition): the derivatives of $f$ up to order $k$ exist for $x\ge 0$, and there is a positive constant $C$ such that $|f^{(k)}(x)|\le C$;

(ii). (Integral condition):

$$\int_0^{+\infty}|f(x)|\,e^{-\frac{x}{p}}\,dx<+\infty,\qquad \int_0^{+\infty}x^2\,|f(x)|\,e^{-\frac{x}{p}}\,dx<+\infty; \tag{37}$$

then we have

$$\lim_{n\to+\infty}\int_0^n f(x)\left(1-\frac{x}{n}\right)^{\frac{n}{p}}dx=\int_0^{+\infty}f(x)\,e^{-\frac{x}{p}}\,dx. \tag{38}$$

In general, for any finite integer $n_0$, we also have

$$\lim_{n\to+\infty}\int_0^n f(x)\left(1-\frac{x}{n}\right)^{\frac{n-n_0}{p}}dx=\int_0^{+\infty}f(x)\,e^{-\frac{x}{p}}\,dx. \tag{39}$$

Proof. Case (i). We prove the general formula (39) under condition (i). From integration by parts, we have

$$\int_0^n f(x)\left(1-\frac{x}{n}\right)^{\frac{n-n_0}{p}}dx=\frac{pn}{n-n_0+p}\,f(0)+\frac{pn}{n-n_0+p}\int_0^n f'(x)\left(1-\frac{x}{n}\right)^{\frac{n-n_0}{p}+1}dx$$
$$=\frac{pn}{n-n_0+p}\,f(0)+\frac{p^2n^2}{(n-n_0+p)(n-n_0+2p)}\,f'(0)+\cdots+\frac{p^kn^k}{(n-n_0+p)\cdots(n-n_0+kp)}\,f^{(k-1)}(0)$$
$$+\frac{p^kn^k}{(n-n_0+p)\cdots(n-n_0+kp)}\int_0^n f^{(k)}(x)\left(1-\frac{x}{n}\right)^{\frac{n-n_0}{p}+k}dx, \tag{40}$$

and

$$\int_0^{+\infty}f(x)\,e^{-\frac{x}{p}}\,dx=p\,f(0)+p^2f'(0)+\cdots+p^kf^{(k-1)}(0)+p^k\int_0^{+\infty}f^{(k)}(x)\,e^{-\frac{x}{p}}\,dx. \tag{41}$$

It is easy to see that we only need to prove

$$\lim_{n\to+\infty}\int_0^n f^{(k)}(x)\left(1-\frac{x}{n}\right)^{\frac{n-n_0}{p}+k}dx=\int_0^{+\infty}f^{(k)}(x)\,e^{-\frac{x}{p}}\,dx.$$
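The limit (39) can be checked numerically for a concrete integrand (our own choices: $f(x)=x^2$, $p=2$, $n_0=0$, and an arbitrary grid size). The right-hand side is $\int_0^\infty x^2 e^{-x/2}\,dx=16$:

```python
import numpy as np

# Left-hand side of (39) with f(x) = x^2, p = 2, n0 = 0, via a fine Riemann sum:
# int_0^n x^2 (1 - x/n)^(n/2) dx, which should approach 16 as n grows.
def lhs(n, p=2, n0=0, pts=2000001):
    x = np.linspace(0.0, float(n), pts)
    return float(np.sum(x ** 2 * (1 - x / n) ** ((n - n0) / p)) * (x[1] - x[0]))

for n in (10, 100, 10000):
    print(n, round(lhs(n), 3))    # -> approaches 16
```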