Bounds on the Entropy of a Function of a Random Variable and their Applications

12/21/2017 ∙ Ferdinando Cicalese et al. ∙ University of Verona, University of Salerno

It is well known that the entropy H(X) of a discrete random variable X is always greater than or equal to the entropy H(f(X)) of a function f of X, with equality if and only if f is one-to-one. In this paper, we give tight bounds on H(f(X)) when the function f is not one-to-one, and we illustrate a few scenarios where this matters. As an intermediate step towards our main result, we derive a lower bound on the entropy of a probability distribution, when only a bound on the ratio between the maximal and minimal probabilities is known. The lower bound improves on previous results in the literature, and it could find applications outside the present scenario.

I The Problem

Let 𝒳 = {x_1, ..., x_n} be a finite alphabet, and let X be any random variable (r.v.) taking values in 𝒳 according to the probability distribution p = (p_1, ..., p_n), that is, such that P(X = x_i) = p_i, for i = 1, ..., n. A well-known and widely used inequality states that

H(X) ≥ H(f(X)),     (1)

where f is any function defined on 𝒳, and H denotes the Shannon entropy. Moreover, equality holds in (1) if and only if the function f is one-to-one. The main purpose of this paper is to sharpen inequality (1) by deriving tight bounds on H(f(X)) when f is not one-to-one. More precisely, given the r.v. X, an integer m ≤ n, a set 𝒴 = {y_1, ..., y_m}, and the family F of surjective functions f : 𝒳 → 𝒴, we want to compute the values

min_{f ∈ F} H(f(X))   and   max_{f ∈ F} H(f(X)).     (2)

While computing the minimum in (2) is easy, the computation of the maximum turns out to be a challenging, but consequential, problem. It is also worth noticing that maximizing H(f(X)) over random (rather than deterministic) functions is trivial, since the value log m is always achievable.
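As a concrete illustration of the two quantities in (2) (this example and the function names in it are ours, not part of the paper's development), one can brute-force both extremes for a small alphabet; the exponential search over all surjections also foreshadows why the maximization is hard.

```python
from itertools import product
from math import log2

def entropy(dist):
    """Shannon entropy (in bits) of a sequence of probabilities."""
    return -sum(p * log2(p) for p in dist if p > 0)

def induced_distribution(p, f, m):
    """Distribution of f(X) when X ~ p and f maps index k to the label f[k] in range(m)."""
    q = [0.0] * m
    for k, pk in enumerate(p):
        q[f[k]] += pk
    return q

def extreme_entropies(p, m):
    """Brute-force min and max of H(f(X)) over all surjections from {0..n-1} onto {0..m-1}."""
    n = len(p)
    best_min, best_max = float("inf"), 0.0
    for f in product(range(m), repeat=n):
        if len(set(f)) < m:          # keep only surjective maps
            continue
        h = entropy(induced_distribution(p, f, m))
        best_min, best_max = min(best_min, h), max(best_max, h)
    return best_min, best_max

if __name__ == "__main__":
    p = [0.4, 0.3, 0.2, 0.1]          # toy distribution of X
    print(extreme_entropies(p, 2))    # min and max of H(f(X)) over surjections onto 2 labels
```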

II The Results

For any probability distribution p = (p_1, ..., p_n), with p_1 ≥ p_2 ≥ ... ≥ p_n > 0, and any integer m ≤ n, let us define the probability distribution p̃ = (p̃_1, ..., p̃_m) as follows: if m = n we set p̃ = p, whereas if m < n we set p̃ = (p_1, ..., p_{i-1}, b, ..., b), where

b = (p_i + p_{i+1} + ... + p_n) / (m - i + 1)     (3)

and i is the maximum index in {1, ..., m} such that p_{i-1} ≥ (p_i + ... + p_n)/(m - i + 1) (the condition is vacuously satisfied for i = 1). A somewhat similar operator was introduced in [19].

As suggested by one of the referees, the operator can also be explained in the following way. For a given r.v. X distributed over the alphabet 𝒳 according to p, the alphabet can be partitioned into the two sets A = {x_1, ..., x_{i-1}} and B = {x_i, ..., x_n}. Now, a r.v. Y taking m values can be defined that, conditioned on the event X ∈ A, is distributed just like X and, conditioned on the event X ∈ B, is uniformly distributed over a set of m - i + 1 new values. When the integer i is chosen to be the largest possible that ensures that the probability distribution of Y remains ordered, the probability distribution of Y is exactly p̃.
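A minimal sketch of this operator, based on the verbal description above rather than on a verbatim transcription of (3) (the function name and the feasibility loop are our choices):

```python
from math import isclose

def aggregate_operator(p, m):
    """Sketch of the operator in (3): p is assumed sorted in non-increasing order, m <= len(p).

    Keeps p[0..i-2] unchanged and replaces the tail with m - i + 1 equal values,
    where i is the largest cut index keeping the result ordered non-increasingly
    (i = 1 always works and yields the uniform distribution on m values).
    """
    n = len(p)
    assert m <= n and isclose(sum(p), 1.0)
    if m == n:
        return list(p)
    best = None
    for i in range(1, m + 1):                      # candidate cut indices (1-based)
        b = sum(p[i - 1:]) / (m - i + 1)           # uniform value on the tail
        if i == 1 or p[i - 2] >= b:                # ordering constraint p_{i-1} >= b
            best = p[:i - 1] + [b] * (m - i + 1)   # keep the largest feasible i
    return best

if __name__ == "__main__":
    p = [0.4, 0.32, 0.14, 0.14]
    print(aggregate_operator(p, 3))   # approximately [0.4, 0.32, 0.28]
```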


We also define the probability distribution q̂ = (q̂_1, ..., q̂_m) in the following way:

q̂ = (p_1 + p_2 + ... + p_{n-m+1}, p_{n-m+2}, ..., p_n).     (4)

The following theorem provides the quantities sought in (2).

Theorem 1.

For any r.v. X taking values in the alphabet 𝒳 = {x_1, ..., x_n} according to the probability distribution p = (p_1, ..., p_n), and for any m ≤ n, it holds that

H(p̃) - c  ≤  max_{f ∈ F} H(f(X))  ≤  H(p̃),     (5)

where p̃ is the probability distribution defined in (3) and c > 0 is an explicit absolute constant determined in Section IV, and

min_{f ∈ F} H(f(X)) = H(q̂),     (6)

where q̂ is the probability distribution defined in (4).

Here, with a slight abuse of notation, for a probability distribution q we denote by H(q) the entropy of a discrete r.v. distributed according to q. Moreover, by log we denote the logarithm to base 2, and by ln the natural logarithm to base e.

Therefore, according to Theorem 1, the function f ∈ F for which H(f(X)) is minimum maps the n - m + 1 elements of largest probability, x_1, ..., x_{n-m+1}, to a single element, and it is one-to-one on the remaining elements x_{n-m+2}, ..., x_n.

Before proving Theorem 1 and discussing its consequences, we would like to point out that there are quite compelling reasons why we are unable to determine the exact value of the maximum in (5) and, consequently, the form of the function that attains it. Indeed, computing the value max_{f ∈ F} H(f(X)) is an NP-hard problem. The difficulty is already apparent in the simple case m = 2. To that purpose, consider any surjective function f : 𝒳 → {y_1, y_2}, and let X be any r.v. taking values in 𝒳 according to the probability distribution p = (p_1, ..., p_n). Let S = ∑_{k : f(x_k) = y_1} p_k. Then

H(f(X)) = -S log S - (1 - S) log(1 - S),

and it is maximal in correspondence of a function f that makes the two sums S and 1 - S as equal as possible. This is equivalent to the well-known NP-hard problem Partition on the instance {p_1, ..., p_n} (see [17]). Actually, we can prove a stronger result. We first recall that a problem is said to be strongly NP-hard if it is NP-hard even when all of its numerical parameters are bounded by a polynomial in the length of the input [17]. More importantly, any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have a fully polynomial-time approximation scheme unless P = NP [17].
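A small sketch (ours) of the case m = 2: H(f(X)) equals the binary entropy of the probability mass S that f sends to one of the two labels, so maximizing it means finding a subset of the p_i with sum as close to 1/2 as possible, exactly the Partition flavor mentioned above.

```python
from itertools import combinations
from math import log2

def binary_entropy(s):
    """Binary entropy h(s) in bits."""
    return 0.0 if s in (0.0, 1.0) else -s * log2(s) - (1 - s) * log2(1 - s)

def best_binary_split(p):
    """Brute-force the proper, non-empty subset whose probability mass is closest to 1/2."""
    n = len(p)
    best_s, best_h = None, -1.0
    for r in range(1, n):                        # subset sizes 1 .. n-1 keep f surjective
        for subset in combinations(range(n), r):
            s = sum(p[k] for k in subset)
            h = binary_entropy(s)
            if h > best_h:
                best_s, best_h = s, h
    return best_s, best_h

if __name__ == "__main__":
    p = [0.35, 0.25, 0.2, 0.1, 0.1]
    print(best_binary_split(p))   # exponential-time search: the balanced-split problem is NP-hard
```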

Lemma 1.

The problem of computing max_{f ∈ F} H(f(X)) is strongly NP-hard.

Proof:

The following reduction from the well-known 3-Partition problem [42] proves the result. We recall that in the 3-Partition problem we are given a set A = {a_1, ..., a_{3m}} of 3m positive integers, with a_1 + ... + a_{3m} = mB, and such that each number satisfies B/4 < a_i < B/2. The question is to decide whether it is possible to partition A into m subsets A_1, ..., A_m, such that |A_j| = 3 and ∑_{a ∈ A_j} a = B, for each j = 1, ..., m. This problem is known to be strongly NP-complete (see [42], Theorem 7.2.4).

We will reduce 3-Partition to our problem. Let A = {a_1, ..., a_{3m}} be an instance of 3-Partition, and let m and B be as above. Let X be a random variable taking values in 𝒳 = {x_1, ..., x_{3m}}, distributed according to p = (p_1, ..., p_{3m}), where p_k = a_k/(mB), for k = 1, ..., 3m. Assume first that there exists a partition A_1, ..., A_m of the set A such that for each j we have |A_j| = 3 and ∑_{a ∈ A_j} a = B. Let f : 𝒳 → 𝒴 = {y_1, ..., y_m} be the function defined by stipulating that f(x_k) = y_j if and only if a_k ∈ A_j. It is easy to see that H(f(X)) = log m.

Conversely, let f be a function such that H(f(X)) = log m. This implies that the random variable f(X) is uniformly distributed, i.e., for each j = 1, ..., m we have that P(f(X) = y_j) = 1/m.

Let A_j = {a_k : f(x_k) = y_j}, for j = 1, ..., m. Since, by definition, we have P(f(X) = y_j) = ∑_{a ∈ A_j} a/(mB) = 1/m, we have that for each j it must hold ∑_{a ∈ A_j} a = B. Moreover, since B/4 < a_k < B/2 for each k, we have |A_j| = 3. Hence, A_1, ..., A_m is a partition of A into sets of size 3 with equal total sum. Therefore, we can map any instance of 3-Partition into an instance of our problem such that there exists a function f with H(f(X)) = log m if and only if the 3-Partition instance admits the desired partition. ∎
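For illustration only (the toy instance, names, and tolerances below are ours), the reduction can be run end to end: build p from a 3-Partition instance and check by exhaustive search whether some surjection onto m labels makes f(X) uniform.

```python
from itertools import product
from math import log2, isclose

def instance_to_distribution(a, m):
    """Map a 3-Partition instance a_1,...,a_{3m} (with sum m*B) to p_k = a_k / (m*B)."""
    total = sum(a)
    assert len(a) == 3 * m and total % m == 0
    return [x / total for x in a]

def attains_log_m(p, m):
    """Exhaustively test whether some surjection f makes f(X) uniform (entropy log m)."""
    n = len(p)
    for f in product(range(m), repeat=n):
        if len(set(f)) < m:
            continue
        sums = [0.0] * m
        for k, pk in enumerate(p):
            sums[f[k]] += pk
        if all(isclose(s, 1.0 / m, abs_tol=1e-12) for s in sums):
            return True
    return False

if __name__ == "__main__":
    a = [6, 7, 7, 6, 6, 8]          # m = 2, B = 20; {6,7,7} and {6,6,8} both sum to B
    m = 2
    p = instance_to_distribution(a, m)
    print(attains_log_m(p, m), log2(m))   # True, 1.0
```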


In this paper we will also show that a function f ∈ F attaining the lower bound in (5) can be efficiently determined; therefore, we also have the following important consequence of Theorem 1.

Corollary 1.

There is a polynomial time algorithm to approximate the NP-hard problem of computing the value

max_{f ∈ F} H(f(X)),

with an additive approximation factor not greater than the constant c of Theorem 1.

Under the plausible assumption that P ≠ NP, the strong NP-hardness result proved in Lemma 1 rules out the existence of polynomial time algorithms that, for every ε > 0, compute (in time polynomial in the input size and 1/ε) a function f ∈ F such that H(f(X)) ≥ (1 - ε) max_{g ∈ F} H(g(X)). Therefore, we find it quite interesting that the problem in question admits an approximation algorithm with the small additive error mentioned in Corollary 1, since only a handful of NP-hard optimization problems are known to enjoy this property. In Section IV-A we will also prove a lower bound on the value of the solution output by the polynomial time algorithm referred to in Corollary 1.

A key tool for the proof of Theorem 1 is the following result, proved in Section V.

Theorem 2.

Let p = (p_1, ..., p_n) be a probability distribution with p_1 ≥ p_2 ≥ ... ≥ p_n > 0. If p_1/p_n ≤ ρ, then

(7)

Theorem 2 improves on several papers (see [37] and references therein) that have studied the problem of estimating H(p) when only a bound on the ratio p_1/p_n between the maximal and minimal probabilities is known. Moreover, besides its application in the proof of the lower bound in (5), Theorem 2 has consequences of independent interest. In particular, in Section VI we will show how Theorem 2 allows us to provide a new upper bound on the compression rate of Tunstall codes for discrete memoryless and stationary sources. Our new bound improves the classical result of Jelinek and Schneider [21].

To prove our results, we use ideas and techniques from majorization theory [31], a mathematical framework that has proved to be very useful in information theory (e.g., see [3, 4, 19, 20] and references therein).

III Some Applications

Besides its inherent naturalness, the problem of estimating the entropy H(f(X)) has several interesting consequences. We highlight some of them here.

III-A Clustering

In the area of clustering [16], one seeks a mapping (either deterministic or stochastic) from some data, generated by a r.v. X taking values in a set 𝒳, to "labels" in some set 𝒴, where typically |𝒴| ≪ |𝒳|. Clusters are the subsets of 𝒳 whose elements are mapped to a same label y ∈ 𝒴. A widely employed measure to appraise the goodness of a clustering algorithm is the information that the clusters retain towards the original data, measured by the mutual information between X and its label (see [15, 23] and references therein). In general, one wants the number of labels to be small but the mutual information to be large. The authors of [18] (see also [25]) proved that, given the random variable X, among all mappings that maximize the mutual information (under the constraint that the cardinality |𝒴| is fixed) there is a maximizing mapping that is deterministic. This is essentially a consequence of the fact that the mutual information is convex in the conditional probabilities. Since in the case of a deterministic function f it holds that I(X; f(X)) = H(f(X)), it is obvious that finding the clustering of X (into a fixed number of clusters) that maximizes the mutual information is equivalent to our problem of finding the function f that appears in Corollary 1. (In the paper [25] the authors consider the problem of determining the function that maximizes I(X; f(Y)), where X is the r.v. at the input of a discrete memoryless channel and Y is the corresponding output. Our scenario can be seen as the particular case in which the channel is noiseless. However, the results in [25] do not imply ours, since the authors give algorithms only for binary-input channels, a case that makes the problem completely trivial in our setting. Instead, our results are relevant to those of [25]: for instance, we obtain that the general maximization problem considered in [25] is strongly NP-hard, a fact unnoticed in [25].) We also remark that the problem of determining the function that maximizes the mutual information (under the constraint that the cardinality |𝒴| is fixed) has also been posed in [12, 13]. Other work that considers the general problem of reducing the alphabet size of a random variable, while trying to preserve the information that it gives towards another random variable, is contained in [26, 22]. Our work also seems related to the well-known information bottleneck method [40], mainly in its "agglomerative" or "deterministic" versions, see [38, 39]. These connections will be explored elsewhere.

III-B Approximating probability distributions with low-dimensional ones

Another scenario where our results directly find applications is the one considered in [41]. There, the author considers the problem of best approximating a probability distribution p = (p_1, ..., p_n) with a lower-dimensional one q = (q_1, ..., q_m), m < n. The criterion with which one chooses q, given p, is the following. Given arbitrary distributions u and v, define the quantity D(u, v) = 2W(u, v) - H(u) - H(v), where W(u, v) is the minimum entropy of a bivariate probability distribution that has u and v as marginals. Equivalently (see [41, (9)]), W(u, v) = min H(U, V), where the minimization is with respect to all joint probability distributions of a pair of random variables (U, V) such that the random variable U is distributed according to u and the random variable V according to v. A joint probability distribution of (U, V) such that U is distributed according to a fixed u and V according to a fixed v is usually called a coupling of u and v. Couplings (with additional properties) play an important role in information theory questions, e.g., [35].

Having so defined the function D, the "best" approximation of p is chosen as the probability distribution q* with m components that minimizes D(p, q), where the minimization is performed over all probability distributions q with m components. The author of [41] motivates this choice, shows that the function D is a pseudo-distance among probability distributions, and proves that q* can be characterized in the following way. Given p = (p_1, ..., p_n), call q = (q_1, ..., q_m) an aggregation of p into m components if there is a partition of {1, ..., n} into disjoint sets I_1, ..., I_m such that q_j = ∑_{k ∈ I_j} p_k, for j = 1, ..., m. In [41] it is proved that the vector q* that best approximates p (according to D) is the aggregation of p into m components of maximum entropy. We notice that any aggregation of p can be seen as the distribution of the r.v. f(X), where f is some appropriate non-injective function and X is a r.v. distributed according to p (and, vice versa, any deterministic non-injective f gives a r.v. f(X) whose distribution is an aggregation of the distribution of the r.v. X). Therefore, from Lemma 1 one gets that the problem of computing the "best" approximation of p is strongly NP-hard. The author of [41] proposes greedy algorithms to compute sub-optimal solutions both to the problem of computing the aggregation of p into m components of maximum entropy and to the problem of computing the probability distribution with m components that minimizes D(p, ·). Notice, however, that no performance guarantee is given in [41] for the aforesaid greedy algorithms. In Section VII we will show how the bound (5) allows us to provide an approximation algorithm that constructs a probability distribution whose distance from the optimum is bounded by a small additive term, considerably improving on the result we presented in [7], where an approximation algorithm for the same problem with a larger additive error was provided.

III-C Additional relations

There are other problems that can be cast in our scenario. For instance, Baez et al. [1] give an axiomatic characterization of the Shannon entropy in terms of information loss. Stripping away the Category Theory language of [1], the information loss of a r.v. X amounts to the difference H(X) - H(f(X)), where f is any deterministic function. Our Theorem 1 allows one to quantify the extreme values of the information loss of a r.v., when the cardinality of the support of f(X) is known.

In the paper [9] the authors consider the problem of constructing the best summary tree of a given weighted tree, by means of contraction operations on trees. Two types of contractions are allowed: 1) a subtree may be contracted to a single node that represents it; 2) multiple sibling subtrees (i.e., subtrees whose roots are siblings) may be contracted to a single node representing them. Nodes obtained by contracting subtrees have weight equal to the sum of the node weights in the original contracted subtrees. Given a bound on the number of nodes in the resulting summary tree, the problem studied in [9] is to compute the summary tree of maximum entropy, where the entropy of a tree is the Shannon entropy of the normalized node weights. This is a particular case of our problem, in which the function f is not arbitrary but has to satisfy the constraints dictated by the allowed contraction operations on trees.

Another related paper is [14], where the authors consider a problem similar to ours, but now f is restricted to be a low-degree polynomial and p is the uniform distribution.

There is also a vast literature (see [30], Section 3.3, and references therein) studying the "leakage" of a program, defined in [30] as the (Shannon) entropy of the partition that the program induces on its inputs. One can easily see that this "leakage" is the same as the entropy H(f(X)), where X is the r.v. modeling the program input and f is the function describing the input-output relation of the program. In Section 8 of the same paper the authors study the problem of maximizing or minimizing the leakage, in the case where the program is stochastic, using standard techniques based on Lagrange multipliers. They do not consider the (harder) case of deterministic programs (i.e., deterministic f's), and our results are likely to be relevant in that context.

Our results are also related to Rota's entropy-partition theory [27, 28]. Given a ground set S and a partition π of S into classes C_1, ..., C_n, the entropy of π is defined as H(π) = -∑_i (|C_i|/|S|) log(|C_i|/|S|). Rota was interested in the decrease (resp., increase) of the entropy of a partition under the operation of coarsening (resp., refining), where two or more classes of the partition are fused into a single class (resp., a class is split into two or more new classes). One can see that the decrease of H(π) due to the coarsening operation, for example, can be quantified by computing the entropy H(f(X)), where X is a r.v. distributed according to (|C_1|/|S|, ..., |C_n|/|S|) and f is an appropriate function.
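A short sketch (ours) of this correspondence: the entropy of a partition depends only on its class sizes, and the entropy decrease caused by fusing two classes is exactly the information loss H(X) - H(f(X)) for the induced distribution and merging map.

```python
from math import log2

def partition_entropy(class_sizes):
    """Entropy of a partition of a ground set, computed from its class sizes."""
    total = sum(class_sizes)
    return -sum((c / total) * log2(c / total) for c in class_sizes if c > 0)

def coarsening_drop(class_sizes, i, j):
    """Entropy decrease when classes i and j are fused into a single class."""
    fused = [c for k, c in enumerate(class_sizes) if k not in (i, j)]
    fused.append(class_sizes[i] + class_sizes[j])
    return partition_entropy(class_sizes) - partition_entropy(fused)

if __name__ == "__main__":
    sizes = [4, 3, 2, 1]                 # a partition of a 10-element ground set
    print(partition_entropy(sizes))      # H(pi)
    print(coarsening_drop(sizes, 2, 3))  # information lost by fusing the two smallest classes
```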

Our problem can also be seen as the problem of quantizing the alphabet of a discrete source into a smaller one (e.g., [33]), with the goal of maximizing the mutual information between the original source and the quantized one. Our results also bear relations with those of [29], where the problem of aggregating data with minimal information loss is considered.

IV The Proof of Theorem 1

We first recall the important concept of majorization among probability distributions.

Definition 1.

[31] Given two probability distributions p = (p_1, ..., p_n) and q = (q_1, ..., q_n), with p_1 ≥ p_2 ≥ ... ≥ p_n ≥ 0 and q_1 ≥ q_2 ≥ ... ≥ q_n ≥ 0, we say that p is majorized by q, and write p ≼ q, if and only if p_1 + ... + p_j ≤ q_1 + ... + q_j for all j = 1, ..., n.

We will make extensive use of the Schur concavity of the entropy function (see [31], p. 101), which says that

p ≼ q  implies  H(p) ≥ H(q).     (8)

An important improvement of inequality (8) was proved in the paper [20], stating that

H(p) ≥ H(q) + D(q‖p)     (9)

whenever p and q have been ordered in non-increasing fashion and p ≼ q. Here D(q‖p) is the relative entropy between q and p. However, for most of our purposes the inequality (8) will be sufficient.

One can also extend the majorization relation to the set of all vectors of finite length by padding the shorter vector with zeros and applying Definition 1. This is customarily done in the literature (e.g., [34]). We also notice that this trick does not affect our results that use (8), since adding zeros to a probability distribution does not change its entropy.
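The following helper (ours; the names and the numerical tolerance are our choices) makes Definition 1 and the zero-padding convention executable, and the small test is consistent with the Schur concavity property (8).

```python
from math import log2

def entropy(dist):
    return -sum(x * log2(x) for x in dist if x > 0)

def majorized(p, q):
    """True iff p is majorized by q (Definition 1), after sorting and zero-padding."""
    n = max(len(p), len(q))
    p = sorted(p, reverse=True) + [0.0] * (n - len(p))
    q = sorted(q, reverse=True) + [0.0] * (n - len(q))
    sp = sq = 0.0
    for pk, qk in zip(p, q):
        sp, sq = sp + pk, sq + qk
        if sp > sq + 1e-12:
            return False
    return True

if __name__ == "__main__":
    p = [0.4, 0.3, 0.2, 0.1]
    q = [0.5, 0.5]                       # an aggregation of p, padded implicitly with zeros
    print(majorized(p, q))               # True
    print(entropy(p) >= entropy(q))      # True, consistent with Schur concavity (8)
```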


The idea of the proof of Theorem 1 is simple. We shall first prove that, for any function f ∈ F and for any r.v. X distributed according to p, the distribution p is majorized by the probability distribution of the random variable f(X). Successively, we will prove that the probability distribution p̃ defined in (3) is majorized by any probability distribution q with m components such that p ≼ q (in particular, by the probability distribution of the random variable f(X), with f ∈ F). These facts, together with the Schur concavity of the entropy function, will prove the upper bound in (5). We prove the lower bound in (5) by explicitly constructing a function f ∈ F such that H(f(X)) ≥ H(p̃) - c.

Without loss of generality, we assume that all the probability distributions we deal with have been ordered in non-increasing order. Since we will be working with functions of probability distributions that are invariant with respect to permutations of their components (namely, the Shannon entropy), this is not a restriction. We also use the majorization relationship between vectors of unequal lengths, by properly padding the shorter one with the appropriate number of 0's at the end. The well-known convention that 0 log 0 = 0 allows us to do that.

Consider an arbitrary function f ∈ F. Any r.v. X taking values in 𝒳 according to the probability distribution p, together with the function f, naturally induces a r.v. f(X) taking values in 𝒴 according to the probability distribution whose values are given by the expressions

P(f(X) = y_j) = ∑_{k : f(x_k) = y_j} p_k,  for j = 1, ..., m.     (10)

Let q = (q_1, ..., q_m) be the vector containing the values (10), ordered in non-increasing fashion. For convenience, we state the following self-evident fact about the relationship between p and q.

Claim 1.

There is a partition of {1, ..., n} into disjoint sets I_1, ..., I_m such that q_j = ∑_{k ∈ I_j} p_k, for j = 1, ..., m.

We will call such a q an aggregation of p. In the paper [10] the authors use the different terminology of lumping; our nomenclature is taken from [41]. Given a r.v. X distributed according to p, and any function f ∈ F, by simply applying the definition of majorization one can see that the (ordered) probability distribution of the r.v. f(X) is majorized by q̂, as defined in (4). Therefore, by invoking the Schur concavity of the entropy function, we get that H(f(X)) ≥ H(q̂). From this, the equality (6) immediately follows.
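To illustrate (10), (4) and the equality (6), under our reading of q̂ as the aggregation that merges the n - m + 1 largest probabilities, the following sketch (ours) builds the induced distribution of f(X) and the minimum-entropy aggregation.

```python
from math import log2

def entropy(dist):
    return -sum(x * log2(x) for x in dist if x > 0)

def distribution_of_f_of_X(p, f, m):
    """The aggregation (10): component j is the total probability of the preimage of label j."""
    q = [0.0] * m
    for k, pk in enumerate(p):
        q[f[k]] += pk
    return q

def min_entropy_aggregation(p, m):
    """Our reading of (4): merge the n - m + 1 largest probabilities into one component
    and keep the m - 1 smallest ones as singletons (p is assumed sorted non-increasingly)."""
    n = len(p)
    return [sum(p[: n - m + 1])] + p[n - m + 1:]

if __name__ == "__main__":
    p = [0.4, 0.3, 0.2, 0.1]
    q_hat = min_entropy_aggregation(p, 2)             # [0.9, 0.1]
    f = [0, 0, 0, 1]                                  # the map realizing this aggregation
    print(q_hat, distribution_of_f_of_X(p, f, 2))     # same values
    print(entropy(q_hat))                             # the value of the minimum in (2)
```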

Denote by P_m the simplex of all probability distributions with m components. We need the following preliminary results.

Lemma 2.

For any p ∈ P_n and any m ≤ n, it holds that

p ≼ p̃.     (11)
Proof:

According to Definition 1 (with p̃ padded with n - m zeros), we need to prove that

p_1 + ... + p_j ≤ p̃_1 + ... + p̃_j,  for j = 1, ..., n.     (12)

By the definition (3) of p̃, inequalities (12) hold (with equality) for each j ≤ i - 1. Moreover, by the definition of i as the largest index for which p_{i-1} ≥ (p_i + ... + p_n)/(m - i + 1) holds, one has (m - i) p_i < p_{i+1} + ... + p_n. Summing p_i to both sides of the previous inequality, one has

(m - i + 1) p_i < p_i + p_{i+1} + ... + p_n.

Therefore, p_i < b. Since p has its components ordered in non-increasing order, one has also p_j ≤ p_i < b for j = i, ..., n.

In conclusion, since we have proved that p_j ≤ p̃_j for all j = 1, ..., m, and we also know that p_1 + ... + p_n = p̃_1 + ... + p̃_m = 1, we get that (12) is proved. ∎

Lemma 3.

Let p ∈ P_n, m ≤ n, and let q ∈ P_m be any aggregation of p. Then p ≼ q.

Proof:

We prove by induction on j that p_1 + ... + p_j ≤ q_1 + ... + q_j.
Since q is an aggregation of p, one gets that there is a partition I_1, ..., I_m of {1, ..., n} such that q_t = ∑_{k ∈ I_t} p_k for each t = 1, ..., m. In particular, there exists a subset I_s such that 1 ∈ I_s. We then have q_1 ≥ q_s ≥ p_1. Suppose now that p_1 + ... + p_{j-1} ≤ q_1 + ... + q_{j-1}. If there exist indices k ≤ j and t > j such that k ∈ I_t, then q_j ≥ q_t ≥ p_k ≥ p_j, which implies q_1 + ... + q_j ≥ p_1 + ... + p_{j-1} + p_j. Otherwise, for each k ≤ j and t > j it holds that k ∉ I_t. Therefore, {1, ..., j} ⊆ I_1 ∪ ... ∪ I_j. This immediately implies that q_1 + ... + q_j ≥ p_1 + ... + p_j, from which we obviously get the desired inequality. ∎


In other words, for any r.v. X and function f ∈ F, the probability distribution of f(X) is an aggregation of the probability distribution of X. Therefore, the probability distribution of f(X) always majorizes that of X. As a first consequence, from (9) we get that

H(X) ≥ H(f(X)) + D(q‖p),     (13)

where p is the probability distribution of X and q is the probability distribution of f(X). This is an improvement of the inequality H(X) ≥ H(f(X)) that might be of independent interest.*** We highlight the inequality (13) in the following essentially equivalent formulation.

***To the best of our knowledge, the first appearance in print of this inequality, in the equivalent form that H(Y) ≤ H(X) whenever Y is a function of X, is in [36]. The first paper to present the explicit inequality H(f(X)) ≤ H(X) seems to be [2].

Corollary 2.

Let X and Y be arbitrary r.v.'s, distributed according to p and q, respectively. Then, if p ≼ q holds, one has that H(X) ≥ H(Y) + D(q‖p).

The next lemma proves that, among all probability distributions in P_m that majorize a given p ∈ P_n, the vector p̃ defined in (3) is "minimal" according to the majorization relation ≼.

Lemma 4.

For any p ∈ P_n and any q ∈ P_m such that p ≼ q, it holds that

p̃ ≼ q.     (14)
Proof:

Consider an arbitrary q ∈ P_m such that p ≼ q. By definition of p̃ and from (3), we have that p̃_1 + ... + p̃_j = p_1 + ... + p_j ≤ q_1 + ... + q_j, for j = 1, ..., i - 1. It remains to show that p̃_1 + ... + p̃_j ≤ q_1 + ... + q_j also for j = i, ..., m. Suppose (absurdum hypothesis) that this is not the case, and let j be the smallest integer such that p̃_1 + ... + p̃_j > q_1 + ... + q_j. Since p̃_1 + ... + p̃_{j-1} ≤ q_1 + ... + q_{j-1}, it follows that q_j < p̃_j = b. Therefore, since q_k ≤ q_j < b, for k = j, ..., m, we have

q_{j+1} + ... + q_m = 1 - (q_1 + ... + q_j) > 1 - (p̃_1 + ... + p̃_j) = p̃_{j+1} + ... + p̃_m = (m - j) b.

As a consequence, there exists k > j such that q_k > b > q_j, contradicting the fact that the components of q are ordered in non-increasing order. ∎


From Lemmas 3 and 4, and by applying the Schur concavity of the entropy function, we get the following result.

Corollary 3.

For any r.v. X taking values in 𝒳 according to a probability distribution p, and for any f ∈ F, it holds that

H(f(X)) ≤ H(p̃).     (15)

An equivalent way to state the above facts is that p̃ is the element of P_m that solves the following constrained maximum entropy problem:

p̃ = argmax { H(q) : q ∈ P_m, p ≼ q }.     (16)

The above results imply that

max_{f ∈ F} H(f(X)) ≤ H(p̃),     (17)

where p̃ has been defined in (3). Therefore, to complete the proof of Theorem 1 we only need to show that we can construct a function f ∈ F such that

H(f(X)) ≥ H(p̃) - c,     (18)

or, equivalently, that we can construct an aggregation of p into m components whose entropy is at least H(p̃) - c. We prove this fact in the following lemma.

Lemma 5.

For any p ∈ P_n and m ≤ n, we can construct an aggregation of p into m components whose entropy is at least H(p̃) - c, where c is the constant of Theorem 1.

Proof:

We will assemble the aggregation through the Huffman algorithm. We first make the following stipulation: for the purposes of this paper, each step of the Huffman algorithm consists in merging the two smallest elements a and b of the current probability distribution, deleting a and b and substituting them with the single element a + b, and reordering the new probability distribution from the largest element to the smallest (ties are arbitrarily broken). Immediately after the step in which a and b are merged, each element c in the new and reduced probability distribution that finds itself positioned at the "right" of a + b (if there is such a c) has a value that satisfies a + b ≤ 2c (since, by choice, a ≤ c and b ≤ c). Let q = (q_1, ..., q_m) be the ordered probability distribution obtained by executing exactly n - m steps of the Huffman algorithm, starting from the distribution p. Denote by s the maximum index such that for each j ≤ s the component q_j has not been produced by a merge operation of the Huffman algorithm. In other words, s is the maximum index such that for each j ≤ s it holds that q_j = p_j. Notice that we allow s to be equal to 0. Therefore q_{s+1} has been produced by a merge operation. At the step in which the value q_{s+1} was created, it holds that q_{s+1} ≤ 2c for any element c at the "right" of q_{s+1}. At later steps, the inequality still holds, since elements at the right of q_{s+1} could have only increased their values.
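A sketch (ours) of the merging procedure just described: starting from p, it performs exactly n - m Huffman-style merges of the two smallest current values, producing an m-component aggregation of p; the remainder of the proof bounds its entropy from below in terms of H(p̃).

```python
from math import log2

def entropy(dist):
    return -sum(x * log2(x) for x in dist if x > 0)

def huffman_aggregation(p, m):
    """Perform n - m merges of the two smallest elements, as in the proof of Lemma 5.

    Each step removes the two smallest values, inserts their sum, and re-sorts in
    non-increasing order (ties broken arbitrarily by the sort)."""
    q = sorted(p, reverse=True)
    while len(q) > m:
        a, b = q.pop(), q.pop()      # the two smallest current elements
        q.append(a + b)
        q.sort(reverse=True)
    return q

if __name__ == "__main__":
    p = [0.3, 0.2, 0.15, 0.15, 0.1, 0.1]
    q = huffman_aggregation(p, 3)
    print(q, entropy(q))   # an m-component aggregation; Lemma 5 lower-bounds its entropy via H(p̃)
```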

Let T be the sum of the last m - s (smallest) components of q. The vector (q_{s+1}/T, ..., q_m/T) is a probability distribution such that the ratio between its largest and its smallest component is upper bounded by 2. By Theorem 2, with ρ = 2, it follows that

(19)

where . Therefore, we have

(20)
(21)
(22)
(23)
(24)
(25)
(26)
(27)

We remark that inequality (26) holds since .

Let and observe that coincides with in the first components, as it does . What we have shown is that

(28)

We now observe that , where i is the index that intervenes in the definition of our operator (see (3)). In fact, by the definition of i one has , which also implies

(29)

Moreover, since the first components of are the same as in , we also have . This, together with relation (29), implies

(30)

Equation (30) clearly implies the desired inequality, since i is, by definition, the maximum index for which the condition in (3) holds. From the just proved inequality, we also have

(31)

Using (28), (31), and the Schur concavity of the entropy function, we get