# Guesswork Subject to a Total Entropy Budget

We consider an abstraction of computational security in password-protected systems where a user draws a secret string of a given length, with i.i.d. characters from a finite alphabet, and an adversary tries to identify the secret string by querying, or guessing, its value. The concept of a "total entropy budget" on the word chosen by the user is natural: without it, the chosen password could have arbitrary length and complexity. One intuitively expects that a password chosen from the uniform distribution is more secure. This is not the case, however, if we consider only the average guesswork of the adversary when the user is subject to a total entropy budget. The optimality of the uniform distribution for the user's secret string is recovered when we also impose a budget on the guessing adversary. We therefore suppose that the user is subject to a "total entropy budget" for choosing the secret string, whereas the computational capability of the adversary is determined by his "total guesswork budget." We study the regime where the adversary's chances of guessing the secret string, chosen subject to a total entropy budget, are exponentially small. We introduce a certain notion of uniformity and show that a more uniform source provides better protection against the adversary in terms of his chances of success in guessing the secret string. In contrast, the average number of queries that it takes the adversary to identify the secret string is smaller for the more uniform secret string subject to the same total entropy budget.


## I Introduction

We consider the problem of identifying the realization of a discrete random variable $X$ by repeatedly asking questions of the form: "Is $x$ the identity of $X$?" This problem has been extensively studied by cryptanalysts who try to identify a secret key by exhaustively trying out all possible keys, where it is usually assumed that the secret key is drawn uniformly at random. We consider an $n$-tuple $X^n$ drawn from an i.i.d. source on a finite alphabet $\mathcal{X}$, where $\theta$ represents the corresponding categorical distribution, which is not necessarily uniform. We measure security against a brute-force attacker who knows the source statistics completely, and who queries the candidate secret strings one by one until he is successful.

Denoting the number of guesses by $G_\theta(X^n)$, the optimal strategy of the attacker that minimizes the expected number of queries is to guess the possible realizations of $X^n$ in order of decreasing probability under $\theta$. Massey [1] proved that the Shannon entropy $H(X)$ is essentially a lower bound on the logarithm of the expected guesswork, yet there is no upper bound on the expected guesswork in terms of $H(X)$. Arıkan [2] proved that when we consider a string of growing length whose characters are drawn i.i.d., the positive moments of guesswork associated with the optimal strategy grow exponentially, and the exponents are related to the Rényi entropies of the single-letter distribution:

¹ In this paper, $\log$ denotes the natural logarithm.

$$\lim_{n\to\infty}\frac{1}{n}\log\mathbb{E}_\theta\!\left[\big(G_\theta(X^n)\big)^\rho\right]=\rho\,H_{1/(1+\rho)}(X),\qquad(1)$$

where the Rényi entropy of order $\rho$ is

$$H_\rho(X)=\frac{1}{1-\rho}\log\Big(\sum_{x\in\mathcal{X}}P(X=x)^\rho\Big).\qquad(2)$$

Note that $\rho\to1$ recovers the Shannon entropy. We also use the notations $H_\rho(X)$ and $H_\rho(\theta)$ interchangeably to refer to the Rényi entropy of a string drawn from a source with parameter vector $\theta$. Although these connections have been extended to more general stochastic processes [3, 4], in this paper we focus on i.i.d. processes for the sake of clarity of presentation.

Christiansen and Duffy [5] showed that the sequence $\{\frac{1}{n}\log G_\theta(X^n)\}$ satisfies a Large Deviations Principle (LDP) and characterized its rate function, $\Lambda^*_\theta(\cdot)$. Beirami et al. [6, 7] showed that $\Lambda^*_\theta$ can be expressed as a parametric function of the value of a "tilt" in a family of tilted distributions.

We remark that when the metric of difficulty is the growth rate in the expected number of guesses as a function of string length, the challenge for the adversary remains the same even if the adversary does not know the source statistics [8, 9].

In this paper, we first show the counterintuitive result that the average guesswork increases when the source becomes "less uniform," provided the user is subject to a total entropy budget on the secret string. Next, we introduce a natural notion of a total guesswork budget on the attacker and show that the probability of success of an adversary subject to a total guesswork budget increases when the source becomes "less uniform," which is consistent with our intuition of choosing uniform passwords. We formalize these notions in the rest of this paper.

## II Problem Setup

Given a finite alphabet $\mathcal{X}$, a memoryless (i.i.d.) source on $\mathcal{X}$ is defined by the set of probabilities $\theta_i=P(X=i)$ for all $i\in[|\mathcal{X}|]$, where $\theta_i>0$ and $\sum_i\theta_i=1$. Hence, $\theta$ is an element of the $(|\mathcal{X}|-1)$-dimensional probability simplex. We define $\mathring{\mathcal{P}}(|\mathcal{X}|)$ as the open set of all probability vectors $\theta$ such that $\theta_i>0$ for all $i$, which also excludes the uniform source.

The tilt operation plays a central role in the analysis, and is the basis for many of our derivations:

###### Definition 1 (tilted θ of order α [6]).

For any $\alpha\in\mathbb{R}_{>0}$, define $\tau(\theta,\alpha)$ as the "tilted $\theta$ of order $\alpha$," where $\tau_i(\theta,\alpha)$ for all $i\in[|\mathcal{X}|]$ is given by

$$\tau_i(\theta,\alpha):=\frac{\theta_i^\alpha}{\sum_{j=1}^{|\mathcal{X}|}\theta_j^\alpha}.\qquad(3)$$

###### Definition 2 (tilted family of θ).

Let $\Gamma^+_\theta$ denote the "tilted family of $\theta$," given by

$$\Gamma^+_\theta:=\{\tau(\theta,\alpha):\alpha\in\mathbb{R}_{>0}\}.\qquad(4)$$

Observe that $\Gamma^+_\theta$ is a continuum of stochastic vectors in the probability simplex. Thus, the tilted family of a memoryless string-source with parameter vector $\theta$ is comprised of the set of memoryless string-sources whose parameter vectors belong to the tilted family of the vector $\theta$, i.e., $\Gamma^+_\theta$.
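For concreteness, the tilt operation of Definition 1 can be implemented in a few lines (a sketch of ours; the paper itself contains no code):

```python
def tilt(theta, alpha):
    """Tilted theta of order alpha: tau_i proportional to theta_i**alpha (Definition 1)."""
    weights = [p ** alpha for p in theta]
    z = sum(weights)  # normalizing constant
    return [w / z for w in weights]

theta = [0.5, 0.3, 0.2]
print(tilt(theta, 1.0))    # alpha = 1 recovers theta itself
print(tilt(theta, 0.01))   # alpha -> 0 approaches the uniform distribution
print(tilt(theta, 50.0))   # large alpha concentrates on the most likely character
```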

###### Definition 3 (high-entropy/low-entropy members of tilted family of θ).

Let $\overline{\Gamma}^+_\theta$ and $\underline{\Gamma}^+_\theta$ denote the sets of high-entropy and low-entropy members of the tilted family of $\theta$, respectively, given by:

$$\overline{\Gamma}^+_\theta=\{\tau(\theta,\alpha)\}_{0\le\alpha<1},\qquad\underline{\Gamma}^+_\theta=\{\tau(\theta,\alpha)\}_{\alpha>1}.\qquad(5)$$

Hence, $\Gamma^+_\theta=\overline{\Gamma}^+_\theta\cup\{\theta\}\cup\underline{\Gamma}^+_\theta$.

Figure 1 depicts the probability simplex of all possible ternary parameter vectors $\theta$. The yellow star represents the distribution $\theta$. Note that the tilted family of $\theta$ is parametrized by $\alpha$. At $\alpha=0$, we get the uniform distribution, and as $\alpha\to\infty$, we approach the degenerate distribution concentrated on the most likely character. The high-entropy and low-entropy members of the tilted family of $\theta$ are represented by blue and red, respectively. Note that all distributions in the high-entropy set, $\overline{\Gamma}^+_\theta$, have Shannon entropies higher than that of $\theta$ and are closer to the uniform distribution in the KL divergence sense [7]. Hence, the higher-entropy members of the tilted family are "more uniform" than the lower-entropy members of the tilted family.

###### Definition 4 (entropy budget per source character).

Let $h$ denote the entropy budget per source character, such that the user is required to choose a secret string from an i.i.d. process with parameter vector $\theta$ with $H(\theta)=h$.

The concept of a total entropy budget on the entire secret string is a natural one, as otherwise the user could choose an arbitrarily complex secret string. We use the entropy budget per source character defined above to ensure that the user is subject to the same total entropy budget: by adjusting the length of the secret string, string sources with different entropy rates can be compared fairly.

## III Positive Moments of Guesswork

We first consider choosing strings with the same total (Shannon) entropy budget and measure security in terms of the positive moments of guesswork. If two sources have different entropy rates, we adjust the comparison by drawing a longer string from the lower-entropy source. Formally, let us consider two sources with parameter vectors $\theta_1$ and $\theta_2$ on alphabet $\mathcal{X}$. Further, let $H(\theta_1)$ and $H(\theta_2)$ be the entropy rates of the two sources. Let the entropy ratio be

$$\eta:=\frac{H(\theta_2)}{H(\theta_1)}.\qquad(6)$$

Without loss of generality, throughout this paper we assume that $H(\theta_2)\le H(\theta_1)$, and hence $\eta\le1$. The user is given the option to choose a secret string from either of the two sources. For a fair comparison, we assume that the total entropy of the two strings is the same, $n_1H(\theta_1)=n_2H(\theta_2)$. That is,

$$n_2=\frac{1}{\eta}\,n_1.\qquad(7)$$

To compare the growth rates of the positive moments of guesswork, in light of (1), we compare $H_{1/(1+\rho)}(\theta_1)$ and $\frac{1}{\eta}H_{1/(1+\rho)}(\theta_2)$. This in turn imposes the same total entropy budget on the strings drawn from the sources with parameter vectors $\theta_1$ and $\theta_2$.

For a parameter vector $\theta$, let an information random variable be defined as one that takes the value $\log\frac{1}{\theta_i}$ with probability $\theta_i$ for all $i\in[|\mathcal{X}|]$. We need one more definition before we can state the result of this section:

###### Definition 5 (skewentropy condition (SEC)).

A source with parameter vector $\theta$ is said to satisfy the skewentropy condition (SEC) if

$$V(\theta)^2+2H(\theta)V(\theta)-H(\theta)S(\theta)>0,\qquad(8)$$

where $V(\theta)$ is the varentropy, defined as the variance of an information random variable corresponding to $\theta$:

$$V(\theta):=\sum_{i\in[|\mathcal{X}|]}\theta_i\Big(\log\frac{1}{\theta_i}-H(\theta)\Big)^2,\qquad(9)$$

and $S(\theta)$ is the skewentropy, which is the skewness of an information random variable corresponding to $\theta$:

$$S(\theta):=\sum_{i\in[|\mathcal{X}|]}\theta_i\Big(\log\frac{1}{\theta_i}-H(\theta)\Big)^3.\qquad(10)$$

Note that varentropy has been studied extensively and arises naturally in finite-blocklength information theory [10, 11], and more recently in the study of polar codes [12]. To the best of our knowledge, skewentropy has not been studied before, and we provide some properties of the SEC in Section V.
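The quantities in Definition 5 are easy to compute numerically. The snippet below (our illustration; the specific test distributions are our own choices, not from the paper) evaluates the varentropy, the skewentropy, and the SEC:

```python
import math

def H(theta):
    """Shannon entropy in nats."""
    return sum(p * math.log(1 / p) for p in theta)

def V(theta):
    """Varentropy: variance of the information random variable, eq. (9)."""
    h = H(theta)
    return sum(p * (math.log(1 / p) - h) ** 2 for p in theta)

def S(theta):
    """Skewentropy: third central moment of the information random variable, eq. (10)."""
    h = H(theta)
    return sum(p * (math.log(1 / p) - h) ** 3 for p in theta)

def satisfies_sec(theta):
    """Skewentropy condition, eq. (8): V^2 + 2HV - HS > 0."""
    return V(theta) ** 2 + 2 * H(theta) * V(theta) - H(theta) * S(theta) > 0

print(satisfies_sec([0.7, 0.3]))          # a binary source: SEC holds
print(satisfies_sec([0.49, 0.49, 0.02]))  # near-degenerate third character: SEC fails
```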

Equipped with this definition, we provide an ordering of the sources that belong to the same tilted family.

###### Theorem 1.

Let $\theta_1\in\mathring{\mathcal{P}}(|\mathcal{X}|)$. For any $\theta_2\in\underline{\Gamma}^+_{\theta_1}$,

$$H_{1/(1+\rho)}(\theta_1)<\frac{1}{\eta}H_{1/(1+\rho)}(\theta_2)\quad\forall\rho>0,\qquad(11)$$

if and only if $\theta_1$ satisfies the SEC in Definition 5. Note that $\eta$ is the entropy ratio defined in (6).

The proof is provided in the appendix. Theorem 1 provides a natural ordering of sources that belong to the same tilted family. The "less uniform," low per-character-entropy members of the tilted family take exponentially more queries, on average, to breach than their more uniform, higher per-character-entropy counterparts.
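The ordering in (11) is straightforward to probe numerically. In the sketch below (our own check; the particular $\theta_1$ is an arbitrary choice that satisfies the SEC), $\theta_2$ is a low-entropy member of the tilted family of $\theta_1$:

```python
import math

def tilt(theta, alpha):
    weights = [p ** alpha for p in theta]
    z = sum(weights)
    return [w / z for w in weights]

def shannon(theta):
    return sum(p * math.log(1 / p) for p in theta)

def renyi(theta, rho):
    return math.log(sum(p ** rho for p in theta)) / (1 - rho)

theta1 = [0.5, 0.3, 0.2]        # an interior source satisfying the SEC
theta2 = tilt(theta1, 2.0)      # a low-entropy member of its tilted family
eta = shannon(theta2) / shannon(theta1)

# Theorem 1 predicts H_{1/(1+rho)}(theta1) < (1/eta) H_{1/(1+rho)}(theta2).
for rho in (0.5, 1.0, 2.0):
    lhs = renyi(theta1, 1 / (1 + rho))
    rhs = renyi(theta2, 1 / (1 + rho)) / eta
    print(rho, round(lhs, 4), "<", round(rhs, 4), lhs < rhs)
```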

###### Corollary 2.

Let $u_{|\mathcal{X}|}$ denote the uniform source. Then for any $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$ and any $\rho>0$,

$$\log|\mathcal{X}|=H_{1/(1+\rho)}(u_{|\mathcal{X}|})<\frac{1}{\eta}H_{1/(1+\rho)}(\theta),$$

where $\eta=H(\theta)/\log|\mathcal{X}|$.

Corollary 2 suggests that, of all sources whose parameter vectors are in the (interior of the) probability simplex, the uniform source is the easiest to breach in terms of the positive moments of guesswork when the user is subject to a total entropy budget. This is in contrast to our intuition that more uniformity provides better security.

## IV Probability of Success subject to a Guesswork Budget

In this section, we put forth a natural notion of total guesswork budget, leading to a security metric consistent with our intuition. Similar to the case of an entropy budget, we need to define guesswork budget per source character for our analysis.

###### Definition 6 (guesswork budget per source character).

Let $g$ denote the guesswork budget per source character, such that $e^{gn}$ is the total number of queries that the inquisitor can make in order to identify a secret string of length $n$.

Note that by this definition, the inquisitor is supposed to possess the resources for querying an exponentially growing number of strings (with the sequence length). In particular, $g=\log|\mathcal{X}|$ corresponds to an adversary who is capable of querying all of the $|\mathcal{X}|^n$ possible outcomes of the source to successfully identify the secret string with probability $1$.

###### Lemma 1.

If $g<H(\theta)$, then

$$\lim_{n\to\infty}P_\theta\big[G_\theta(X^n)\le e^{gn}\big]=0,$$

and if $g>H(\theta)$, then

$$\lim_{n\to\infty}P_\theta\big[G_\theta(X^n)\le e^{gn}\big]=1.$$

Recall that Arıkan [2] showed that the growth rate of the moments of guesswork is governed by atypical sequences resulting in the appearance of the Rényi entropies in the expression. On the other hand, Lemma 1 states that the cutoff for the adversary to be successful with high probability is still governed by the Shannon entropy (as intuitively expected).

In the regime where $g<H(\theta)$, we would like to study the behavior of the probability of correct guessing. The next lemma relates the exponent of an exponentially large number of possible guesses to the LDP rate function.

###### Lemma 2.

If $g<H(\theta)$, then

$$\lim_{n\to\infty}\frac{1}{n}\log\frac{1}{P_\theta\big[G_\theta(X^n)\le e^{gn}\big]}=\Lambda^*_\theta(g).\qquad(12)$$

Hence, $P_\theta[G_\theta(X^n)\le e^{gn}]\approx e^{-n\Lambda^*_\theta(g)}$, and a larger $\Lambda^*_\theta(g)$ directly implies a more secure source against a brute-force attacker who is subject to a guesswork budget, for a fixed $g$. We use the above rate function as the metric for comparing two string-sources given a total guesswork budget, naturally defined as $gn$.

Using the notion of the tilt, we can represent the rate function as a parametric function of the tilt parameter for a family of tilted distributions. The rate function, $\Lambda^*_\theta(g)$, associated with $\theta$ can be directly computed as [7]:

$$\Lambda^*_\theta(g)=D\big(\tau(\theta,\alpha(g))\,\|\,\theta\big),\qquad(13)$$

for $\alpha(g)$ chosen such that $H(\tau(\theta,\alpha(g)))=g$. This characterization plays a central role in our derivations.
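The parametric KL characterization can be cross-checked against a brute-force Legendre–Fenchel transform of the guesswork cumulant $\rho\,H_{1/(1+\rho)}(\theta)$. The sketch below is our own numerical check (the bisection bounds and grid are arbitrary choices):

```python
import math

def tilt(theta, alpha):
    weights = [p ** alpha for p in theta]
    z = sum(weights)
    return [w / z for w in weights]

def shannon(q):
    return sum(p * math.log(1 / p) for p in q)

def kl(q, p):
    return sum(a * math.log(a / b) for a, b in zip(q, p))

def rate_parametric(theta, g):
    """Lambda*(g) = D(tau(theta, alpha(g)) || theta) with H(tau(theta, alpha(g))) = g."""
    lo, hi = 1.0, 200.0   # alpha > 1 corresponds to g below the entropy rate
    for _ in range(100):  # bisection: H(tilt(theta, alpha)) decreases in alpha
        mid = (lo + hi) / 2
        if shannon(tilt(theta, mid)) > g:
            lo = mid
        else:
            hi = mid
    return kl(tilt(theta, lo), theta)

def rate_legendre(theta, g):
    """sup_rho [rho*g - rho*H_{1/(1+rho)}(theta)] over a grid of rho in (-0.9, 5)."""
    def cumulant(rho):
        beta = 1 / (1 + rho)
        return rho * math.log(sum(p ** beta for p in theta)) / (1 - beta)
    return max(r / 1000 * g - cumulant(r / 1000)
               for r in range(-900, 5000) if r != 0)

theta = [0.6, 0.3, 0.1]
g = 0.6  # below H(theta), which is about 0.898
print(round(rate_parametric(theta, g), 4), round(rate_legendre(theta, g), 4))
```

The two routes agree to within the grid resolution, which is consistent with (13).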

Recall that we adjust the string lengths in order to make sure that the secret string chosen by the user is subject to a given total entropy budget. As the idea of the total guesswork budget is that the adversary can make a fixed number of queries regardless of the source from which the user is choosing the password, we compare the sources in terms of the probability of success subject to an adjusted guesswork budget per source character (see (12)). To keep the total guesswork budget of the adversary the same, i.e., $g_1n_1=g_2n_2$, we must adjust the guesswork budget per source character as follows:

$$g_2=\eta\,g_1.\qquad(14)$$

In light of (14), we compare $\Lambda^*_{\theta_1}(g_1)$ with $\frac{1}{\eta}\Lambda^*_{\theta_2}(g_2)$ for sources with parameter vectors $\theta_1$ and $\theta_2$.

We are now ready to provide our results on the adversary’s probability of success.

###### Theorem 3.

Let $\theta_1\in\mathring{\mathcal{P}}(|\mathcal{X}|)$. For any $\theta_2\in\underline{\Gamma}^+_{\theta_1}$,

$$\Lambda^*_{\theta_1}(g_1)>\frac{1}{\eta}\Lambda^*_{\theta_2}(g_2),\quad\forall g_1<H(\theta_1),\qquad(15)$$

if and only if $\theta_1$ satisfies the SEC (see Definition 5).

We remark that the same SEC turns out to be the crucial quantity for the statement of Theorem 3 to hold. This theorem implies the following: suppose the user is subject to a total entropy budget and the adversary is subject to a total guesswork budget, i.e., he can only submit $e^{gn}$ queries to identify a secret string of length $n$, for some $g<H(\theta)$. Then, so long as the source satisfies the SEC, the chances of correctly identifying the random string produced by a "more uniform," high per-character-entropy member of the tilted family are exponentially smaller than those for the less uniform, low per-character-entropy source belonging to the same tilted family. In particular, the uniform source is the most secure against such an adversary subject to a guesswork budget:

###### Corollary 4.

Let $u_{|\mathcal{X}|}$ denote the uniform source. Then, for any $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$ and $g<\log|\mathcal{X}|$, we have

$$\log|\mathcal{X}|-g=\Lambda^*_{u_{|\mathcal{X}|}}(g)>\frac{1}{\eta}\Lambda^*_\theta(\eta g),\qquad(16)$$

where $\eta=H(\theta)/\log|\mathcal{X}|$.

We remark that these security guarantees are against an adversary who is not powerful enough to explore the entire typical set, rendering his chances of success exponentially small. The "more uniform" sources provide an exponentially smaller chance for such an adversary to be successful.

We emphasize that the implications of Theorems 1 and 3 are in stark contrast to each other. On the one hand, more uniformity results in an exponential decrease in the number of queries expected of an adversary to correctly identify a secret string when the user is subject to a total entropy budget (Theorem 1). On the other hand, more uniformity decreases the chances of an adversary identifying the secret string when the adversary's power is limited by a total guesswork budget as well (Theorem 3).

## V Properties of the SEC

Since the SEC introduced in Definition 5 is a new concept, we study this condition in more detail in this section. Let us start with binary memoryless sources.

###### Lemma 3.

Let $|\mathcal{X}|=2$. Further, let $\theta=(\phi,1-\phi)$. Then,

$$H(\theta)=\phi\log\Big(\frac{1}{\phi}\Big)+(1-\phi)\log\Big(\frac{1}{1-\phi}\Big),\qquad(17)$$
$$V(\theta)=\phi(1-\phi)\log^2\Big(\frac{1-\phi}{\phi}\Big),\qquad(18)$$
$$S(\theta)=\phi(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big).\qquad(19)$$

The next theorem is our main result for binary memoryless sources:

###### Theorem 5.

Any binary source $\theta\in\mathring{\mathcal{P}}(2)$ satisfies the SEC.

While Theorem 5 shows that all binary memoryless sources satisfy the SEC, the same argument does not extend to larger alphabets.
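Theorem 5 and Lemma 3 are easy to check numerically, as in the following sketch (our own verification script; the grid resolution is an arbitrary choice):

```python
import math

def sec_lhs(theta):
    """Left-hand side of the SEC, eq. (8): V^2 + 2HV - HS."""
    h = sum(p * math.log(1 / p) for p in theta)
    v = sum(p * (math.log(1 / p) - h) ** 2 for p in theta)
    s = sum(p * (math.log(1 / p) - h) ** 3 for p in theta)
    return v * v + 2 * h * v - h * s

# Every interior binary source on a fine grid satisfies the SEC (Theorem 5);
# the exact uniform point phi = 1/2 is excluded from the interior set.
print(all(sec_lhs([f / 1000, 1 - f / 1000]) > 0 for f in range(1, 1000) if f != 500))

# The closed forms of Lemma 3 agree with the direct moments, e.g. at phi = 0.2.
phi = 0.2
h = phi * math.log(1 / phi) + (1 - phi) * math.log(1 / (1 - phi))
v = phi * (1 - phi) * math.log((1 - phi) / phi) ** 2
s = phi * (1 - phi) * (1 - 2 * phi) * math.log((1 - phi) / phi) ** 3
print(abs((v * v + 2 * h * v - h * s) - sec_lhs([phi, 1 - phi])) < 1e-12)
```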

###### Theorem 6.

For any $|\mathcal{X}|\ge3$, there exists $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$ such that $\theta$ does not satisfy the SEC.

Despite the negative result in Theorem 6, we show that sources that are approximately uniform satisfy the SEC for any alphabet size. Here is the key result for such sources:

###### Theorem 7.

Suppose that $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$ is such that

$$\Big|\log\frac{1}{\theta_i}-H(\theta)\Big|<2,\quad\forall i\in[|\mathcal{X}|].\qquad(20)$$

Then $\theta$ satisfies the SEC.

As a corollary, we state the condition more explicitly in terms of the $\theta_i$'s.

###### Corollary 8.

Suppose that $\theta$ is such that

$$\frac{e^{-1}}{|\mathcal{X}|}<\theta_i<\frac{e}{|\mathcal{X}|},\quad\forall i\in[|\mathcal{X}|].\qquad(21)$$

Then, $\theta$ satisfies the SEC.

Figure 2 depicts the set of ternary distributions that do not satisfy the SEC. As can be seen, sources close to uniform satisfy the SEC, while sources that are nearly uniform on two of the three characters but assign almost no mass to the third character do not.

## VI Numerical Experiments

In this section, we provide some numerical experiments. We compare several binary sources, where $\theta=(\phi,1-\phi)$ is the source parameter vector. The parameter vectors used for the experiments are listed in Table I. The length $n$ and the parameter vector are chosen such that the total entropy $nH(\theta)$ is the same number of nats for all of the pairs. Although the theorems proved in this paper are asymptotic in nature, we have chosen to run experiments on finite-length sequences to emphasize the applicability of the results even at very short lengths. As can be seen in Fig. 3, as the entropy rate of the source decreases, the moments of guesswork increase exponentially subject to the same entropy budget. On the other hand, as shown in Fig. 4, as the entropy rate of the source decreases, the chances of an adversary subject to a fixed total guesswork budget increase, which is consistent with our intuition.

## VII Conclusion

In this paper, we studied guesswork subject to a total entropy budget. We showed that the conclusions about security deduced from the analysis of the average guesswork could be counter-intuitive in that they suggest that the uniform source is not the strongest source against brute-force attacks. To remedy the problem, we introduced the concept of total guesswork budget, and showed that if the adversary is subject to a total guesswork budget, the uniform source provides the strongest security guarantees against the brute-force attacker, which is consistent with our intuition.

## Appendix (Proofs)

###### Proof:

This is equivalent to showing that

$$\frac{H_{1/(1+\rho)}(\theta_2)}{H(\theta_2)}>\frac{H_{1/(1+\rho)}(\theta_1)}{H(\theta_1)}\qquad(22)$$

for all $\rho>0$. Let $\beta=1/(1+\rho)$, and hence $\beta<1$. The statement above is in turn equivalent to showing:

$$\frac{\partial}{\partial\alpha}\left[\frac{H_\beta(\tau(\theta_1,\alpha))}{H(\tau(\theta_1,\alpha))}\right]_{\alpha=1}>0,\quad\forall\beta<1.\qquad(23)$$

It is straightforward to show that (23) is equivalent to

$$\frac{\frac{\partial}{\partial\alpha}\big[H_\beta(\tau(\theta_1,\alpha))\big]_{\alpha=1}}{H_\beta(\theta_1)}>\frac{\frac{\partial}{\partial\alpha}\big[H(\tau(\theta_1,\alpha))\big]_{\alpha=1}}{H(\theta_1)},\quad\forall\beta<1.\qquad(24)$$

Finally, we prove the following statement that is equivalent to (24):

 (25)

This is equivalent to showing:

$$\frac{1}{H(\theta_1)}\,\frac{\partial^2}{\partial\alpha\,\partial\beta}\Big[H_\beta(\tau(\theta_1,\alpha))\Big]_{\alpha=\beta=1}<\frac{1}{H(\theta_1)^2}\,\frac{\partial}{\partial\beta}\Big[H_\beta(\theta_1)\Big]_{\beta=1}\frac{\partial}{\partial\alpha}\Big[H(\tau(\theta_1,\alpha))\Big]_{\alpha=1}.\qquad(26)$$

The above statement is shown to hold if and only if $\theta_1$ satisfies the SEC (Definition 5) by invoking Lemmas 4, 5, and 6, which completes the proof of the theorem. ∎

###### Lemma 4.

For all $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$, we have

$$\frac{\partial}{\partial\alpha}\Big[H(\tau(\theta,\alpha))\Big]_{\alpha=1}=-V(\theta).\qquad(27)$$

See [7] for the proof.

###### Lemma 5.

For all $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$, we have

$$\frac{\partial}{\partial\beta}\Big[H_\beta(\theta)\Big]_{\beta=1}=-\frac{1}{2}V(\theta).\qquad(28)$$

See [7] for the proof.

###### Lemma 6.

For all $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$, we have

$$\frac{\partial^2}{\partial\alpha\,\partial\beta}\Big[H_\beta(\tau(\theta,\alpha))\Big]_{\alpha=\beta=1}=-V(\theta)+\frac{1}{2}S(\theta).\qquad(29)$$
###### Proof:

It is proved in [7] that

$$\frac{\partial}{\partial\alpha}\Big[H_\beta(\tau(\theta,\alpha))\Big]_{\alpha=1}=\frac{\beta}{1-\beta}\Big(H(\theta)-H(\tau(\theta,\beta)\|\theta)\Big).\qquad(30)$$

Hence, we differentiate with respect to $\beta$ to get:

$$\frac{\partial^2}{\partial\alpha\,\partial\beta}\Big[H_\beta(\tau(\theta,\alpha))\Big]_{\alpha=1}=\frac{1}{(1-\beta)^2}\Big(H(\theta)-H(\tau(\theta,\beta)\|\theta)\Big)+\frac{\beta}{1-\beta}V(\tau(\theta,\beta)\|\theta).$$

Next, we take the limit as $\beta\to1$, and by applying L'Hôpital's rule we arrive at:

$$\frac{\partial^2}{\partial\alpha\,\partial\beta}\Big[H_\beta(\tau(\theta,\alpha))\Big]_{\alpha=\beta=1}=-V(\theta)-\frac{1}{2}\frac{\partial}{\partial\beta}\Big[V(\tau(\theta,\beta)\|\theta)\Big]_{\beta=1}.\qquad(31)$$

Finally, the proof is completed by invoking Lemma 7. ∎

###### Lemma 7.

For any $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$,

$$\frac{\partial}{\partial\alpha}\Big[V(\tau(\theta,\alpha)\|\theta)\Big]_{\alpha=1}=-S(\theta),$$

where $S(\theta)$ is defined in (10).

###### Proof:

By definition,

$$\frac{\partial}{\partial\alpha}V(\tau(\theta,\alpha)\|\theta)\Big|_{\alpha=1}=\sum_{i\in[|\mathcal{X}|]}\frac{\partial\tau_i(\theta,\alpha)}{\partial\alpha}\Big|_{\alpha=1}\Big(H(\tau(\theta,\alpha)\|\theta)-\log\frac{1}{\theta_i}\Big)^2=\sum_{i\in[|\mathcal{X}|]}\theta_i\Big(H(\theta)-\log\frac{1}{\theta_i}\Big)^3\qquad(32)$$
$$=-S(\theta),\qquad(33)$$

where (32) follows by invoking Lemma 8 of [7]. ∎

###### Proof:

Let us recall that $\theta_2=\tau(\theta_1,\alpha_0)$ for some $\alpha_0>1$. We can find $t_1$ and $t_2$ in the domain of each rate function such that the derivatives of the rate functions are both equal to a constant $\rho$. It follows from [2] that:

$$t_1=\arg_t\Big\{\frac{\partial}{\partial t}\Lambda^*_{\theta_1}(t)=\rho\Big\}\;\Rightarrow\;t_1=H(\tau(\theta_1,\beta)),\qquad t_2=\arg_t\Big\{\frac{1}{\eta}\frac{\partial}{\partial t}\Lambda^*_{\theta_2}(\eta t)=\rho\Big\}\;\Rightarrow\;t_2=\frac{1}{\eta}H(\tau(\theta_2,\beta)),\qquad(34)$$

where $\beta$ is the tilt corresponding to $\rho$. We focus on $\rho<0$, and hence $\beta>1$. Note that $\rho=0$ (equivalently $\beta=1$) corresponds to the coinciding zeros of both rate functions. Once again recalling that the rate functions are convex, proving the claim is equivalent to showing that $t_1>t_2$ (as defined in (34)) for all $\beta>1$. This is in turn equivalent to showing:

$$\frac{H(\tau(\theta_2,\beta))}{H(\theta_2)}<\frac{H(\tau(\theta_1,\beta))}{H(\theta_1)},\quad\forall\beta>1.\qquad(35)$$

This is equivalent to:

$$\frac{\partial}{\partial\alpha}\left[\frac{H(\tau(\theta_1,\alpha\beta))}{H(\tau(\theta_1,\alpha))}\right]_{\alpha=1}<0,\quad\forall\beta>1.\qquad(36)$$

It is straightforward to show that (36) is equivalent to

$$\frac{\frac{\partial}{\partial\alpha}\big[H(\tau(\theta_1,\alpha\beta))\big]_{\alpha=1}}{H(\tau(\theta_1,\beta))}<\frac{\frac{\partial}{\partial\alpha}\big[H(\tau(\theta_1,\alpha))\big]_{\alpha=1}}{H(\theta_1)},\quad\forall\beta>1.\qquad(37)$$

Finally, we prove the following statement that is equivalent to (37):

 (38)

This is equivalent to showing:

$$\frac{1}{H(\theta_1)}\,\frac{\partial^2}{\partial\alpha\,\partial\beta}\Big[H(\tau(\theta_1,\alpha\beta))\Big]_{\alpha=\beta=1}<\frac{1}{H(\theta_1)^2}\,\frac{\partial}{\partial\beta}\Big[H(\tau(\theta_1,\beta))\Big]_{\beta=1}\frac{\partial}{\partial\alpha}\Big[H(\tau(\theta_1,\alpha))\Big]_{\alpha=1}.\qquad(39)$$

The above statement is shown to hold if and only if $\theta_1$ satisfies the SEC (Definition 5) by invoking Lemmas 4 and 8, which completes the proof of the theorem. ∎

###### Lemma 8.

For all $\theta\in\mathring{\mathcal{P}}(|\mathcal{X}|)$, we have

$$\frac{\partial^2}{\partial\alpha\,\partial\beta}\Big[H(\tau(\theta,\alpha\beta))\Big]_{\alpha=\beta=1}=-2V(\theta)+S(\theta).\qquad(40)$$
###### Proof:

Noting that $\tau(\theta,\alpha\beta)=\tau(\tau(\theta,\beta),\alpha)$ and invoking Lemma 4, we have

$$\frac{\partial}{\partial\alpha}\Big[H(\tau(\theta,\alpha\beta))\Big]_{\alpha=1}=-V(\tau(\theta,\beta))\qquad(41)$$
$$=-\beta^2\,V(\tau(\theta,\beta)\|\theta),\qquad(42)$$

where (42) follows from Lemma 5 of [7]. Hence, by differentiating the above with respect to $\beta$ at $\beta=1$ and invoking Lemma 7, we arrive at the claim. ∎

###### Proof:

By the symmetry of $H$, $V$, and $S$ under $\phi\leftrightarrow1-\phi$, we may take $\theta=(\phi,1-\phi)$ with $\phi\in(0,\tfrac12)$. The theorem is then proved by invoking Lemmas 9 and 10, as follows:

$$H(\theta)S(\theta)<\phi^2(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big)+\phi^2(1-\phi)^2\log^4\Big(\frac{1-\phi}{\phi}\Big)<H(\theta)V(\theta)+V(\theta)^2<2H(\theta)V(\theta)+V(\theta)^2,$$

and hence $\theta$ satisfies the SEC. ∎

###### Lemma 9.

For any $\phi\in(0,\tfrac12)$, we have

$$H(\theta)S(\theta)<\phi^2(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big)+\phi^2(1-\phi)^2\log^4\Big(\frac{1-\phi}{\phi}\Big),\qquad(46)$$

where $\theta=(\phi,1-\phi)$.

###### Proof:

First note that by Lemma 11, we have

$$H(\theta)<\phi\log\frac{1}{\phi}+\phi.\qquad(47)$$

Hence,

$$H(\theta)S(\theta)<\phi^2(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big)+\phi^2(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big)\log\Big(\frac{1}{\phi}\Big)\qquad(48)$$
$$<\phi^2(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big)+\phi^2(1-\phi)^2\log^4\Big(\frac{1-\phi}{\phi}\Big),\qquad(49)$$

where (49) follows from Lemma 12, completing the proof. ∎

###### Lemma 10.

For any $\phi\in(0,\tfrac12)$, we have

$$H(\theta)V(\theta)>\phi^2(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big),\qquad(50)$$

where $\theta=(\phi,1-\phi)$.

###### Proof:

For $\phi\in(0,\tfrac12)$, note that

$$H(\theta)>\phi\log\frac{1}{\phi},\qquad(51)$$

and hence

$$H(\theta)V(\theta)>\phi^2(1-\phi)\log^2\Big(\frac{1-\phi}{\phi}\Big)\log\Big(\frac{1}{\phi}\Big)\qquad(52)$$
$$>\phi^2(1-\phi)(1-2\phi)\log^3\Big(\frac{1-\phi}{\phi}\Big),\qquad(53)$$

where (53) follows from Lemma 12, completing the proof. ∎

###### Lemma 11.

For any $x\in(0,1)$, we have

$$(1-x)\log\frac{1}{1-x}<x.\qquad(54)$$

###### Proof:

Note that at $x=0$ both sides are equal and the limits of their derivatives are equal as well, while the second derivative of the left-hand side equals $-\frac{1}{1-x}<0$, completing the proof. ∎

###### Lemma 12.

For any $x\in(0,\tfrac12)$, we have

$$(1-2x)\log\frac{1}{x}<(1-x)\log\frac{1-x}{x}.\qquad(55)$$

###### Proof:

The proof is similar to that of Lemma 11. ∎

###### Proof:

We proceed with the proof by construction. Let $\theta$ be such that

$$\theta_i=\begin{cases}(1-\epsilon)/(|\mathcal{X}|-1)&1\le i\le|\mathcal{X}|-1\\ \epsilon&i=|\mathcal{X}|.\end{cases}\qquad(56)$$

Then, invoking Lemma 13, we can see that for sufficiently small $\epsilon$, we have

$$\tfrac{1}{2}\log(|\mathcal{X}|-1)<H(\theta)<2\log(|\mathcal{X}|-1),\qquad(57)$$
$$\tfrac{1}{2}\epsilon\Big(\log\frac{1}{\epsilon}\Big)^2<V(\theta)<\epsilon\Big(\log\frac{1}{\epsilon}\Big)^2,\qquad(58)$$
$$\tfrac{1}{2}\epsilon\Big(\log\frac{1}{\epsilon}\Big)^3<S(\theta)<\epsilon\Big(\log\frac{1}{\epsilon}\Big)^3.\qquad(59)$$

Hence,

$$S(\theta)H(\theta)>\frac{1}{4}\epsilon\Big(\log\frac{1}{\epsilon}\Big)^3\log(|\mathcal{X}|-1)\qquad(60)$$
$$>\epsilon^2\Big(\log\frac{1}{\epsilon}\Big)^4+4\epsilon\Big(\log\frac{1}{\epsilon}\Big)^2\log(|\mathcal{X}|-1)\qquad(61)$$
$$>V^2(\theta)+2H(\theta)V(\theta),\qquad(62)$$

where (61) holds for sufficiently small $\epsilon$ as long as $|\mathcal{X}|\ge3$. Thus, $\theta$ does not satisfy the SEC, and the proof is complete. ∎

###### Lemma 13.

Let $\theta$ be such that

$$\theta_i=\begin{cases}(1-\epsilon)/(|\mathcal{X}|-1)&1\le i\le|\mathcal{X}|-1\\ \epsilon&i=|\mathcal{X}|.\end{cases}\qquad(63)$$

Then,

$$H(\theta)=(1-\epsilon)\log(|\mathcal{X}|-1)+h(\epsilon),\qquad(64)$$
$$V(\theta)=\epsilon(1-\epsilon)\Big(\log\Big(\frac{1-\epsilon}{\epsilon}\Big)-\log(|\mathcal{X}|-1)\Big)^2,\qquad(65)$$
$$S(\theta)=\epsilon(1-\epsilon)(1-2\epsilon)\Big(\log\Big(\frac{1-\epsilon}{\epsilon}\Big)-\log(|\mathcal{X}|-1)\Big)^3,\qquad(66)$$

where $h(\epsilon)$ is the binary entropy function given by

$$h(\epsilon):=H(\epsilon,1-\epsilon)=\epsilon\log\frac{1}{\epsilon}+(1-\epsilon)\log\frac{1}{1-\epsilon}.\qquad(67)$$
###### Proof:

The calculation of $H(\theta)$ is straightforward by noting that $\theta$ is a mixture of two uniform sources on alphabets of sizes $|\mathcal{X}|-1$ and $1$. To calculate $V(\theta)$, we have

$$V(\theta)=(1-\epsilon)\Big(\log\frac{|\mathcal{X}|-1}{1-\epsilon}-(1-\epsilon)\log(|\mathcal{X}|-1)-h(\epsilon)\Big)^2+\epsilon\Big(\log\frac{1}{\epsilon}-(1-\epsilon)\log(|\mathcal{X}|-1)-h(\epsilon)\Big)^2\qquad(68)$$
$$=(1-\epsilon)\Big(\epsilon\log(|\mathcal{X}|-1)+\epsilon\log\frac{\epsilon}{1-\epsilon}\Big)^2+\epsilon\Big(-(1-\epsilon)\log(|\mathcal{X}|-1)+(1-\epsilon)\log\frac{1-\epsilon}{\epsilon}\Big)^2\qquad(69)$$
$$=\epsilon(1-\epsilon)\Big(\log\frac{1-\epsilon}{\epsilon}-\log(|\mathcal{X}|-1)\Big)^2.\qquad(70)$$

Finally, to calculate $S(\theta)$, similarly to the calculations for $V(\theta)$, we get

$$S(\theta)=(1-\epsilon)\Big(\epsilon\log(|\mathcal{X}|-1)+\epsilon\log\frac{\epsilon}{1-\epsilon}\Big)^3+\epsilon\Big(-(1-\epsilon)\log(|\mathcal{X}|-1)+(1-\epsilon)\log\frac{1-\epsilon}{\epsilon}\Big)^3\qquad(71)$$
$$=\epsilon(1-\epsilon)(1-2\epsilon)\Big(\log\frac{1-\epsilon}{\epsilon}-\log(|\mathcal{X}|-1)\Big)^3,\qquad(72)$$

establishing the claim. ∎

###### Proof:

Let $X$ be drawn from $\theta$. Further, let

$$Y=\log\frac{1}{P(X)}-H(X).$$

Hence, by definition, $\mathbb{E}[Y]=0$, $\mathbb{E}[Y^2]=V(\theta)$, and $\mathbb{E}[Y^3]=S(\theta)$. Then, the condition in (20) ensures that $|Y|<2$. Noting that the uniform distribution is excluded from $\mathring{\mathcal{P}}(|\mathcal{X}|)$, and hence the varentropy is nonzero, we apply Lemma 14 (with $a=2$) to obtain

$$S(\theta)<2V(\theta).$$

This is a sufficient condition for the SEC to hold, completing the proof. ∎

###### Lemma 14.

Let $Y$ be a random variable supported on $[-a,a]$ for some $a>0$. Further, let $\mathbb{E}[Y]=0$ and $\mathbb{E}[Y^2]>0$. Then,

$$\frac{\mathbb{E}[Y^3]}{\mathbb{E}[Y^2]}\le a.\qquad(73)$$

###### Proof:

It is straightforward to show that $\mathbb{E}[Y^3]/\mathbb{E}[Y^2]$ is maximized if

$$p_Y(y)=\begin{cases}\rho/2,&y=-a\\1-\rho,&y=0\\\rho/2,&y=a,\end{cases}$$

for some $\rho\in(0,1]$, which in turn leads to (73). ∎

###### Proof:

First, we show that the condition in (21) leads to the condition in (20), which follows from the following set of inequalities:

$$\max_{i\in[|\mathcal{X}|]}\Big|\log\frac{1}{\theta_i}-H(\theta)\Big|\le\max_{i\in[|\mathcal{X}|]}\Big|\log\frac{1}{\theta_i}-\log|\mathcal{X}|\Big|+\big|\log|\mathcal{X}|-H(\theta)\big|\qquad(74)$$
$$\le2\max_{i\in[|\mathcal{X}|]}\Big|\log\frac{1}{\theta_i}-\log|\mathcal{X}|\Big|\qquad(75)$$
$$<2,\qquad(76)$$

where (74) follows from the triangle inequality, (75) follows from Jensen's inequality and the convexity of the $\max$ operator, and (76) is a direct result of (21). Hence, the claim of Theorem 7 holds, which results in the claim of the corollary. ∎

## References

• [1] J. L. Massey, “Guessing and entropy,” in Proc. 1994 IEEE International Symposium on Information Theory, 1994, p. 204.
• [2] E. Arıkan, “An inequality on guessing and its application to sequential decoding,” IEEE Transactions on Information Theory, vol. 42, no. 1, pp. 99–105, 1996.
• [3] D. Malone and W. G. Sullivan, “Guesswork and entropy,” IEEE Transactions on Information Theory, vol. 50, no. 3, pp. 525–526, Mar. 2004.
• [4] C. E. Pfister and W. G. Sullivan, “Rényi entropy, guesswork moments, and large deviations,” IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2794–2800, Nov. 2004.
• [5] M. M. Christiansen and K. R. Duffy, “Guesswork, large deviations, and Shannon entropy,” IEEE Transactions on Information Theory, vol. 59, no. 2, pp. 796–802, 2013.
• [6] A. Beirami, R. Calderbank, M. Christiansen, K. Duffy, A. Makhdoumi, and M. Médard, “A geometric perspective on guesswork,” in 53rd Annual Allerton Conference (Allerton), Oct. 2015.
• [7] A. Beirami, R. Calderbank, M. Christiansen, K. Duffy, and M. Médard, “A characterization of guesswork on swiftly tilting curves,” preprint, 2017.
• [8] R. Sundaresan, “Guessing under source uncertainty,” IEEE Trans. Inf. Theory, vol. 53, no. 1, pp. 269–287, Jan. 2007.
• [9] A. Beirami, R. Calderbank, K. Duffy, and M. Médard, “Quantifying computational security subject to source constraints, guesswork and inscrutability,” in 2015 IEEE International Symposium on Information Theory Proceedings (ISIT), Jun. 2015.
• [10] V. Strassen, “Asymptotische abschätzungen in shannons informations theorie,” in Trans. Third Prague Conf. Inf. Theory, 1962, pp. 689–723.
• [11] Y. Polyanskiy, H. V. Poor, and S. Verdú, “Channel coding rate in the finite blocklength regime,” IEEE Transactions on Information Theory, vol. 56, no. 5, pp. 2307–2359, 2010.
• [12] E. Arıkan, “Varentropy decreases under the polar transform,” IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3390–3400, 2016.