## 1. Introduction

Suppose Alice sends a secret bit $b$ to Bob over an open channel, in the presence of a computationally unbounded (passive) adversary Eve. Let $P_B$ and $P_E$ be the probabilities for Bob and Eve, respectively, to correctly recover $b$. It is well known that if $P_B = 1$, then a straightforward encryption emulation attack gives $P_E = 1$ as well. In this note, we address a common misconception that this can somehow be “generalized” to the case $P_B < 1$, i.e., that emulating encryption (or the receiver's algorithms, or both) can give $P_E \ge P_B$. The basic reason why this is wrong is that Eve can never fully emulate Alice or Bob, since Eve's probability space is inherently different from that of Alice or Bob. When $P_B = 1$, this does not matter because “always correct” in a probability space implies “always correct” in any probability subspace. However, if $P_B < 1$, the situation can be quite different.

In [1], we have gone where no cryptographer had gone before and suggested that it might be possible to build a public-key cryptographic protocol entirely based on probability theory, without using any algebra or number theory. This was met with skepticism (to put it mildly) based on a strong belief in the impossibility of having $P_B > P_E$. This skepticism has materialized in a preprint by Panny [5], who courageously delved into the depths of elementary probability theory and tried to actually compute some probabilities instead of just saying “this is impossible because this cannot possibly be possible”, as most other believers in “flat Earth” do. His preprint is in two independent parts: a theoretical part, where he does probability computations attempting to prove $P_E \ge P_B$, and a (completely unrelated) experimental part, where he offers a statistical attack on the ciphertext in our protocol in [1], making Eve succeed (in recovering Alice’s secret bit) with an unspecified probability $P_E$. The fact that this probability (or, rather, an experimental approximation thereof) was not specified is unfortunate, since it leaves open the question of whether or not this particular attack yields $P_E \ge P_B$ for the protocol in [1].

The main purpose of this short note is to show (in Section 3) that there is no “generic” algorithm (like emulating encryption, or the receiver’s algorithms, or both) for Eve to guarantee $P_E \ge P_B$. Of course, for any particular protocol there might be an “intelligent”, protocol-specific attack that might give $P_E \ge P_B$, but the question of whether or not there is always a protocol-specific attack that succeeds with probability $\ge P_B$ remains open.

For the record:

We admit that in our scheme in [1], $P_E > \frac{1}{2}$. We explain in Section 2 below why and how Eve can achieve that.

The claim $P_B > P_E$ for the general, “framework”, scheme in [1] still stands. In Section 2, we reproduce this framework and give some arguments in support of this claim.

In Section 3, we give an explanation of why typical “proofs” of $P_E \ge P_B$ (or even of $P_E = P_B$) are flawed. Then we specifically address the decryption emulation attack, to answer a popular concern along the lines of “if Eve is computationally unbounded, she can just emulate Bob and be at least as successful as Bob is in recovering Alice’s plaintext”. Here “decryption emulation attack” is slang for emulating all the receiver’s algorithms used in a protocol.

Section 2 also explains why, in schemes like the one in [1], $P_B$ inherently cannot be larger than 0.75. Regretfully, this probability appears to be not large enough to be useful in any meaningful real-life scenario, as far as we can see.

Finally, we encourage curious readers to read about the famous Monty Hall problem [2], to appreciate the importance of the probability space, and not just “random coins”, in computing probabilities. A quote from Wikipedia [2]: “Paul Erdős, one of the most prolific mathematicians in history, remained unconvinced until he was shown a computer simulation…” shows that non-believers in a (sometimes crucial) role of the probability space are in good company.
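Such a computer simulation is easy to reproduce. Here is a minimal sketch (the door layout and the host’s tie-breaking rule below are our own choices, not taken from [2]; they do not affect the win probabilities):

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the contestant's win probability for the stay/switch strategies."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first pick
        # Host opens some door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"stay:   {monty_hall(switch=False):.3f}")   # close to 1/3
print(f"switch: {monty_hall(switch=True):.3f}")    # close to 2/3
```

Switching wins exactly when the first pick was wrong, which is why the simulation hovers around $\frac{2}{3}$ rather than $\frac{1}{2}$.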

We realize that firm believers in “flat Earth” will not even read our note, because it is much easier to accuse others of heresy than to search for the truth, but we hope that more open-minded readers will be curious enough to find out how probability theory just a little bit beyond the first course in discrete mathematics can be used in cryptographic constructions.

## 2. How can $P_B$ possibly be larger than $P_E$: a generic example

Let Alice be the sender of a secret bit and Bob the receiver. Suppose Alice has two disjoint probability spaces, $S_1$ and $S_2$, to pick her encryption key from. Assume, for simplicity of the analysis, that $S_1$ and $S_2$ are public (although they typically are not) and that Alice selects between $S_1$ and $S_2$ with probability $\frac{1}{2}$ each (although this probability may be private as well).

Suppose that if Alice picks her encryption key from $S_1$, then Bob decrypts correctly with probability $P_1$, and if she picks her encryption key from $S_2$, then Bob decrypts correctly with probability $P_2$. Then Bob decrypts correctly with probability $\frac{1}{2}(P_1 + P_2)$. Suppose $P_1 > P_2$ and $P_2 < 1$. The latter condition implies that, in some instances, an encryption key from $S_2$ produces the same ciphertext as some encryption key from $S_1$ does. Denote by $S_2'$ the set of these “special” encryption keys.

Let $x$ be the probability of the following event: Alice picked an encryption key from $S_2'$, conditioned on (Alice picked an encryption key from $S_2$ and Bob decrypted correctly). Why do we need this weird-looking condition? It is needed to express, in terms of $P_2$, the probability for Bob to decrypt correctly in case Alice picked an encryption key from $S_2'$ (after choosing to pick it from $S_2$). Indeed, this probability is equal to $x \cdot P_2$ by the formula for the probability of the intersection of two events. The two events here are (both conditioned on Alice having picked her encryption key from $S_2$): (1) Alice picked her encryption key from $S_2'$; (2) Bob decrypted correctly.

How is Eve going to decrypt? The most obvious way is to narrow down the selection of the decryption key (while emulating Bob’s decryption algorithm) by assuming that Alice has picked her encryption key from $S_1$ (since $S_1$ gives Bob a better chance of success). Eve would then emulate Bob’s algorithm in the hope that this gives her the correct decryption of Alice’s bit with probability $P_1$. However, since Alice selects $S_1$ with probability $\frac{1}{2}$, the actual probability for Eve to decrypt Alice’s bit correctly (if she uses this strategy) is $\frac{1}{2}(P_1 + x \cdot P_2)$. Here $P_1$ is the probability for Eve to decrypt correctly (by emulating Bob’s randomness) in case Alice selected to pick her encryption key from $S_1$, and $x \cdot P_2$ is the probability for Eve to decrypt correctly in case Alice selected $S_2$ (see above). Then we have:

The probability for Bob to decrypt Alice’s bit correctly is $P_B = \frac{1}{2}(P_1 + P_2)$.

The probability for Eve to decrypt Alice’s bit correctly (if she uses the above strategy) is $P_E = \frac{1}{2}(P_1 + x \cdot P_2)$. Thus, if $x < 1$, then $P_E < P_B$.

$x < 1$. This is because $S_2' \ne S_2$, while some keys in $S_2 \setminus S_2'$ lead to correct decryption by Bob. Indeed, obviously one cannot have every encryption key from $S_2$ produce the same ciphertext as some encryption key from $S_1$: that would defy the purpose for Alice to have a separate $S_2$ in the first place.

Thus, this most obvious attack does not give $P_E \ge P_B$ if Alice is able to select $S_1$ and $S_2$ such that $P_1 > P_2$, $P_2 < 1$, and $x < 1$. An example of such a selection was given in [1]. It is straightforward to see that other strategies (i.e., other probability distributions) for Eve to select between $S_1$ and $S_2$ for a supposed encryption key will result in an even lower probability of success. In particular instantiations of this general idea there might be instantiation-specific statistical attacks on Bob’s public key or Alice’s ciphertext [5], but the point we are trying to make here is that, contrary to what skeptics claim, there is no “universal” (e.g., encryption/decryption emulation) attack on such a scheme that would guarantee $P_E \ge P_B$. We will establish this more formally in the next section.
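To make the comparison concrete, here is a small numeric sketch, writing $P_1$, $P_2$ for Bob’s success probabilities on the two key spaces and $x$ for the conditional probability defined above. The values below are illustrative assumptions of ours, not the parameters of the scheme in [1]:

```python
# Illustrative values only; NOT the parameters of the scheme in [1].
P1 = 1.0   # Bob's success probability when Alice's key comes from S1
P2 = 0.5   # Bob's success probability when Alice's key comes from S2
x  = 0.4   # P(key in S2' | key in S2 and Bob decrypted correctly)

P_B = 0.5 * (P1 + P2)       # Bob averages over Alice's choice of S1 vs S2
P_E = 0.5 * (P1 + x * P2)   # Eve always assumes S1, so on S2 she is correct
                            # only on the "special" keys that Bob also decrypts
print(P_B, P_E, P_E < P_B)  # here: 0.75 vs 0.6, so P_E < P_B
```

With any $x < 1$ and $P_2 > 0$, the second line is strictly smaller than the first, which is the whole point of the construction.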

To conclude this section, we note that we were unable to find an instantiation of this general scheme where both $P_1$ and $P_2$ would be greater than $\frac{1}{2}$, so it appears that $P_B \le \frac{1}{2}(1 + \frac{1}{2}) = 0.75$ in any instantiation of this scheme. In the instantiation offered in [1], $P_B$ is approximately 0.55.

## 3. Why all “proofs” of $P_E \ge P_B$ fail

Below is a short version of a typical “proof” of $P_E \ge P_B$. In what follows, Alice is the sender of a secret plaintext $b$, and Bob is the receiver who, upon decrypting Alice’s ciphertext, obtains $b'$ and wants $b' = b$, with probability $P_B$. The adversary Eve wants to recover $b$, with probability $P_E$. Our main goal in this section is to show that emulation attacks (be it emulation of encryption, or decryption, or both) cannot give $P_E \ge P_B$ in any meaningful instantiation of the general scheme from Section 2, including the one in [1]. First we briefly reproduce a typical claim, with a “proof”.

###### Proposition 1.

Let $r_A$ be Alice’s randomness, $r_B$ Bob’s randomness, and $T$ the (public) transcript of the communication. Suppose that $r_A$ conditioned on $T$ and $r_B$ conditioned on $T$ are independent. Let $b$ be Alice’s plaintext, $b'$ the result of Bob’s decryption, and $P_B$ the probability of having $b' = b$ after the communication protocol execution. Then unbounded Eve, on input $T$, can generate a value $b''$ such that $P(b'' = b) = P_B$.

###### Proof.

Let $b = b(r_A, T)$ and $b' = b'(r_B, T)$. Conditioned on $T$, Eve can sample Bob’s coins. Let $r_E$ denote Bob’s randomness emulated by Eve. Output the value $b'' = b'(r_E, T)$, which is what Bob would output on input $(r_E, T)$. The triples $(T, r_A, r_B)$ and $(T, r_A, r_E)$ are identically distributed. Hence the values of $P(b' = b)$ and $P(b'' = b)$ are identical.

∎

Below we point out some issues with this proof that show that the proof is, at the very least, incomplete if $P_B < 1$. If $P_B = 1$, the claim of the proposition is well known to be true, as established by a straightforward encryption emulation attack.

We note, in passing, that $r_A$ and $r_B$ include not only “random coins” but also probability spaces. Random coins of Alice and Bob are, indeed, independent in any meaningful public-key communication model. The probability space of the sender, on the other hand, can depend on the receiver’s public key and therefore on his randomness; this happens even in some well-established schemes, e.g. in Polly Cracker. This is not a serious issue though, just something to keep in mind.

**Serious issue.** Assume, for the sake of argument, that the claim “the triples $(T, r_A, r_B)$ and $(T, r_A, r_E)$ are identically distributed” in the above proof is correct under appropriate independence conditions. Even that, however, does not prove the claim of the proposition, which is: “Then unbounded Eve, on input $T$, can generate a value $b''$ such that $P(b'' = b) = P_B$”. How can Eve do that? Assume for simplicity that $b$ is just a single bit.

What the above proof suggests is basically an “encryption/decryption emulation attack”. That is, Eve generates all of Alice’s possible (plaintext, ciphertext) pairs and all of Bob’s possible decryption keys, with all possible randomness, that would match the public key and the protocol description. Then Eve selects all (plaintext, ciphertext, decryption key) triples that give $b' = b$. (Recall that $b$ is Alice’s plaintext and $b'$ is the result of decrypting Alice’s ciphertext by Bob.) Some of these triples will have $b = 0$, while others will have $b = 1$. Then what? Select a triple from this pool uniformly at random (or using whatever other distribution)? Then Eve’s probability space will be very different from Bob’s, and therefore there is no reason for $P(b'' = b)$ to be equal to $P_B$ with this strategy.

Thus, the above proof is at least incomplete, since it does not mention any algorithm for Eve to make that choice and actually generate a value $b''$ that would be equal to $b$ with probability $P_B$. ∎

To be fair, our argument above only shows that there is no algorithm for Eve to achieve $P_E = P_B$. But what about $P_E > \frac{1}{2}$? To try to achieve this, the best strategy for Eve is probably to forget about Bob’s algorithms, emulate just Alice’s encryption algorithm, create a probability distribution on the set of all possible (plaintext, ciphertext) pairs for all possible values of Bob’s public key, and then, when given a ciphertext, select the plaintext that corresponds to it with the higher probability. This basically takes us to the situation considered in Section 2: this strategy will guarantee $P_E > \frac{1}{2}$, but $P_E \ge P_B$ is still questionable because Alice’s (private) probability space is narrower than Eve’s. To illustrate how this matters, here is a simple

###### Example 1.

[3] In a city where every family has two children, Alice and Eve walk down the street and meet Bob with a little boy in a stroller; this boy is Bob’s public key. Bob tells them that he has two children, but the older child (Bob’s private key) is at school now, and Bob suggests that Alice and Eve try to guess whether the other child is a boy or a girl. While Eve walks away building a probability distribution on the set of all possible gender pairs, Alice finds out that the boy in the stroller was born on a Tuesday. This did not give Alice any information about the other child’s gender, but it changed Alice’s probability space! Now it is not “all families with two children where the younger child is a boy” but, say, “all families with two children where the younger child is a boy and with a boy born on a Tuesday”.

The result is: Eve comes to the conclusion that the other child is a boy with probability $\frac{1}{2}$ (because the pairs (GG), (BG), (GB), (BB) are assumed to be equally likely in a random family with two children, and the fact that the boy in the stroller is the younger child narrows this down to (GB), (BB)), whereas Alice comes to the conclusion (using Bayes’ formula) that the other child is a boy with probability $\frac{13}{20}$.
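Both answers can be checked by brute-force enumeration over all equally likely (gender, birth day) pairs; the encoding below (day 0 standing for Tuesday) is our own. Under the narrowed space described above, the enumeration gives $\frac{1}{2}$ for Eve and $\frac{13}{20}$ for Alice:

```python
from itertools import product

days = range(7)                               # 0 = Tuesday, say
children = list(product("BG", days))          # a child = (gender, birth day)
families = list(product(children, children))  # (older, younger), equally likely

# Eve's space: younger child is a boy.
eve = [f for f in families if f[1][0] == "B"]
# Alice's space: younger child is a boy AND some boy was born on a Tuesday.
alice = [f for f in eve if f[1] == ("B", 0) or f[0] == ("B", 0)]

p_eve   = sum(f[0][0] == "B" for f in eve)   / len(eve)     # 1/2
p_alice = sum(f[0][0] == "B" for f in alice) / len(alice)   # 13/20
print(p_eve, p_alice)
```

Note that Alice’s extra condition shrinks her space from 98 families to 20, and a disproportionate share of those 20 (namely 13) have two boys.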

One can say that in this example, Alice got information not available to Eve (even though this information is irrelevant to Bob’s private key), and this seems to be prohibited by theoretical cryptography rules of engagement (a.k.a. Kerckhoffs’s principles). However, in actual cryptographic scenarios (including the one in [1]), Alice can “artificially” change her own probability space to her liking. Eve, of course, is aware of all possible probability space choices by Alice, but all she can do is “average out” their probability distributions, which will almost surely result in different probability distributions on the set of (plaintext, ciphertext) pairs for Alice and for Eve; sometimes it may even reverse the preference of one plaintext (given a ciphertext) over another. This phenomenon is called Simpson’s paradox [4]: a trend can appear in several different groups of data but disappear or reverse when these groups are combined. In reference to the above example, Alice could use any information, including information available also to Eve (e.g. the boy in the stroller is blond), to (privately) narrow down her probability space, while Eve will not know which probability space Alice has chosen. Compare this to the general scheme in Section 2.
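To see such a reversal in miniature, here is Simpson’s paradox on the classic kidney-stone success rates commonly used to illustrate [4] (these figures have nothing to do with [1] or [5]; they only demonstrate the reversal effect):

```python
# Classic kidney-stone figures often used to illustrate Simpson's paradox:
# treatment A wins within each group, yet loses once the groups are pooled.
data = {  # (successes, trials)
    "A": {"small": (81, 87),   "large": (192, 263)},
    "B": {"small": (234, 270), "large": (55, 80)},
}

def rate(s, n):
    return s / n

for g in ("small", "large"):
    # Within each group, A has the higher success rate.
    assert rate(*data["A"][g]) > rate(*data["B"][g])

# Pool the two groups for each treatment.
pooled = {t: tuple(map(sum, zip(*data[t].values()))) for t in data}
print(pooled)                                    # A: (273, 350), B: (289, 350)
print(rate(*pooled["A"]) < rate(*pooled["B"]))   # True: preference reversed
```

The reversal happens because the groups are of very different sizes for the two treatments, much like Alice’s private choice of probability space skews the weights Eve must average over.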

Finally, we consider the attack where Eve emulates just Bob (the receiver). If the probability distribution used by Bob to generate his public key is known to the public (which was the case in [1]), then the decryption emulation attack may seem like a reasonable strategy for Eve. That is, Eve can generate all possible Bob’s private keys, then generate all possible Bob’s public keys corresponding to each of his private keys, and then select all (private key, public key) pairs with the public key matching the one actually published by Bob. This will yield a probability distribution (conditioned on $T$) on the set of all possible Bob’s private keys, and Eve can select one of the private keys that occurs with the highest probability in this distribution. The probability $P(b'' = b')$ might then be larger than $\frac{1}{2}$, but this probability has little to do with $P(b'' = b)$, since the latter probability is largely controlled by Alice. Below we show that if Eve achieves $P(b'' = b') > \frac{1}{2}$, then, in fact, $P(b'' = b) < P_B$ provided $P_B > \frac{1}{2}$.

Emulating Bob will result in the following success probability for Eve to recover $b$ in the case where $b$ is a single bit (assuming that $b'$ and $b''$, too, can only take values 0 or 1):

$$P(b'' = b) = P(b'' = b') \cdot P(b' = b) + (1 - P(b'' = b')) \cdot (1 - P(b' = b)).$$

All probabilities here are conditioned on $T$. Also, we assume that, since Eve emulates just Bob, Eve’s and Alice’s randomness are independent (conditioned on $T$), hence the events $\{b'' = b'\}$ and $\{b' = b\}$ (conditioned on $T$) are independent.

Denote $y = P(b'' = b')$. Then we have:

$$P(b'' = b) = y \cdot P_B + (1 - y)(1 - P_B).$$

If $P_B > \frac{1}{2}$ and $y > \frac{1}{2}$, then $P(b'' = b) > \frac{1}{2}$, but we claim that $P(b'' = b)$ is less than $P_B$ in this case. Indeed, $y \cdot P_B + (1 - y)(1 - P_B) < P_B$ is equivalent to $(1 - y)(1 - P_B) < (1 - y) P_B$, which is true since $P_B > \frac{1}{2}$ and $y < 1$. The latter holds because, in a scenario similar to that in Section 2, typically (in particular, in the scheme in [1]), to the same Bob’s public key, any private (decryption) key from the pool of all private keys can be associated with nonzero probability. In particular, there will be private keys that yield $b'' = b'$, as well as those that yield $b'' \ne b'$.
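This inequality is easy to sanity-check numerically (a check, not a proof): writing $y = P(b'' = b')$ and $p = P_B$, Eve’s success $y p + (1-y)(1-p)$ stays strictly between $\frac{1}{2}$ and $p$ whenever $\frac{1}{2} < y < 1$ and $\frac{1}{2} < p < 1$:

```python
# Sanity check over a grid: Eve's success probability y*p + (1-y)*(1-p)
# lies strictly between 1/2 and p for all 1/2 < y < 1 and 1/2 < p < 1.
def eve_success(y, p):
    return y * p + (1 - y) * (1 - p)

ok = all(
    0.5 < eve_success(y / 100, p / 100) < p / 100
    for y in range(51, 100)   # y = P(b'' = b'), Eve agreeing with Bob
    for p in range(51, 100)   # p = P_B, Bob's success probability
)
print(ok)   # True
```

Algebraically, $y p + (1-y)(1-p) - \frac{1}{2} = \frac{1}{2}(2y-1)(2p-1) > 0$ and $p - \big(y p + (1-y)(1-p)\big) = (1-y)(2p-1) > 0$, which is exactly what the grid confirms.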

Thus, while there might be an instantiation-specific statistical attack on Bob’s public key, it will have nothing to do with the emulation attack(s) suggested by the above proof of Proposition 1. This also explains why the two parts (theoretical and experimental) of [5] are completely unrelated, and perhaps also why the success probability in the experimental part of [5] is not specified.

## References

- [1] M. Bessonov, D. Grigoriev, V. Shpilrain, A framework for unconditionally secure public-key encryption (with possible decryption errors), in: International Congress on Mathematical Software – ICMS 2018, Lecture Notes in Computer Science 10931 (2018), 45–54.
- [2] Monty Hall problem, https://en.wikipedia.org/wiki/Monty_Hall_problem
- [3] Boy or Girl paradox, https://en.wikipedia.org/wiki/Boy_or_Girl_paradox
- [4] Simpson’s paradox, https://en.wikipedia.org/wiki/Simpson%27s_paradox
- [5] L. Panny, Guess what?! On the impossibility of unconditionally secure public-key encryption, preprint. https://eprint.iacr.org/2019/1228