How to Subvert Backdoored Encryption: Security Against Adversaries that Decrypt All Ciphertexts

02/21/2018 · by Thibaut Horel, et al.

We study secure and undetectable communication in a world where governments can read all encrypted communications of citizens. We consider a world where the only permitted communication method is via a government-mandated encryption scheme, using government-mandated keys. Citizens caught trying to communicate otherwise (e.g., by encrypting strings which do not appear to be natural language plaintexts) will be arrested. The one guarantee we suppose is that the government-mandated encryption scheme is semantically secure against outsiders: a perhaps advantageous feature to secure communication against foreign entities. But what good is semantic security against an adversary that has the power to decrypt? Even in this pessimistic scenario, we show citizens can communicate securely and undetectably. Informally, there is a protocol between Alice and Bob where they exchange ciphertexts that look innocuous even to someone who knows the secret keys and thus sees the corresponding plaintexts. And yet, in the end, Alice will have transmitted her secret message to Bob. Our security definition requires indistinguishability between unmodified use of the mandated encryption scheme, and conversations using the mandated encryption scheme in a modified way for subliminal communication. Our topics may be thought to fall broadly within the realm of steganography: the science of hiding secret communication in innocent-looking messages, or cover objects. However, we deal with the non-standard setting of adversarial cover object distributions (i.e., a stronger-than-usual adversary). We leverage that our cover objects are ciphertexts of a secure encryption scheme to bypass impossibility results which we show for broader classes of steganographic schemes. We give several constructions of subliminal communication schemes based on any key exchange protocol with random messages (e.g., Diffie-Hellman).


1 Introduction

Suppose that we lived in a world where the government wished to read all the communications of its citizens, and thus decreed that citizens must not communicate in any way other than by using a specific, government-mandated encryption scheme with government-mandated keys. Even face-to-face communication is not allowed: in this Orwellian world, anyone who is caught speaking to another person will be arrested for treason. Similarly, anyone whose communications appear to be hiding information will be arrested: e.g., if the plaintexts encrypted using the government-mandated scheme are themselves ciphertexts of a different encryption scheme. However, the one assumption that we entertain in this paper is that the government-mandated encryption scheme is, in fact, semantically secure: this is a tenable supposition with respect to a government that considers secure encryption to be in its interest, in order to prevent foreign powers from spying on its citizens’ communications.

A natural question then arises: is there any way that the citizens would be able to communicate in a fashion undetectable to the government, based only on the semantic security of the government-mandated encryption scheme, and despite the fact that the government knows the keys and has the ability to decrypt all ciphertexts? (Footnote 1: We note that one could, alternatively, consider an adversary with decryption capabilities arising from possession of some sort of “backdoor.” For the purposes of this paper, we opted for the simpler and still sufficiently expressive model where the adversary’s decryption power comes from knowledge of all the decryption keys.) What can semantic security possibly guarantee in a setting where the adversary has the private keys?

This question may appear to fall broadly within the realm of steganography: the science of hiding secret communications within other innocent-looking communications (called “cover objects”), in an undetectable way. Indeed, it can be shown that if two parties have a shared secret, then based on slight variants of existing techniques for secret-key steganography, they can conduct communications hidden from the government. (Footnote 2: We refer the reader to Section 1.3 for more details.)

However, the question of whether two parties who have never met before can conduct hidden communications is more interesting. This is related to the questions of public-key steganography and steganographic key exchange, which were both first formalized by von Ahn and Hopper [vAH04]. Public-key steganography is inadequate in our setting, since exchanging or publishing public keys is potentially conspicuous and thus not an option. All prior constructions of steganographic key exchange require the initial sampling of a public random string that serves as a public parameter of the steganographic scheme. Intuitively, in these constructions, the public random string can be thought to serve the purpose of selecting a specific steganographic scheme from a family of schemes after the adversary has chosen a strategy. That is, the schemes crucially assume that the adversary (the dystopian government, in our story above) cannot choose its covertext distribution as a function of the public parameter.

It is conservative and realistic to expect a malicious adversary to choose the covertext distribution after the honest parties have decided on their communication protocol (including the public parameters). After all, malice never sleeps [Mic16]. Alas, we show that if the covertext distribution is allowed to depend on the communication protocol, steganographic communication is impossible. In other words, for every purported steganographic communication protocol, there is a covertext distribution (even one with high min-entropy) relative to which the communication protocol fails to embed subliminal messages. The relatively simple counterexample we construct is inspired by the impossibility of deterministic extraction.

Semantic Security to the Rescue? However, this impossibility result does not directly apply to our setting, as our covertext distribution is restricted to be a sequence of ciphertexts (that may encrypt arbitrary messages). Moreover, the ciphertexts are semantically secure against entities that are not privy to the private keys. We define the notion of a subliminal communication scheme (Definition 3.1) as a steganographic communication scheme where security holds relative to covertext distributions that are guaranteed to be ciphertexts of some semantically secure encryption scheme. Is there a way to use semantic security to enable subliminal communication?

Our first answer to this question is negative. In particular, consider the following natural construction: first, fix an extractor function f mapping ciphertexts to bits; then, to subliminally transmit a message bit b, sample encryptions of an (even adversarially prescribed) plaintext, using independent randomness each time, until f(c) = b for the resulting ciphertext c. There are two reasons this idea does not work. First, if the hidden message bit b is not random, the adversary can detect this fact by simply applying the extractor function to the transmitted covertext. Second, the government can pick an adversarial (semantically secure) encryption scheme for which the extractor function is constant on all ciphertexts; this is again similar to the impossibility of deterministic extraction.

Nevertheless, we show how to circumvent these difficulties and use the semantic security of the underlying (adversarially chosen) encryption scheme to construct a subliminal communication scheme.

Theorem 1.1 (Informal version of Theorem 5.1).

Under the decisional Diffie-Hellman (DDH) assumption—or any other assumption that gives rise to a key exchange protocol with messages indistinguishable from random—there is a subliminal communication scheme which allows the transmission of many hidden bits per ciphertext after an initial setup phase of ciphertext exchanges (λ denotes the security parameter).

We then show how to improve our first construction to reduce the length of the setup phase under additional assumptions.

1.1 Overview of Our Construction

The first idea in our construction is implicit in essentially all the works in steganography starting from [Sim83]: namely, to achieve subliminal communication of arbitrary messages, it is sufficient to be able to undetectably communicate uniformly distributed strings of one’s choice. In other words, Alice samples a uniformly random string s and produces some ciphertext(s) to be sent to Bob, such that Bob is able to learn s from them, and yet a PPT eavesdropper Eve who sees the entire communication transcript cannot distinguish between the following two cases:

  1. Alice is indeed sending (hereafter, “embedding”) random strings to Bob, or

  2. Alice is producing ciphertexts using the unmodified government-mandated encryption algorithm, without embedding such random strings.

To be more precise, the indistinguishability requirement holds for any given (adversarially specified) distribution of message sequences that Alice may choose to encrypt using the government-mandated encryption scheme. Notice that this does not preclude Eve from learning s; indeed, our constructions do allow an eavesdropper to learn the embedded strings. Given the ability to undetectably communicate randomly distributed strings, Alice and Bob can then embed to each other the messages of a key-exchange protocol with randomly distributed messages (such as Diffie-Hellman) to establish a shared secret, and then embed to each other ciphertexts of a secret-key encryption scheme with pseudorandom ciphertexts, using the established secret as the key.

All known constructions of such undetectable random string embedding rely on the sampling of a public random seed after the adversarial strategy is fixed. In this paper, however, we are interested in bootstrapping hidden communications from the very ground up, and we are not willing to assume that the parties start from a state where such a seed is already present.

We observe that the ability to embed randomly distributed strings of one’s choice — rather than, e.g., to apply a deterministic function to ciphertexts of the government-mandated encryption scheme, and thereby obtain randomly distributed strings which the creator of the ciphertexts did not choose — is crucial to the above-outlined scheme. The notion of undetectably embedding exogenous random strings — i.e., strings that are randomly distributed outside of Alice’s control, but both Alice and Bob can read them — is seemingly much weaker, and certainly cannot be used to embed key exchange messages or secret-key ciphertexts. However, we observe that this weaker primitive turns out to be achievable, for our specific setting, without the troublesome starting assumption of a public random seed. We identify a method for embedding exogenous random strings into ciphertexts of an adversarially chosen encryption scheme (interestingly, our method does not generalize to embedding into arbitrary min-entropy distributions). We then exploit this method to allow the communicating parties to establish a random seed — from which point they can proceed to embed random strings of their choice, as described above.

In building this weaker primitive, in order to bypass our earlier-described impossibility result, we extract from two ciphertexts at a time, instead of one. We begin with the following simple idea: for each consecutive pair of ciphertexts c₁ and c₂, a single hidden (random) bit is defined by b = Ext(c₁, c₂), where Ext is some two-source extractor. It is initially unclear why this should work, because (1) c₁ and c₂ are encryptions of messages which are potentially dependent, and two-source extractors are not guaranteed to work without independence; and (2) even if this difficulty could be overcome, ciphertexts of a semantically secure encryption scheme can have min-entropy as small as barely super-logarithmic in the security parameter λ, and no two-source extractor known to this day can extract from such small min-entropy.

We overcome difficulty (1) by relying on the semantic security of the ciphertexts of the adversarially chosen encryption scheme. Paradoxically, even though the adversary knows the decryption key, we exploit the fact that semantic security still holds against the extractor, which does not have the decryption key. The inputs in our case are ciphertexts which are not necessarily independent, but semantic security implies that they are computationally indistinguishable from independent ones. Thus, the output of Ext is pseudorandom. Indeed, when Ext outputs a single bit (as in our construction), the output is also statistically close to uniform. The crucial point here is that the semantic security of the encryption scheme is used not against the government, but rather against the extraction function Ext.

Our next observation, to address difficulty (2), is that the ciphertexts are not only computationally independent, but also computationally indistinguishable from i.i.d. In particular, each pair of ciphertexts is indistinguishable from a pair of encryptions of zero, by semantic security. Based on this observation, we can use a very simple “extractor”, namely the greater-than function GT, which compares its two inputs as integers. In fact, GT is an extractor with two input sources whose output bit has negligible bias when the sources have super-logarithmic min-entropy and are independently and identically distributed (this appears to be a folklore observation; see, e.g., [BIW04]). Because of the last condition, GT is not a true two-source extractor according to standard definitions, but it is still suitable for our setting.

By repeatedly extracting random bits from pairs of consecutive ciphertexts using GT, Alice and Bob can construct a shared random string R. Note that in this process, Alice and Bob generate ciphertexts using the unmodified government-mandated encryption scheme, so the indistinguishability requirement clearly holds. We stress again that R is also known to a passive eavesdropper of the communication. This part of our construction, up to the construction of the string R, is presented in detail in Section 5.1. From there, constructing a subliminal communication scheme is not hard: Alice and Bob use R as the seed of a strong seeded extractor to subliminally communicate random strings of their choice, as explained in Section 5.2. The complete description of our protocol is given in Section 5.3.
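To make this seed-establishment step concrete, here is a minimal Python sketch; the function names and the byte-string representation of ciphertexts are illustrative assumptions, not part of the paper's formal protocol.

```python
def gt_bit(c1: bytes, c2: bytes) -> int:
    """Folklore 'greater-than' extractor: interpret two equal-length ciphertexts
    as integers and output 1 iff the first is larger. For i.i.d. sources of
    super-logarithmic min-entropy the bit has negligible bias, since ties (the
    only source of bias) occur with negligible probability."""
    return 1 if int.from_bytes(c1, "big") > int.from_bytes(c2, "big") else 0

def shared_seed_from_transcript(ciphertexts: list) -> list:
    """Alice, Bob, and any eavesdropper can all run this on the public transcript
    of unmodified ciphertexts to derive the same (public) random string R."""
    return [gt_bit(ciphertexts[i], ciphertexts[i + 1])
            for i in range(0, len(ciphertexts) - 1, 2)]
```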

1.2 Improved Constructions for Specific Cases

While our first construction has the advantage of simplicity, the initial phase used to agree on the shared random string R (via the GT function) transmits only one hidden bit per ciphertext of the government-mandated encryption scheme. A natural question is whether this rate of transmission can be improved. We show that if the government-mandated encryption scheme is succinct, in the sense that its ciphertext expansion factor is sufficiently small, then it is possible to improve the rate of transmission in this phase to many hidden bits per ciphertext, using an alternative construction based on the extractor from [DEOR04]. In other words, our first result showed that if the government-mandated encryption scheme is semantically secure, we can use it to communicate subliminally; the second result shows that if the government-mandated encryption scheme is also efficient, that is even better for us, in the sense that it can be used for more efficient subliminal communication.

Theorem 1.2 (Informal version of Theorem 6.1).

If there is a secure key exchange protocol whose message distribution is pseudorandom, then there is a subliminal communication scheme in which a shared seed is established in two exchanges of ciphertexts of a succinct encryption scheme.

Theorem 1.1 exploited the specific nature of the cover object distribution in our setting (specifically, that a sequence of encryptions of arbitrary messages is indistinguishable from an i.i.d. sequence of encryptions of zero). Theorem 1.2 exploits an additional consequence of the semantic security of the government-mandated encryption scheme: if it is succinct, then ciphertexts are computationally indistinguishable from sources of high min-entropy (i.e., they have large HILL-entropy).

It may be possible to use more advanced two-source extractors to work with a larger class of government-mandated encryption schemes (with larger expansion factors); however, the best known such extractors have an inverse polynomial error rate [CZ16] (whereas our construction’s extractor has negligible error). Consequently, designing a subliminal communication protocol using these extractors seems to require additional ideas, and we leave this as an open problem.

Finally, we show yet another approach in cases where the distribution of “innocent” messages to be encrypted under the government-mandated encryption scheme has a certain amount of conditional min-entropy. For such cases, we construct an alternative scheme that leverages the semantic security of the encryption scheme in a rather different way: namely, the key fact for this alternative construction is that (in the absence of a decryption key) a ciphertext appears independent of the message it encrypts. In this case, running a two-source extractor on the message and the ciphertext works. The resulting improvement in the efficiency of the scheme is comparable to that of Theorem 1.2.
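For concreteness, the classical inner-product (Chor–Goldreich) extractor is one standard two-source extractor that could play this role; the following Python sketch is our illustrative choice, not necessarily the extractor used in the paper's construction.

```python
def inner_product_bit(x: bytes, y: bytes) -> int:
    """Inner product over GF(2) of two equal-length strings: the parity of the
    number of bit positions where both inputs are 1. Its output bit is close to
    uniform when the inputs are independent and jointly have enough min-entropy;
    here the two sources would be the innocent message and its ciphertext, which
    semantic security renders (computationally) independent to anyone who does
    not hold the decryption key."""
    acc = 0
    for a, b in zip(x, y):
        acc ^= a & b                # XOR of AND-ed bytes
    return bin(acc).count("1") % 2  # parity is GF(2)-linear, so this equals
                                    # the parity of the full inner product
```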

Theorem 1.3 (Informal version of Theorem 6.2).

If there is a secure key exchange protocol whose message distribution is pseudorandom, then there is a subliminal communication scheme:

  • for any cover distribution consisting of ciphertexts of a semantically secure encryption scheme, if the innocent message distribution has a sufficiently high conditional min-entropy rate, or

  • for any cover distribution consisting of ciphertexts of a semantically secure and succinct encryption scheme, if the innocent message distribution has sufficiently high conditional min-entropy.

In both cases, the shared seed is established during the setup phase in only two exchanges of ciphertexts.

We conclude this introductory section with some discussion of our results in a wider context.

On Our Modeling Assumptions. Our model considers a relatively powerful adversary that, for example, has the ability to choose the encryption scheme with which all parties must communicate, and to decrypt all such communications. We believe that this can be very realistic in certain scenarios, but it is also important to note the limitations that our model places on the adversary.

The most obvious limitation is that the encryption scheme chosen by the adversary must be semantically secure (against third parties that do not have the ability to decrypt). Another assumption is that citizens are able to run algorithms of their choice on their own computers without, for instance, having every computational step monitored by the government. Moreover, citizens may use encryption randomness of their choice when producing ciphertexts of the government-mandated encryption scheme: this is a key fact that our construction exploits. Interestingly, secrecy of the encryption randomness from the adversary is irrelevant: after all, the adversary can always choose an encryption scheme in which the encryption randomness is recoverable given the decryption key. Despite this, the ability of the encryptor to choose the randomness fed to the encryption algorithm can be exploited—as by our construction—to allow for subliminal communication.

The Meaning of Semantic Security when the Adversary Can Decrypt. In an alternate light, our work may be viewed as asking the question: what guarantee, if any, does semantic security provide against an adversary in possession of the decryption key? Our results show, perhaps surprisingly, that semantic security still provides a meaningful guarantee even against an adversary that is able to decrypt: more specifically, any communication channel allowing the transmission of ciphertexts can be leveraged to allow for undetectable communication between two parties who have never met. From this perspective, our work may be viewed as the latest in a scattered series of recent works that consider what guarantees can be provided by cryptographic primitives that are somehow “compromised”—examples of recent works in this general flavor are cited in Section 1.3 below.

Concrete Security Parameters. From a more practical perspective, it may be relevant to consider that the government in our hypothetical Orwellian scenario would be incentivized to instantiate the mandated encryption scheme with the lowest security level that still suffices against foreign powers. In cases where the government considers itself to have more computational power than foreign adversaries (perhaps by a constant factor), this could create an interesting situation in which the security parameter of the government-mandated scheme is below what is necessary to ensure security against the government’s own computational power.

Such a situation could be risky for citizens’ hidden communications: intuitively, our constructions guarantee indistinguishability, against the citizens’ own government, between an “innocent” encrypted conversation and one which is carrying hidden subliminal messages. However, the distinguishing advantage in this indistinguishability game depends on the security parameter of the government-mandated encryption scheme. Thus, it could be that the two distributions are far enough apart for the citizens’ own government to distinguish them (though not for foreign governments). We observe that citizens cognizant of this situation can further reduce the distinguishing advantage beyond that provided by our basic construction, using the standard technique of amplifying a distribution’s closeness to uniform by taking the XOR of several independent samples.
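For intuition, the standard piling-up bound quantifies this XOR amplification (a sketch in our own notation, not taken from the paper): for independent bits $X_1,\dots,X_k$, each within bias $\varepsilon$ of uniform,

$$\Bigl|\Pr\Bigl[\textstyle\bigoplus_{i=1}^{k} X_i = 1\Bigr] - \tfrac{1}{2}\Bigr| \;=\; 2^{k-1}\prod_{i=1}^{k}\Bigl|\Pr[X_i = 1] - \tfrac{1}{2}\Bigr| \;\le\; \tfrac{1}{2}\,(2\varepsilon)^{k},$$

so the distinguishing advantage attributable to each extracted bit shrinks exponentially in the number of samples XORed together.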

Having outlined this potential concern and solution, in the rest of the paper we will disregard these issues in the interest of clarity of exposition, and present a purely asymptotic analysis.

Open Problems. Our work suggests a number of open problems. A natural one is the extent to which the modeling assumptions that this work makes — such as the ability of honest encryptors to use true randomness for encryption — can be relaxed or removed, while preserving the ability to communicate subliminally. For example, one could imagine yet another alternate universe, in which the hypothetical Orwellian government not only mandates that citizens use the prescribed encryption scheme, but also that their encryption randomness must be derived from a specific government-mandated pseudorandom generator.

The other open problems raised by our work are of a more technical nature and better understood in the context of the specific details of our constructions; for this reason we defer their discussion to Section 7.

1.3 Other Related Work

The scientific study of steganography was initiated by Simmons more than thirty years ago [Sim83]; to our knowledge, this is the earliest mention of the term “subliminal channel,” referring to the conveyance of information in a cryptosystem’s output in a way that differs from the intended output. (Footnote 3: This phrasing is loosely borrowed from [YY97].) Subsequent works such as [Cac98, Mit99, ZFK98] initially explored information-theoretic treatments of steganography, and Hopper, Langford, and von Ahn [HLv02] gave the first complexity-theoretic (secret-key) treatment almost two decades later. Public-key variants of steganographic notions—namely, public-key steganography and steganographic key exchange—were first defined by [vAH04]. There is very little subsequent literature on public-key steganographic primitives; one notable example is by Backes and Cachin [BC05], which considers public-key steganography against active attacks (their attack model, which is stronger than that of [vAH04], was also considered in [HLv02] but had never been applied to the public-key setting).

The alternative perspective of our work as addressing the question of whether any sort of secret communication can be achieved via transmission of ciphertexts of an adversarially designed cryptosystem alone fits into a scattered series of recent works that consider what guarantees can or cannot be provided by compromised cryptographic primitives. For example, Goldreich [Gol11], and later, Cohen and Klein [CK16], consider what unpredictability guarantee is achieved by the classic GGM construction [GGM86] when the traditionally secret seed is known; Austrin et al. [ACM14] study whether certain cryptographic primitives can be secure even in the presence of an adversary that has limited ability to tamper with honest parties’ randomness; Dodis et al. [DGG15] consider what cryptographic primitives can be built based on backdoored pseudorandom generators; and Bellare, Jaeger, and Kane [BJK15] present attacks that work against any symmetric-key encryption scheme, that completely compromise security by undetectably corrupting the algorithms of the encryption scheme (such attacks might, for example, be feasible if an adversary could generate a bad version of a widely used cryptographic library and install it on his target’s computer).

The last work mentioned above, [BJK15], is actually part of the broader field of kleptography, originally introduced by Young and Yung [YY97, YY96b, YY96a], which is also relevant context for the present work. Broadly speaking, a kleptographic attack “uses cryptography against cryptography” [YY97], i.e., it changes the behavior of a cryptographic system in a fashion undetectable to an honest user with black-box access to the cryptosystem, such that the use of the modified system leaks some secret information (e.g., plaintexts or key material) to the attacker who performed the modification. An example of such an attack might be to modify the key generation algorithm of an encryption scheme such that an adversary in possession of a “back door” can derive the private key from the public key, yet an honest user finds the generated key pairs to be indistinguishable from correctly produced ones. Kleptography has enjoyed renewed research activity since [BPR14] introduced a formal model of a specific type of kleptographic attack called algorithm substitution attacks (ASAs), motivated by recent revelations suggesting that intelligence agencies have successfully implemented attacks of this nature at scale. Recently, [BL17] formalized an equivalence between certain variants of ASAs and steganography.

Our setting differs significantly from kleptography in that the encryption algorithms are public and not tampered with (i.e., adhere to a purported specification), and in fact may be known to be designed by an adversarial party.

2 Preliminaries

Notation.

λ is the security parameter throughout. PPT means “probabilistic polynomial time.” [n] denotes the set {1, …, n}. U_n is a uniform random variable over {0,1}^n, independent of every other variable in this paper. We write X ≡ Y to express that X and Y are identically distributed. Given two variables X and Y over a set Ω, we denote by Δ(X, Y) the statistical distance, defined by:

Δ(X, Y) = (1/2) · Σ_{ω ∈ Ω} |Pr[X = ω] − Pr[Y = ω]|.

For a random variable X, we define the min-entropy of X by H_∞(X) = −log max_x Pr[X = x]. The collision probability is CP(X) = Σ_x Pr[X = x]².

2.1 Encryption and Key Exchange

We assume familiarity with the standard notions of semantically secure public-key and private-key encryption, and key exchange. This subsection defines notation and additional terminology.

Public-Key Encryption. We use the notation PKE = (Gen, Enc, Dec) for the public-key encryption scheme mandated by the adversary.

Secret-key Encryption. We write SKE = (Gen, Enc, Dec) to denote a secret-key encryption scheme. We define a pseudorandom secret-key encryption scheme to be a secret-key encryption scheme whose ciphertexts are indistinguishable from random strings. It is a standard result that pseudorandom secret-key encryption schemes can be built from one-way functions.
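As an illustration of such a scheme, here is a minimal Python sketch; instantiating the PRF with HMAC-SHA256 is our assumption for the example, not a choice made in the paper.

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Counter-mode keystream from HMAC-SHA256 used as a PRF (assumption of this
    # sketch; any PRF would do).
    out, ctr = b"", 0
    while len(out) < n:
        out += hmac.new(key, nonce + ctr.to_bytes(8, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def ske_encrypt(key: bytes, msg: bytes) -> bytes:
    """Ciphertext = nonce || (keystream XOR msg): a uniformly random nonce
    followed by a pseudorandom pad, so the whole ciphertext is indistinguishable
    from a random string of the same length."""
    nonce = os.urandom(16)
    pad = _keystream(key, nonce, len(msg))
    return nonce + bytes(a ^ b for a, b in zip(pad, msg))

def ske_decrypt(key: bytes, ct: bytes) -> bytes:
    nonce, body = ct[:16], ct[16:]
    return bytes(a ^ b for a, b in zip(_keystream(key, nonce, len(body)), body))
```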

Key Exchange. We define a pseudorandom key-exchange protocol to be a key-exchange protocol whose transcripts are distributed indistinguishably from random messages. Recall that the standard security guarantee for key-exchange protocols requires that (T, K) ≈_c (T, K′), where T is a key-exchange protocol transcript, K is the shared key established in T, and K′ is a random unrelated key. A pseudorandom key-exchange protocol instead requires that (T, K) ≈_c (U, K′), where U is the uniform distribution over strings of the appropriate length.

Most known key agreement protocols are pseudorandom; in fact, most have truly random messages. This is the case, for example, for the classical protocol of Diffie and Hellman [DH76].
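A minimal Diffie-Hellman sketch illustrating the random-message property follows; the group parameters (p, g) are assumed to come from a standardized safe-prime group (e.g., an RFC 3526 MODP group) and are left as inputs.

```python
import secrets

def dh_message(p: int, g: int) -> tuple:
    """Sample a secret exponent x and the public message g^x mod p. The message
    is (essentially) uniform over the subgroup generated by g, which is the
    'random-looking message' property relied on above."""
    x = secrets.randbelow(p - 2) + 1
    return x, pow(g, x, p)

def dh_shared_key(my_secret: int, their_message: int, p: int) -> int:
    """Both parties derive the same g^(xy) mod p from the exchanged messages."""
    return pow(their_message, my_secret, p)
```

Under the DDH assumption the established key is indistinguishable from an unrelated group element, and the transcript consists of two group elements carrying no visible structure.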

2.2 Extractors

We will need the following definitions of two-source and seeded extractors.

Definition 2.1.

The family Ext = {Ext_λ : {0,1}^{n(λ)} × {0,1}^{n(λ)} → {0,1}^{m(λ)}} is a (k(λ), ε(λ)) two-source extractor if for all λ and for all pairs (X, Y) of independent random variables over {0,1}^{n(λ)} such that H_∞(X) ≥ k(λ) and H_∞(Y) ≥ k(λ), it holds that:

Δ(Ext_λ(X, Y), U_{m(λ)}) ≤ ε(λ).     (1)

We say that Ext is strong w.r.t. the first input if it satisfies the following stronger property:

Δ((X, Ext_λ(X, Y)), (X, U_{m(λ)})) ≤ ε(λ).

A strong two-source extractor w.r.t. the second input is defined analogously. Finally, we say that Ext is a same-source extractor if (1) is only required to hold when (X, Y) is a pair of i.i.d. random variables with H_∞(X) ≥ k(λ).

Definition 2.2.

The family Ext = {Ext_λ : {0,1}^{d(λ)} × {0,1}^{n(λ)} → {0,1}^{m(λ)}} is a (k(λ), ε(λ)) seeded extractor if for all λ and any random variable X over {0,1}^{n(λ)} such that H_∞(X) ≥ k(λ), it holds that:

Δ(Ext_λ(U_{d(λ)}, X), U_{m(λ)}) ≤ ε(λ).

We say moreover that Ext is strong if it satisfies the following stronger property, in which S denotes the (uniformly random) seed:

Δ((S, Ext_λ(S, X)), (S, U_{m(λ)})) ≤ ε(λ).

3 Subliminal Communication

Conversation Model. The protocols we will construct take place over a communication channel between two parties, A and B, who alternately send each other ciphertexts of a public-key encryption scheme. W.l.o.g., we assume that A initiates the communication, and that communication occurs over a sequence of exchange-rounds, each of which comprises two sequential messages: in each exchange-round, one party sends a message to the other, and the other then sends a message back. For each exchange-round i, we keep track of

the plaintext transcripts available to A and B, respectively, during exchange-round i, in the case when A sends the first message of the round. (Footnote 4: If instead B spoke first in round i, the transcripts are defined symmetrically, with B's transcript containing the round-i message and A's not yet containing it.) The starting transcripts are defined to be empty lists. (Note that when a notation contains both a party subscript and a round subscript, the two subscripts are written in blue and red respectively, to improve readability.)

Recall that our adversary has the power to decrypt all ciphertexts under its chosen public-key encryption scheme PKE. Intuitively, it is therefore important that the plaintext conversation between A and B appears innocuous (and does not, for example, consist of ciphertexts of another encryption scheme). To model this, we assume the existence of a next-message distribution M, which outputs the next innocuous message given the transcript of the plaintext conversation so far; sampling from it is denoted by m ← M(transcript).

Remark 1.

We emphasize that our main results make no assumptions at all on the distribution M, and require only that the parties have oracle access to their own next-message distributions. Our main results hold in the presence of arbitrary message distributions: for example, they hold even in the seemingly inauspicious case when M is constant, meaning the parties are restricted to repeatedly exchanging a fixed message.

In Section 6, we discuss other, more efficient constructions that can be used in settings where a stronger assumption — namely, that M has a certain amount of min-entropy — is acceptable. This stronger assumption, while not without loss of generality, might be rather benign in certain contexts (for example, if the messages exchanged are images).

In all the protocols we consider, the symbol st is used to denote internal state kept locally by A and B. It is implicitly assumed that each party’s state contains an up-to-date transcript of all messages received during the protocol. Parties may additionally keep other information in their internal state, as a function of the local computations they perform. For each party and each exchange-round i, we refer to that party’s state at the conclusion of exchange-round i. Initial states are empty.

We begin with a simpler definition that only syntactically allows for the transmission of a single message (Definition 3.1). This both serves as a warm-up to the multi-message definition presented next (Definition 3.3), and will be used in its own right to prove impossibility results. See Remark 2 for further discussion of the relationship between these two definitions.

Definition 3.1.

A subliminal communication scheme is a two-party protocol, specified by a number r of exchange-rounds and a collection of PPT algorithms, each with oracle access to the algorithms of a public-key encryption scheme PKE. Party A is assumed to receive as input a message m* (of at least one bit) that is to be conveyed to B in an undetectable fashion. The round-i algorithms are used by A and B, respectively, in exchange-round i, and a final output algorithm is run by B to produce an output at the end of the protocol.

A subliminal communication scheme must satisfy the following syntax, correctness and security guarantees.

  • Syntax. In each exchange-round i ∈ [r]:

    A performs the following steps:

    1. Sample an “innocuous message” from the next-message distribution M.

    2. Generate a ciphertext and an updated state by running its round-i algorithm (which takes the hidden message m* as an input).

    3. Locally store the updated state and send the ciphertext to B.

    Then, B performs the following steps. (Footnote 5: The steps executed by A and B are entirely symmetric except in two respects: first, the hidden message m* is an input to A's step 2 but not to B's; and second, the state used in B's step 2 contains the round-i message received from A, whereas the state used in A's step 2 depends only on the transcript up to round i−1.)

    1. Sample an “innocuous message” from the next-message distribution M.

    2. Generate a ciphertext and an updated state by running its round-i algorithm.

    3. Locally store the updated state and send the ciphertext to A.

    After r exchange-rounds, B computes its output by running the output algorithm on its final state, and halts.

  • Correctness. For any hidden message m*, if A and B play honestly, then B’s output equals m* with overwhelming probability. The probability is taken over the key generation and the randomness of the protocol algorithms, as well as the message distribution M.

  • Subliminal Indistinguishability. For any semantically secure public-key encryption scheme PKE, any hidden message m*, and any next-message distribution M, for (PK, SK) ← Gen(1^λ), the following two distributions are computationally indistinguishable:

    Subliminal execution: A and B run the protocol on the hidden message m*, sampling their innocuous messages from M; output the resulting sequence of ciphertexts (together with the keys).

    Innocent execution: in every round, each party samples an innocuous message from M and encrypts it directly under PKE, without running the protocol; output the resulting sequence of ciphertexts (together with the keys).

    If the subliminal indistinguishability requirement is satisfied only for next-message distributions in a restricted set D, rather than for all M, then the protocol is said to be a subliminal communication scheme for D.

Definition 3.2.

The rate of a subliminal communication protocol is defined as |m*|/(2r), where r is the number of exchange-rounds, as in Definition 3.1. (Footnote 6: The factor of two comes from the fact that each exchange-round contains two messages.) This is the average number of bits which are subliminally communicated per ciphertext of PKE.

For simplicity, Definition 3.1 presents a communication scheme in which only a single hidden message is transmitted. More generally, it is desirable to transmit multiple messages, bidirectionally, and perhaps in an adaptive manner. (Footnote 7: That is, the messages to be transmitted may become known as the protocol progresses, rather than all being known at the outset. This is the case, for example, if future messages depend on responses to previous ones.) In multi-message schemes, it may be beneficial for efficiency that the protocol have a two-phase structure where some initial preprocessing is done in the first phase, and the second phase can thereafter be invoked many times to transmit different hidden messages. (Footnote 8: As a concrete example: consider a simple protocol for transmitting a single encrypted message, consisting of key exchange followed by the transmission of the message encrypted under the established key. When adapting this protocol to support multiple messages, it is beneficial to split the protocol into a one-time “phase 1” consisting of key exchange, and a “phase 2” encompassing the ciphertext transmission, which can be invoked many times on different messages using the same phase-1 key. Such a protocol has much better amortized efficiency than simply repeating the single-message protocol many times, i.e., establishing a new key for each ciphertext.) This will be a useful notion later in the paper, for our constructions, so we give the definition of a multi-message scheme here.

Definition 3.3.

A multi-message subliminal communication scheme is a two-party protocol defined by a pair (Setup, Comm), where Setup (“Setup Phase”) and Comm (“Communication Phase”) each define a two-party protocol. Each party outputs a state at the end of Setup, which it uses as an input in each subsequent invocation of Comm. An execution of a multi-message subliminal communication scheme consists of an execution of Setup followed by one or more executions of Comm. More formally:

let r_Setup and r_Comm denote the number of exchange-rounds in Setup and Comm, respectively, where each algorithm of the two sub-protocols is a PPT algorithm with oracle access to the algorithms of a public-key encryption scheme PKE. The protocol must satisfy the following syntax, correctness and security guarantees.

  • Syntax. In each exchange-round of Setup, A executes the following steps, and then B executes the same steps:

    1. Sample an “innocuous message” from the next-message distribution M.

    2. Generate a ciphertext and an updated state.

    3. Locally store the updated state and send the ciphertext to the other party.

    After the completion of Setup, either party may initiate Comm by sending a first message of that protocol (with respect to a message to be steganographically hidden, known to the initiating party). Let P_S and P_R denote the initiating and non-initiating parties in an execution of Comm, respectively. (Footnote 9: The subscripts stand for “sender” and “receiver,” respectively.) Let m* be the hidden message that P_S is to transmit to P_R in an undetectable fashion during an execution of Comm.

    The execution of Comm proceeds as follows over its exchange-rounds:

    • P_S acts as follows:

      1. Sample an “innocuous message” from the next-message distribution M.

      2. Generate a ciphertext and an updated state, using the hidden message m* and the state output by Setup.

      3. Locally store the updated state and send the ciphertext to P_R.

    • P_R acts as follows:

      1. Sample an “innocuous message” from the next-message distribution M.

      2. Generate a ciphertext and an updated state, using the state output by Setup.

      3. Locally store the updated state and send the ciphertext to P_S.

    At the end of an execution of Comm, P_R computes its output (the received hidden message).

  • Correctness. For any hidden message, if P_S and P_R execute honestly, then for every execution of Comm, the transmitted and received hidden messages are equal with overwhelming probability. The probability is taken over the key generation and the randomness of the protocol algorithms, as well as the message distribution M.

  • Subliminal Indistinguishability. For any semantically secure public-key encryption scheme PKE, any polynomial number of executions of Comm, any sequence of hidden messages, any assignment of which party initiates each execution, and any next-message distribution M, the following two distributions are computationally indistinguishable:

    Subliminal execution: the parties run Setup and then the prescribed sequence of executions of Comm on the given hidden messages, sampling their innocuous messages from M; output the resulting sequence of ciphertexts (together with the keys).

    Innocent execution: in every round, each party samples an innocuous message from M and encrypts it directly under PKE; output the resulting sequence of ciphertexts (together with the keys).

    If the subliminal indistinguishability requirement is satisfied only for next-message distributions in a restricted set D, rather than for all M, then the protocol is said to be a multi-message subliminal communication scheme for D.

Definition 3.4.

The asymptotic rate of a multi-message subliminal communication protocol is defined as the number of hidden-message bits transmitted per execution of Comm divided by 2·r_Comm, where r_Comm is defined as in Definition 3.3. The asymptotic rate is the average number of bits which are subliminally communicated per ciphertext exchanged between A and B after the one-time setup phase is completed.

Definition 3.5.

The setup cost of a multi-message subliminal communication protocol is defined as 2·r_Setup, i.e., twice the number of exchange-rounds in Setup. The setup cost is the number of ciphertexts which must be sent back and forth between A and B in order to complete the setup phase.

Remark 2.

Definition 3.3 is equivalent to Definition 3.1 in the sense that the existence of any single-message scheme trivially implies a multi-message scheme and vice versa. We present Definition 3.3 as it will be useful for presenting and analyzing asymptotic efficiency of our constructions, but note that this equivalence means that the simpler Definition 3.1 suffices in the context of impossibility (or possibility) results, such as that given in Section 4.

4 Impossibility Results

4.1 Locally Decodable Subliminal Communication Schemes

A first attempt at achieving subliminal communication might consider schemes with the following natural property: the receiving party extracts hidden bits one ciphertext at a time, by the application of a single (possibly randomized) decoding function. We refer to such schemes as locally decodable and our next impossibility theorem shows that non-trivial locally decodable schemes do not exist if the encryption scheme is chosen adversarially.

Theorem 4.1.

For any locally decodable protocol Π satisfying the syntax of a single-message subliminal communication scheme (Footnote 10: Remark 2 discusses the sufficiency of proving impossibility for single-message schemes.), there exists a semantically secure public-key encryption scheme, dependent on the public randomness of Π, relative to which Π violates the correctness condition of Definition 3.1. Therefore, no locally decodable protocol is a subliminal communication scheme.

Proof.

Let us consider a locally decodable scheme as in the statement of the theorem, and let us denote by Dec(c; ρ) the decoding function of the scheme, where the second input ρ consists of random bits (the public randomness) and the first input is a ciphertext c. Since we allow the encryption scheme to depend on the public randomness of the subliminal scheme, fix ρ and define the function f(c) = Dec(c; ρ). Now f is a deterministic function of the ciphertext, and we conclude the proof by constructing an encryption scheme which biases the output of f arbitrarily close to a constant bit. This is a contradiction, since by correctness and subliminal indistinguishability, f should have negligible bias when subliminally communicating a uniformly random message bit.

Let (Gen, Enc, Dec) be a semantically secure encryption scheme. Without loss of generality we assume that for at least half of the messages m in the message space, we have Pr[f(Enc(m)) = 1] ≥ 1/2 (otherwise we can just replace 1 by 0 in the construction below). We now define an encryption scheme which is identical to (Gen, Enc, Dec) except for its encryption algorithm Enc′, which on input m runs as follows, for some constant T.

  1. Repeat at most T times:

    (a) Sample an encryption c ← Enc(m).

    (b) If f(c) = 1, exit the loop; otherwise, continue.

  2. Output c.

It is clear that the modified scheme is also semantically secure: oracle access to Enc′ can be simulated with oracle access to Enc, so a distinguisher which breaks the semantic security of the modified scheme can also be used to break the semantic security of the original scheme. Finally, for a message m such that Pr[f(Enc(m)) = 1] ≥ 1/2, by definition of Enc′, it holds that

Pr[f(Enc′(m)) = 1] ≥ 1 − 2^{−T}.

This shows that the output of f can be biased arbitrarily close to a constant, which concludes the proof. ∎
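The following Python sketch illustrates the counterexample from the proof; `enc` and `f` are hypothetical stand-ins for an arbitrary semantically secure encryption algorithm and an arbitrary fixed decoding function.

```python
import secrets

def biased_encrypt(enc, msg, f, trials: int = 64):
    """Wrap an arbitrary encryption algorithm `enc` so that the fixed decoding
    function `f` outputs 1 on the produced ciphertext except with probability
    about 2^-trials. Only oracle access to `enc` is used, so semantic security
    is preserved, yet the 'hidden bit' read off by f is now (almost) constant."""
    c = enc(msg)
    for _ in range(trials):
        if f(c) == 1:
            break
        c = enc(msg)              # fresh encryption randomness on every attempt
    return c

# Hypothetical stand-ins: `enc` produces random-looking ciphertexts, `f` reads
# the low-order bit of the last byte.
if __name__ == "__main__":
    enc = lambda m: secrets.token_bytes(32)
    f = lambda c: c[-1] & 1
    bits = [f(biased_encrypt(enc, b"hello", f)) for _ in range(1000)]
    print(sum(bits) / len(bits))  # close to 1.0: the extracted bit is now biased
```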

Remark 3.

The essence of the above theorem is the impossibility of deterministic extraction: no single deterministic function can extract randomness from ciphertexts of arbitrary encryption schemes. The way to bypass this impossibility is to have the extractor depend on the encryption scheme. Note that multiple-source extraction, which is used in our constructions in the subsequent sections, implicitly does depend on the underlying encryption scheme, since the additional sources of input depend on the encryption scheme and can thus be thought of as “auxiliary input” that is specific to the encryption scheme at hand.

4.2 Steganography for Adversarial Cover Distributions

Our second impossibility result concerns a much more general class of communication schemes, which we call steganographic communication schemes. Subliminal communication schemes, as well as the existing notions of public-key steganography and steganographic key exchange from the steganography literature, are instantiations of the more general definition of a (multi-message) steganographic communication scheme. To our knowledge, the general notion of a steganographic communication scheme has not been formalized in this way in prior work. In the context of this work, the general definition is helpful for proving broad impossibilities across multiple types of steganographic schemes.

As mentioned in the introduction, a limitation of all existing results in the steganographic literature, to our knowledge, is that they assume that the cover distribution (i.e., the distribution of innocuous objects in which steganographic communication is to be embedded) is fixed a priori. In particular, the cover distribution is assumed not to depend on the description of the steganographic communication scheme. The impossibility result given in Section 4.1 is an illustrative example of the power of adversarially choosing the cover distribution: Theorem 4.1 says that by choosing the encryption scheme to depend on a given subliminal communication scheme, an adversary can rule out the possibility of any hidden communication at all.

Our next impossibility result (Theorem 4.2) shows that if the cover distribution is chosen adversarially, then non-trivial steganographic communication is impossible.

Theorem 4.2.

Let Π be a steganographic communication scheme. Then, for any desired min-entropy bound, there exists a cover distribution of that conditional min-entropy such that the steganographic indistinguishability of Π does not hold for more than one message.

In Appendix A, we give the formal definition of a steganographic communication scheme, along with the proof of Theorem 4.2. We have elected to present these in the appendix as the definition introduces a set of new notation only used for the corresponding impossibility result, and both the definition and the impossibility result are somewhat tangential to the main results of this work, whose focus is on subliminal communication schemes.

5 Construction of the Subliminal Scheme

The goal of this section is to establish the following theorem, which states that our construction is a subliminal communication scheme when instantiated with a pseudorandom key-exchange protocol (such as Diffie-Hellman).

Theorem 5.1.

The protocol given in Definition 5.13, when instantiated with a pseudorandom key-exchange protocol, is a multi-message subliminal communication scheme.

The detailed description and proofs of security and correctness of our scheme can be found in the following subsections. Our construction makes no assumption on the message distribution, and in particular it holds even when the exchanged plaintexts (of the adversarially mandated encryption scheme) are a fixed, adversarially chosen sequence of messages. An informal outline of the construction is given next.

Definition 5.2.

Outline of the construction.

  1. Setup Phase

    1. A d-bit string R is established between A and B by extracting randomness from pairs of consecutive ciphertexts. (Protocol overview in Section 5.1.)

    2. Let Ext be a strong seeded extractor, and let R serve as its seed. By rejection-sampling ciphertexts c until Ext(R, c) equals the desired value, either party can embed a random string of their choice in the conversation. (Protocol overview in Section 5.2.) By embedding in this manner the messages of a pseudorandom key-exchange protocol, both parties establish a shared secret K. (Footnote 11: Note that the embedded random string is known to an eavesdropper who has knowledge of the seed R. Nonetheless, (1) the established secret K is unknown to the eavesdropper by the security of the key-exchange protocol, and (2) the transcript is indistinguishable to the eavesdropper from one in which no key exchange occurred at all, due to the pseudorandomness of the key-exchange messages.)

  2. Communication Phase
    Both parties can now communicate arbitrary messages of their choice by (1) encrypting them using a pseudorandom secret-key encryption scheme with K as the secret key, and (2) embedding the resulting ciphertexts using the rejection-sampling technique described in Step 1b. (Footnote 12: Again, an eavesdropper could know the ciphertexts exchanged, if he knew the seed R, but could not distinguish these ciphertexts from truly random strings, and thus could not tell whether any subliminal communication was occurring at all. Cf. footnote 11.) (Detailed protocol in Section 5.3.)

The full protocol is given, and proven to be a subliminal communication scheme, in Section 5.3.

5.1 Establishing a Shared Seed

In this section, we give a protocol which allows A and B to establish a random public parameter R which will be used in subsequent phases of our subliminal scheme. As such, this step can be thought of as drawing a subliminal scheme at random from a family of subliminal schemes. The parameter is public in the sense that anyone eavesdropping on the channel between A and B gains knowledge of it. A crucial point is that the random draw occurs after the adversarial encryption scheme is fixed, thus bypassing the impossibility results of Section 4.

Our strategy is simple: extract randomness from pairs of ciphertexts. Since the extractor does not receive the key, semantic security holds with respect to the extractor: a pair of ciphertexts encrypting two arbitrary messages is indistinguishable from two encryptions of a fixed message; thus, a same-source extractor suffices for our purposes (see Lemma 5.6). Even though semantic security guarantees only super-logarithmic min-entropy of ciphertexts (see Lemma 5.4), we will be able to make use of the “greater-than” extractor (Definition 5.3) applied to pairs of ciphertexts, and obtain Theorem 5.8.
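In symbols, the chain of reductions underlying this strategy is (our notation; $m, m'$ are arbitrary plaintexts and $\mathsf{Ext}$ has single-bit output):

$$\bigl(\mathsf{Enc}_{pk}(m), \mathsf{Enc}_{pk}(m')\bigr) \;\approx_c\; \bigl(\mathsf{Enc}_{pk}(0), \mathsf{Enc}_{pk}(0)\bigr) \;\Longrightarrow\; \mathsf{Ext}\bigl(\mathsf{Enc}_{pk}(m), \mathsf{Enc}_{pk}(m')\bigr) \;\approx_c\; \mathsf{Ext}\bigl(\mathsf{Enc}_{pk}(0), \mathsf{Enc}_{pk}(0)\bigr) \;\approx_s\; U_1,$$

and since a one-bit value that is computationally indistinguishable from uniform is also statistically close to uniform (the optimal one-bit distinguisher is efficient), the extracted bit is statistically close to uniform as well.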

Definition 5.3.

The greater-than extractor GT : {0,1}^n × {0,1}^n → {0,1} is defined by GT(x, y) = 1 if x > y (interpreting the inputs as integers) and GT(x, y) = 0 otherwise.

Lemma 5.4 (Ciphertexts have super-logarithmic min-entropy).

Let (Gen, Enc, Dec) be a semantically secure encryption scheme. Then there exists a negligible function ν such that for all λ and all messages m, writing (PK, SK) ← Gen(1^λ) and C ← Enc_PK(m):

max_c Pr[C = c] ≤ ν(λ),  i.e., H_∞(C) ≥ log(1/ν(λ)) = ω(log λ).

Proof.

In Appendix B. ∎

Given that ciphertexts of semantically secure encryption schemes have min-entropy , we will consider extractors which have negligible bias on such sources. This motivates the following definition.

Definition 5.5.

Let Ext be a two-source extractor. We say that Ext is an extractor for super-logarithmic min-entropy if Ext is a (k(λ), ε(λ)) extractor, for some negligible ε, for every min-entropy bound k(λ) = ω(log λ). In particular, for any negligible function ν, there exists a negligible function ε such that Ext is a (log(1/ν(λ)), ε(λ)) extractor.

The following lemma shows that the output of a same-source extractor for super-logarithmic min-entropy on two ciphertexts is statistically indistinguishable from uniform, even in the presence of the key.

Lemma 5.6.

Let (Gen, Enc, Dec) be a semantically secure encryption scheme with ciphertext length n(λ), and let Ext be a same-source extractor for super-logarithmic min-entropy with single-bit output. Then there exists a negligible function ν such that, for any λ and any pair of messages m, m′, writing (PK, SK) ← Gen(1^λ), C ← Enc_PK(m), and C′ ← Enc_PK(m′):

Δ((PK, SK, Ext(C, C′)), (PK, SK, U_1)) ≤ ν(λ).

Proof.

We will prove that for any polynomial p and all large enough λ:

Assume by contradiction that there exists and an infinite set such that for :

(2)

We now construct an adversary distinguishing between and with non-negligible advantage. On input , runs as follows:

  1. Sample two encryptions of : , and .

  2. If output 1, otherwise output 0.

First, note that on input , outputs 1 iff a collision occurs at step 2. By (2), with probability at least over the draw of the , a collision occurs with probability at least . Otherwise, a collision occurs with probability at least . Overall, for :

(3)

By Lemma 5.4, after conditioning on the event that , the guarantee of applies to a pair of independent encryptions of under and we obtain, for large enough :

This implies that for large enough :

(4)

Together, (3) and (4) imply, after choosing , that for large enough :

This contradicts the semantic security of the encryption scheme and concludes the proof. ∎

Finally, we observe that the “greater-than” extractor is a same-source extractor for super-logarithmic min-entropy (Lemma 5.7). To the best of our knowledge, this is a folklore fact which is for example mentioned in [BIW04].

Lemma 5.7.

For any k(λ) = ω(log λ), GT is a (k(λ), ε(λ)) same-source extractor with ε negligible; in other words, GT is a same-source extractor for super-logarithmic min-entropy.

Proof.

In Appendix C. ∎
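For intuition, a sketch of the folklore argument in our notation (the full proof is in Appendix C): for i.i.d. $X, Y$, symmetry gives $\Pr[X > Y] = \Pr[Y > X]$, so

$$\Bigl|\Pr[\mathrm{GT}(X, Y) = 1] - \tfrac{1}{2}\Bigr| \;=\; \tfrac{1}{2}\Pr[X = Y] \;=\; \tfrac{1}{2}\,\mathrm{CP}(X) \;\le\; \tfrac{1}{2}\cdot 2^{-H_\infty(X)},$$

which is negligible whenever $H_\infty(X) = \omega(\log \lambda)$.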

We now conclude this section with a full description of our method for establishing the public parameter R introduced in Step 1a.

Theorem 5.8.

Let (Gen, Enc, Dec) be a semantically secure public-key encryption scheme and let d = d(λ) be the desired seed length. Define random variables as follows.

  • Let (PK, SK) ← Gen(1^λ).

  • For i ∈ [2d], let C_i ← Enc_PK(m_i), representing the ciphertexts exchanged between A and B during the first d exchange-rounds (the plaintexts m_i may be arbitrary).

  • Let R = (GT(C_1, C_2), GT(C_3, C_4), …, GT(C_{2d−1}, C_{2d})).

There exists a negligible function ν such that:

Δ((PK, SK, R), (PK, SK, U_d)) ≤ ν(λ).

Proof.

Writing K = (PK, SK), we have:

Δ((K, R), (K, U_d)) ≤ Σ_{i=1}^{d} Δ((K, GT(C_{2i−1}, C_{2i})), (K, U_1)) ≤ d · ν′(λ),

where the first inequality follows by a hybrid argument using the independence of the ciphertexts conditioned on the keys, and the second inequality follows by Lemma 5.6, with ν′ the negligible function given there; since d is polynomial in λ, the bound is negligible. ∎

Remark 4.

In the construction of Theorem 5.8, the ciphertexts exchanged between A and B are sent without any modification, so subliminal indistinguishability clearly holds at this point.

5.2 Embedding Random Strings

In this section, we assume that both parties have access to a public parameter R and construct a protocol which allows for the embedding of uniformly random strings into ciphertexts of an adversarially chosen encryption scheme PKE, as required by Steps 1b and 2 of the construction outline (Definition 5.2). The security guarantee is that, for a uniformly random parameter R and uniformly random strings to be embedded, the ciphertexts of PKE with embedded random strings are indistinguishable from ciphertexts of PKE produced by direct application of the encryption algorithm, even to an adversary who knows the decryption keys. This can be thought of as a relaxation of subliminal indistinguishability (Definition 3.1), where the two main differences are that (1) the parties have shared knowledge of a random seed, and (2) indistinguishability only holds when embedding a random string, rather than an arbitrary one. We first present a construction to embed logarithmically many random bits (Theorem 5.9) and then show how to sequentially compose it to embed arbitrarily (polynomially) many random bits (Theorem 5.10). These constructions rely on a strong seeded extractor that can extract logarithmically many bits from sources of super-logarithmic min-entropy. Almost universal hashing is a simple such extractor, as stated in Proposition 5.11.

Theorem 5.9.

Let Ext be a strong seeded extractor for super-logarithmic min-entropy with seed length d(λ) and output length ℓ(λ) = O(log λ), and let (Gen, Enc, Dec) be a semantically secure encryption scheme. Let the rejection sampler RS be defined as in Algorithm 1; then the following guarantees hold:

Public parameter: R (a d-bit seed).
Input: (m, x), where m is the plaintext to encrypt and x ∈ {0,1}^ℓ is the string to be embedded.

  1. Generate an encryption c ← Enc(m), using fresh randomness.

  2. If Ext(R, c) = x, then output c. Else, go back to step 1.

Algorithm 1: Rejection sampler RS
  1. Correctness: for any seed R and any string x, if c ← RS(R, (m, x)), then Ext(R, c) = x.

  2. Security: there exists a negligible function ν such that, writing R ← U_d and X ← U_ℓ, the following holds:

     Δ((R, RS(R, (m, X))), (R, Enc(m))) ≤ ν(λ).

Proof.

Define C ← Enc(m), an encryption of m independent of R and X. By definition of rejection sampling, RS(R, (m, x)) is distributed as C conditioned on Ext(R, C) = x. Since Ext is a strong seeded extractor for super-logarithmic min-entropy, and since C has super-logarithmic min-entropy (Lemma 5.4), there exists a negligible function ν such that:

Δ((R, X), (R, Ext(R, C))) ≤ ν(λ).

The statistical distance can only decrease by applying a (randomized) function on both sides; applying the map that takes (r, x) to (r, a sample of C conditioned on Ext(r, C) = x), the left-hand side becomes (R, RS(R, (m, X))) and the right-hand side becomes (R, C), hence:

Δ((R, RS(R, (m, X))), (R, C)) ≤ ν(λ),

which proves the security guarantee. Correctness is immediate. ∎
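A minimal Python sketch of the rejection sampler follows; the helper names, the byte representation of ciphertexts, and the use of a hash as a stand-in for the strong seeded extractor are illustrative assumptions.

```python
import hashlib

def seeded_extract(seed: bytes, c: bytes, out_bits: int) -> int:
    """Stand-in for the strong seeded extractor Ext: hash seed||ciphertext down
    to out_bits bits (a heuristic placeholder for the almost-universal hashing
    used in the paper's analysis)."""
    digest = hashlib.sha256(seed + c).digest()
    return int.from_bytes(digest, "big") % (1 << out_bits)

def embed(seed: bytes, chunk: int, out_bits: int, encrypt_innocent_message):
    """Rejection sampler RS: re-encrypt the next innocuous plaintext with fresh
    randomness until the extractor applied to the ciphertext equals `chunk`,
    the piece of the random string being embedded. With out_bits = O(log λ),
    the expected number of re-encryptions is polynomial."""
    while True:
        c = encrypt_innocent_message()   # fresh encryption randomness each time
        if seeded_extract(seed, c, out_bits) == chunk:
            return c                     # send this ciphertext as usual

def recover(seed: bytes, c: bytes, out_bits: int) -> int:
    """The receiver (and any eavesdropper who knows the seed) recovers the
    embedded chunk by applying the same extractor to the received ciphertext."""
    return seeded_extract(seed, c, out_bits)
```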

Remark 5.

Rejection sampling is a simple and natural approach that has been used in prior work in the steganography literature, such as [BC05]. Despite the shared use of this common technique, our construction differs from prior art more than it might seem at first glance. The novelty of our construction arises from the challenges of working in a model with a stronger adversary who can choose the distribution of ciphertexts (i.e., the adversary gets to choose the public-key encryption scheme PKE). We manage to bypass the impossibilities outlined in Section 4 notwithstanding this stronger adversarial model and, in contrast to prior work, construct a protocol to establish a shared seed from scratch, rather than simply assuming that one has been established in advance.

We now sequentially compose Theorem 5.9 to embed longer strings.

Theorem 5.10.

Let RS be the rejection sampler defined in Algorithm 1, let t be polynomial in λ, and let X be a uniformly random string of t·ℓ bits. Write X = (X_1, …, X_t), where each X_i is a block of ℓ bits of X. Given cover messages m_1, …, m_t, define C_i ← RS(R, (m_i, X_i)) for i ∈ [t] and C = (C_1, …, C_t); then there exists a negligible function ν such that:

Δ((R, C), (R, Enc(m_1), …, Enc(m_t))) ≤ t·ν(λ).

Proof.

By a hybrid argument over the t blocks:

Δ((R, C), (R, Enc(m_1), …, Enc(m_t))) ≤ Σ_{i=1}^{t} Δ((R, C_i), (R, Enc(m_i))) ≤ t·ν(λ),

where the first inequality is by independence of the sequences conditioned on the keys, and the second inequality is by Theorem 5.9. ∎

Finally, we observe that almost universal hashing is a strong seeded extractor for super-logarithmic min-entropy which has negligible error when the output length is O(log λ) (Proposition 5.11). This exactly satisfies the requirement of Theorem 5.9. Moreover, the seed length of this extractor is only super-logarithmic, meaning that the seed can be established, in Step 1a of the Setup Phase (Definition 5.2), in only super-logarithmically many exchange-rounds of communication.
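For concreteness, a Carter-Wegman style hash family can play this role. The sketch below is our illustrative instantiation (the prime, parameters, and names are assumptions, not the paper's); for simplicity it uses an exactly pairwise independent family, whose seed is as long as the input, whereas the efficiency claim above relies on ε-almost pairwise independent families with much shorter seeds.

```python
import secrets

# A Mersenne prime comfortably larger than the hashed inputs (assumption:
# inputs are at most 512 bits; pick a larger prime for longer ciphertexts).
_P = (1 << 521) - 1

def sample_hash_seed() -> tuple:
    """Seed = description of h_{a,b}(x) = (a*x + b) mod p, an (almost) pairwise
    independent family over Z_p."""
    return secrets.randbelow(_P - 1) + 1, secrets.randbelow(_P)

def hash_extract(seed: tuple, x: bytes, out_bits: int) -> int:
    """Leftover-hash-lemma style extraction: hash the source and truncate to
    out_bits bits. For out_bits much smaller than the source min-entropy, the
    output is statistically close to uniform."""
    a, b = seed
    return ((a * int.from_bytes(x, "big") + b) % _P) % (1 << out_bits)
```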

Proposition 5.11.

Let ν be a negligible function and let H be a family of ν-almost pairwise independent hash functions mapping {0,1}^{n(λ)} to {0,1}^{O(log λ)}; then the extractor