1 List of Abbreviations
CPTP completely positive and trace preserving
CCA1 non-adaptive chosen-ciphertext attack
CCA2 adaptive chosen-ciphertext attack
DLWE Decision Learning with Errors
IND-CPA indistinguishable encryptions under chosen-plaintext attack
IND-CCA1 indistinguishable encryptions under non-adaptive chosen-ciphertext attack
IND-CCA2 indistinguishable encryptions under adaptive chosen-ciphertext attack
IND-qCPA indistinguishable encryptions under quantum chosen-plaintext attack
IND-qCCA1 indistinguishable encryptions under non-adaptive quantum chosen-ciphertext attack
IND-qCCA2 indistinguishable encryptions under adaptive quantum chosen-ciphertext attack
LWE Learning with Errors
LPN Learning Parity with Noise
PAC probably approximately correct
PPT probabilistic polynomial time
POVM positive operator valued measure
qCPA quantum chosen-plaintext attack
qCCA1 non-adaptive quantum chosen-ciphertext attack
qCCA2 adaptive quantum chosen-ciphertext attack
QFT quantum Fourier transform
QPRF quantum-secure pseudorandom function
QPT quantum polynomial time
SKES symmetric-key encryption scheme
SEM-CCA1 semantic security under non-adaptive chosen-ciphertext attack
SEM-CCA2 semantic security under adaptive chosen-ciphertext attack
SEM-qCCA1 semantic security under non-adaptive quantum chosen-ciphertext attack
Most of our present-day communication takes place on the internet and produces enormous amounts of personal data. Whereas traditional notions of security were concerned with electronic mail or bank transfers, today's security needs have since expanded to many unexpected areas such as smartcards, medical devices or modern cars. Cryptography, understood as the science of secure communication, is becoming increasingly relevant for our safety in the modern world. For many years, popular cryptographic protocols such as RSA, the Diffie-Hellman key exchange or elliptic curve cryptography have served greatly as building blocks towards establishing secure communication, even as computing costs fall and the computational power available on the market keeps increasing. In 1994, Peter Shor proposed an efficient quantum algorithm for the factoring of integers and the computation of discrete logarithms [Sho94], a profound discovery that drew attention towards the field of quantum computation and its potential impact on cryptography. Many of the protocols still in use today, such as RSA, Diffie-Hellman or elliptic curve cryptography, are completely broken by attackers in possession of quantum computers running Shor's algorithm. This discovery is oftentimes regarded as the beginning of a new race towards post-quantum cryptography, a security standard for secure classical communication, even in the presence of quantum computers [BL17]. At the same time, modern quantum technology also enables entirely new forms of communication, such as quantum key distribution [BB84]. For both practical and economic reasons, it is nevertheless reasonable to suspect that some form of classical communication will continue to exist for years to come, particularly for implementations on light-weight devices.
Even though reliably fault-tolerant quantum computers have yet to be built, the cryptographic community has already started shifting towards a new direction in which the feasibility of classical cryptography in a quantum world presents a paramount challenge.
A fundamental approach in cryptography is the use of hard computational problems towards
the implementation of secure communication. Consider, for example, the RSA protocol, whose security is based on the fact that factoring large integers appears to be computationally intractable on a classical computer. Ever since the discovery of Shor's algorithm, the search for computational hardness in a quantum world has preoccupied the cryptographic community. Since 2005, the Learning with Errors (LWE) problem [Reg05] has gained the status of a promising cryptographic basis of hardness, in particular in a post-quantum setting.
The central promise of the LWE problem lies in a reduction showing it to be as hard as worst-case lattice problems [Reg09], a class of computational problems believed to be hard for more than two decades. Consequently, it is tempting to build cryptographic constructions on the basis of the LWE problem and achieve security under the assumption that worst-case lattice problems remain hard for quantum computers.
Apart from being a candidate for security against quantum computers, many private companies have also shown interest in variants of LWE due to its promise of light-weight implementation, as compared to many other promising schemes in post-quantum cryptography.
As of today, the security of lattice-based cryptography against quantum computers remains
one of the key areas of modern research in cryptography.[1]

[1] For an excellent review on modern cryptography in the age of quantum computers, we refer to a popular science article in a 2015 issue of Quanta Magazine: www.quantamagazine.org/quantum-secure-cryptography-crosses-red-line-20150908/
In a nutshell, the LWE problem in [Reg09] is as follows:
Learning with Errors Problem:
Given an integer n and modulus q, learn a secret string s in (Z_q)^n given a set of random noisy linear equations over Z_q in s.
For example, each equation is of the form b_i = <a_i, s> + e_i (mod q), where a_i is a uniformly random vector and the additive error e_i is small with high probability.
If q is prime, the integers modulo q form a finite field under addition and multiplication; hence, given enough samples, there exists a unique solution s to the problem. In our case, the hidden string to be determined is s. If not for the error, the secret string could be recovered in polynomial time using Gaussian elimination after observing n linearly independent equations, where n denotes the length of the string. Let us also note that the probability of acquiring n linearly independent equations after only observing O(n) sample queries is easily shown to be greater than a constant independent of n.
The difficulty in decoding noisy linear equations lies in the fact that the errors propagate during the computation, hence amplify the uncertainty and ultimately leave no information about the actual secret string. As the best known algorithm for the problem runs in time 2^O(n/log n) [BKW03], the problem is believed to be asymptotically intractable for classical computers. Moreover, due to the reduction in [Reg05], any breakthrough in LWE would also most likely imply an efficient algorithm for worst-case lattice problems.
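As a concrete illustration of this sample model, the following Python sketch generates LWE-style noisy equations. This is our own toy example, not taken from the thesis; the parameters are arbitrary and far too small to be secure.

```python
import random

def lwe_samples(s, q, m, noise=1):
    """Generate m noisy linear equations (a, <a,s> + e mod q) for secret s."""
    n = len(s)
    samples = []
    for _ in range(m):
        a = [random.randrange(q) for _ in range(n)]
        e = random.randint(-noise, noise)  # small bounded additive error
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return samples

q, s = 17, [3, 14, 5, 8]
noisy = lwe_samples(s, q, m=8)
# With e = 0 the system could be solved by Gaussian elimination over Z_q;
# with noise, eliminating variables adds the errors together, so after a
# few elimination steps the accumulated error swamps the signal.
```

With `noise=0` the samples are exact linear equations and the secret is recoverable in polynomial time; the entire hardness of the problem comes from the small error term.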
In an earlier problem, Bernstein and Vazirani [BV93] considered the task of determining a hidden string s in {0,1}^n from inner products of bit strings in a setting where an algorithm is granted input access to evaluations of the function (here ⊕ denotes addition modulo 2):
Learn a string s in {0,1}^n by making queries to a Boolean function f_s: {0,1}^n → {0,1}, where f_s(x) = s · x = (s_1 x_1) ⊕ … ⊕ (s_n x_n).
Note that this problem features a curious resemblance to a variant of the LWE problem in which the modulus is given by q = 2, the algorithm is free to choose all inputs (instead of receiving samples uniformly at random) and the noise is absent from all evaluations of the function. In the classical query setting, we observe that a single query to the function can only reveal as much as a single bit of information about the secret string s. In fact, the secret can easily be recovered by querying the strings e_1, …, e_n, where e_i has a 1 at the i-th index and 0 everywhere else. Hence, any algorithm performing the above queries achieves an overall query complexity of n when determining the secret, as each query reveals an outcome f_s(e_i) = s_i, such that s is fully determined after a total of n queries to the function. Therefore, it is tempting to approach the LWE problem by first closely examining this simplified model.
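The n-query classical strategy described above can be sketched in a few lines (a toy illustration; the function and variable names are ours):

```python
def f(s, x):
    """Inner product s · x mod 2."""
    return sum(si * xi for si, xi in zip(s, x)) % 2

def classical_bv(oracle, n):
    """Recover s with exactly n queries: query each unit vector e_i."""
    s = []
    for i in range(n):
        e_i = [1 if j == i else 0 for j in range(n)]
        s.append(oracle(e_i))  # f(e_i) = s_i reveals one bit per query
    return s

secret = [1, 0, 1, 1, 0]
recovered = classical_bv(lambda x: f(secret, x), len(secret))
```

Each query reveals exactly one bit of the secret, matching the information-theoretic limit of the classical setting.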
In this thesis, we consider the Bernstein-Vazirani problem in a setting in which an algorithm is given quantum access to the function, hence is able to exploit quantum parallelism and to evaluate the inner product simultaneously on a superposition of inputs. More formally, the algorithm can evaluate f_s through a quantum operation, a black box whose inner workings towards the computation of the function are unknown to the algorithm. We introduce the notion of an oracle, a quantum operation that allows for the reversible evaluation of a function upon a set of inputs as follows: |x⟩|y⟩ ↦ |x⟩|y ⊕ f_s(x)⟩.
Remarkably, as Bernstein and Vazirani [BV93] showed, only a single oracle query to the function as in Eq.(2) is sufficient to determine the secret string. We generalize this model to the group of integers modulo an arbitrary positive integer under cyclic addition in a new learning problem, an extension of the Bernstein-Vazirani algorithm, and discuss its speed-up over classical algorithms. Cross et al. [CSS14] have recently demonstrated a robustness of quantum learning for certain classes of noise in which samples are also likely to be corrupted. While this setting is known to render most learning problems intractable for classical algorithms, the analogue using quantum samples remains easy. Recently, Grilo et al. [GK17] independently considered a similar algorithm for LWE, a special variant of our proposed extended Bernstein-Vazirani algorithm in which the modulus is prime. While this algorithm does not solve the LWE problem in its original formulation using classical samples, it does however suggest further caution when allowing access to quantum samples in any cryptographic application. Nevertheless, not even a quantum computer receiving classical samples, i.e. classical strings of noisy linear equations, seems able to challenge the hardness of LWE [Reg09]. For this reason, LWE is still believed to be an excellent basis of hardness in post-quantum cryptography.
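To see why a single quantum query suffices, here is a small statevector simulation of the Bernstein-Vazirani algorithm in the phase-oracle picture. This is our own sketch: we apply the Walsh-Hadamard transform directly as a sum rather than as an optimized circuit.

```python
import numpy as np

def bernstein_vazirani_sim(s_bits):
    """Simulate BV: H^n, then a phase oracle |x> -> (-1)^{s.x} |x>, then H^n.

    The final state is exactly |s>, so one oracle query determines s.
    """
    n = len(s_bits)
    N = 2 ** n
    s_int = int("".join(map(str, s_bits)), 2)
    # uniform superposition after the first Hadamard layer
    state = np.full(N, 1 / np.sqrt(N))
    # phase oracle (the single query): amplitude of |x> picks up (-1)^{s.x}
    for x in range(N):
        state[x] *= (-1) ** (bin(x & s_int).count("1") % 2)
    # second Hadamard layer (Walsh-Hadamard transform)
    out = np.array([
        sum(((-1) ** (bin(x & y).count("1") % 2)) * state[x] for x in range(N))
        for y in range(N)
    ]) / np.sqrt(N)
    # all amplitude now sits on |s>; "measure" by taking the peak
    return int(np.argmax(np.abs(out)))
```

The interference is exact: the final amplitude of |y⟩ is (1/N) Σ_x (-1)^((s⊕y)·x), which is 1 for y = s and 0 otherwise.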
While quantum superposition access is regularly shown to be a powerful model, it also possesses limitations. Our goal in this work is also to
find such limitations in order to provide quantum-secure encryption schemes, even in a setting in which an attacker has quantum access
to the encryption procedure.
An essential building block for the construction of secure cryptographic schemes is found in so-called pseudorandom functions,
a family of keyed functions that seem indistinguishable from perfectly random functions to any adversary with limited computational resources.
In fact, recent breakthroughs in quantum cryptography allow for quantum-secure pseudorandom functions that
are secure, even if an adversary is given the ability to evaluate the function using quantum superpositions.
Remarkably, as shown by Zhandry in 2012, such constructions can be built using the classical-sample hardness of LWE in the quantum world [Zha12]:
If LWE with classical samples is hard for quantum computers, then there exist quantum-secure pseudorandom functions.
As parallelism remains one of the key features of quantum algorithms, modern research is concerned with exploiting the complex-valued amplitudes of quantum states, using quantum operations to make them interfere constructively around the desired outputs. Only then does a final measurement of the state collapse the superposition into the desired outcome with high probability. The following fact guarantees that quantum parallelism can be achieved for all efficiently computable functions [NC10]:
Any classical efficiently computable function has an efficient circuit description, hence can also be implemented efficiently using a quantum computer. Moreover, the quantum circuit for the function consists entirely of unitary gates and can thus be evaluated on a superposition of inputs due to the linearity of quantum mechanics.
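The statement above can be made concrete in a few lines. The sketch below (our own toy example) builds the permutation matrix U_f acting as |x⟩|y⟩ ↦ |x⟩|y ⊕ f(x)⟩ for a small Boolean function and applies it once to a uniform superposition, evaluating f on every input simultaneously:

```python
import numpy as np

def oracle_unitary(f, n):
    """Permutation matrix U_f with U_f |x>|y> = |x>|y XOR f(x)>, f: {0,1}^n -> {0,1}."""
    dim = 2 ** (n + 1)
    U = np.zeros((dim, dim))
    for x in range(2 ** n):
        for y in (0, 1):
            U[(x << 1) | (y ^ f(x)), (x << 1) | y] = 1
    return U

f = lambda x: x % 2          # toy Boolean function: parity of the last bit
n = 2
U = oracle_unitary(f, n)
# prepare (1/sqrt(2^n)) sum_x |x>|0> and apply U_f once
state = np.zeros(2 ** (n + 1))
for x in range(2 ** n):
    state[x << 1] = 1 / np.sqrt(2 ** n)
out = U @ state
# a single application of U_f evaluates f on every x in superposition:
# out has amplitude 1/sqrt(2^n) on each basis state |x>|f(x)>
```

Because y ⊕ f(x) ⊕ f(x) = y, the oracle is its own inverse, which makes the reversibility of the construction explicit.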
A fundamental question arises immediately. Just how powerful is knowledge represented in a quantum superposition evaluating a function on all of its inputs? This thesis is concerned with both the limitation and exploitation of quantum parallelism in the context of modern cryptography.
An important attack model in cryptanalysis is that of chosen-ciphertext attacks, a setting in which an adversary exercises control over the encryption scheme, for example by manipulating an honest party into generating both encryptions and decryptions of plaintexts or ciphertexts. The security under chosen-ciphertext attacks is commonly formalized in an indistinguishability game that takes place in two phases. In the pre-challenge phase, the adversary is allowed to perform encryption and decryption queries. Then, upon a pair of two messages, the adversary receives a challenge ciphertext, an encryption of one of the two messages at random, and proceeds with another query phase. Typically, we grant encryption access during both phases, while for decryption access, we differentiate between two important variants:
(non-adaptive access) the adversary exercises partial control and can only generate decryptions prior to seeing a challenge ciphertext.
(adaptive access) the adversary exercises full control and can perform informed decryption queries both before and after the challenge phase begins, with the exception of the challenge ciphertext itself.
Oftentimes in cryptography jargon, the term lunchtime attack is adopted in order to highlight a possible realistic setting for a non-adaptive attack model, whereas an adaptive attack corresponds to full control over an honest party.
At STOC 2000, Katz and Yung [KY00] offered a complete characterization of classical security notions for private-key encryption. In the case of classical communication in a quantum world, many of these security notions are still widely unexplored, and only few separation results have been successfully proven in recent years. At CRYPTO 2013, Boneh and Zhandry first introduced the notion of adaptive quantum chosen-ciphertext security and proposed classical encryption schemes for which such security can be achieved [BZ13]. An interesting open problem concerns the class of non-adaptive quantum chosen-ciphertext attacks, a security notion in which we allow adversaries to issue quantum superposition queries to encryption and non-adaptively to decryption. In particular, it is unknown whether many of the standard encryption schemes satisfy such a weaker notion of security.
3 Technical Summary of Results
Let us now give an overview of the main contents provided in this thesis.
In Chapter 3, we review selected topics in modern cryptography required for the proposed constructions in this thesis.
In Definition 1, we introduce the concept of symmetric-key encryption schemes (SKES), a setting in which two agents, say Alice and Bob, share a matching
secret key prior to their communication.
In Definition 2, we quantify limited computational power by introducing the notion of efficient adversaries who
run algorithms with at most polynomial running time with regard to some security parameter relevant to the underlying cryptographic scheme.
A convenient security definition is one that formalizes the notion of indistinguishable encryptions.
The indistinguishability game introduces a game-based definition of indistinguishable encryptions that takes place between an adversary and a challenger.
Here, the adversary prepares two plaintexts m_0 and m_1 and sends them to the challenger, who chooses a bit b uniformly at random and then responds with an encryption of m_b. Thus, upon receiving a challenge ciphertext, the goal of the adversary is to output b. We say that an encryption scheme has indistinguishable encryptions if no adversary wins the indistinguishability game with probability nonnegligibly better than the trivial adversary who guesses at random.
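As an illustration of this game, consider the following self-contained toy harness, with the one-time pad standing in for a generic scheme and a trivial adversary who guesses at random (our own sketch, not a construction from the thesis):

```python
import secrets

def otp_encrypt(key, m):
    """One-time-pad encryption of an n-bit message (bits as lists)."""
    return [ki ^ mi for ki, mi in zip(key, m)]

def ind_game(adversary, n=8, trials=2000):
    """Run the indistinguishability game many times; return the win rate."""
    wins = 0
    for _ in range(trials):
        m0, m1 = [0] * n, [1] * n                  # adversary's chosen plaintexts
        key = [secrets.randbits(1) for _ in range(n)]
        b = secrets.randbits(1)                    # challenger's secret bit
        challenge = otp_encrypt(key, m1 if b else m0)
        wins += adversary(challenge) == b
    return wins / trials

# the trivial adversary guesses at random and wins about half the time
rate = ind_game(lambda c: secrets.randbits(1))
```

For the one-time pad, even an unbounded adversary cannot do better than the trivial one, since the challenge ciphertext is uniformly distributed regardless of b.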
We introduce the notion of indistinguishable encryptions under chosen-plaintext attacks (Definition 3), as
well as under chosen-ciphertext attacks (Definition 4).
Another intuitive definition of security we consider is semantic security (Definition 5), a notion of security that emphasizes
the possibility of an adversary attempting to compute something meaningful upon the encryption of a plaintext, such as a function of the plaintext.
In the semantic security game, the adversary is given an encryption of a plaintext and some side information, and the goal is to compute
a function evaluated at the plaintext.
We say that an encryption scheme has semantic security
if every adversary can be approximated by a simulator who is given the side information only. Therefore, semantic security formalizes the intuition that, even with access to the ciphertext, the adversary gains essentially no advantage in computing anything meaningful from it.
Furthermore, in Definition 6, we define the concept of pseudorandom functions (PRFs), a crucial building block in
symmetric-key cryptography that
allows for constructions of symmetric-key encryption schemes of precisely such security. The standard PRF scheme is defined as follows:
PRF scheme (informal) Given a family of pseudorandom functions {f_k}, we define the scheme which encrypts a plaintext m using randomness r via Enc_k(m; r) = (r, f_k(r) ⊕ m). To decrypt a ciphertext (r, c), the procedure outputs m = f_k(r) ⊕ c.
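A sketch of this construction in Python, with HMAC-SHA256 standing in for the pseudorandom family f_k. The instantiation is hypothetical and chosen only for illustration; it is not the thesis's construction, and messages are limited to the 32-byte PRF output length here.

```python
import hashlib
import hmac
import secrets

def prf(key, r):
    """Stand-in PRF f_k(r); HMAC-SHA256 plays the role of the keyed family."""
    return hmac.new(key, r, hashlib.sha256).digest()

def encrypt(key, m):
    """PRF scheme: Enc_k(m; r) = (r, f_k(r) XOR m) for a fresh random r."""
    r = secrets.token_bytes(16)
    pad = prf(key, r)
    return r, bytes(p ^ b for p, b in zip(pad, m))

def decrypt(key, ct):
    """Dec_k(r, c) = f_k(r) XOR c recovers the plaintext."""
    r, c = ct
    pad = prf(key, r)
    return bytes(p ^ b for p, b in zip(pad, c))

key = secrets.token_bytes(32)
msg = b"attack at dawn"
assert decrypt(key, encrypt(key, msg)) == msg
```

The fresh randomness r makes encryption probabilistic: encrypting the same message twice yields different ciphertexts, which rules out replay and pattern-matching attacks.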
Finally, we define the LWE problem rigorously and discuss its applications in cryptography. We consider the standard IND-CPA-secure LWE-based symmetric-key encryption scheme:
LWE scheme (informal) The symmetric-key encryption scheme LWE-SKES is defined by an integer n, a modulus q and a discrete error distribution χ over Z_q of certain bounded noise magnitude. The key for this scheme is a random vector s in (Z_q)^n. We encrypt a bit m as follows:
Sample a uniformly random vector a in (Z_q)^n and an error e ← χ; output the ciphertext (a, <a, s> + e + m · ⌊q/2⌋ (mod q)).
To decrypt a ciphertext (a, b), we output m = 0 if and only if b − <a, s> (mod q) is closer to 0 than to ⌊q/2⌋ (here we rely on the assumption that the error magnitude is bounded away from ⌊q/4⌋).
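The mechanics of this scheme can be sketched in a few lines of Python. The parameters below are tiny and insecure, chosen only to make the encoding of the bit in the ⌊q/2⌋ shift visible:

```python
import random

def keygen(n, q):
    """The key is a uniformly random vector s in (Z_q)^n."""
    return [random.randrange(q) for _ in range(n)]

def encrypt_bit(s, q, m, noise=2):
    """Encrypt bit m as (a, <a,s> + e + m*floor(q/2) mod q), e a small error."""
    n = len(s)
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-noise, noise)
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def decrypt_bit(s, q, ct):
    """Output 0 iff the residual b - <a,s> is closer to 0 than to floor(q/2)."""
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 0 if min(d, q - d) < abs(d - q // 2) else 1
```

Decryption works because the residual equals e + m·⌊q/2⌋ (mod q): for m = 0 it is a small error near 0, while for m = 1 it sits near ⌊q/2⌋, and the bounded noise keeps the two cases apart.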
This scheme satisfies (classical) IND-CPA security under the LWE assumption [Reg09]. We then consider the LWE-SKES scheme to establish a separation between the previous notions of indistinguishable encryptions, both under chosen-plaintext attacks, as well as under non-adaptive chosen-ciphertext attacks.
In Chapter 4, we present the most important developments in the theory of quantum computation to date. To this end, we introduce the concept of qubits, unitary quantum operations and the quantum circuit model. We present a universal set of quantum gates that enables a quantum computer to approximately perform any quantum operation (Theorem 2). Moreover, we give examples of quantum parallelism and show how to prepare a quantum state that evaluates a given function simultaneously over the range of its inputs. In this context, we introduce the concept of quantum oracles, essentially a quantum gate that acts as a black box and grants an algorithm input access to a given function. Finally, we turn to noise and decoherence in quantum computing architectures and give examples of elementary error correcting codes.
In Chapter 5, we review several of the well known quantum algorithms that solve certain computational tasks faster than any known classical algorithm and provide the foundation for the algorithms of the later chapters. In particular, we introduce the Deutsch-Jozsa algorithm, the earliest quantum speed-up ever to be found in a black box model, as well as the Bernstein-Vazirani algorithm as the original predecessor of the Extended Bernstein-Vazirani algorithm.
In Chapter 6, we introduce the quantum Fourier transform (QFT) over arbitrary finite abelian groups as a fundamental operation adopted in the majority of all the algorithms discussed in this thesis. The Fourier transform (Definition 9) is particularly useful in exploiting the symmetries of a given problem and allows us to generalize the Bernstein-Vazirani algorithm over arbitrary cyclic groups. In Lemma 1, we prove a widely used property on the orthogonality of Fourier coefficients. Finally, we discuss efficient quantum circuit implementations that compute the quantum Fourier transform.
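For concreteness, the QFT over the cyclic group Z_q is the unitary matrix below, and its unitarity is precisely the character-orthogonality property just mentioned. This is a small numerical sanity check of our own, not a circuit implementation:

```python
import numpy as np

def qft_matrix(q):
    """QFT over Z_q: F[y, x] = omega^(x*y) / sqrt(q) with omega = exp(2*pi*i/q)."""
    omega = np.exp(2j * np.pi / q)
    return np.array(
        [[omega ** (x * y) for x in range(q)] for y in range(q)]
    ) / np.sqrt(q)

F = qft_matrix(7)
# unitarity of the QFT is exactly the orthogonality of characters:
# (1/q) * sum_x omega^(x*(y - y')) equals 1 if y = y' and 0 otherwise
```

An efficient quantum circuit computes this transform with polylogarithmically many gates, whereas the dense matrix here costs q^2 entries; the matrix form is only useful for verifying small cases.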
In Chapter 7, we introduce useful language from computational learning theory in which we frame the main algorithms in this thesis.
We consider a setting in which a learner (an algorithm) is requesting samples from a black box oracle whose inner workings are unknown.
The goal of the learner is to determine a hidden concept, such as a Boolean function, based on the information that is being presented by the samples.
As each sample may be subjected to noise, potential errors are likely to get amplified and oftentimes lead to highly non-trivial tasks that are
computationally intractable for classical computers.
We consider the Learning Parity with Noise (LPN) problem, an early predecessor of the LWE problem, as an instance of a computational learning problem.
Once we define the analogous learning problem in a setting in which the oracle is providing quantum samples, we investigate how these
computational tasks become easy for quantum computers. We approach a quantum analogue by first proposing a new generalization
of the Bernstein-Vazirani algorithm over
an arbitrary group under cyclic addition. We then prove Theorem 1 and show the following:
Theorem (informal) There exists a quantum algorithm for the Extended Bernstein-Vazirani problem whose success probability can be amplified to any constant by requesting a number of samples independent of n, whereas any classical algorithm requires Ω(n) many queries.
In addition, we compare our results to an independent 2017 proposal by Grilo and Kerenidis which proves that, in the quantum oracle setting, the extended Bernstein-Vazirani algorithm (in the special case where the modulus is prime) solves the LWE problem given enough quantum samples.
In Chapter 8, we take a turn towards studying the limitations of quantum algorithms in order to
find secure constructions for post-quantum cryptography. While the previous chapter focused
on quantum speed-ups at solving learning problems by means of superposition samples, this chapter investigates
the limitations of quantum algorithms instead. We discuss the effects of relabeling in quantum algorithms,
a setting in which we relabel the function to which the algorithm is given oracle access at a subset of the domain
and study its subsequent output states, similar to the blinding of quantum algorithms proposed by Alagic et al. [AMRS18].
We introduce two variants of a new indistinguishability game, which we call the relabeling game, a setting in which a quantum distinguisher receives quantum oracle access to a function and the goal is to detect its modification as part of a game-based experiment. We distinguish between two variants,
a non-adaptive experiment in which the query phase takes place prior to the challenge, as well as an adaptive experiment in which
the query phase takes place during the challenge phase.
Thus, we define a non-adaptive relabeling game as an experiment in which a quantum algorithm first receives quantum oracle access
to a function and then, upon receiving a random input/output pair, the goal is to decide whether it is genuine (or modified)
based on the previous query phase.
Definition (informal) Given an arbitrary function f, we define the non-adaptive experiment with a QPT algorithm A as follows:
a bit b and strings x*, y* are generated uniformly at random;
A receives quantum oracle access to f;
depending on the random bit b, A receives the following:
(b = 0) A receives a genuine pair (x*, f(x*));
(b = 1) A receives a relabeled pair (x*, y*).
Then, A receives an example oracle that outputs classical random pairs (x, f(x)).
A outputs a bit b' and wins the game if b' = b.
We then prove Theorem 16 by controlling the success probability of A in terms of the number of queries it makes.
The proof uses a hybrid argument, adapting a variation of the standard quantum query lower bound technique, as well as the bound on the effects of blinding in [AMRS18], to give precise control over the success probability.
Theorem (informal) Given an arbitrary function f, any efficient quantum algorithm making polynomially many oracle queries succeeds at the non-adaptive experiment with at most negligible advantage, except with negligible probability.
Next, we consider a stronger variant of the relabeling game (Definition 17), an adaptive setting in which a quantum algorithm first receives an arbitrary advice state (possibly even exponential-sized) for a function and the goal is to detect whether it was relabeled at a random location.
Definition (informal) Given a function f, an arbitrary quantum advice state (possibly depending on f) and an integer k, we define the adaptive experiment with a QPT algorithm A as follows:
A receives an advice state for f;
a bit b and strings x_1*, …, x_k*, y_1*, …, y_k* are generated;
depending on the random bit b, A receives the following:
(b = 0) A receives quantum oracle access to f;
(b = 1) A receives quantum oracle access to f', where f' is the relabeled function.
A outputs a bit b' and wins the game if b' = b.
Unlike in the previous non-adaptive variant, any distinguisher is able to adaptively make queries based on prior information on the target function from the pre-challenge phase.
Finally, we prove Theorem 2 on the success probability of the adaptive experiment.
Theorem (informal) Given an arbitrary function f with an arbitrary advice state (possibly depending on f) and an integer k, any efficient quantum algorithm making polynomially many oracle queries succeeds at the adaptive experiment with advantage exponentially small in k, except with negligible probability.
In choosing k to be super-logarithmic in the security parameter, we can achieve a negligible advantage in the game.
In Chapter 9, we extend the notions of classical indistinguishability from the earlier chapters to a quantum world. We make use of the relabeling result and propose secure constructions under a quantum chosen-ciphertext attack. In this scenario, a quantum adversary exercises control over the functionality of the scheme and is able to influence an honest party into quantumly generating ciphertexts, as well as decrypting ciphertexts of the adversary's choice for some period of time.
We introduce several new quantum notions of security, such as indistinguishable encryptions under
non-adaptive quantum chosen-ciphertext attacks (Definition 18), as follows:
Definition (informal) A scheme is IND-qCCA1 secure if no quantum polynomial time algorithm A can succeed at the following experiment with probability nonnegligibly better than 1/2.
A key k and a uniformly random bit b are generated;
A gets access to the oracles Enc_k and Dec_k, and outputs a pair of plaintexts (m_0, m_1);
A receives a challenge ciphertext Enc_k(m_b) and access to Enc_k only; then outputs a bit b';
A wins if b' = b.
We then introduce a quantum variant of semantic security under non-adaptive quantum chosen-ciphertext attacks.
Finally, we prove that our proposed constructions based on quantum-secure pseudorandom functions
satisfy our definitions.
Theorem (informal) If {f_k} is a family of quantum-secure pseudorandom functions, then the PRF scheme is IND-qCCA1 secure.
Moreover, we prove that quantum-secure pseudorandom functions are not strictly necessary to achieve IND-qCCA1 security of the PRF scheme. We consider a choice of post-quantum secure pseudorandom functions, i.e. families of functions that are secure against quantum distinguishers with classical access to the function, obtained by equipping a QPRF with a random large period. Note that, due to quantum period finding, as observed in [BZ13], if {f_k} is a family of QPRFs, then the resulting periodic family is only post-quantum secure. Finally, we prove that the PRF scheme under such a family achieves IND-qCCA1 security.
Theorem (informal) There exist families of post-quantum secure pseudorandom functions for which the PRF scheme is IND-qCCA1 secure.
In Chapter 10, we discuss state-of-the-art quantum computing technology with a particular focus on the ion-trap architecture. We give a detailed introduction to how qubits are realized in a physical system and how quantum gates can be performed through the use of lasers. Furthermore, we discuss sources of noise and decoherence in physical systems in order to investigate the effectiveness of noise models from the previous chapters. To this end, we discuss the performance of recent implementations of quantum algorithms discussed in this thesis. Finally, we discuss an experimental comparison between a five-qubit ion-trap implementation and the five-qubit IBM superconductor device.
The history of cryptography dates back more than two millennia. Ever since the birth of civilization and the invention of writing, people have required ways of transmitting secret messages using ciphers, intended to be read only by the receiver and yet difficult to decode for others. Since the 1970s, cryptography has developed into a well-established scientific discipline, henceforth adopting a rigorous mathematical foundation. This crucial change marks the beginning of modern cryptography. Many of the popular encryption schemes still in use today, such as the RSA encryption scheme, were already developed in the early years of modern cryptography. Typically, it is the hardness of certain computational problems that serves as a foundation for security. For example, as in the case of RSA, the security of the encryption scheme is related to the hardness of factoring large integers. In other words, we believe a scheme is secure if no efficient adversary with limited computational resources is capable of breaking the scheme. Peter Shor's discovery of an efficient quantum algorithm for the factoring of integers marked the beginning of an entirely new era of cryptography, so-called post-quantum cryptography. It is from here on that the search for quantum-secure cryptography began. In the following sections, we provide an overview of selected topics in modern cryptography required for the main results in this thesis.
Let us first introduce some necessary notation and formalism from theoretical computer science and cryptography. For additional reading, we refer to [KL15].
For bit strings of arbitrary length, we write {0,1}* for the product space containing all such strings of finite length.
A function ε : N → R is called negligible if, for every polynomial p, there exists an integer N such that for all n > N, it holds that: ε(n) < 1/p(n).
Typically, we adopt negligible functions in the context of a success probability that decreases at an inverse-superpolynomial rate, hence cannot be amplified to a constant by a polynomial number of repetitions.
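A small numerical illustration of the definition (a finite sanity check of our own, not a proof, since negligibility is an asymptotic statement): 2^(-n) eventually drops below the inverse of any fixed polynomial, while an inverse-polynomial function does not.

```python
def is_below(eps, p, horizon=10_000):
    """Find the first N with eps(n) < 1/p(n) for all sampled n >= N.

    A finite check up to `horizon`, standing in for the asymptotic
    definition of negligibility; returns None if no such N is found.
    """
    for N in range(1, horizon):
        if all(eps(n) < 1 / p(n) for n in range(N, horizon)):
            return N
    return None

# eps(n) = 2^(-n) drops below 1/n^5 from some point on, whereas
# eps(n) = 1/n^2 never does: it is inverse-polynomial, not negligible.
N = is_below(lambda n: 2.0 ** -n, lambda n: n ** 5)
```

Here N comes out as 23: the crossover happens once n exceeds 5·log2(n), and from that point on the exponential decay stays below the inverse polynomial forever.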
An algorithm is a sequence of (possibly nondeterministic) operations that terminates after a finite number of steps upon any given input, say x. We say an algorithm A is efficient if it has polynomial running time with respect to a size parameter of a given computational problem, i.e. if there exists a polynomial p such that, for any input x, the computation of A(x) terminates after at most p(|x|) steps.
A probabilistic polynomial time (PPT) algorithm is a procedure with an additional random tape (such as a random number generator) that results in efficient, yet possibly nondeterministic, computations. We adopt the popular unary convention of representing the seed of efficient randomized algorithms by 1^n, highlighting a polynomial dependence with respect to the length of the input, contrary to a polylogarithmic dependence in the general case where ⌈log n⌉ bits are needed to specify the length of a given input (here, ⌈·⌉ denotes the ceiling function). With x ← X, we denote a procedure in which an outcome x is sampled uniformly at random from a finite set X. If D is a probability distribution, we denote the sampling of an outcome x according to D by using the notation x ← D. Upon finite sets X and Y, we define the corresponding (finite) set of all possible functions from X to Y as Y^X. An oracle is a black box machine that assists a given algorithm with a particular computational task at unit cost, for example in an evaluation of an unknown function upon a given input or the sampling from an unknown probability distribution. Typically, if A is an algorithm, we denote oracle access to an oracle O using the notation A^O. Finally, throughout this thesis, we employ the usual asymptotic O-notation denoting an upper bound, where for a given function g, we define O(g) as the set of functions f for which there exist a constant c > 0 and an integer n_0 such that f(n) ≤ c · g(n) for all n ≥ n_0. Similarly, we denote an asymptotic lower bound by the set of functions Ω(g), for which f(n) ≥ c · g(n) for all n ≥ n_0.
4.2 Symmetric-Key Cryptography
Symmetric-key cryptography concerns the scenario in which two agents, say Alice and Bob, share a mutual secret key prior to their communication and want to send messages to each other. In order to encrypt messages, Alice first chooses a message and runs an encryption algorithm that requires the use of her key and later sends the resulting ciphertext over to Bob. Since Bob knows about the secret key, he can run a decryption algorithm upon Alice’s ciphertext and decode the message. In general, we consider randomized encryption in order to avoid replay attacks, while only requiring decryption to be deterministic.
A symmetric-key encryption scheme is a triple of algorithms (KeyGen, Enc, Dec) on a finite key space K, message space M and ciphertext space C, where, for a security parameter n, we require:
(key generation) KeyGen: on input 1^n, generate a key k ∈ K;
(encryption) Enc: on key k and message m ∈ M, output a ciphertext c ∈ C;
(decryption) Dec: on key k and cipher c ∈ C, output a message m ∈ M;
together with the correctness requirement that Dec_k(Enc_k(m)) = m for every key k and message m.
In order for communication under a given symmetric-key encryption scheme to be secure against eavesdroppers,
we require that, without knowledge of the secret key, any ciphertext
must look sufficiently random and reveal little to no information about the actual message.
In the next section, we provide several widely used notions of security for symmetric-key encryption. For further reading, we refer to [KL15].
4.3 Security Notions
4.3.1 Computational Security
Due to the well-known P-versus-NP problem, i.e. the seeming impossibility of finding efficient algorithms for certain computational problems whose solutions can be quickly verified, and the fact that we consider adversaries who operate probabilistically, an important notion of security is provided by computational security, based on the following principle:
A successful cipher must be practically secure against adversaries with limited computational resources.
This brings us to the following standard definition of computational security:
Definition 2 (Computational Security)
A scheme S is computationally (or asymptotically) secure if every PPT adversary succeeds at breaking S with at most negligible probability with respect to the security parameter n of S.
Since a negligible success probability eventually drops below the inverse of any polynomial, no efficient algorithm is capable of amplifying its success probability, i.e. capable of breaking the encryption scheme by sheer repetition. Therefore, we regard any algorithm that breaks a particular scheme with at most negligible probability as an insignificant threat.
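As a numerical aside (an illustration, not part of the formal definition), one can check that polynomially many repetitions cannot amplify a negligible success probability; the function names below are illustrative:

```python
# Illustration: a negligible success probability cannot be amplified by
# running the attack polynomially often.
def negligible(n):
    return 2.0 ** (-n)  # example negligible function: negl(n) = 2^(-n)

def amplified_bound(n, repetitions):
    # union bound: Pr[at least one success in `repetitions` tries]
    #              <= repetitions * negl(n)
    return repetitions * negligible(n)

for n in (64, 128, 256):
    # even n^3 independent repetitions leave the success probability tiny
    print(n, amplified_bound(n, n ** 3))
```

Even with n³ repetitions, the total success probability remains a negligible function of n, since a polynomial times a negligible function is still negligible.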
4.3.2 Computational Indistinguishability.
Another important notion of security for a given symmetric-key encryption scheme is indistinguishability of encryptions, in particular under a chosen-plaintext attack. In this model, an adversary has partial control over the encryption procedure and can generate encryptions of arbitrary messages. This attack corresponds to a scenario in which an attacker is able to influence an honest party into generating ciphertexts of the adversary's choice, thus potentially resulting in an advantage at decoding other ciphers of interest. In the following, we specify this model in a security game between an adversary and a challenger:
Definition 3 (Indistinguishability under Chosen-Plaintext Attack)
Let S = (KeyGen, Enc, Dec) be a symmetric-key encryption scheme and consider the IND-CPA game between a PPT adversary A and a challenger, defined as follows:
(initial phase) the challenger chooses a key k ← KeyGen(1^n) and a uniformly random bit b ← {0, 1};
(pre-challenge phase) as part of a learning phase, the adversary is given access to an encryption oracle Enc_k in order to generate encryptions. Upon each choice of message m, the adversary receives a ciphertext c ← Enc_k(m). Finally, the adversary chooses two messages m_0 and m_1, and sends them to the challenger.
(challenge phase) the challenger replies with c* ← Enc_k(m_b) and the adversary continues to have oracle access to Enc_k;
(resolution) the adversary outputs a bit b' and wins the game if b' = b.
We say that S has indistinguishable encryptions under a chosen-plaintext attack (or is IND-CPA-secure) if, for every PPT adversary A, there exists a negligible function negl such that:
Pr[A wins the IND-CPA game] ≤ 1/2 + negl(n).
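The game above can be phrased as a small experiment harness. The following sketch uses an illustrative stand-in "scheme" and adversary (both hypothetical names); it demonstrates why any deterministic encryption fails the game, since replaying an oracle query identifies the challenge:

```python
# A toy harness for the IND-CPA game; the deterministic "scheme" and the
# replay adversary are illustrative stand-ins, not constructions from the text.
import secrets

def keygen(n):
    return secrets.randbits(n)

def det_enc(k, m):
    return k ^ m  # deterministic "encryption" of an integer message

class ReplayAdversary:
    # wins against any deterministic scheme by replaying an oracle query
    def choose(self, oracle):
        self.m0, self.m1 = 0, 1
        self.c0 = oracle(self.m0)  # remember an encryption of m0
        return self.m0, self.m1

    def guess(self, oracle, challenge):
        return 0 if challenge == self.c0 else 1

def ind_cpa_game(keygen, enc, adversary, n=128):
    k = keygen(n)                    # initial phase
    b = secrets.randbits(1)
    oracle = lambda m: enc(k, m)     # encryption oracle Enc_k
    m0, m1 = adversary.choose(oracle)
    challenge = enc(k, (m0, m1)[b])  # challenge phase
    return adversary.guess(oracle, challenge) == b  # resolution

wins = sum(ind_cpa_game(keygen, det_enc, ReplayAdversary()) for _ in range(200))
print(wins, "/ 200")  # the replay adversary wins every round
```

This is precisely why the definition forces encryption to be randomized: a fresh randomness per query makes the replayed ciphertext useless to the adversary.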
An even stronger notion of security for a given symmetric-key encryption scheme is security under chosen-ciphertext attacks. In this variant of the security game, an adversary not only exercises control over the encryption procedure as before, but can also decrypt ciphertexts non-adaptively, i.e. only prior to receiving the ciphertext of interest (as highlighted in the pre-challenge and challenge phases). Such an attack corresponds to a scenario in which an attacker is able to coerce an honest party into generating ciphertexts, as well as into decrypting ciphertexts of the adversary's choice, for some period of time. In the following, we specify this model in another security game between an adversary and a challenger:
Definition 4 (Indistinguishability under Non-Adaptive Chosen-Ciphertext Attack)
Let S = (KeyGen, Enc, Dec) be a symmetric-key encryption scheme and consider the IND-CCA1 game between a PPT adversary A and a challenger, defined as follows:
(initial phase) the challenger chooses a key k ← KeyGen(1^n) and a uniformly random bit b ← {0, 1};
(pre-challenge phase) as part of a learning phase, the adversary is given access to both an encryption oracle Enc_k and a decryption oracle Dec_k. Upon each choice of message m, the adversary receives a ciphertext c ← Enc_k(m) and, upon each ciphertext c, the adversary receives a plaintext m ← Dec_k(c). Finally, the adversary chooses two messages m_0 and m_1, and sends them to the challenger.
(challenge phase) the challenger replies with c* ← Enc_k(m_b) and the adversary continues to have oracle access to Enc_k only;
(resolution) the adversary outputs a bit b' and wins the game if b' = b.
We say that S has indistinguishable encryptions under a non-adaptive chosen-ciphertext attack (or is IND-CCA1-secure) if, for every PPT adversary A, there exists a negligible function negl such that:
Pr[A wins the IND-CCA1 game] ≤ 1/2 + negl(n).
Finally, we can additionally extend the previous notion of security by also granting the adversary adaptive decryption access after the challenge phase. This model corresponds to security under an adaptive chosen-ciphertext attack, a variant in which the adversary exercises full control over the encryption scheme, both before and after the challenge phase (with the natural restriction that the challenge ciphertext itself may not be queried for decryption). Remarkably, there exist classical symmetric-key encryption schemes that satisfy each of the security definitions provided in this chapter. A major contribution of this thesis is to provide constructions that satisfy these notions even in a setting in which the adversary is granted quantum superposition access, again both to the encryption and decryption procedures. In the next section, we introduce important tools to realize such cryptographic schemes.
4.3.3 Semantic Security.
In semantic security, the challenge phase corresponds to choosing a challenge template instead of a pair of messages. Contrary to the indistinguishability game, the intuition for this security game is that the adversary seeks to compute something meaningful about the message of interest during the challenge phase. Thus, we consider challenge templates consisting of a triple of classical circuits (Samp, h, f), where Samp outputs plaintexts from some distribution, and h and f are functions over messages. Upon receiving an encryption of a message m sampled according to Samp, the goal of the adversary is to output the new information f(m), given some side information h(m) on the message. In providing an adversary with a learning phase, we can consider the following notion of security.
Definition 5 (Semantic Security under Non-Adaptive Chosen-Ciphertext Attack)
Let S = (KeyGen, Enc, Dec) be an encryption scheme, and consider the SEM-CCA1 experiment with a PPT algorithm A, defined as follows.
(initial phase) A key k ← KeyGen(1^n) and a bit b ← {0, 1} are generated;
(pre-challenge phase) A receives access to oracles Enc_k and Dec_k, then outputs a challenge template consisting of (Samp, h, f);
(challenge phase) A plaintext m ← Samp is generated; A receives h(m) and an oracle for Enc_k only; if b = 1, A also receives Enc_k(m).
(resolution) A outputs a string t, and wins if t = f(m).
We say S is semantically secure under a non-adaptive chosen-ciphertext attack (or is SEM-CCA1-secure) if, for every PPT adversary A, there exists a PPT simulator A' such that the challenge templates output by A and A' are identically distributed, and there exists a negligible function negl such that:
|Pr[A wins the experiment with b = 1] − Pr[A' wins the experiment with b = 0]| ≤ negl(n),
where, in both cases, the probability is taken over plaintexts m ← Samp.
Fortunately, as shown in [KL15], semantic security and indistinguishability are equivalent notions of security, in particular under non-adaptive chosen-ciphertext attacks.
Let S be a symmetric-key encryption scheme. Then, S is IND-CCA1-secure if and only if S is SEM-CCA1-secure.
In Chapter 10, we introduce semantic-security variants under quantum chosen-ciphertext attacks and prove the equivalence of both definitions. While semantic security is a much more intuitive notion of security, it is oftentimes much harder to work with in security proofs. Therefore, it is convenient to prove security under the notion of indistinguishable encryptions and to then refer to the equivalence result for a more natural notion of security.
4.4 Pseudorandom Functions
In this section, we turn to pseudorandom functions, a popular building block in symmetric-key cryptography. Historically, the first provably-secure pseudorandom functions were proposed in the Goldreich, Goldwasser and Micali construction [GGM86] using pseudorandom generators, which in turn rely on the existence of one-way functions. The effectiveness of pseudorandom functions lies in their property of being indistinguishable from a perfectly random function to any efficient distinguisher with limited computational power.

The security properties of pseudorandom functions are perhaps best explained in an indistinguishability game between a distinguisher (a PPT algorithm) and a challenger. Upon the start of the game, the challenger chooses a random bit whose outcome determines whether the game is being played with a perfectly random function (sampled uniformly at random from the finite set of all possible functions over a given finite domain and range) of the challenger's choice, or a pseudorandom function for a freshly generated key. Next, the challenger presents the distinguisher with an oracle for the given function, which the distinguisher is then free to evaluate upon arbitrary inputs. Finally, the distinguisher outputs a bit as its guess. Since the distinguisher is assumed to have limited computational resources, thus essentially running a PPT algorithm, the claim of pseudorandomness is that the outputs of a pseudorandom function will look sufficiently random. Therefore, the probability that the distinguisher outputs a given bit, say 1, in a game against a pseudorandom function is negligibly close to the corresponding probability in a game against a perfectly random function. We formalize this observation in the following definition:
Definition 6 (Pseudorandom Function)
Let F : K × X → Y be an efficiently computable function on a key space K, a domain X and a range Y. We say {F_k}_{k ∈ K} is a family of pseudorandom functions if, for all PPT distinguishers D, there exists a negligible function negl such that:
|Pr_{k ← K}[D^{F_k}(1^n) = 1] − Pr_{f ← Y^X}[D^{f}(1^n) = 1]| ≤ negl(n).
Consider, for example, the following SKES using a pseudorandom function, as found in Proposition 5.4.18 in [Gol04]. In this scheme, the pseudorandom function is used to both encrypt and decrypt messages using the same key.
Construction 1 (PRF scheme).
Consider a family of keyed pseudorandom functions {F_k : {0,1}^n → {0,1}^n}_{k ∈ K}, where n is a security parameter and K is a key space. Define a symmetric-key encryption scheme S = (KeyGen, Enc, Dec) as follows:
(key generation) KeyGen: on input 1^n, generate a key k ← K;
(encryption) Enc: on message m ∈ {0,1}^n, choose a randomness r ← {0,1}^n and output a ciphertext c = (r, F_k(r) ⊕ m);
(decryption) Dec: on cipher c = (r, c̃), output m = F_k(r) ⊕ c̃;
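The construction above can be sketched in a few lines of Python, with HMAC-SHA256 serving as a stand-in for the pseudorandom function F_k (an illustrative choice, not the object analyzed in the text):

```python
# A runnable sketch of the PRF-based scheme; HMAC-SHA256 stands in for F_k.
import hmac, hashlib, secrets

def keygen(n=32):
    return secrets.token_bytes(n)                   # k <- K

def prf(k, r):
    return hmac.new(k, r, hashlib.sha256).digest()  # stand-in for F_k(r)

def enc(k, m):
    r = secrets.token_bytes(32)                     # fresh randomness r
    pad = prf(k, r)
    return r, bytes(x ^ y for x, y in zip(pad, m))  # (r, F_k(r) XOR m)

def dec(k, ct):
    r, body = ct
    return bytes(x ^ y for x, y in zip(prf(k, r), body))

k = keygen()
m = b"attack at dawn"                # messages of up to 32 bytes
assert dec(k, enc(k, m)) == m        # correctness: Dec_k(Enc_k(m)) = m
assert enc(k, m) != enc(k, m)        # randomized: a fresh r for each query
```

Note how the fresh randomness r makes repeated encryptions of the same message look unrelated, which is exactly what the IND-CPA game demands.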
In fact, this scheme already satisfies the notion of IND-CPA security, for example as shown in [KL15]. In Chapter 10, we introduce a class of quantum-secure pseudorandom functions and prove the security of this scheme, even in a setting in which the adversary is given quantum superposition access to the encryption oracle Enc_k and decryption procedure Dec_k.
In the next section, we provide a formal definition of the Learning with Errors problem, as introduced in [Reg09].
4.5 Learning with Errors
The Learning with Errors problem can be stated in multiple variants, such as the search problem or the decision problem. In the following, we begin by first defining the Learning with Errors search problem, as introduced in the introductory section.
Definition 7 (Learning with Errors Problem)
Let n be a security parameter, let q = q(n) be a prime modulus and let χ be a discrete probability distribution over errors in Z_q. Let s ∈ Z_q^n be a secret string and let A_{s,χ} be the probability distribution on Z_q^n × Z_q that performs the following:
Sample a uniformly random string a ← Z_q^n.
Sample an error e ← χ according to the error distribution.
Output the pair (a, ⟨a, s⟩ + e mod q).
We say that a PPT algorithm A solves the Learning with Errors problem with modulus q and error distribution χ if, for any s ∈ Z_q^n and an arbitrary number of independent noisy samples from A_{s,χ}, the algorithm A outputs the secret s with non-negligible probability.
Typically, one chooses an error distribution χ that follows a discrete Gaussian distribution rounded to the nearest integer and reduced modulo q, where the noise magnitude is taken to be α·q for a parameter α ∈ (0,1). Chebyshev's inequality allows us to conveniently control the standard deviation towards a sharply peaked error distribution around the origin for an appropriate choice of parameters α and q. As Regev argues, there are several reasons that speak in favor of the hardness of the LWE problem, particularly its close relationship to lattice problems and the Learning Parity with Noise (LPN) problem [CSS14], both studied extensively and believed to be hard. Since LWE can be thought of as a generalization of the LPN problem, we believe that LWE must also be hard. Furthermore, the best known classical algorithms for solving the LWE problem so far run in exponential time [BKW03].
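As a concrete illustration, the rounded-Gaussian error distribution and the sampling distribution A_{s,χ} can be sketched in a few lines (toy parameters; taking the standard deviation to be α·q is a simplification of the exact parametrization):

```python
# A sketch of the rounded-Gaussian error distribution and LWE sampling.
import random

def sample_error(q, alpha):
    # Gaussian noise, rounded to the nearest integer and reduced modulo q
    return round(random.gauss(0.0, alpha * q)) % q

def lwe_sample(s, q, alpha):
    a = [random.randrange(q) for _ in range(len(s))]
    b = (sum(x * y for x, y in zip(a, s)) + sample_error(q, alpha)) % q
    return a, b  # (a, <a, s> + e mod q)

q, alpha = 97, 0.01
s = [random.randrange(q) for _ in range(8)]
errs = [sample_error(q, alpha) for _ in range(1000)]
centered = [min(e, q - e) for e in errs]  # distance to 0 on the mod-q circle
# the errors are sharply peaked around the origin, as predicted
print(max(centered))
```

For small α, almost all sampled errors land within a few standard deviations of the origin, which is what makes decryption by rounding possible in the construction below.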
4.5.1 Decision Learning with Errors.
A related variant of the LWE problem is found in the task of determining whether a given sample results from a noisy linear equation on a secret string, or is a genuinely uniformly random sample.
Definition 8 (Decision LWE)
Let A_{s,χ} be the LWE sampling distribution for a uniformly random string s ← Z_q^n and let U be the uniform distribution over Z_q^n × Z_q. We say that LWE satisfies the decisional assumption with modulus q and error distribution χ if, for all PPT distinguishers D, there exists a negligible function negl such that:
|Pr[D^{A_{s,χ}}(1^n) = 1] − Pr[D^{U}(1^n) = 1]| ≤ negl(n),
where U outputs uniformly random samples (a, b) ← Z_q^n × Z_q.
Remarkably, as Oded Regev showed, there exists a simple reduction of the search problem to the decisional problem. While it is clear that an efficient algorithm for the search problem implies an efficient algorithm for the decisional problem, the opposite implication is guaranteed by the following lemma:
Lemma 1 ([Reg09], Decision to Search LWE)
Let n be a security parameter, let q = q(n) be a prime and let A_{s,χ} be the LWE sampling distribution for a string s ∈ Z_q^n and a discrete probability distribution χ over errors in Z_q. If D is an algorithm that solves the decisional LWE problem with non-negligible probability over a uniform choice of strings s ← Z_q^n, then there exists an efficient algorithm A that receives samples from A_{s,χ} and solves the search LWE problem with probability exponentially close to 1.
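The idea behind the reduction can be illustrated with toy parameters: to test a guess k for the coordinate s_i, one shifts each sample by a random multiple of the i-th unit vector; a correct guess preserves the LWE structure, while a wrong one randomizes it. The decision oracle below is a stand-in that accepts when most noise terms are small; a real reduction would only be given such a distinguisher as a black box:

```python
# A toy illustration of the decision-to-search idea behind Lemma 1.
import random

q, n, bound = 97, 6, 3  # toy parameters; bounded errors stand in for chi

def lwe_sample(s):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-bound, bound)
    return a, (sum(x * y for x, y in zip(a, s)) + e) % q

def decision_oracle(samples, s_true):
    # stand-in distinguisher: accepts iff most noise terms are small
    small = 0
    for a, b in samples:
        r = (b - sum(x * y for x, y in zip(a, s_true))) % q
        small += min(r, q - r) <= bound
    return small > len(samples) // 2

def recover_secret(sample_fn, oracle):
    s_guess = []
    for i in range(n):
        for k in range(q):  # guess the i-th coordinate of s
            transformed = []
            for _ in range(40):
                a, b = sample_fn()
                r = random.randrange(q)
                a2 = a[:]
                a2[i] = (a2[i] + r) % q                    # a' = a + r*e_i
                transformed.append((a2, (b + r * k) % q))  # b' = b + r*k
            # if k = s_i the samples are still LWE samples; otherwise the
            # extra term r*(k - s_i) makes them look uniformly random
            if oracle(transformed):
                s_guess.append(k)
                break
    return s_guess

s = [random.randrange(q) for _ in range(n)]
recovered = recover_secret(lambda: lwe_sample(s), lambda t: decision_oracle(t, s))
assert recovered == s
```

Note that the reduction only needs to enumerate the q possible values of each coordinate, which is efficient precisely because q is polynomial in n.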
4.5.2 Symmetric-Key Constructions and Security.
Let us now consider the following symmetric-key encryption scheme motivated by the LWE hardness assumption, as suggested in [Reg05]. In this example of an encryption scheme, the secret string s ∈ Z_q^n acts as a key and we encrypt a single bit by computing an LWE sample in a suitable way that can be detected by a receiver in possession of the key.
Construction 2 (LWE scheme).
Let n be an integer, let q be a modulus and let χ be a discrete error distribution over Z_q, and consider the following symmetric-key encryption scheme LWE-SKES, defined as follows:
(key generation) run KeyGen(1^n) and generate a key s ← Z_q^n;
(encryption) upon a bit b ∈ {0, 1}, sample a string a ← Z_q^n and an error e ← χ, and output the ciphertext (a, ⟨a, s⟩ + e + b·⌊q/2⌋ mod q);
(decryption) upon cipher (a, c), apply rounding to output 1 if and only if c − ⟨a, s⟩ mod q is closer to ⌊q/2⌋ than to 0, else output 0.
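A runnable sketch of Construction 2 with toy parameters follows (a small bounded error stands in for the discrete Gaussian error distribution):

```python
# A runnable sketch of the LWE-based single-bit scheme with toy parameters.
import random

q, n = 97, 8

def keygen():
    return [random.randrange(q) for _ in range(n)]  # s <- Z_q^n

def enc(s, bit):
    a = [random.randrange(q) for _ in range(n)]
    e = random.randint(-2, 2)  # toy bounded error
    c = (sum(x * y for x, y in zip(a, s)) + e + bit * (q // 2)) % q
    return a, c  # (a, <a, s> + e + bit*floor(q/2) mod q)

def dec(s, ct):
    a, c = ct
    r = (c - sum(x * y for x, y in zip(a, s))) % q
    # rounding: output 1 iff r is closer to floor(q/2) than to 0
    return 1 if abs(r - q // 2) < min(r, q - r) else 0

s = keygen()
assert all(dec(s, enc(s, b)) == b for b in (0, 1) for _ in range(25))
```

Since the error magnitude is far below q/4, the shifted bit b·⌊q/2⌋ always survives the rounding step, so decryption is correct with certainty for these parameters.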
Using the decisional LWE assumption, we can easily show that LWE-SKES indeed satisfies a notion of indistinguishability under a chosen-plaintext attack.
Let LWE-SKES be the symmetric-key encryption scheme from Construction 2. Then LWE-SKES is IND-CPA-secure.
We introduce a hybrid game by modifying the security game in a way that is indistinguishable (to any adversary) from the original game in order to arrive at a security game in which the challenge is perfectly hidden.
- Game 0:
In the standard hybrid, the adversary is playing the IND-CPA security game for the original scheme in Construction 2. Prior to the challenge, the adversary chooses message bits m_0, m_1 ∈ {0, 1} and is given access to an encryption oracle Enc_s. Upon receiving a challenge cipher (a, ⟨a, s⟩ + e + m_b·⌊q/2⌋), the adversary may perform additional queries to the encryption oracle and the goal is to decide whether the challenge corresponds to an encryption of m_0 or m_1.
- Game 1:
In this hybrid, the challenger instead responds with uniformly random samples (a, u) ← Z_q^n × Z_q upon each encryption query, as well as with a challenge (a, u + m_b·⌊q/2⌋). From the decisional LWE assumption in Definition 8 and Lemma 1, it follows that no efficient adversary can distinguish between genuine LWE samples and uniformly random samples (both with the term m·⌊q/2⌋ added to them).
Since adopting this hybrid game only negligibly affects the success probability of any adversary, we arrive at a security game in which the adversary cannot win, except with probability at most negligibly better than guessing at random.
4.5.3 Separation Result.
In preparation for the sections on post-quantum cryptography in which we study quantum access to decryption, let us now conclude this chapter with a simple separation between the two notions of security from Section 4.3 and show that there exist schemes that are IND-CPA-secure but not IND-CCA1-secure. Using the LWE-SKES scheme, we can easily prove such a separation. The intuition is that decryption oracle access in this scheme allows the adversary to evaluate the rounded noisy inner product ⟨a, s⟩ upon arbitrary inputs a. As a result, the adversary can determine parts of the secret key using only a few queries to its decryption oracle.
Let LWE-SKES be the symmetric-key encryption scheme from Construction 2. Then LWE-SKES does not satisfy IND-CCA1-security.
The following algorithm recovers the key s using close to a linear number of classical decryption queries.
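One possible key-recovery strategy along these lines can be sketched against the toy parameters used above (the scheme is restated so the listing is self-contained; the thesis' own algorithm may differ in its details): querying decryptions of ciphertexts (e_i, c) for unit vectors e_i and binary-searching over c recovers each coordinate s_i with O(log q) queries.

```python
# A sketch of a CCA1 key-recovery attack against the toy LWE scheme.
import random

q, n = 97, 8

def keygen():
    return [random.randrange(q) for _ in range(n)]

def dec(s, ct):  # public decryption rule of the toy scheme
    a, c = ct
    r = (c - sum(x * y for x, y in zip(a, s))) % q
    return 1 if abs(r - q // 2) < min(r, q - r) else 0

# {r : dec outputs 1} is a contiguous arc [offset, offset + L - 1] of Z_q;
# the attacker computes it from the public decryption rule alone
ones = [r for r in range(q) if abs(r - q // 2) < min(r, q - r)]
offset, L = ones[0], len(ones)

def recover_key(dec_oracle):
    s_rec = []
    for i in range(n):
        e_i = [int(j == i) for j in range(n)]
        g = lambda c: dec_oracle((e_i, c % q))  # g(c)=1 iff (c - s_i) in arc
        # the arc covers roughly half of Z_q, so one of four probes hits it
        inside = next(c for c in (0, q // 4, q // 2, 3 * q // 4) if g(c))
        lo, hi = inside, inside + L             # g(lo) = 1, g(hi) = 0
        while hi - lo > 1:                      # binary search for the arc end
            mid = (lo + hi) // 2
            if g(mid):
                lo = mid
            else:
                hi = mid
        s_rec.append((lo - offset - L + 1) % q)  # arc end = s_i + offset + L - 1
    return s_rec

s = keygen()
assert recover_key(lambda ct: dec(s, ct)) == s
```

Each coordinate costs at most four probes plus ⌈log₂ L⌉ binary-search queries, i.e. roughly n·(4 + log₂ q) decryption queries overall, which is close to linear in n.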