1 Introduction
Oblivious simulation of RAM machines, initially studied in the context of software protection by Goldreich and Ostrovsky [GO96], aims at protecting the memory access pattern induced by the computation of a RAM. In the present day, such oblivious simulation might be needed when performing a computation in the memory of an untrusted server. (Protecting the memory access pattern of a computation is particularly relevant in light of the recent Spectre [KGG18] and Meltdown [LSG18] attacks.) Even when encryption protects the contents of each memory cell, the memory access pattern might still leak sensitive information. Thus, the memory access pattern should be oblivious of the data being processed and, optimally, depend only on the size of the input.
Constructions.
The strong guarantee of obliviousness of the memory access pattern comes at the cost of additional overhead. A trivial solution which scans the whole memory for each memory access induces a linear bandwidth overhead, i.e., the multiplicative factor by which the length of a memory access pattern increases in the oblivious simulation of a RAM with $n$ memory cells. Given the practical applications, an important research direction is to construct an ORAM with as low an overhead as possible. The foundational work of Goldreich and Ostrovsky [GO96] already gave a construction with bandwidth overhead $O(\log^3 n)$. Subsequent results introduced various improved approaches for building ORAMs (see [Ajt10, CLP14, CP13, DMN11, GGH13, GO96, GM11, GMOT11, KLO12, PPRY18, RFK14, SvDS18, WCS15, WHC14] and the references therein), leading to the recent construction of Asharov et al. [AKL18] with bandwidth overhead $O(\log n)$ for the most natural setting of parameters.
Lower bounds.
It was a folklore belief that an $\Omega(\log n)$ bandwidth overhead is inherent, based on a lower bound presented already in the initial work of Goldreich and Ostrovsky [GO96]. However, the Goldreich-Ostrovsky result was recently revisited in the work of Boyle and Naor [BN16], who pointed out that the lower bound actually holds only in a rather restricted “balls in bins” model where the ORAM is not allowed to read the contents of the data cells it processes. In fact, Boyle and Naor showed that any general lower bound for offline ORAM (i.e., where each memory access of the ORAM can depend on the whole sequence of operations it needs to obliviously simulate) implies nontrivial lower bounds on the sizes of sorting circuits, which seem to be out of reach of the known techniques in computational complexity.
The first general lower bound on the bandwidth overhead of online ORAM (i.e., where the ORAM must process the operations it has to obliviously simulate in a sequential manner) was subsequently given by Larsen and Nielsen [LN18]. The core of their lower bound was an adaptation to the ORAM setting of the information transfer technique of Patrascu and Demaine [PD06], originally used for proving lower bounds for data structures in the cell probe model. In fact, the lower bound of Larsen and Nielsen [LN18] for ORAM can be cast as a lower bound for the oblivious Array Maintenance problem, and it was recently extended to other oblivious data structures by Jacob et al. [JLN19].
1.1 Our Results
In this work, we further develop the information transfer technique of [PD06] as applied in the context of online ORAMs. We revisit the lower bound of Larsen and Nielsen, which was proved under an assumption about the format of the memory access pattern of the ORAM. Specifically, we prove a matching lower bound in a stronger model without any restriction on the format of the server memory access sequence.
Theorem 1.1 (Informal).
Any online ORAM which satisfies the statistical security and has internal memory of size $m$ must have expected overhead $\Omega(\log(n/m))$, where $n$ denotes the size of the address space and $N \ge n$ is the length of an input sequence of operations. This result holds even when the adversarial server has no information about boundaries between probes corresponding to different operations.
In the computational setting, our techniques give the following.
Theorem 1.2 (Informal).
Any online ORAM which satisfies the computational security and has internal memory of size $m$ must have expected overhead $\omega(1)$. This result holds even when the adversarial server has no information about boundaries between probes corresponding to different operations.
Note that this is still an interesting result. It follows from the work of Boyle and Naor [BN16] that any super-constant lower bound for offline ORAM would imply superlinear lower bounds on the size of sorting circuits, which would constitute a major breakthrough in computational complexity (for additional discussion, see Section 5).
As an additional contribution, we clarify the ORAM model in which our techniques yield a lower bound. See Definition 2.1 and Section 5 for additional discussions.
Besides online ORAM (i.e., the oblivious Array Maintenance problem), our techniques naturally extend to other oblivious data structures and allow us to also generalize the recent lower bounds of Jacob et al. [JLN19] for oblivious stacks, queues, deques, priority queues and search trees.
1.2 Our Techniques
The structure of our proof follows a blueprint similar to the work of Larsen and Nielsen [LN18]. However, we must handle new issues introduced by the more general adversarial model. Most significantly, our proof cannot rely on any formatting of the access pattern, whereas Larsen and Nielsen could leverage the fact that the access pattern is split into blocks corresponding to each read/write operation. To handle the lack of structure in the access pattern, we study the properties of the access graph induced naturally by the access pattern of an ORAM computation. We identify a particular graph property that can be efficiently tested and that all access graphs of ORAM computations must satisfy with high probability. This property is reminiscent of the Larsen-Nielsen property but it is substantially less structured; that is, it is more generic.
The access graph is defined as follows: the vertices are timestamps of server probes and there is an edge connecting two vertices if and only if they correspond to two subsequent accesses to the same memory cell. We define a graph property called a $(k, \ell)$-dense partition. Graphs with $(k, \ell)$-dense partitions are graphs which may be partitioned into $k$ disjoint subgraphs, each subgraph having at least $\ell$ edges. We show that this property has to be satisfied by access graphs with high probability for any $k$ and an appropriate $\ell = \ell(k)$. In Section 3, we prove that if a graph has a $(k, \ell(k))$-dense partition for several different values of $k$ then the graph must have at least $\frac{1}{2}\sum_k k\,\ell(k)$ edges.
In Section 4, we prove that access graphs of ORAMs have many dense partitions. Specifically, we show that for $\Omega(\log(n/m))$ values of $k$, there exist input sequences for which the corresponding graph has a $(k, \ell(k))$-dense partition, by a communication-type argument. Applying the indistinguishability of sequences of probes made by the ORAM, we get one sequence for which its access graph satisfies the $(k, \ell(k))$-dense partition property for $\Omega(\log(n/m))$ values of $k$ with high probability. Combining the above results from Section 4 with the results from Section 3, we get that the graph of such a sequence has $\Omega(N \log(n/m))$ edges, and thus, by definition, $\Omega(N \log(n/m))$ vertices in expectation. This implies that the expected number of probes made by the ORAM on any input sequence of length $N$ is $\Omega(N \log(n/m))$.
2 Preliminaries
In this section, we introduce some basic notation and recall some standard definitions and results. Throughout the rest of the paper, we let $[n]$ for $n \in \mathbb{N}$ denote the set $\{1, 2, \ldots, n\}$. A function is negligible if it approaches zero faster than any inverse polynomial.
Definition (Statistical Distance).
For two probability distributions $X$ and $Y$ on a discrete universe $U$, we define the statistical distance of $X$ and $Y$ as
$$SD(X, Y) = \frac{1}{2} \sum_{u \in U} \bigl|\Pr[X = u] - \Pr[Y = u]\bigr|.$$
We use the following observation, which characterizes statistical distance as the difference of areas under the curve (see Fact 3.1.9 in Vadhan [Vad99]).
Proposition 1.
Let $X$ and $Y$ be probability distributions on a discrete universe $U$, let $U_X = \{u \in U : \Pr[X = u] > \Pr[Y = u]\}$, and define $U_Y$ analogously. Then
$$SD(X, Y) = \sum_{u \in U_X} \bigl(\Pr[X = u] - \Pr[Y = u]\bigr) = \sum_{u \in U_Y} \bigl(\Pr[Y = u] - \Pr[X = u]\bigr).$$
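This equivalence is easy to check numerically. The following sketch (with Python dictionaries standing in for finite distributions; the helper names are our own illustration, not notation from the paper) computes the statistical distance both by the defining formula and by the area characterization above.

```python
def statistical_distance(p, q):
    """SD(p, q) = 1/2 * sum over the universe of |p(u) - q(u)|."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(u, 0.0) - q.get(u, 0.0)) for u in support)

def positive_part(p, q):
    """Area characterization: sum of p(u) - q(u) over u with p(u) > q(u)."""
    return sum(p.get(u, 0.0) - q.get(u, 0.0)
               for u in set(p) | set(q)
               if p.get(u, 0.0) > q.get(u, 0.0))

p = {"a": 0.5, "b": 0.3, "c": 0.2}
q = {"a": 0.2, "b": 0.3, "d": 0.5}
# Both computations agree: SD(p, q) = 0.5 for this pair.
assert abs(statistical_distance(p, q) - positive_part(p, q)) < 1e-12
assert abs(statistical_distance(p, q) - 0.5) < 1e-12
```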
We also use the following data-processing-type inequality.
Proposition 2.
Let $X$ and $Y$ be probability distributions on a discrete universe $U$. Then for any function $f$ defined on $U$, it holds that $SD(f(X), f(Y)) \le SD(X, Y)$.
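The data-processing-type inequality can likewise be sanity-checked on a small example; in the sketch below (our own illustration), applying a function that merges outcomes can only shrink the distance, here all the way to zero.

```python
def statistical_distance(p, q):
    """SD of two finite distributions given as dicts of probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(u, 0.0) - q.get(u, 0.0)) for u in support)

def pushforward(p, f):
    """Distribution of f(X) when X is distributed according to p."""
    out = {}
    for u, mass in p.items():
        out[f(u)] = out.get(f(u), 0.0) + mass
    return out

p = {1: 0.5, 2: 0.3, 3: 0.2}
q = {1: 0.1, 2: 0.3, 3: 0.6}
f = lambda u: u % 2   # merge outcomes 1 and 3 into a single outcome

# SD(p, q) = 0.4, but f(p) and f(q) coincide, so SD(f(p), f(q)) = 0.
assert abs(statistical_distance(p, q) - 0.4) < 1e-12
assert statistical_distance(pushforward(p, f), pushforward(q, f)) < 1e-12
```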
Definition (Computational Indistinguishability).
Two probability ensembles, $\{X_n\}_{n \in \mathbb{N}}$ and $\{Y_n\}_{n \in \mathbb{N}}$, are computationally indistinguishable if for every probabilistic polynomial-time algorithm $A$ there exists a negligible function $\nu$ such that for every $n \in \mathbb{N}$,
$$\bigl|\Pr[A(X_n) = 1] - \Pr[A(Y_n) = 1]\bigr| \le \nu(n).$$
2.1 Online ORAM
Definition (Array Maintenance Problem [LN18]).
The Array Maintenance problem with parameters $n$ and $w$ is to maintain an array $B$ of $n$ $w$-bit entries under the following two operations:

$(\texttt{write}, a, d)$: Set the contents of $B[a]$ to $d$, where $a \in [n]$ and $d \in \{0, 1\}^w$. (Write operation)

$(\texttt{read}, a, d)$: Return the contents of $B[a]$, where $a \in [n]$ (note that $d$ is ignored). (Read operation)
We say that a machine $M$ implements the Array Maintenance problem with parameters $(n, w)$ and probability $p$ if for every input sequence of operations and for every read operation in the sequence, the machine returns the correct answer with probability at least $p$.
Definition 2.1 (Online Oblivious RAM).
For $m, w, n \in \mathbb{N}$, let RAM* denote a probabilistic random access machine with $m$ cells of internal memory, each of size $w$ bits, which has access to a data structure, called the server, implementing the Array Maintenance problem with parameters $(n, w)$ and probability 1. In other words, in each step of computation the RAM* may probe the server on a triple $(o, a, d)$, and on input $(\texttt{read}, a, d)$, where $a \in [n]$, the server returns to the RAM* the data last written in $B[a]$. We say that the RAM* probes the server whenever it makes an Array Maintenance operation to the server.
Let $m, w, n$ be any natural numbers such that $w \ge \log n$. An online Oblivious RAM (ORAM) with address range $[n]$, cell size $w$ bits and $m$ cells of internal memory is a RAM* satisfying online access sequence, correctness, and statistical (resp. computational) security as defined below.
Online Access Sequence: The RAM* machine gets any input sequence $s = (o_1, a_1, d_1), \ldots, (o_N, a_N, d_N)$ one operation at a time, where each $(o_i, a_i, d_i)$ is a valid Array Maintenance operation with $a_i \in [n]$. Upon the receipt of each operation $(o_i, a_i, d_i)$, the machine generates a possibly empty sequence of server probes and updates its internal memory state in order to correctly implement the request $(o_i, a_i, d_i)$. We define the access sequence corresponding to $(o_i, a_i, d_i)$ as $A(o_i, a_i, d_i)$, the sequence of addresses probed while serving the operation. For the input sequence $s$, the access sequence is defined as
$$A(s) = A(o_1, a_1, d_1), \ldots, A(o_N, a_N, d_N).$$
Note that the definition of the machine is online, and thus for each input sequence $s$ and each $i$, the access sequence $A(o_1, a_1, d_1), \ldots, A(o_i, a_i, d_i)$ does not depend on the operations $(o_j, a_j, d_j)$ for $j > i$.
Correctness: The RAM* implements the Array Maintenance problem with parameters $(n, w)$ with probability $p$.
Statistical Security: For any two input sequences $s$ and $s'$ of the same length, the statistical distance of the distributions of access sequences $A(s)$ and $A(s')$ is at most $\frac{1}{4}$.
Computational Security: For any two infinite families of input sequences $\{s_N\}_{N \in \mathbb{N}}$ and $\{s'_N\}_{N \in \mathbb{N}}$ such that $|s_N| = |s'_N| = N$ for all $N$, the probability ensembles $\{A(s_N)\}_{N \in \mathbb{N}}$ and $\{A(s'_N)\}_{N \in \mathbb{N}}$ are computationally indistinguishable.
As customary, in the computational security definition we consider infinite families of ORAMs, where we allow $n$, $m$, and $w$ to be functions of $N$, the length of the input sequence. So computational security is defined not for a single fixed choice of an ORAM machine but for an infinite family.
For ease of exposition, we assume perfect correctness of the ORAM (i.e., $p = 1$). However, our lower bounds can be extended also to ORAMs with imperfect correctness (see the discussion after Lemma 4).
The parameters of the ORAM model from Definition 2.1 are depicted in Figure 1. We use different sizes of arrows on the server and RAM* side to denote the asymmetry of the communication (the RAM* sends the type of operation, an address, and data, and the server returns just data in the case of a read operation and nothing in the case of a write). Note that the input sequence of the ORAM consists of a sequence of all operations, whereas the access sequence consists of a sequence of addresses of all probes.
3 Dense Graphs
In this section, we define an efficiently testable property of graphs that we show to be satisfied by graphs induced by the access pattern of any statistically secure ORAM. This property implies that the overhead of such an ORAM must be logarithmic.
We say a directed graph $G = (V, E)$ is ordered if $V$ is a subset of integers and for each edge $(u, v) \in E$, $u < v$. For a graph $G$ and $A, B \subseteq V$, we let $E(A, B)$ be the set of edges that start in $A$ and end in $B$, and for integers $a \le b \le c$ we let $E(a, b, c) = E([a, b), [b, c))$.
Definition.
A partition of an ordered graph $G = (V, E)$ is a non-decreasing sequence of integers $v_0 \le v_1 \le \cdots \le v_{2k}$. We say that the partition is $(k, \ell)$-dense if for each $i \in [k]$, $E(v_{2i-2}, v_{2i-1}, v_{2i})$ is of size at least $\ell$.
There is a simple greedy algorithm running in time linear in the size of the graph which tests, for given integers $k$ and $\ell$, whether a given ordered graph has a $(k, \ell)$-dense partition. (The algorithm looks for the parts one by one, greedily from left to right.)
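A brute-force version of this greedy test can be sketched as follows (a quadratic illustration of the idea rather than the linear-time scan; the interval-pair semantics of the definition above is restated in the docstring):

```python
def has_dense_partition(T, edges, k, l):
    """Test whether the ordered graph on vertices 0..T-1 has a (k, l)-dense
    partition: boundaries v_0 <= v_1 <= ... <= v_2k such that for each part i
    at least l edges start in [v_{2i-2}, v_{2i-1}) and end in
    [v_{2i-1}, v_2i).  Closing each part as early as possible never hurts, so
    a left-to-right greedy sweep is exact."""
    def crossing(s, b, t):
        # edges that start in [s, b) and end in [b, t)
        return sum(1 for u, v in edges if s <= u < b <= v < t)
    parts, s = 0, 0
    for t in range(1, T + 1):
        if any(crossing(s, b, t) >= l for b in range(s + 1, t)):
            parts += 1       # close the current part at boundary t
            s = t
            if parts == k:
                return True
    return False

# Two edges crossing the same cut form one dense part, but not two.
assert has_dense_partition(4, [(0, 2), (1, 3)], 1, 2)
assert not has_dense_partition(4, [(0, 2), (1, 3)], 2, 2)
```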
Lemma 3.
Let $K$ be a subset of powers of 4. For each $k \in K$, let $\ell_k \ge 1$ be given. Let $G$ be an ordered graph which for each $k \in K$ has a $(k, \ell_k)$-dense partition. Then $G$ has at least $\frac{1}{2} \sum_{k \in K} k\,\ell_k$ edges.
Proof.
We use the following claim to bound the number of edges.
Claim.
Let $k < k'$ be powers of 4. Let $v_0 \le v_1 \le \cdots \le v_{2k}$ be a partition of $G$ with crossing-edge sets $E_i = E(v_{2i-2}, v_{2i-1}, v_{2i})$, and let $v'_0 \le v'_1 \le \cdots \le v'_{2k'}$ be a partition of $G$ with crossing-edge sets $E'_j = E(v'_{2j-2}, v'_{2j-1}, v'_{2j})$. Then for at least $k' - k$ distinct $j \in [k']$,
(1) $E'_j \cap (E_1 \cup \cdots \cup E_k) = \emptyset.$
Proof.
For any $i \in [k]$ and $j \in [k']$, if $e \in E'_j \cap E_i$ for some edge $e$, then $v_{2i-1} \in (v'_{2j-2}, v'_{2j})$ (as $e$ crosses the boundary $v_{2i-1}$ and both endpoints of $e$ lie in $[v'_{2j-2}, v'_{2j})$). Since the intervals $[v'_{2j-2}, v'_{2j})$ are disjoint, $j$ is uniquely determined by $i$. Hence, $E'_j$ may intersect $E_1 \cup \cdots \cup E_k$ only if $v_{2i-1} \in (v'_{2j-2}, v'_{2j})$ for some $i$. Thus, such an intersection occurs for at most $k$ different $j$. The claim follows. ∎
Now we are ready to prove Lemma 3. For each $k \in K$, pick a $(k, \ell_k)$-dense partition of $G$ with crossing-edge sets $E^k_1, \ldots, E^k_k$ and define the set of edges $F_k$:
$$F_k = \bigl(E^k_1 \cup \cdots \cup E^k_k\bigr) \setminus \bigcup_{k'' \in K,\ k'' < k} \bigl(E^{k''}_1 \cup \cdots \cup E^{k''}_{k''}\bigr).$$
For each $k \in K$, we lower-bound $|F_k|$ by $\frac{k\,\ell_k}{2}$. Since $K$ contains powers of 4, $\sum_{k'' \in K,\ k'' < k} k'' \le \frac{k}{3} \le \frac{k}{2}$. By the above claim, for at least $\frac{k}{2}$ different $i \in [k]$, the set $E^k_i$ is disjoint from all the sets $E^{k''}_j$ with $k'' < k$. By density, $|E^k_i| \ge \ell_k$, so $|F_k| \ge \frac{k\,\ell_k}{2}$. Hence, $|E| \ge \sum_{k \in K} |F_k| \ge \frac{1}{2}\sum_{k \in K} k\,\ell_k$. ∎
In the following corollary, we show that the property of having many dense partitions, each holding with some probability, implies proportionally many edges. (Note that the term $\log_4 n' - \log_4 m' + 1$ corresponds exactly to the number of powers of four between $m'$ and $n'$.)
Corollary 3.
Let $m' \le n'$ be natural numbers that are powers of 4, and let $c \ge 1$. Let $p \in [0, 1]$ be a real. Let $G$ be an ordered graph picked at random from a distribution such that for each integer $i$, $\log_4 m' \le i \le \log_4 n'$, the randomly chosen ordered graph has a $(4^i, \ell_i)$-dense partition with probability at least $p$, where $4^i\,\ell_i \ge c$. Then the expected number of edges in $G$ is at least $\frac{p\,c}{2}\,(\log_4 n' - \log_4 m' + 1)$.
Proof.
Let $I$ be the set of integers such that $i \in I$ if and only if $\log_4 m' \le i \le \log_4 n'$ and $G$ has a $(4^i, \ell_i)$-dense partition.
$I$ is a random variable. The expected size of $I$ is at least $p\,(\log_4 n' - \log_4 m' + 1)$. By Lemma 3, the number of edges in $G$ is at least $\frac{1}{2}\sum_{i \in I} 4^i \ell_i \ge \frac{c}{2}\,|I|$, so the expected number of edges in $G$ is at least $\frac{p\,c}{2}\,(\log_4 n' - \log_4 m' + 1)$. ∎
4 ORAM Lower Bound
In this section, we fix integers $m$, $n$, $w$, and $N$ such that $2m \le n \le N$ and $w \ge \log N$, and an ORAM with address range $[n]$, cell size $w$ and $m$ cells of internal memory (see Definition 2.1). We argue that any statistically secure ORAM must make $\Omega(N \log(n/m))$ server probes in expectation in order to implement a sequence of $N$ input operations. We also show that any computationally secure ORAM must make $\omega(N)$ server probes in expectation on any input sequence of length $N$.
Definition.
Let $A = (a_1, \ldots, a_T)$ be an access sequence of the ORAM for some input sequence $s$. We define a directed graph $G(A) = (V, E)$, called the access graph, as follows: $V = [T]$ and $(i, j) \in E$ iff $i < j$, $a_i = a_j$, and $a_t \ne a_i$ for each $t$, $i < t < j$.
Notice that every vertex of an access graph has out-degree as well as in-degree at most one.
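As an illustration, the access graph can be computed in a single pass over the access sequence (a sketch with 0-based timestamps; the function name is our own):

```python
def access_graph(addresses):
    """Access graph of a probe sequence: vertices are probe timestamps and
    (i, j) is an edge iff probes i and j touch the same server cell with no
    probe to that cell strictly in between."""
    last = {}      # address -> timestamp of its most recent probe
    edges = []
    for t, a in enumerate(addresses):
        if a in last:
            edges.append((last[a], t))
        last[a] = t
    return list(range(len(addresses))), edges

# Probes to cells 5 and 7 alternate, so subsequent probes to the same cell
# are two steps apart; every vertex has in- and out-degree at most one.
vertices, edges = access_graph([5, 7, 5, 7, 5])
assert edges == [(0, 2), (1, 3), (2, 4)]
```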
In the following we consider input sequences of length $N$ where $N$ is even. First, we define $s_0 = (\texttt{write}, 1, 0^w), (\texttt{read}, 1, 0^w), (\texttt{write}, 1, 0^w), \ldots$, i.e., a sequence of alternating writes and reads at address 1 with data $0^w$. Second, for each $r$, $2m \le r \le n$, we define a distribution of input sequences $s_r$ as
$$(\texttt{write}, 1, d_1), \ldots, (\texttt{write}, r, d_r), (\texttt{read}, 1, 0^w), \ldots, (\texttt{read}, r, 0^w), (\texttt{write}, 1, d'_1), \ldots$$
where each data item is an independently and uniformly chosen $w$-bit string. We define the $i$th block of writes and the $i$th block of reads to be the sequence of $r$ operations following right after the $(i-1)$st block of reads (resp. right after the $i$th block of writes). Note that after the $\lfloor N/2r \rfloor$th block of reads the sequence is padded by a sequence of alternating writes and reads, as in $s_0$, to length $N$. We use the notation $A_r = A(s_r)$ and $G_r = G(A_r)$, where the ORAM is as defined in the beginning of this section.
The following lemma uses only correctness of the ORAM and does not depend on its security. The proof of the lemma uses the information transfer technique similarly to Lemma 2 in [LN18].
Lemma 4.
Let the ORAM be as in the beginning of this section; moreover, suppose $N$ is an even integer. Let $r$ be an integer such that $2m \le r \le n$, and set $k = \lfloor N/2r \rfloor$ and $\ell = \frac{rw}{8(w + \log n)}$. Let $A_r$ be the access sequence of the ORAM and $G_r$ be the corresponding access graph. ($G_r$ is a random variable that depends on $s_r$ and the internal randomness of the ORAM.) With probability at least $\frac{7}{8}$, $G_r$ has a $(k, \ell)$-dense partition.
Proof.
By our assumption from the beginning of this section, $r \le n$, and thus all sequences $s_r$ have all addresses in the correct range. Fix any $r$ satisfying the assumptions of this lemma. Let $W_i$ and $R_i$ be the $i$th block of writes and reads in $s_r$, respectively. Let $V^W_i$ be the vertices of $G_r$ corresponding to the probes made while serving $W_i$, and $V^R_i$ be the vertices corresponding to $R_i$. It suffices to prove that for each $i \in [k]$, the probability that there are fewer than $\ell$ edges between $V^W_i$ and $V^R_i$ is less than $\frac{1}{8k}$. If this is true then the lemma follows by the union bound.
For contradiction, assume there exists $i$ such that the probability that there are fewer than $\ell$ edges between $V^W_i$ and $V^R_i$ is at least $\frac{1}{8k}$. Here, the randomness is taken over the choice of an input sequence $s_r$ and the internal randomness of the ORAM. Fix such an $i$. Fix all the randomness except for the choice of the data $d = (d_1, \ldots, d_r)$ written in the $i$th block of writes, so that $G_r$ obtained from this restricted distribution has fewer than $\ell$ edges between $V^W_i$ and $V^R_i$ with probability at least $\frac{1}{8k}$ over the choice of $d$. (This is possible by an averaging argument.) Let $S$ be the set of choices of $d$ which give fewer than $\ell$ edges between $V^W_i$ and $V^R_i$ in $G_r$. Clearly, $|S| \ge \frac{2^{rw}}{8k}$.
We use the ORAM to construct a deterministic protocol that transmits any string from $S$ from Alice to Bob, two communicating parties, using at most $mw + \ell(w + \log n)$ bits. That gives a contradiction, as such an efficient transmission violates the pigeonhole principle.
On input $d \in S$ to Alice, Alice sends a single message to Bob, who can determine $d$ from the message. They proceed as follows. Both Alice and Bob simulate the ORAM on $s_r$ up until reaching the $i$th block of writes. All the randomness used before the $i$th block of writes is fixed and known to both Alice and Bob. Then Alice continues with the simulation of the ORAM on the $i$th block of writes with data set to $d$. Once she finishes it, she sends the content of the internal memory of the ORAM to Bob using $mw$ bits. Then Alice continues with the simulation on the $i$th block of reads, and whenever the ORAM makes a server probe to read from a location that was written last during the simulation of the $i$th block of writes, Alice sends over the address and the content of that cell to Bob. Overall, Alice sends at most $mw + \ell(w + \log n)$ bits of communication to Bob, which can be concatenated into a single message of this size.
On the receiving side, Bob uses the internal state of the ORAM communicated by Alice to continue with the computation on the $i$th block of reads, while he uses the state of the server he obtained initially before reaching the $i$th block of writes. He simulates all server probes by himself, except for the read operations that match the list sent by Alice, where he uses the content provided by Alice. Clearly, Bob can determine $d$ from the simulation, as the answers to the $i$th block of reads are exactly $d$.
As $r \ge 2m$, we have $mw \le \frac{rw}{2}$, and by the choice of $\ell$, $\ell(w + \log n) = \frac{rw}{8}$; so the message has length at most $\frac{rw}{2} + \frac{rw}{8} = \frac{5rw}{8} < rw - \log(8k) \le \log|S|$, hence the number of communicated bits is less than $\log|S|$, which is a contradiction. ∎
Using good error-correcting codes [MS77], this lemma could be generalized to the case when the ORAM implements the Array Maintenance problem with probability $1 - \varepsilon$, i.e., it is allowed to return a wrong value for each of its input read operations with a small constant probability $\varepsilon$. The graph $G_r$ would still have a $(k, \ell')$-dense partition with probability $\frac{7}{8}$ for some $\ell' = \Omega(\ell)$ which depends only on the allowed failure probability $\varepsilon$.
Lemma 4 shows that there is an input sequence such that the corresponding access graph has a $(k, \ell)$-dense partition for some $k$. We show that, by the statistical security of the ORAM, this property holds for a single input sequence and many different values of $k$.
Lemma 5.
Let the ORAM be as in the beginning of this section, and assume $N$ is even and $2m \le r \le n$. Let $s$ be an input sequence to the ORAM of length $N$. If the ORAM is a statistically secure online ORAM then for every such $r$, the access graph $G(A(s))$ has a $\bigl(\lfloor N/2r \rfloor, \frac{rw}{8(w + \log n)}\bigr)$-dense partition with probability at least $\frac{5}{8}$.
Proof.
For contradiction, suppose that for some $r$ the probability is less than $\frac{5}{8}$. From the statistical security of the ORAM we know that the statistical distance $SD(A(s), A_r) \le \frac{1}{4}$. By Lemma 4, $G_r$ has a $\bigl(\lfloor N/2r \rfloor, \frac{rw}{8(w + \log n)}\bigr)$-dense partition with probability at least $\frac{7}{8}$. Define a function $f$ on access sequences that is an indicator of the corresponding access graph having such a dense partition. Applying Proposition 2 with $X = A_r$, $Y = A(s)$, and $f$, we can conclude that $G(A(s))$ has a $\bigl(\lfloor N/2r \rfloor, \frac{rw}{8(w + \log n)}\bigr)$-dense partition with probability at least $\frac{7}{8} - \frac{1}{4} = \frac{5}{8}$, a contradiction. ∎
We are ready to prove our main theorem for statistically secure ORAM.
Theorem 4.1.
There are constants $C_1, C_2 > 0$ such that for any integers $m$, $n$, and $N$ where $C_1 m \le n \le N$ and $w \ge \log N$, any statistically secure online ORAM with address range $[n]$, cell size $w$ bits and $m$ cells of internal memory must perform at least $C_2\,N \log(n/m)$ server probes in expectation (the expectation is over the randomness of the ORAM) on any input sequence of length $N$.
Proof.
Fix an ORAM machine and consider any input sequence $s$ to the ORAM of length $N$. By Lemma 5, for every $r = \frac{N}{2 \cdot 4^i}$ such that $2m \le r \le n$, we get that $G(A(s))$ has a $\bigl(4^i, \frac{rw}{8(w + \log n)}\bigr)$-dense partition with probability at least $\frac{5}{8}$. The admissible values of $4^i$ range over all powers of four between $\frac{N}{2n}$ and $\frac{N}{4m}$, i.e., over $\Theta(\log(n/m))$ values.
Applying Corollary 3 with $p = \frac{5}{8}$ and $c = 4^i \cdot \frac{rw}{8(w + \log n)} = \frac{Nw}{16(w + \log n)} \ge \frac{N}{32}$, we can lower bound the expected number of edges in $G(A(s))$ by
$$\frac{5}{16} \cdot \frac{N}{32} \cdot \Theta\bigl(\log(n/m)\bigr).$$
For $n \ge C_1 m$, this is at least $C_2\,N \log(n/m)$, provided $C_1$ is large enough. Since the in-degree of each vertex of an access graph is at most one, the number of edges of $G(A(s))$ is at most the number of its vertices; thus the expected number of vertices in $G(A(s))$, which is the same as the expected number of probes in $A(s)$, is at least $C_2\,N \log(n/m)$. ∎
Next, we prove a lower bound for computationally secure ORAM.
Theorem 4.2.
Let $m$, $n$, $w$ be non-decreasing functions of $N$ such that for all large enough $N$: $2m(N) \le n(N) \le N$ and $w(N) \ge \log N$. Let $\{\mathrm{ORAM}_N\}_{N \in \mathbb{N}}$ be a sequence of online ORAMs with address range $[n(N)]$, cell size $w(N)$ bits and $m(N)$ cells of internal memory that is computationally secure. Let $\{s_N\}_{N \in \mathbb{N}}$ be a sequence of input sequences where $|s_N| = N$, for each $N$.
For any constant $C > 0$ there is a constant $N_0$, such that for any $N \ge N_0$, $\mathrm{ORAM}_N$ must perform in expectation at least $C \cdot N$ server probes on the input sequence $s_N$.
In particular, there is no computationally secure online ORAM with constant bandwidth overhead.
Proof.
For each $N$, define $t_N$ to be the largest integer $t$ such that for every power of four $4^i \le 4^t$ with $2m(N) \le \frac{N}{2 \cdot 4^i} \le n(N)$, the graph $G(A(s_N))$ has the corresponding dense partition (as in Lemma 5) with probability at least $\frac{5}{8}$. Using Corollary 3 we get for each large enough $N$ that the expected number of edges in $G(A(s_N))$ is at least $c\,t_N\,N$, for some absolute constant $c > 0$. It suffices to show that $t_N \to \infty$ as $N \to \infty$. There cannot exist a fixed $i$ such that $G(A(s_N))$ has the corresponding dense partition with probability less than $\frac{5}{8}$ for infinitely many $N$. Otherwise $A(s_N)$ would be computationally distinguishable from $A_r$ (by the greedy algorithm which has $i$ hardwired, since by Lemma 4 the graph $G_r$ has the dense partition with probability at least $\frac{7}{8}$). So, $t_N \to \infty$ as $N \to \infty$. ∎
5 Alternative Definitions for Oblivious RAM
In this section, we recall some alternative definitions for ORAM which appeared in the literature and explain the relation of our lower bound to those models.
The definition of Larsen and Nielsen.
Larsen and Nielsen [LN18] required that for any two input sequences of equal length, the corresponding distributions of access sequences cannot be distinguished with probability greater than $\frac{1}{4}$ by any algorithm running in time polynomial in the sum of the following terms: the length of the input sequence, the logarithm of the number of memory cells (i.e., $\log n$), and the size of a memory cell (i.e., $w = \Theta(\log n)$ for the most natural parameters). We show that their definition implies statistical closeness as considered in our work (see the statistical security property in Definition 2.1). Therefore, any lower bound on the bandwidth overhead of an ORAM satisfying our definition implies a matching lower bound w.r.t. the definition of Larsen and Nielsen [LN18].
To this end, let us show that if two distributions of access sequences are not statistically close, then they are distinguishable in the sense of Larsen and Nielsen. Assume there exist two input sequences $s$ and $s'$ of equal lengths for which the access sequences $A(s)$ and $A(s')$ have statistical distance greater than $\frac{1}{4}$. We define a distinguisher $D$ that on access sequence $a$ outputs $1$ whenever $\Pr[A(s) = a] > \Pr[A(s') = a]$, outputs $0$ whenever $\Pr[A(s) = a] < \Pr[A(s') = a]$, and outputs a uniformly random bit whenever $\Pr[A(s) = a] = \Pr[A(s') = a]$. It follows from the definition of $D$, basic properties of statistical distance (see Section 2), and our assumption about the statistical distance of $A(s)$ and $A(s')$ that
$$\Pr[D(A(s)) = 1] - \Pr[D(A(s')) = 1] > \frac{1}{4}.$$
Note that $D$ is allowed to have all information about the distributions $A(s)$ and $A(s')$ hardwired. Thus, $D$ can run in time linear in the length of the access sequence (which is polynomial in the length of the input sequence), and it distinguishes the two access sequences with probability greater than $\frac{1}{4}$.
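The advantage of this hardwired distinguisher is exactly the statistical distance of the two distributions, which can be checked on a toy example (a sketch with dictionaries standing in for the two access-sequence distributions; our own illustration):

```python
def statistical_distance(p, q):
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0.0) - q.get(a, 0.0)) for a in support)

def advantage(p, q):
    """Pr[D(p) = 1] - Pr[D(q) = 1] for the distinguisher D that outputs 1
    when p(a) > q(a), 0 when p(a) < q(a), and a fair coin when equal."""
    support = set(p) | set(q)
    def prob_output_one(dist):
        total = 0.0
        for a in support:
            if p.get(a, 0.0) > q.get(a, 0.0):
                total += dist.get(a, 0.0)
            elif p.get(a, 0.0) == q.get(a, 0.0):
                total += 0.5 * dist.get(a, 0.0)
        return total
    return prob_output_one(p) - prob_output_one(q)

p = {"a": 0.6, "b": 0.4}
q = {"a": 0.1, "b": 0.4, "c": 0.5}
# The advantage of D equals SD(p, q) = 0.5 here.
assert abs(advantage(p, q) - statistical_distance(p, q)) < 1e-12
```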
The definition of Goldreich and Ostrovsky.
The definition of Goldreich and Ostrovsky (Definition 2.9 in [GO96]) suffers from an issue which, to the best of our knowledge, was not pointed out in the literature. In particular, the definition can be satisfied almost trivially without any indistinguishability of the access sequences. Recall that the Goldreich-Ostrovsky definition requires that for any two input sequences $s$ and $s'$, if the length distributions $|A(s)|$ and $|A(s')|$ are identical, then the distributions $A(s)$ and $A(s')$ are identical as well. As we show, this requirement can be satisfied by creating an ORAM that makes sure that on any two distinct sequences $s \ne s'$, the length distributions $|A(s)|$ and $|A(s')|$ are different (note that no indistinguishability is required in that case).
In particular, we create an ORAM with a constant overhead so that for any two distinct input sequences $s \ne s'$ the length distributions $|A(s)|$ and $|A(s')|$ differ, i.e., the distribution $|A(s)|$ encodes the sequence $s$. The ORAM proceeds by performing every operation directly on the server, followed by a read operation from address 0. After the last instruction in $s$, the ORAM selects a random sequence $t$ of operations of length $|s|$, and if $s$ is lexicographically smaller than $t$ then the ORAM performs an extra read from address 0. Note that this ORAM can be efficiently implemented using a constant amount of internal memory by comparing the input sequence to the randomly selected one online. Also, the machine does not need to know the length of the sequence in advance. Finally, the length distribution $|A(s)|$ is clearly different for each input sequence $s$. Given that the above definition allows an almost trivial ORAM with constant overhead, we do not hope to extend our lower bound towards this definition.
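The counterexample construction above can be sketched in a few lines (the concrete operation encoding, the 0-based addresses, and the function name are our own illustration, not part of the paper's model):

```python
import random

def trivial_access_sequence(ops, n, w, rng=None):
    """Sketch of the counterexample ORAM: perform each operation directly on
    the server, follow it with a dummy read of address 0, then sample a
    random sequence of operations of the same length and do one extra read
    of address 0 iff the input is lexicographically smaller than the sample.
    The overhead is constant, yet the *length* distribution of the access
    sequence differs for every input sequence."""
    rng = rng or random.Random()
    accesses = []
    for _, addr, _ in ops:          # ops are (kind, address, data) triples
        accesses.append(addr)       # the operation itself
        accesses.append(0)          # dummy read of address 0
    sample = [(rng.choice(("read", "write")), rng.randrange(n),
               rng.randrange(2 ** w)) for _ in ops]
    if ops < sample:                # lexicographic comparison, done online
        accesses.append(0)          # the length now encodes the comparison
    return accesses

ops = [("write", 3, 9), ("read", 3, 0)]
a = trivial_access_sequence(ops, n=8, w=4, rng=random.Random(1))
# Constant overhead: either 2|s| or 2|s| + 1 probes.
assert len(a) in (2 * len(ops), 2 * len(ops) + 1)
```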
Simulationbased definitions.
The recent work of Asharov et al. [AKL18] employs a simulation-based definition. Specifically, their definition is more general than the one employed in this work, and we currently do not know how to transfer the information transfer technique into their setting.
Our definition of computational ORAM.
Larsen and Nielsen claimed that their lower bound for statistically secure online ORAM implies a matching lower bound for computationally secure online ORAM. We see two caveats in this claim.
First, a straightforward extension of the definition of Larsen and Nielsen that considers only pairs of sequences and introduces a security parameter would allow solutions whose bandwidth overhead grows with the security parameter and might eventually reach $O(n)$. In other words, the growing security parameter eventually allows achieving identical memory access patterns by scanning the whole memory. Therefore, in our Definition 2.1, we extend the definition of statistically secure online ORAM into the computational setting by considering infinite families of input sequences.
Second, we currently do not know how to extend our techniques from the statistical regime into the computational setting with a matching bound. Our technique gives an $\omega(1)$ lower bound for the bandwidth overhead of computationally secure online ORAM. We leave it as an intriguing open problem whether it is possible to prove an $\Omega(\log n)$ lower bound for online ORAM w.r.t. our definition of computational security.
However, note that the $\omega(1)$ lower bound is still an interesting result. It follows from the work of Boyle and Naor [BN16] that any super-constant lower bound for offline ORAM would imply superlinear lower bounds on the size of sorting circuits, which would constitute a major breakthrough in computational complexity. Boyle and Naor [BN16] proved the following theorem (rephrased using our notation).
Theorem 5.1 (Theorem 3.1 [BN16]).
Suppose there exists a Boolean circuit ensemble $\{C_n\}_n$ of size $s(n)$, such that each $C_n$ takes as input $n$ words, each of size $w$ bits, and outputs the words in sorted order. Then for word size $w = \Theta(\log n)$ and constant internal memory $m = O(1)$, there exists a secure offline ORAM (as per Definition 2.8 [BN16]) with total bandwidth $O(s(n) + n \log n)$ and comparable computation.
Moreover, the additive factor of $n \log n$ follows from the transpose part of the algorithm of [BN16] (see Figures 1 and 2 in [BN16]). As Boyle and Naor showed in their appendix (Remark B.3 [BN16]), this additive factor in the total bandwidth may be reduced to $O(n)$ if the size of the internal memory is $n^{\varepsilon}$ for a constant $\varepsilon > 0$. Thus, a sorting circuit of size $O(nw)$ implies an offline ORAM with total bandwidth $O(nw)$. Or, the other way around, an $\omega(nw)$ lower bound for the total bandwidth of offline ORAM implies an $\omega(nw)$ lower bound for circuits sorting $n$ words of size $\Theta(\log n)$ bits each.
References

 [Ajt10] Miklós Ajtai. Oblivious RAMs without cryptographic assumptions. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 181–190, 2010.
 [AKL18] Gilad Asharov, Ilan Komargodski, Wei-Kai Lin, Kartik Nayak, and Elaine Shi. OptORAMa: Optimal oblivious RAM. IACR Cryptology ePrint Archive, 2018:892, 2018.
 [BN16] Elette Boyle and Moni Naor. Is there an oblivious RAM lower bound? In Proceedings of the 2016 ACM Conference on Innovations in Theoretical Computer Science, Cambridge, MA, USA, January 14-16, 2016, pages 357–368, 2016.
 [CLP14] Kai-Min Chung, Zhenming Liu, and Rafael Pass. Statistically-secure ORAM with Õ(log² n) overhead. In Advances in Cryptology - ASIACRYPT 2014 - 20th International Conference on the Theory and Application of Cryptology and Information Security, Kaoshiung, Taiwan, R.O.C., December 7-11, 2014, Proceedings, Part II, pages 62–81, 2014.
 [CP13] KaiMin Chung and Rafael Pass. A simple ORAM. IACR Cryptology ePrint Archive, 2013:243, 2013.
 [DMN11] Ivan Damgård, Sigurd Meldgaard, and Jesper Buus Nielsen. Perfectly secure oblivious RAM without random oracles. In Theory of Cryptography - 8th Theory of Cryptography Conference, TCC 2011, Providence, RI, USA, March 28-30, 2011. Proceedings, pages 144–163, 2011.
 [GGH13] Craig Gentry, Kenny A. Goldman, Shai Halevi, Charanjit S. Jutla, Mariana Raykova, and Daniel Wichs. Optimizing ORAM and using it efficiently for secure computation. In Privacy Enhancing Technologies - 13th International Symposium, PETS 2013, Bloomington, IN, USA, July 10-12, 2013. Proceedings, pages 1–18, 2013.
 [GM11] Michael T. Goodrich and Michael Mitzenmacher. Privacy-preserving access of outsourced data via oblivious RAM simulation. In Automata, Languages and Programming - 38th International Colloquium, ICALP 2011, Zurich, Switzerland, July 4-8, 2011, Proceedings, Part II, pages 576–587, 2011.
 [GMOT11] Michael T. Goodrich, Michael Mitzenmacher, Olga Ohrimenko, and Roberto Tamassia. Oblivious RAM simulation with efficient worst-case access overhead. In Proceedings of the 3rd ACM Cloud Computing Security Workshop, CCSW 2011, Chicago, IL, USA, October 21, 2011, pages 95–100, 2011.
 [GO96] Oded Goldreich and Rafail Ostrovsky. Software protection and simulation on oblivious RAMs. J. ACM, 43(3):431–473, 1996.
 [JLN19] Riko Jacob, Kasper Green Larsen, and Jesper Buus Nielsen. Lower bounds for oblivious data structures. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 2439–2447, 2019.
 [KGG18] Paul Kocher, Daniel Genkin, Daniel Gruss, Werner Haas, Mike Hamburg, Moritz Lipp, Stefan Mangard, Thomas Prescher, Michael Schwarz, and Yuval Yarom. Spectre attacks: Exploiting speculative execution. CoRR, abs/1801.01203, 2018.
 [KLO12] Eyal Kushilevitz, Steve Lu, and Rafail Ostrovsky. On the (in)security of hash-based oblivious RAM and a new balancing scheme. In Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 143–156, 2012.
 [LN18] Kasper Green Larsen and Jesper Buus Nielsen. Yes, there is an oblivious RAM lower bound! In Advances in Cryptology - CRYPTO 2018 - 38th Annual International Cryptology Conference, Santa Barbara, CA, USA, August 19-23, 2018, Proceedings, Part II, pages 523–542, 2018.
 [LSG18] Moritz Lipp, Michael Schwarz, Daniel Gruss, Thomas Prescher, Werner Haas, Anders Fogh, Jann Horn, Stefan Mangard, Paul Kocher, Daniel Genkin, Yuval Yarom, and Mike Hamburg. Meltdown: Reading kernel memory from user space. In 27th USENIX Security Symposium, USENIX Security 2018, Baltimore, MD, USA, August 15-17, 2018, pages 973–990, 2018.
 [MS77] F.J. MacWilliams and N.J.A. Sloane. The Theory of ErrorCorrecting Codes. NorthHolland, Amsterdam, 1977.
 [PD06] Mihai Patrascu and Erik D. Demaine. Logarithmic lower bounds in the cell-probe model. SIAM J. Comput., 35(4):932–963, 2006.
 [PPRY18] Sarvar Patel, Giuseppe Persiano, Mariana Raykova, and Kevin Yeo. PanORAMa: Oblivious RAM with logarithmic overhead. In 59th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2018, Paris, France, October 7-9, 2018, pages 871–882, 2018.
 [RFK14] Ling Ren, Christopher W. Fletcher, Albert Kwon, Emil Stefanov, Elaine Shi, Marten van Dijk, and Srinivas Devadas. Ring ORAM: closing the gap between small and large client storage oblivious RAM. IACR Cryptology ePrint Archive, 2014:997, 2014.
 [SvDS18] Emil Stefanov, Marten van Dijk, Elaine Shi, T.H. Hubert Chan, Christopher W. Fletcher, Ling Ren, Xiangyao Yu, and Srinivas Devadas. Path ORAM: an extremely simple oblivious RAM protocol. J. ACM, 65(4):18:1–18:26, 2018.
 [Vad99] Salil Pravin Vadhan. A Study of Statistical Zero-Knowledge Proofs. PhD thesis, Massachusetts Institute of Technology, September 1999.
 [WCS15] Xiao Wang, T.H. Hubert Chan, and Elaine Shi. Circuit ORAM: on tightness of the Goldreich-Ostrovsky lower bound. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, Denver, CO, USA, October 12-16, 2015, pages 850–861, 2015.
 [WHC14] Xiao Shaun Wang, Yan Huang, T.H. Hubert Chan, Abhi Shelat, and Elaine Shi. SCORAM: oblivious RAM for secure computation. In Proceedings of the 2014 ACM SIGSAC Conference on Computer and Communications Security, Scottsdale, AZ, USA, November 3-7, 2014, pages 191–202, 2014.