1 Introduction
One of the central problems in theoretical computer science is proving lower bounds in various models of computation such as circuits and data structures. Proving super-linear size lower bounds for circuits, even when their depth is restricted, remains elusive. Similarly, proving polynomial lower bounds on query time for certain static data structure problems seems out of reach. To deal with this situation, researchers have developed various conjectures which, if true, would imply the sought-after lower bounds. In this paper, we investigate the relative power of some of those conjectures. In particular, we establish a connection to the Network Coding Conjecture (NCC) of Li and Li [24], which was recently used to prove various lower bounds, such as lower bounds on the size of circuits computing multiplication [3] and on the number of I/O operations for external memory sorting [12].
Another problem researchers have looked at is a data structure problem for function inversion [18], which is popular in cryptography. Corrigan-Gibbs and Kogan [9] observed that lower bounds for the function inversion problem imply lower bounds for logarithmic-depth circuits. In this paper we establish new connections between these problems and identify some interesting instances. Building on the work of Afshani et al. [3], we show that the Network Coding Conjecture implies certain weak lower bounds for the inversion data structure problem. That in turn implies the same type of circuit lower bounds as given by Corrigan-Gibbs and Kogan [9]. We show that similar results apply to a host of other data structure problems, such as the well-studied polynomial evaluation problem or the finite field Fourier transform problem. Corrigan-Gibbs and Kogan [9] gave their circuit lower bound for a certain a priori undetermined function. We establish the same circuit lower bounds for sorting integers, which is a very explicit function. Similarly, we establish a connection between data structures for polynomial evaluation and circuits for multipoint polynomial evaluation. Our results sharpen and generalize the picture emerging in the literature.

The data structure problems we consider in this paper are static, nonadaptive, systematic data structure problems, a very restricted class of data structures for which lower bounds should perhaps be easier to obtain. The problems we consider have the following structure: given input data described by n bits, create a data structure of size s. Then we receive a single query from a set of permissible queries, and we are supposed to answer the query while nonadaptively inspecting at most t locations in the data structure and in the original data. Nonadaptivity means that the inspected locations are chosen based only on the query being answered, not on the content of the inspected memory. We show that when the space s is linear, polynomial lower bounds on t for certain problems would imply super-linear lower bounds on log-depth circuits computing sorting, multipoint polynomial evaluation, and other problems.
We show that logarithmic lower bounds on t for these data structures can be derived from the Network Coding Conjecture, even in the more generous setting where the space is linear and inspecting locations in the data structure is free. This matches the lower bounds of Afshani et al. [3] for certain circuit parameters derived from the Network Coding Conjecture. One can recover the same type of result they showed from our connection between the Network Coding Conjecture, data structure lower bounds, and circuit lower bounds.
In this regard, the Network Coding Conjecture seems to be the strongest of these conjectures, and hence the hardest to prove. One would hope that for the strongly restricted data structure problems we consider, obtaining the required lower bounds should be within our reach.
Organization. This paper is organized as follows. In the next section we review the data structure problems we consider. Then we give a precise definition of the Network Coding Conjecture in Section 3. Section 4 contains the statements of our main results. In Sections 5 and 6 we prove our main results for the function inversion and the polynomial problems. In Section 7 we discuss the connection between data structure and circuit lower bounds for explicit functions.
2 Data Structure Problems
In this paper, we study lower bounds on systematic data structures for various problems – function inversion, polynomial evaluation, and polynomial interpolation. We are given an input x_1, …, x_n, where each x_i is a bit or an element of some field F. First, a data structure algorithm may preprocess the input to produce an advice string of s bits (we refer to the parameter s as the space of the data structure). Then, we are given a query and the data structure should produce a correct answer (what constitutes a correct answer depends on the problem). To answer a query, the data structure has access to the whole advice string and can make queries to the input, i.e., read at most t of the elements x_1, …, x_n. We refer to the parameter t as the query time of the data structure.
We consider nonuniform data structures, as we want to provide connections between data structures and nonuniform circuits. Formally, a nonuniform systematic data structure is a pair of algorithms with oracle access to the input: a preprocessing algorithm that produces the advice string, and a query algorithm that, given the advice string and a query, outputs a correct answer to the query using at most t oracle queries to the input. Both algorithms may differ for each input size n.
2.1 Function Inversion
In the function inversion problem, we are given a function f : [N] → [N] and a point y ∈ [N], and we want to find x ∈ [N] such that f(x) = y. This is a central problem in cryptography, as many cryptographic primitives rely on the existence of a function that is hard to invert. To sum up, we are interested in the following problem.
Function Inversion  

Input:  A function f : [N] → [N] given as an oracle. 
Preprocessing:  Using f, prepare an advice string of s bits. 
Query:  A point y ∈ [N]. 
Answer:  Compute a value x with f(x) = y (if one exists), with full access to the advice string and using at most t queries to the oracle for f. 
We want to design an efficient data structure, i.e., make s and t as small as possible. There are two trivial solutions. The first one is to store the whole function f in the advice string, giving s = O(N log N) and t = 0. The second one is to query the whole function while answering a query y, giving s = 0 and t = N. Note that the space of the data structure is the length of the advice string in bits, but a single oracle query reads a whole value of f; thus with N oracle queries we read the whole description of f, i.e., N log N bits.
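The two trivial trade-offs can be modeled in a few lines of Python (a toy sketch with our own function names, not code from the paper):

```python
# Toy model of a systematic data structure for function inversion.
# Trade-off 1: store the whole function as advice (large s, t = 0).
# Trade-off 2: store nothing and query the oracle everywhere (s = 0, t = N).

def preprocess_full_table(f, N):
    """Advice: a full inverse table; answering then needs no oracle queries."""
    return {f(x): x for x in range(N)}

def invert_with_table(advice, y):
    """t = 0 oracle queries: the answer is read off the advice string."""
    return advice.get(y)

def invert_by_search(f, N, y):
    """No advice: scan the whole domain, i.e., t = N oracle queries."""
    for x in range(N):
        if f(x) == y:
            return x
    return None
```

Both extremes invert correctly; the open question is whether s and t can be small simultaneously.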
The question is whether we can design a data structure in which both s and t are substantially smaller than N. Hellman [18] gave the first nontrivial solution: he introduced a randomized systematic data structure which inverts a function with constant probability (over the uniform choice of the function f and the query y) with s = t = O(N^{2/3}) up to polylogarithmic factors. Fiat and Naor [13] improved the result and introduced a data structure that inverts any function at any point, however with a slightly worse trade-off: s^3 t = O(N^3) up to polylogarithmic factors. Hellman [18] also introduced a more efficient data structure for inverting a permutation: it inverts any permutation at any point with s and t roughly √N. Thus, it seems that inverting a permutation is an easier problem than inverting an arbitrary function.

In this paper, we are interested in lower bounds for the inversion problem. Yao [35] gave a lower bound showing that any systematic data structure for the inversion problem must have st = Ω(N) up to logarithmic factors; however, the lower bound is applicable only for a restricted range of t. Since then, only slight progress has been made. De et al. [10] improved the lower bound of Yao [35] so that it applies to the full range of t. Abusalah et al. [1] improved the trade-off further for certain functions. Seemingly, their result contradicts Hellman's trade-off. However, Hellman's attack [18] requires that the function can be efficiently evaluated, and the functions introduced by Abusalah et al. [1] cannot be efficiently evaluated. There is also a series of papers [16, 29, 11, 8] which study how the probability of successful inversion depends on the parameters s and t. However, none of these results yields a lower bound better than st = Ω(N) up to logarithmic factors. Hellman's trade-off is still the best known upper bound for the inversion problem. Thus, there is still a substantial gap between the lower and upper bounds.
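Hellman's upper bound rests on iterating f along chains and storing only chain endpoints. Below is a minimal single-table sketch (our own simplification; real Hellman tables use many tables with re-randomization to boost coverage). Every answer is verified against f, so the scheme may fail to invert a point but never returns a wrong preimage:

```python
import random

def build_hellman_table(f, N, m, t, seed=0):
    """Build one Hellman table: m chains of length t over f,
    storing only (endpoint -> start) pairs. Space ~ m, not N."""
    rng = random.Random(seed)
    table = {}
    for _ in range(m):
        start = rng.randrange(N)
        z = start
        for _ in range(t):
            z = f(z)
        table[z] = start
    return table

def hellman_invert(f, table, t, y):
    """Walk forward from y; on hitting a stored endpoint, re-walk that
    chain from its start looking for a verified preimage of y."""
    z = y
    for _ in range(t):
        if z in table:
            w = table[z]
            for _ in range(t):
                if f(w) == y:
                    return w          # verified: f(w) == y
                w = f(w)
        z = f(z)
    return None                       # inversion failed for this y
```

For a permutation, the image of any stored chain start is always inverted successfully, and distinct chains never collide on endpoints.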
Another caveat of all known data structures for the inversion is that they heavily use adaptivity when answering queries. That is, oracle queries may depend on the advice string and on the answers to the oracle queries which have already been made. We are interested in nonadaptive data structures. We say a systematic data structure is nonadaptive if all oracle queries depend only on the query y being answered.
As nonadaptive data structures are weaker than adaptive ones, there is hope that for nonadaptive data structures we could prove stronger lower bounds. Moreover, nonadaptive data structures correspond to circuit computation [30, 31, 33, 9]. Thus, we can derive a circuit lower bound from a strong lower bound for a nonadaptive data structure. Nonadaptive data structures were considered by Corrigan-Gibbs and Kogan [9]. They proved that improving Yao's lower bound [35] by a polynomial factor for nonadaptive data structures would imply the existence of a function that cannot be computed by a linear-size and logarithmic-depth circuit. More formally, they prove that if a function cannot be inverted by any nonadaptive data structure whose parameters beat Yao's bound by a polynomial factor, then there exists a function that cannot be computed by any circuit of linear size and logarithmic depth. They construct from the hard-to-invert function a single function that, in effect, inverts it at all points simultaneously: informally, if the function is hard to invert at some points, then it is hard to invert at all points together. Moreover, they showed an equivalence between function inversion and substring search. A data structure for the function inversion of space s and query time t yields a data structure of comparable space and query time for finding a pattern in a binary text, and vice versa – an efficient data structure for the substring search would yield an efficient data structure for the function inversion. Compared to the results of Corrigan-Gibbs and Kogan [9], we provide an explicit function (sorting integers) which requires large circuits if any of the functions is hard to invert.
Another connection between data structures and circuits was made by Viola [34], who considered constant-depth circuits with arbitrary gates.
2.2 Evaluation and Interpolation of Polynomials
In this section, we describe two natural problems connected to polynomials. We consider these problems over a finite field to avoid issues with encoding reals.
Polynomial Evaluation over F  

Input:  Coefficients a_0, …, a_{n-1} of a polynomial p (i.e., p(x) = a_0 + a_1 x + ⋯ + a_{n-1} x^{n-1}) 
Preprocessing:  Using the input, prepare an advice string of s bits. 
Query:  A number x ∈ F. 
Answer:  Compute the value p(x), with full access to the advice string and using at most t queries to the coefficients of p. 
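The trivial solution with no advice (s = 0) reads all n coefficients and evaluates by Horner's rule, giving t = n; a sketch over a prime field (the function name is ours):

```python
def eval_poly_mod(coeffs, x, p):
    """Evaluate p(x) = sum(coeffs[i] * x**i) over GF(p) by Horner's rule.
    Reads every coefficient exactly once, i.e., t = n with s = 0."""
    acc = 0
    for a in reversed(coeffs):   # coeffs[i] is the coefficient of x**i
        acc = (acc * x + a) % p
    return acc
```

The other extreme stores p(x) for every x in F as advice, giving t = 0 at the cost of s = |F| field elements.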
Polynomial Interpolation over F  

Input:  Point-value pairs (x_1, p(x_1)), …, (x_n, p(x_n)) of a polynomial p of degree at most n - 1, where x_i ≠ x_j for any two distinct indices i, j 
Preprocessing:  Using the input, prepare an advice string of s bits. 
Query:  An index i ∈ {0, …, n - 1}. 
Answer:  Compute the i-th coefficient of the polynomial p, i.e., the coefficient of x^i in p, with full access to the advice string and using at most t queries to the oracle for the point-value pairs. 
In the paper we often use a version of polynomial interpolation where the points x_1, …, x_n are fixed in advance and the input consists just of the values p(x_1), …, p(x_n). Since we are interested in lower bounds, this makes our results slightly stronger.
Let F_q denote the Galois field of q elements. Let n be a divisor of q - 1. It is a well-known fact that for any finite field its multiplicative group is cyclic (see e.g. Serre [27]). Thus, there is an element g of order n in the multiplicative group (that is, an element such that g^n = 1 and g^k ≠ 1 for each 0 < k < n). In other words, g is our choice of a primitive n-th root of unity. Pollard [26] defines the Finite Field Fourier transform (FFFT) (with respect to g) as the linear function FFFT: F_q^n → F_q^n which satisfies:

FFFT(a)_j = Σ_{k=0}^{n-1} a_k g^{jk}   for any j ∈ {0, …, n - 1}.

The inverse is given by:

FFFT^{-1}(b)_k = n^{-1} Σ_{j=0}^{n-1} b_j g^{-jk}   for any k ∈ {0, …, n - 1}.

Note that if we work over a finite field F_q, the integer n is to be interpreted as the sum of n copies of the unit element of F_q. In our theorems we always set n to be a divisor of q - 1; thus n is nonzero in F_q and the inverse n^{-1} exists. Observe that FFFT^{-1}(FFFT(a)) = a. Hence, FFFT is the finite field analog of the Discrete Fourier transform (DFT), which works over the complex numbers.
The FFT algorithm by Cooley and Tukey [7] can be used over finite fields as well (as observed by Pollard [26]) to get an algorithm using O(n log n) field operations (additions or multiplications of two elements). Thus we can compute FFFT and its inverse in O(n log n) field operations.
It is easy to see that FFFT(a) is actually the evaluation of the polynomial with coefficient vector a at multiple special points (specifically at the powers g^0, g^1, …, g^{n-1}). We can also see that it is a special case of interpolation at multiple special points, since the inverse transform has the same form as FFFT, with g replaced by g^{-1} and the result scaled by n^{-1}. We provide an NCC-based lower bound for data structures computing the polynomial evaluation. However, we use the data structure only for evaluating a polynomial at powers of a primitive root of unity. Thus, the same proof yields a lower bound for data structures computing the polynomial interpolation.
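A direct (quadratic-time) implementation of the FFFT and its inverse over a prime field illustrates the definitions. Here p = 17, n = 4 and g = 4 is one valid choice of a primitive 4th root of unity, since 4 divides 17 - 1 (this toy code is ours, not Pollard's O(n log n) algorithm; it needs Python 3.8+ for `pow(n, -1, p)`):

```python
def fft_field(a, g, p):
    """Naive FFFT over GF(p): b_j = sum_k a_k * g^(j*k) mod p."""
    n = len(a)
    return [sum(a[k] * pow(g, j * k, p) for k in range(n)) % p
            for j in range(n)]

def fft_field_inv(b, g, p):
    """Inverse FFFT: a_k = n^(-1) * sum_j b_j * g^(-j*k) mod p.
    n is invertible mod p because n divides p - 1."""
    n = len(b)
    n_inv = pow(n, -1, p)
    g_inv = pow(g, -1, p)
    return [n_inv * sum(b[j] * pow(g_inv, j * k, p) for j in range(n)) % p
            for k in range(n)]
```

The round trip fft_field_inv(fft_field(a, g, p), g, p) == a holds whenever g has order exactly n in the multiplicative group of GF(p).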
There is great interest in data structures for polynomial evaluation in the cell probe model. In this model, some representation of a polynomial p is stored in a table of s cells, each of w bits. Usually, w is set to O(log q), so that we can store an element of F_q in a single cell. On a query x, the data structure should output p(x), making at most t probes to the table. A difference between data structures in the cell probe model and systematic data structures is that a data structure in the cell probe model is charged for every probe to the table, whereas a systematic data structure is charged only for queries to the input (the coefficients of p); reading from the advice string is free. Note that the coefficients of p do not even have to be stored in the table. There are again two trivial solutions. The first one is to store the value p(x) for each x ∈ F_q, so that on a query we probe just one cell; thus we would get s = q and t = 1 (assuming we can store an element of F_q in a single cell). The second one is to store the coefficients of p, and on a query probe all the cells and compute the value p(x); thus we would get s = t = n.
Kedlaya and Umans [21] provided a data structure for the polynomial evaluation that uses almost linear space (in the input size) and polylogarithmic query time. Note that n log q is the size of the input in bits and log q is the size of the output.
The first lower bound for the cell probe model was given by Miltersen [25], who proved a time-space trade-off lower bound for any cell probe data structure for the polynomial evaluation. This was improved by Larsen [22], giving t = Ω(log q) if the data structure uses linear space; however, the size of F_q has to be superlinear in n. Data structures in the bit probe model were studied by Gál and Miltersen [14]. The bit probe model is the same as the cell probe model, but each cell contains only a single bit, i.e., w = 1. They studied succinct data structures, i.e., data structures whose space exceeds the information-theoretic minimum only by a lower-order term (the redundancy). Thus, succinct data structures are related to systematic data structures, but a succinct data structure is still charged for every probe (as is any other data structure in the cell probe model). Gál and Miltersen [14] showed that for any succinct data structure for the polynomial evaluation in the bit probe model, the product of the query time and the redundancy must be at least linear. We are not aware of any lower bound for systematic data structures for the polynomial evaluation.
Larsen et al. [23] also give a log-squared lower bound for dynamic data structures for polynomial evaluation in the cell probe model. Dynamic data structures also support updates of the polynomial p.
There is great interest in algorithmic questions about the polynomial interpolation, such as how fast we can interpolate polynomials [15, 5, 17], how many queries we need to interpolate a polynomial if it is given by an oracle [6, 19], how to compute the interpolation in a numerically stable way over infinite fields [28], and many others. However, we are not aware of any results about data structures for the interpolation, i.e., when the interpolation algorithm has access to some precomputed advice.
3 Network Coding
We prove our conditional lower bounds based on the Network Coding Conjecture. In network coding, we are interested in how much information we can send through a given network. A network consists of a graph G = (V, E), positive capacities c(e) of the edges, and k pairs of vertices (s_1, t_1), …, (s_k, t_k), called the source-target pairs. We say a network is undirected or directed (acyclic) if the graph G is undirected or directed (acyclic). We say a network is uniform if the capacities of all edges in the network equal some common value c.
The goal of a coding scheme for a directed acyclic network is that at each target t_i it is possible to reconstruct the input message which was generated at the source s_i. The coding scheme specifies the messages sent from each vertex along its outgoing edges as a function of the received messages. Moreover, the lengths of the messages sent along the edges have to respect the edge capacities.
More formally, each source s_i of a network receives an input message x_i sampled (independently of the messages for the other sources) from the uniform distribution on a set Σ_i. Without loss of generality we can assume that each source has in-degree 0 (otherwise we can add a vertex s'_i and an edge (s'_i, s_i) and replace the source s_i by s'_i). There is an alphabet Σ_e for each edge e. For each source s_i and each of its outgoing edges e there is a function which specifies the message sent along e as a function of the received input message x_i. For each non-source vertex v and each of its outgoing edges e there is a similar function which specifies the message sent along e as a function of the messages sent to v along the edges incoming to v. Finally, each target t_i has a decoding function which maps the messages received along the incoming edges of t_i to an output. The coding scheme is executed as follows:

Each source s_i receives an input message x_i, and along each of its outgoing edges the corresponding message is sent.

When a vertex v receives all messages along all its incoming edges, it sends a message along each of its outgoing edges, computed by the corresponding function. As the graph is acyclic, this procedure is well-defined and each vertex of nonzero out-degree will eventually send its messages along its outgoing edges.

At the end, each target t_i applies its decoding function to the messages received along its incoming edges. We say the coding scheme is correct if each target t_i reconstructs the message x_i, for all i and any input messages.
The coding scheme has to respect the edge capacities, i.e., if M_e is a random variable that represents the message sent along the edge e, then H(M_e) ≤ c(e), where H denotes the Shannon entropy. The coding rate of a network is the maximum r such that there is a correct coding scheme in which each input message x_i is a uniformly random string of r bits. Network coding can be defined also for directed cyclic networks or undirected networks, but we will not use that here.

Network coding is related to multicommodity flows. A multicommodity flow for an undirected network specifies a flow for each commodity i such that the flows transport as many units of commodity i from s_i to t_i as possible. The flow of commodity i is specified by a function f_i : V × V → R_{≥0} which describes for each pair of vertices (u, v) how many units of commodity i are sent from u to v. Each function f_i has to satisfy:

If u and v are not connected by an edge, then f_i(u, v) = 0.

For each edge {u, v}, it holds that f_i(u, v) = 0 or f_i(v, u) = 0, i.e., commodity i flows along each edge in only one direction.

For each vertex v that is not the source s_i or the target t_i, what comes into the vertex v goes out of it, i.e., Σ_u f_i(u, v) = Σ_w f_i(v, w).

What is sent from the source s_i arrives at the target t_i, i.e., Σ_v f_i(s_i, v) - Σ_v f_i(v, s_i) = Σ_v f_i(v, t_i) - Σ_v f_i(t_i, v).
Moreover, all flows together have to respect the capacities, i.e., for each edge {u, v} it must hold that Σ_i (f_i(u, v) + f_i(v, u)) ≤ c({u, v}). The flow rate of a network is the maximum r such that there is a multicommodity flow that for each i transports at least r units of commodity i from s_i to t_i. A multicommodity flow for directed graphs is defined similarly; however, the flows can transport the commodities only in the direction of the edges.
Let N be a directed acyclic network with flow rate r. It is clear that the coding rate of N is at least r, as we can send the messages along the flow paths without any coding and thus reduce the encoding problem to the flow problem. The opposite inequality does not hold: there is a directed network whose coding rate is substantially larger than its flow rate, as shown by Adler et al. [2]. Thus, network coding for directed networks provides an advantage over the simple solution given by the maximum flow. However, such a result is not known for undirected networks. Li and Li [24] conjectured that network coding does not provide any advantage for undirected networks; thus for any undirected network, the coding rate equals the flow rate. This conjecture is known as the Network Coding Conjecture (NCC), and we state a weaker version of it below.
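The classical example of the advantage in directed networks is the butterfly network: both targets must learn both one-bit source messages, but only a single bit can cross the middle bottleneck edge. Routing cannot achieve this, while sending the XOR of the two bits across the bottleneck lets both targets decode. A toy simulation of the standard example (the encoding below is our own illustration, not a construction from the paper):

```python
def butterfly(x1, x2):
    """Simulate network coding on the directed butterfly network.
    Each target hears one source bit directly plus the XOR bit m
    that crossed the capacity-one bottleneck edge."""
    m = x1 ^ x2                 # the only bit crossing the bottleneck
    target1 = (x2 ^ m, x2)      # hears x2 directly, decodes x1 = x2 ^ m
    target2 = (x1, x1 ^ m)      # hears x1 directly, decodes x2 = x1 ^ m
    return target1, target2
```

Adler et al. [2] exhibit directed networks where this kind of advantage is much larger; the NCC asserts that no such advantage exists in undirected networks.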
For a directed graph G, we consider the undirected graph obtained from G by making each directed edge undirected (i.e., replacing each edge (u, v) by {u, v}). For a directed acyclic network N we define the corresponding undirected network over this undirected graph by keeping the source-target pairs and the capacities the same.
[Weaker NCC] Let N be a directed acyclic network, let r_c be the coding rate of N, and let r_f be the flow rate of the corresponding undirected network. Then r_c ≤ r_f.
4 NCC Implies Data Structure Lower Bounds
In this paper, we provide several connections between lower bounds for data structures and other computational models. The first connection is that NCC (Conjecture 3) implies lower bounds for data structures for the permutation inversion and for the polynomial evaluation and interpolation. Assuming NCC, we show that the query time t of a nonadaptive systematic data structure for any of the above problems is at least logarithmic, even if the data structure uses linear space, i.e., the advice string has size εN log N for a sufficiently small constant ε > 0. Formally, for each problem we consider the query time of the optimal nonadaptive systematic data structure using space at most s, for the permutation inversion and for the polynomial evaluation and interpolation over a field F.
[Inversion lower bound] Let ε > 0 be a sufficiently small constant. Assuming NCC, any nonadaptive systematic data structure for the permutation inversion using space at most εN log N has query time Ω(log N).
[Evaluation and interpolation lower bound] Let F be a field and let n be a divisor of |F| - 1. Let s = εn log |F| for a sufficiently small constant ε > 0. Then, assuming NCC, any nonadaptive systematic data structure for the polynomial evaluation or the polynomial interpolation over F using space at most s has query time Ω(log n).
Note that, assuming NCC, the query time of any linear-space nonadaptive systematic data structure is Ω(log N) for the permutation inversion, and Ω(log n) for the polynomial evaluation and interpolation. Thus, these conditional lower bounds cross the barrier given by the best unconditional lower bounds known for the function inversion [35, 10, 1, 16, 29, 11, 8] and by the lower bound for succinct data structures for the polynomial evaluation by Gál and Miltersen [14], none of which yields a super-constant bound on the query time in the linear-space regime. The lower bound by Larsen [22] says that any cell probe data structure for the polynomial evaluation using linear space needs at least logarithmic query time if the size of the field is superlinear in n. The lower bound given by our theorem says that, assuming NCC, a nonadaptive data structure needs to read at least logarithmically many coefficients of the polynomial even if we get a linear number of bits of information about the polynomial for free. Our lower bound also holds for linear-size fields.
To prove the two theorems above, we use the technique of Farhadi et al. [12]. The proof can be divided into two steps:

From a data structure for the problem we derive a network with O(nt) edges such that the network admits an encoding scheme that is correct on a large fraction of the inputs. This step is different for each problem; the reductions are shown in Sections 5 and 6. This step uses new ideas and, interestingly, it uses the data structure twice in sequence.

If there is a network with few edges that admits an encoding scheme which is correct for a large fraction of the inputs, then NCC yields a lower bound on the number of edges and hence on the query time. This step is common to all the problems. It was implicitly proved by Farhadi et al. [12] and Afshani et al. [3]. For the sake of completeness, we give a proof of this step in Appendix A.
5 NCC Implies a Weak Lower Bound for the Function Inversion
In this section, we prove that, assuming NCC, any nonadaptive systematic data structure for the permutation inversion requires query time Ω(log N) even if it uses linear space. Let D be a data structure for inverting permutations of [N] with linear space s = εN log N, for a sufficiently small constant ε > 0, and query time t. From D we construct a directed acyclic network and an encoding scheme for it of a large coding rate. By Conjecture 3 we get that the flow rate of the corresponding undirected network is large as well. We prove that there are many source-target pairs at large distance. Since the number of edges of the network will be O(Nt) and the flow rate is large, we are able to derive the lower bound t = Ω(log N).
We construct the network in two steps. First, we construct a network that admits an encoding scheme which is correct only on a substantial fraction of all possible inputs. This might create correlations among the messages received by the sources. However, to use the Network Coding Conjecture we need a coding scheme that is able to reconstruct messages sampled from independent distributions. To overcome this issue we use a technique introduced by Farhadi et al. [12], and from the first network we construct a second network that admits a fully correct encoding scheme.
Consider a directed acyclic network with k source-target pairs. Let each source receive a binary string of length r as its input message. If we concatenate all the input messages, we get a string of length kr; thus the set of all possible inputs for an encoding scheme corresponds to the set {0, 1}^{kr}. We say an encoding scheme is correct on an input if it is possible to reconstruct all the messages at the appropriate targets. For δ ∈ (0, 1], a δ-encoding scheme is an encoding scheme which is correct on at least a δ-fraction of the inputs in {0, 1}^{kr}.
We say a directed network is long if for a constant fraction of the source-target pairs the distance between the source and its target is at least d, for a suitable parameter d. Here, we measure the distance in the underlying undirected graph, even though the network is directed. The following lemma is implicitly used by Farhadi et al. [12] and Afshani et al. [3]. We give its proof in Appendix A for the sake of completeness. [Implicitly used in [12, 3]] Let a directed acyclic uniform network be long for sufficiently large d. Assume there is a δ-encoding scheme for the network for a sufficiently large δ. Then, assuming NCC, the number of edges of the network is Ω(k · d), where k is the number of source-target pairs.
Now we are ready to prove a conditional lower bound for the permutation inversion. For the proof we use the following fact, which follows from the well-known Stirling formula:
Fact 1.
The number of permutations of an N-element set is at least (N/e)^N.
We now restate and prove the inversion theorem from Section 4.
Proof.
Let D be the optimal data structure for the inversion of permutations on [N] using space s = εN log N, and let t be its query time. We will construct a directed acyclic uniform network with N source-target pairs. The network will admit an encoding scheme correct on a large fraction of the inputs, it will have at most O(Nt) edges, and it will be long. Thus, by Lemma 5 we get a lower bound on the number of edges of the network,
from which we can conclude that t = Ω(log N). Thus, it remains to construct the network and the encoding scheme.
First, we construct a graph which will yield the final graph by deleting some edges. The graph has three layers of vertices: a source layer of N sources, a middle layer of N vertices, and a target layer of N vertices. The targets of the network will be assigned to the vertices of the target layer later.
We add edges according to the data structure D: let Q_i be the set of oracle queries which D makes during the computation of the answer to query i, i.e., for each q ∈ Q_i, it queries the oracle for the value of the permutation at q. As D is nonadaptive, the sets Q_i are well-defined. For each i and each q ∈ Q_i we add an edge from the q-th source to the i-th middle vertex and an edge from the q-th middle vertex to the i-th target vertex. We set the capacity of all edges to O(log N), enough to send one input message. This finishes the construction; see Fig. 1 for an illustration of the graph.
The graph has exactly 2Nt edges, as each oracle query contributes one edge into the middle layer and one into the target layer. Moreover, the vertices of the middle and the target layer have in-degree at most t, as the incoming edges correspond to the oracle queries made by D. However, some vertices of the source and the middle layer might have large out-degree, which might prevent the network from being long. For example, the data structure could always query the point 1. Then there would be edges from the first source to all middle vertices and from the first middle vertex to all target vertices, hence all middle and target vertices would be within distance 4 of each other in the underlying undirected graph. So we need to remove edges adjacent to high-degree vertices. Consider the set of vertices of out-degree larger than a suitable threshold (polynomial in t); we remove all edges incident to these vertices to obtain the final graph. (For simplicity, we keep the resulting degree-0 vertices.) Thus, the maximum degree of the final graph is bounded by the threshold. Since the graph has 2Nt edges, only a small fraction of the vertices lose their edges.
Now, we assign the targets in such a way that the network is long. For each source, consider the set of vertices at distance at most d from it in the underlying undirected graph, for a suitable parameter d = Θ(log N). Since the maximum degree of the graph is bounded, this set has size exponential in d, and for a suitable choice of d it contains only a small fraction of the target layer. It follows from an averaging argument that there is an integer shift r such that for at least half of the sources, the target-layer vertex whose index is shifted by r is at distance at least d. (Here the addition of indices is modulo N.) We fix one such r and assign the i-th target of the network to the target-layer vertex with index i + r mod N. Thus, the network is long.
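The construction of the layered graph from the nonadaptive query sets, together with the pruning of high-out-degree vertices, can be sketched as follows (a toy reconstruction with our own names and an explicit degree threshold; the paper's exact parameters differ):

```python
def build_network(query_sets, N, max_out_deg):
    """Vertices 0..N-1: sources, N..2N-1: middle layer, 2N..3N-1: targets.
    query_sets[i] lists the oracle queries the nonadaptive data structure
    makes when answering query i; each query q contributes an edge
    source q -> middle i and an edge middle q -> target i. All edges
    incident to vertices of out-degree above max_out_deg are removed."""
    edges = set()
    for i in range(N):
        for q in query_sets[i]:
            edges.add((q, N + i))          # source layer -> middle layer
            edges.add((N + q, 2 * N + i))  # middle layer -> target layer
    out_deg = {}
    for u, _ in edges:
        out_deg[u] = out_deg.get(u, 0) + 1
    bad = {u for u, d in out_deg.items() if d > max_out_deg}
    return {(u, v) for (u, v) in edges if u not in bad and v not in bad}
```

Pruning makes every degree bounded, which is what forces shifted source-target pairs to be far apart in the undirected graph.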
It remains to construct the encoding scheme (see Fig. 1 for a sketch). Each source i receives a number x_i ∈ [N] as an input message. We interpret the string of input messages as a function: we define π by π(i) = x_i. We will consider only those inputs for which x_1, …, x_N are pairwise distinct, so that π is a permutation.

At the j-th vertex of the middle layer we want to compute π^{-1}(j) using the data structure D. To compute π^{-1}(j) we need the appropriate advice string and the answers to the oracle queries in Q_j. We fix the advice string to some particular value which will be determined later, and we focus only on inputs which yield this advice string. In the original graph, the j-th middle vertex is connected exactly to the sources q for q ∈ Q_j, but some of those connections might be missing in the final graph. Thus, for each q ∈ Q_j whose edge was removed, the value x_q will be fixed to some particular value which will also be determined later. Each source sends its input message along all its outgoing edges. Thus, at the j-th middle vertex we know the answers to all oracle queries in Q_j: each value π(q) = x_q for q ∈ Q_j was either fixed or sent along an incoming edge. We also know the advice string, as it was fixed. Therefore, we can compute π^{-1}(j) at the j-th middle vertex. Note that π^{-1}(j) is the index of the source which received j as an input message, i.e., if x_i = j, then π^{-1}(j) = i.

Now, we define another permutation σ by σ(j) = π^{-1}(j) + r, where the addition is modulo N and r is the shift fixed during the construction of the network. Since r is fixed, we can compute σ(j) at the j-th middle vertex. The goal is to compute σ^{-1} at the vertices of the target layer. First, we argue that σ^{-1}(i + r) = x_i. The permutation π^{-1} maps an input message x_i to the index i. The permutation σ maps an input message x_i to the index i + r. Thus, the inverse permutation σ^{-1} maps the index i + r to the input message x_i. If we are able to reconstruct σ^{-1}(i + r) at the target assigned to the (i + r)-th vertex of the target layer, then in fact we are able to reconstruct x_i, the input message received by the source i.

To reconstruct σ^{-1}(i + r) at the (i + r)-th vertex of the target layer, we use the same strategy as for reconstructing π^{-1} at the middle layer. We use D again, but this time for the permutation σ. Again, we fix the advice string of D for σ, and for each middle vertex q that lost its outgoing edges we fix σ(q) to some value. Each middle vertex q sends the value σ(q) along all its outgoing edges. To compute σ^{-1}(i + r) we need the values σ(q) for all q ∈ Q_{i+r}, which are known to the (i + r)-th target vertex: again, they were either sent along the incoming edges or fixed. Since the value of the advice string is fixed, we can compute the value σ^{-1}(i + r) at the (i + r)-th target vertex.
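The heart of the two-pass argument, namely that inverting the shifted permutation at index i + r recovers the message of source i, can be checked mechanically (a toy check in our own notation, where pi[i] = x_i is the message of source i and r is the shift):

```python
def shifted(pi, r):
    """Compute sigma(j) = pi^{-1}(j) + r (mod N) from the permutation pi."""
    N = len(pi)
    inv = [0] * N
    for i, x in enumerate(pi):
        inv[x] = i                      # inv = pi^{-1}
    return [(inv[j] + r) % N for j in range(N)]

def invert(perm):
    """Return the inverse of a permutation given as a list."""
    inv = [0] * len(perm)
    for i, x in enumerate(perm):
        inv[x] = i
    return inv
```

Since sigma(x_i) = i + r, inverting sigma at index i + r returns x_i, which is exactly what the target assigned to the shifted index needs to output.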
The scheme is correct on all inputs which encode a permutation and which are consistent with the fixed advice strings and the other fixed values. Now, we argue that we can fix all the values so that many inputs are consistent with them. By Fact 1, there are at least (N/e)^N inputs which encode a permutation. In order to make the scheme work, we fixed the following values:

The advice strings for the two applications of D, in total 2εN log N bits.

An input message for each source that lost its edges and a middle-layer value for each middle vertex that lost its edges. Since only a small fraction of the vertices lost their edges and each value takes O(log N) bits, we fix only a small number of bits in total.

Overall, for a sufficiently small ε, we fix only a small fraction of the total number of input bits. The fixed values divide the input strings into buckets, where all input strings in the same bucket are consistent with the same fixed values. We conclude that there is a choice of the values to fix such that the corresponding bucket contains a large fraction of the input strings which encode a permutation. We pick that bucket and fix the corresponding values. Thus, the scheme is an encoding scheme which is correct on a sufficiently large fraction of the inputs, which concludes the proof. ∎
6 NCC Implies a Weak Lower Bound for the Polynomial Evaluation and Interpolation
In this section, we prove the lower bound for the polynomial evaluation and interpolation. The proof follows the blueprint of the proof for the permutation inversion. The construction of the network from a data structure is basically the same; thus, we mainly describe only the encoding scheme.
We now restate and prove the polynomial theorem from Section 4.
Proof.
Let D be the optimal nonadaptive systematic data structure for the evaluation of polynomials of degree less than n over F using space s = εn log |F|, and let t be its query time. Again, we will construct a network from D. To construct an encoding scheme, we use the entries of the FFFT, i.e., we will evaluate polynomials of degree less than n at powers of a primitive n-th root of unity. Thus, we fix a primitive n-th root of unity g, which we know exists, as discussed in Section 2.2.

We create a network from D in the same way as in the previous section. By Lemma 5 we will be able to conclude that t = Ω(log n). First, we create a graph of three layers (n sources, n middle vertices, and n target vertices) and we add edges according to the queries of D: at the j-th middle vertex we will evaluate a polynomial at the point g^j, and at the vertices of the target layer we will evaluate another polynomial at such points. Then, we remove edges incident to vertices of too high out-degree, and finally we set a shift r and assign the targets in such a way that the network is long.

Now, we describe the encoding scheme. The sources jointly receive input messages x_0, …, x_{n-1} ∈ F, which we interpret as the coefficients of a polynomial p (that is, p(z) = Σ_i x_i z^i). Each source sends its input message along all its outgoing edges. The j-th middle vertex computes p(g^j) using D. Again, we fix the advice string and the input messages of the sources that lost their edges. The j-th middle vertex then sends the value p(g^j) along all its outgoing edges. We define a new polynomial q by q(z) = Σ_j p(g^j) z^j. We fix the advice string of D for q and the values p(g^j) for each middle vertex j that lost its edges. Thus, each vertex v of the target layer can compute the value q(g^{-v}) using D. (The shift r in the assignment of targets is handled exactly as in the previous section; we suppress it here to simplify the notation.) We claim that n^{-1} · q(g^{-v}) = x_v:

n^{-1} · q(g^{-v}) = n^{-1} · Σ_j p(g^j) g^{-jv} = n^{-1} · Σ_j Σ_k x_k g^{jk} g^{-jv} = n^{-1} · Σ_k x_k Σ_j g^{j(k-v)} = x_v.

The last equality follows by noting that Σ_j g^{j(k-v)} equals n for k = v and 0 otherwise. Therefore, at each target we can reconstruct the corresponding input message. See Fig. 2 for a sketch of the scheme.

Again, we can fix the values of the advice strings (at most 2εn log |F| fixed bits), the input messages of the sources that lost their edges, and the values p(g^j) of the middle vertices that lost their edges (few fixed bits in total) in such a way that many inputs are consistent with the fixing. Therefore, the scheme is an encoding scheme correct on a sufficiently large fraction of the inputs. This finishes the proof of the lower bound for the polynomial evaluation.
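The orthogonality identity used in the last step, namely that n^{-1} Σ_j g^{j(k-v)} equals 1 for k = v and 0 otherwise, can be verified numerically over a small field (p = 17, n = 4, g = 4 is one valid choice; this check is our own illustration and needs Python 3.8+ for `pow(n, -1, p)`):

```python
def orthogonality_matrix(g, n, p):
    """M[k][v] = n^{-1} * sum_j g^{j*(k-v)} mod p; for a primitive n-th
    root of unity g this should be the identity matrix over GF(p).
    Exponents are reduced mod n, which is valid since g has order n."""
    n_inv = pow(n, -1, p)
    return [[n_inv * sum(pow(g, (j * (k - v)) % n, p) for j in range(n)) % p
             for v in range(n)] for k in range(n)]
```

This is precisely the computation that lets each target isolate its own coefficient from the sums it receives.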
Essentially the same proof can be used to prove the lower bound for the polynomial interpolation. Note that the data structure is used only for evaluating polynomials at powers of the primitive root of unity g, i.e., for computing entries of the FFFT. However, as discussed in Section 2.2, it holds that