The recent rise of cloud computing platforms has created an increasing demand for verifiable computing protocols. A commercial server that sells its computation power to users is incentivized to return false results if, by doing so, it can reduce its computation overhead and oversell its services. Therefore, a user who delegates a computationally demanding task to a server must be able to verify that the results are indeed correct. Verifiable computing algorithms require the server to send a “proof” to the user along with the results of the computation. By investigating this proof, the user is convinced with high probability that the server is honest. Clearly, a practical verifiable computing algorithm must be “doubly-efficient”: it must have a super-efficient verifier (user) and an efficient prover (server). Put differently, if the complexity of computing the original function is $T$, the prover’s complexity must be comparable to $T$, and the verifier’s complexity must be substantially smaller than $T$.
The study of verifiable computing has led to novel cryptographic algorithms which are either applicable to arbitrary functions [15, 25] or tailored to computation of a specific function [13, 9]. The former has led to the development of Quadratic Arithmetic Programs and zkSNARKs for arithmetic circuits, while the latter has resulted in highly efficient and easy-to-implement algorithms that can be applied to, say, polynomial evaluation or matrix multiplication. Despite their enormous success, the security of cryptographic algorithms depends on hardness assumptions about certain mathematical problems. Algorithmic breakthroughs or technological advancements may falsify these assumptions at any time. This creates a demand for verifiable computing algorithms which remain secure in the face of computationally unbounded provers.
Our objective in this work is to design an information-theoretic algorithm for verifiable polynomial evaluation. Information-theoretic in this context means that the soundness of the algorithm is unconditional and does not rely on hardness assumptions. An example of how such an algorithm can be applied in practice is blockchain networks, where the capacity to verify a newly mined block against the history of the transactions is the distinguishing factor between a light node and a full node. This block verification process can generally be modelled as an instance of polynomial evaluation [20]. The full nodes can thus convince the light nodes of the correctness of a block via a verifiable polynomial evaluation algorithm.
Our setup is very similar to the classical notion of interactive proof systems, with one subtle difference: we allow for a one-time initialization or pre-processing phase during which the verifier may perform a computationally heavy task. This is acceptable because in the verifiable computing literature, it is generally assumed [14, 9, 13] that after this initialization phase, the verifier and the prover will engage in evaluating the function at many inputs. Therefore, this initialization cost amortizes over many rounds and can be neglected.
1.1 Setting and Objective.
Consider a polynomial $f(x) = \sum_{i=0}^{d} f_i x^i$ of degree $d$ over a finite field $\mathbb{F}$. A verifier wishes to evaluate this polynomial at a point $\theta \in \mathbb{F}$ with the help of a prover. Similar to [16], we require the algorithm to be doubly-efficient with a small round complexity. On the other hand, we deviate from the classical notion of proof systems and allow the verifier to perform a one-time computationally heavy task. We impose the following performance criteria on the algorithm.
Efficient initialization: the verifier is allowed to perform a one-time initialization phase and store the outcome for future reference. Although this phase is run only once, its complexity should be comparable to the complexity of computing $f(\theta)$.
Super-efficient verifier: for each evaluation point $\theta$, the complexity of the verifier should be negligible compared to the complexity of computing $f(\theta)$.
Efficient prover: for each evaluation point $\theta$, the complexity of the prover should be comparable to the complexity of computing $f(\theta)$.
Small round complexity: the number of rounds of interaction between the prover and the verifier must be polylogarithmic in $d$.
Completeness: if the prover is honest, the verifier should accept his results with probability $1$.
(Information-theoretic) soundness: if the prover is dishonest, the verifier should be able to reject his results with probability at least $1 - \epsilon$, for an arbitrarily small $\epsilon > 0$, regardless of the prover’s computational power.
1.2 Related Work.
An interactive proof (IP) system [17, 4] is an interactive protocol that enables a prover to convince a computationally limited verifier of the correctness of a statement. It is well known that IPs are very powerful tools: any problem in PSPACE admits an interactive proof with polynomial complexity for the verifier [23, 31]. Nevertheless, the recent rise of cloud computing platforms has led to several intriguing questions surrounding the practicality of such algorithms. In particular, the provers in [23, 31] run in exponential time, making them unfit for real-world commercial servers. The concept of doubly-efficient interactive proofs was first introduced in [16], followed by [28, 27]. In this context, “doubly-efficient” means that the prover must run in polynomial time and the verifier must be “super-efficient”, i.e., his complexity must be close to linear in the size of the problem. Unfortunately, in many practical scenarios, even linear complexity is unacceptable for the verifier. A concrete example is when the problem itself can be solved in linear time but, due to its sheer size, the verifier is incapable of performing the computation alone. In such cases, a high-degree polynomial-time prover is clearly impractical too.
A closely related line of work is Probabilistically Checkable Proofs (PCPs) [6, 7, 3, 8, 26], where the prover is required to commit to a proof which is usually too long for the verifier to process. The verifier can then sample this proof at a few random locations and be convinced of the correctness of the proof with high probability. The celebrated PCP theorem [1, 2] states that any problem in NP admits a PCP with verifier complexity that is polylogarithmic in the size of the problem. However, from a practical perspective, it is not clear how this initial commitment can be implemented. One possibility is to rely on Merkle commitments with the help of collision-resistant hash functions and assume that the prover is computationally limited. Alternatively, the prover can send the entire proof to a trusted third party which will be the point of contact for the verifier. However, these approaches alter the setup and, more importantly, make strong assumptions such as the existence of trusted third parties or the computational limitation of the prover. Other PCP-based proof systems that are secure against computationally limited provers include [19] and [24].
The notion of verifiable computing was introduced in [14]. Motivated by practical considerations, in a verifiable computing setting one assumes that a computationally limited verifier (user) delegates a task to a prover (server). The prover must then cooperate with the verifier in the computation of the function in such a way that the verifier remains efficient and convinced of the validity of the results. There are subtle differences between this model and the classical notion of proof systems. In particular, it is usually assumed that the verifier may run a one-time initialization phase that is computationally expensive. The cost of this computation is amortized over many runs of the algorithm, corresponding to the evaluation of the same function at many different inputs. In the framework of [14], the function to be computed is characterized by its Boolean circuit, which is evaluated with the help of Yao’s garbled circuits [32, 21] and fully homomorphic encryption. Subsequent works [15, 25], which led to the development of several zkSNARKs (zero-knowledge succinct non-interactive arguments of knowledge) [18, 22, 10, 30], suggested representing arbitrary arithmetic and logical circuits as Quadratic Arithmetic Programs and Quadratic Span Programs, which are then evaluated at encrypted values to produce proofs of correctness. These algorithms can be proven secure against provers who are not powerful enough to reverse such encryptions.
Recently, a new line of work in the cryptography community has focused on the verifiable evaluation of specific functions such as high-degree polynomials or products of large matrices. These functions serve as building blocks for many applications such as machine learning and blockchain. Advances in this area have led to extremely efficient algorithms for verifiable polynomial evaluation [9, 13, 5, 12], matrix multiplication [33, 13], and modular exponentiation [11]. Unfortunately, similar to the generic verifiable computing algorithms discussed above, these algorithms rely on unproven cryptographic assumptions. To overcome this limitation, recent efforts have focused on information-theoretic verifiable polynomial evaluation algorithms which are secure against unbounded adversarial provers [29]. Nevertheless, to achieve information-theoretic security, [29] pays a significant cost in terms of the complexity of the verifier. While the cryptographic approaches lead to logarithmic or even constant verification time, the verifier in [29] runs in time polynomial in $d$.
1.3 Main Results.
We design an information-theoretic verifiable polynomial evaluation algorithm which is super-efficient for the verifier and efficient for the prover. Even though our algorithm is interactive, the number of interactions only grows logarithmically with the degree of the polynomial. Similar to other verifiable computing algorithms, in our setting the verifier performs a one-time initialization phase whose complexity is comparable to the complexity of evaluating the polynomial once.
There exists a public-coin verifiable polynomial evaluation algorithm whose initialization complexity is comparable to that of evaluating the polynomial once, whose verifier complexity is polylogarithmic in $d$, whose prover complexity is $O(d)$, and whose round complexity is logarithmic in $d$, and which satisfies the completeness and soundness properties as defined in Section 1.1.
1.4 An Overview of the Algorithm.
Similar in nature to many other interactive proof systems [23, 6, 8], the core idea behind our algorithm is to force a dishonest prover to provide false results on problems that become progressively easier to verify throughout the interactions. To be more specific, let us look at a simple example where a polynomial $f(x)$ of even degree $d$ is to be evaluated at a point $\theta$. Suppose the prover provides a false answer $\hat{y} \neq f(\theta)$. The protocol then requires the prover to provide his evaluation of two more functions, namely the even part $f_e$ and the odd part $f_o$ of $f$ (so that $f(x) = f_e(x^2) + x f_o(x^2)$), both evaluated at $\theta^2$. Obviously, we must have $f(\theta) = f_e(\theta^2) + \theta f_o(\theta^2)$, which can be easily checked by the verifier. Since $\hat{y} \neq f(\theta)$, the prover must misreport either $f_e(\theta^2)$ or $f_o(\theta^2)$ in order to pass the verification. Voilà, the prover has been forced to lie again. Note also that the degree of $f_e$ and $f_o$ is only $d/2$. The verifier could then climb down the branches of this binary tree, and after $\log_2 d$ rounds, verify the correctness of all the constant polynomials corresponding to the leaves of the tree. But this would take him $O(d)$ operations to accomplish. Instead, he computes a random linear combination of the two halves and uses this as the reference point for the next iteration. Provided that the random coefficients are selected over sufficiently large sets, the error survives the combination with high probability and propagates to the next iteration of the algorithm. After $\log_2 d$ iterations, the verifier is left with a constant polynomial that he needs to verify. We note that this algorithm is inspired by the probabilistically checkable proof of proximity recently proposed in [6] (despite this, our algorithm is clearly not a PCP but an IP).
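The splitting identity at the heart of this example can be checked numerically. The following is a minimal sketch; the prime field, the polynomial, and the evaluation point are arbitrary illustrative choices, not parameters of the actual protocol.

```python
# Toy illustration of one splitting step over a small prime field:
# f(x) = f_e(x^2) + x * f_o(x^2), where f_e and f_o collect the
# even- and odd-indexed coefficients of f, each of half the degree.

P = 97  # a small illustrative prime; any sufficiently large field works

def poly_eval(coeffs, x, p=P):
    """Evaluate a polynomial given by its coefficient list at x (Horner)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def split_even_odd(coeffs):
    """Return the coefficient lists of the even and odd parts of f."""
    return coeffs[0::2], coeffs[1::2]

f = [3, 1, 4, 1, 5, 9, 2, 6]   # degree 7, an even number of coefficients
f_e, f_o = split_even_odd(f)
theta = 10

# The consistency relation the verifier checks:
lhs = poly_eval(f, theta)
rhs = (poly_eval(f_e, theta * theta % P) + theta * poly_eval(f_o, theta * theta % P)) % P
assert lhs == rhs
```

Each of `f_e` and `f_o` has half as many coefficients as `f`, which is what makes the recursion terminate after $\log_2 d$ rounds.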
As it turns out, the fact that the degree of the polynomial decreases by a factor of two at each iteration does not immediately translate to an easier verification. Unfortunately, each coefficient of the newly constructed polynomial is now a linear combination of two coefficients of the original polynomial. Therefore, evaluating the new polynomial from scratch still takes $O(d)$ operations. In particular, after $\log_2 d$ iterations, we are left with a constant which is a (random) linear combination of all the coefficients of the original polynomial. To help the verifier ensure the validity of this linear combination, we allow for an initialization phase, whereby the verifier computes all the possible outcomes of the algorithm and stores them in a look-up table in his memory. For this to be feasible, we need the random coefficients to be selected from a set that is not too large. By properly choosing the size of this set, we guarantee that the look-up table remains small and that it can be computed in time comparable to a single evaluation of the polynomial. The verifier can then check the validity of the constant polynomial at the end of the $\log_2 d$ iterations by comparing the returned result to the corresponding entry in the look-up table.
2 Interactive Verifiable Polynomial Evaluation
In this section we describe an interactive algorithm for verifiable evaluation of a polynomial $f(x) = \sum_{i=0}^{d} f_i x^i$ at members of a finite field $\mathbb{F}$. The coefficients $f_i$ are arbitrary members of $\mathbb{F}$.
Let $k$ be an integer and let $S \subseteq \mathbb{F}$ be a set of size $k$, whose members we represent as $\omega_1, \dots, \omega_k$. Let $Z \subseteq \mathbb{F}$ be a second set. Both $k$ and $|Z|$ are design parameters that can depend on $d$ in general; throughout this section, we will assume that they grow at most polylogarithmically with $d$, and we discuss their proper choice in Section 3.1. Without loss of generality, we can assume $\theta \notin S$. The sets $S$ and $Z$ are publicly known. For $i \in \{1, \dots, k\}$ define
Note that for $i, j \in \{1, \dots, k\}$ we have $\lambda_i(\omega_j) = 1$ if $i = j$ and $\lambda_i(\omega_j) = 0$ if $i \neq j$. Let $L = \log_k d$. To simplify matters, we assume $L$ is an integer (otherwise, we will set $L = \lceil \log_k d \rceil$). Iteratively, define the polynomials $f^{(\ell)}$, for $\ell = 0, 1, \dots, L$, as follows. If $\ell < L$ then
As the starting point of the iteration, let $f^{(0)} = f$. Note that $f^{(\ell)}$ is a polynomial of degree $d/k^{\ell}$ as a function of $x$. Furthermore, note that
Since $f^{(L)}$ is a constant function of $x$, it only depends on the random coefficients $r_1, \dots, r_L$ and not on $x$.
2.2 The Initialization Phase.
In the initialization phase, the verifier computes and stores $f^{(L)}$ in a look-up table for all possible choices of the random coefficients $(r_1, \dots, r_L) \in Z^L$. In total, the number of evaluation points is $|Z|^L$.
We will now describe an algorithm for efficient computation of the look-up table. We will show that due to the recursive structure of $f^{(L)}$, its evaluation at all $|Z|^L$ points can be computed much faster than by direct enumeration.
2.2.1 Efficient Computation of the Look-up Table.
A naive approach to obtaining the entries of the look-up table is to compute them individually, which would take $O(d \cdot |Z|^L)$ operations. However, due to the recursive structure of the function $f^{(L)}$, we can be much more efficient. Specifically, let $c^{(\ell)}_i$ denote the coefficient of $x^i$ in $f^{(\ell)}$, for each level $\ell$ and all $i$. We have the following recursion
for all $\ell \in \{1, \dots, L\}$. Furthermore, for $\ell = 0$, we define $c^{(0)}_i = f_i$. When $\ell = L$, the polynomial $f^{(L)}$ is of degree zero and is equal to $c^{(L)}_0$. Therefore, we have
In order to compute the look-up table, we start by computing a table of size $k|Z|$ that stores all $\lambda_i(r)$, for $i \in \{1, \dots, k\}$ and $r \in Z$.¹ Afterwards, we iteratively compute the values of the coefficients $c^{(\ell)}_i$ until we reach the leaves of the tree, which give us the entries of the look-up table. Since we have pre-computed the values of $\lambda_i(r)$, finding each coefficient based on (10) only takes $O(k)$ operations. At level $\ell$ of the tree, we must compute $c^{(\ell)}_i$ for all $d/k^{\ell}$ coefficients and all $|Z|^{\ell}$ prefixes $(r_1, \dots, r_{\ell})$. Therefore, the computation at level $\ell$ takes $O(k \cdot (d/k^{\ell}) \cdot |Z|^{\ell})$ operations. As a result, the entire tree can be computed efficiently, level by level. This procedure has been summarized in Algorithm 1. For an illustration, see Figure 1.

¹ We are assuming that $k$ and $|Z|$ grow at most polylogarithmically with $d$, so the complexity of computing this table will be negligible compared to the overall complexity of the initialization phase. More on the proper choice of the parameters in Section 3.1.
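For the binary ($k = 2$) variant of Section 1.4, the level-by-level computation can be sketched as follows. This is an illustrative assumption-laden toy (small prime field, arbitrary set $Z$, brute-force cross-check); the general Algorithm 1 additionally uses the precomputed $\lambda_i(r)$ values to perform $k$-ary folds.

```python
# Sketch of the tree-structured table computation for the binary (k = 2)
# variant: level l holds the folded coefficient vectors for every challenge
# prefix (r_1, ..., r_l), so work is shared across prefixes instead of
# re-folding from scratch for every full sequence.
from itertools import product

P = 97          # illustrative prime field
Z = [1, 2, 3]   # illustrative challenge set

def fold(coeffs, r):
    """One folding step: combine even and odd halves with coefficient r."""
    return [(e + r * o) % P for e, o in zip(coeffs[0::2], coeffs[1::2])]

def table_by_levels(coeffs, levels):
    """Build the look-up table level by level (shared prefixes)."""
    level = {(): list(coeffs)}   # root of the tree
    for _ in range(levels):
        level = {prefix + (r,): fold(c, r)
                 for prefix, c in level.items() for r in Z}
    return {rs: c[0] for rs, c in level.items()}   # one constant per leaf

def table_naive(coeffs, levels):
    """Re-fold the full chain independently for every sequence."""
    table = {}
    for rs in product(Z, repeat=levels):
        c = list(coeffs)
        for r in rs:
            c = fold(c, r)
        table[rs] = c[0]
    return table

f = [3, 1, 4, 1, 5, 9, 2, 6]   # 8 coefficients -> 3 folding levels
assert table_by_levels(f, 3) == table_naive(f, 3)
```

Each leaf constant is a multilinear combination of the original coefficients; for instance the all-ones sequence simply sums them, and sharing the upper levels of the tree is what saves the repeated re-folding of the naive approach.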
2.3 The Evaluation Phase.
The verifier is interested in evaluating $f(\theta)$. We assume that both $f$ and $\theta$ are publicly known. The prover sends $\hat{y}$ to the verifier. If the prover is honest, then $\hat{y} = f(\theta)$. Otherwise, it can be an arbitrary member of $\mathbb{F}$. The prover also sends the verifier his evaluations of $f^{(1)}$ at $\omega_j$ for all $j \in \{1, \dots, k\}$. The verifier checks whether these evaluations are consistent with $\hat{y}$. If not, he rejects the result. Next, the verifier finds the polynomial of degree $k-1$ (in $x$) that interpolates these $k$ points. He then chooses $r_1$ uniformly at random from $Z$ and evaluates the interpolated polynomial at $r_1$. The verifier then sends $r_1$ to the prover. Next, the prover sends the verifier his evaluations of $f^{(2)}$ at $\omega_j$ for all $j \in \{1, \dots, k\}$. The verifier checks if these are consistent with the value he computed at $r_1$. If not, he rejects the result. The algorithm now proceeds with $f^{(2)}$ taking the role of $f^{(1)}$. This process continues until the prover sends the verifier the constant polynomial $f^{(L)}$. The verifier checks this against the correct value of $f^{(L)}$ stored in his look-up table. If they are not equal, the result is rejected. To amplify the probability that a cheating prover fails, this entire algorithm is run $m$ times. If all the experiments pass, the verifier accepts the result. We will see in Section 3 how to choose $m$ so that the overall soundness error is as small as desired. This process has been summarized in Algorithm 2 and illustrated in Figure 2. For notational simplicity, the algorithm is described as $m$ consecutive experiments, but the experiments can be trivially parallelized to reduce the overall round complexity to that of a single experiment.
3 Performance Analysis
Completeness: If the prover is honest, he will provide the correct $\hat{y} = f(\theta)$ and the correct evaluations of $f^{(\ell)}$ for all $\ell$, which will clearly pass all the verification tests.
(Information-theoretic) soundness: Soundness follows from the simple principle that two distinct polynomials of degree $k-1$ must disagree on at least $|Z| - k + 1$ points of any set $Z$ of size $|Z| \geq k$. Suppose the prover starts by sending the verifier the wrong value of $\hat{y}$. In order to pass the first verification, he must then provide the verifier with at least one wrong evaluation of $f^{(1)}$. Because of this, the polynomial interpolated by the verifier, as a function of $x$, will be distinct from the correct polynomial. Due to the observation above, these two polynomials will differ on at least $|Z| - k + 1$ members of $Z$. Therefore, if $r_1$ is chosen uniformly at random over $Z$, the value computed by the verifier will be different from the correct value with probability at least $1 - \frac{k-1}{|Z|}$. With high probability, the error continues to propagate through the interactions between the verifier and the prover until it reaches level $L$, at which point the verifier can detect it by checking against the stored value of $f^{(L)}$. The probability that an adversarial prover can successfully pass all the verifications is bounded by
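The distance principle invoked above is easy to verify numerically. In this sketch the field and the two polynomials are arbitrary illustrative choices.

```python
# Numerical check of the principle behind soundness: two distinct
# polynomials of degree at most t over F_p agree on at most t points,
# so a uniformly random challenge from a set Z exposes a substitution
# with probability at least 1 - t/|Z|.
P = 97

def poly_eval(coeffs, x, p=P):
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

g = [5, 2, 7]   # a degree-2 polynomial
h = [6, 0, 8]   # a different degree-2 polynomial

# Count the agreement points over the whole field.
agree = sum(poly_eval(g, r) == poly_eval(h, r) for r in range(P))
assert agree <= 2   # at most deg(g - h) = 2 agreement points
```

Here $g - h = -(x-1)^2$, so the two polynomials agree only at $x = 1$: a random challenge from a set of size $|Z|$ detects the substitution with probability at least $1 - 2/|Z|$.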
The verifier proceeds to run the experiment $m$ times and rejects the result if any of the experiments fails. We want to choose $m$ such that the overall probability of undetected cheating drops below the desired soundness threshold. With a suitable choice of $m$, we will have
Computational Complexity of the Initialization: Based on the analysis in Section 2.2.1, the initialization phase can be done in time
Computational Complexity of the Verifier: The verifier runs $m$ (parallel) experiments, each consisting of $L = \log_k d$ rounds. Each round takes $O(k)$ operations. Therefore, the overall complexity of the verifier is
Computational Complexity of the Prover: An honest prover can also pre-compute the coefficients of the polynomials $f^{(\ell)}$ in his own initialization phase and store them locally in order to reduce his complexity. In round $\ell$, the prover must evaluate $k$ polynomials, each of degree $d/k^{\ell}$. Therefore, the prover only needs to perform $O(d)$ computation per experiment.²

² Even in the absence of an initialization phase for the prover, the complexity remains $O(d)$ for each of the $L$ rounds, resulting in an overall complexity of $O(d \log_k d)$. This is still of the same order as the target prover complexity for the choice of $k$ and $|Z|$ in Section 3.1.
Round Complexity of the Algorithm: As mentioned in Section 2.3, we can run all $m$ experiments in parallel. As a result, the round complexity of the algorithm is only $L = \log_k d$.
3.1 The Proper Choice of the Parameters.
We have two parameters to play with, namely $k$ and $|Z|$. Firstly, we want to make sure that the initialization phase can be done in time comparable to a single evaluation of the polynomial. For this purpose, we choose $k$ as a slowly growing function of $d$, parameterized by an arbitrary real number $\epsilon > 0$, and we fix the ratio between $|Z|$ and $k \log_k d$ to be a constant. To see why these choices suffice, note that
Our second criterion is to achieve the desired soundness guarantee. This requirement is automatically satisfied with the above choices of $k$ and $|Z|$.
where $c$ is a constant. Finally, the complexity of the prover is given by
and the round complexity is
4 Discussion: Extending the Results to Multivariate Polynomials
Consider an $m$-variate polynomial of degree $t$ in each variable
A verifier wishes to evaluate this polynomial at a point $(\theta_1, \dots, \theta_m)$ with the help of a prover. A simple variation of Algorithm 2 can be used for this purpose. First, the verifier treats the polynomial as a univariate polynomial in $x_1$ and applies Algorithm 2 in order to reduce the degree of $x_1$ to zero after $\log_k t$ interactions with the prover. Now, he is left with a new polynomial that only has $m-1$ variables. After $m \log_k t$ rounds, the number of variables will reduce to zero, and the verifier will be left with a constant that he can check against a look-up table. It is easy to see that if we resort to this modified algorithm, all the results in Section 3 will remain valid, except that $d+1$ must be replaced with $(t+1)^m$ (the number of terms in the polynomial). For instance, the complexity of the verifier will be
and the complexity of the prover will be
It is also noteworthy that any improvement in the multivariate case directly translates to a better univariate algorithm. To see why, consider an arbitrary univariate polynomial of degree $d$. Without loss of generality, we can represent this polynomial as
Applying Algorithm 2 to this multivariate representation results in the same verifier complexity as in the univariate case. If we can design an $m$-variate algorithm that achieves a smaller verifier complexity, we can apply it to this representation and improve upon the complexity of Algorithm 2 applied to the original polynomial.
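The re-indexing used in this argument can be sketched as a toy check; the field, polynomial, and parameters below are arbitrary illustrative choices. Expanding each coefficient index in base $t+1$ and substituting $x_j = x^{(t+1)^j}$ recovers the univariate evaluation.

```python
# A univariate polynomial of degree d can be rewritten as an m-variate
# polynomial of degree t in each variable by expanding coefficient indices
# in base (t + 1) and substituting x_j = x**((t+1)**j).
P = 97

f = [3, 1, 4, 1, 5, 9, 2, 6, 5]   # degree 8, so d + 1 = 9 = 3**2 terms
t, m = 2, 2                        # degree 2 in each of 2 variables

def multivar_eval(coeffs, xs, t, p=P):
    """Evaluate the m-variate re-indexed polynomial at the point xs."""
    total = 0
    for i, c in enumerate(coeffs):
        term, idx = c, i
        for x in xs:               # base-(t+1) digits of the index i
            term = term * pow(x, idx % (t + 1), p) % p
            idx //= (t + 1)
        total = (total + term) % p
    return total

theta = 10
xs = [pow(theta, (t + 1) ** j, P) for j in range(m)]   # x_j = theta^((t+1)^j)

# Direct univariate evaluation (Horner) for comparison.
direct = 0
for c in reversed(f):
    direct = (direct * theta + c) % P

assert multivar_eval(f, xs, t) == direct
```

The agreement holds because $x_0^{b_0} x_1^{b_1} \cdots = \theta^{b_0 + b_1 (t+1) + \cdots} = \theta^{i}$ for the base-$(t+1)$ digits $b_j$ of the index $i$.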
The authors would like to thank Ali Rahimi for the fruitful discussions.
-  Arora, S., Lund, C., Motwani, R., Sudan, M., and Szegedy, M. Proof verification and the hardness of approximation problems. Journal of the ACM (JACM) 45, 3 (1998), 501–555.
-  Arora, S., and Safra, S. Probabilistic checking of proofs: A new characterization of np. Journal of the ACM (JACM) 45, 1 (1998), 70–122.
-  Babai, L., Fortnow, L., Levin, L. A., and Szegedy, M. Checking computations in polylogarithmic time. In Proceedings of the 23rd Annual ACM Symposium on Theory of Computing (1991), ACM, pp. 21–31.
-  Babai, L. Trading group theory for randomness. In Proceedings of the seventeenth annual ACM symposium on Theory of computing (1985), ACM, pp. 421–429.
-  Backes, M., Fiore, D., and Reischuk, R. M. Verifiable delegation of computation on outsourced data. In Proceedings of the 2013 ACM SIGSAC conference on Computer & communications security (2013), ACM, pp. 863–874.
-  Ben-Sasson, E., Bentov, I., Horesh, Y., and Riabzev, M. Fast Reed-Solomon interactive oracle proofs of proximity. In 45th International Colloquium on Automata, Languages, and Programming (ICALP 2018) (2018), Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik.
-  Ben-Sasson, E., Chiesa, A., and Spooner, N. Interactive oracle proofs. In Theory of Cryptography Conference (2016), Springer, pp. 31–60.
-  Ben-Sasson, E., and Sudan, M. Short PCPs with polylog query complexity. SIAM Journal on Computing 38, 2 (2008), 551–607.
-  Benabbas, S., Gennaro, R., and Vahlis, Y. Verifiable delegation of computation over large datasets. In Annual Cryptology Conference (2011), Springer, pp. 111–131.
-  Bitansky, N., Chiesa, A., Ishai, Y., Paneth, O., and Ostrovsky, R. Succinct non-interactive arguments via linear interactive proofs. In Theory of Cryptography Conference (2013), Springer, pp. 315–333.
-  Chen, X., Li, J., Ma, J., Tang, Q., and Lou, W. New algorithms for secure outsourcing of modular exponentiations. IEEE Transactions on Parallel and Distributed Systems 25, 9 (2014), 2386–2396.
-  Elkhiyaoui, K., Önen, M., Azraoui, M., and Molva, R. Efficient techniques for publicly verifiable delegation of computation. In Proceedings of the 11th ACM on Asia Conference on Computer and Communications Security (2016), ACM, pp. 119–128.
-  Fiore, D., and Gennaro, R. Publicly verifiable delegation of large polynomials and matrix computations, with applications. In Proceedings of the 2012 ACM conference on Computer and communications security (2012), ACM, pp. 501–512.
-  Gennaro, R., Gentry, C., and Parno, B. Non-interactive verifiable computing: Outsourcing computation to untrusted workers. In Annual Cryptology Conference (2010), Springer, pp. 465–482.
-  Gennaro, R., Gentry, C., Parno, B., and Raykova, M. Quadratic span programs and succinct NIZKs without PCPs. In Annual International Conference on the Theory and Applications of Cryptographic Techniques (2013), Springer, pp. 626–645.
-  Goldwasser, S., Kalai, Y. T., and Rothblum, G. N. Delegating computation: interactive proofs for muggles. Journal of the ACM (JACM) 62, 4 (2015), 27.
-  Goldwasser, S., Micali, S., and Rackoff, C. The knowledge complexity of interactive proof systems. SIAM Journal on computing 18, 1 (1989), 186–208.
-  Groth, J. On the size of pairing-based non-interactive arguments. In Annual International Conference on the Theory and Applications of Cryptographic Techniques (2016), Springer, pp. 305–326.
-  Kilian, J. Founding cryptography on oblivious transfer. In Proceedings of the twentieth annual ACM symposium on Theory of computing (1988), ACM, pp. 20–31.
-  Li, S., Yu, M., Avestimehr, S., Kannan, S., and Viswanath, P. PolyShard: Coded sharding achieves linearly scaling efficiency and security simultaneously. arXiv preprint arXiv:1809.10361 (2018).
-  Lindell, Y., and Pinkas, B. A proof of security of Yao’s protocol for two-party computation. Journal of Cryptology 22, 2 (2009), 161–188.
-  Lipmaa, H. Succinct non-interactive zero knowledge arguments from span programs and linear error-correcting codes. In International Conference on the Theory and Application of Cryptology and Information Security (2013), Springer, pp. 41–60.
-  Lund, C., Fortnow, L., Karloff, H., and Nisan, N. Algebraic methods for interactive proof systems. In Proceedings  31st Annual Symposium on Foundations of Computer Science (1990), IEEE, pp. 2–10.
-  Micali, S. Computationally sound proofs. In Proceedings 35th Annual Symposium on Foundations of Computer Science (1994), IEEE, pp. 436–453.
-  Parno, B., Howell, J., Gentry, C., and Raykova, M. Pinocchio: Nearly practical verifiable computation. In 2013 IEEE Symposium on Security and Privacy (2013), IEEE, pp. 238–252.
-  Polishchuk, A., and Spielman, D. A. Nearly-linear size holographic proofs. In Proceedings of the twenty-sixth annual ACM symposium on Theory of computing (1994), ACM, pp. 194–203.
-  Reingold, O., Rothblum, G. N., and Rothblum, R. D. Constant-round interactive proofs for delegating computation. In Proceedings of the forty-eighth annual ACM symposium on Theory of Computing (2016), ACM, pp. 49–62.
-  Rothblum, G. N., Vadhan, S., and Wigderson, A. Interactive proofs of proximity: delegating computation in sublinear time. In Proceedings of the forty-fifth annual ACM symposium on Theory of computing (2013), ACM, pp. 793–802.
-  Sahraei, S., and Avestimehr, A. S. INTERPOL: Information theoretically verifiable polynomial evaluation. arXiv preprint arXiv:1901.03379 (2019).
-  Sasson, E. B., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., and Virza, M. Zerocash: Decentralized anonymous payments from bitcoin. In 2014 IEEE Symposium on Security and Privacy (2014), IEEE, pp. 459–474.
-  Shamir, A. IP= PSPACE (interactive proof= polynomial space). In Proceedings  31st Annual Symposium on Foundations of Computer Science (1990), IEEE, pp. 11–15.
-  Yao, A. C. Protocols for secure computations. In Proceedings of the 23rd Annual Symposium on Foundations of Computer Science (1982), IEEE, pp. 160–164.
-  Zhang, X., Jiang, T., Li, K.-C., Castiglione, A., and Chen, X. New publicly verifiable computation for batch matrix multiplication. Information Sciences (2017).