Cloud-based MPC with Encrypted Data

03/27/2018
by   Andreea B. Alexandru, et al.
University of Pennsylvania

This paper explores the privacy of cloud outsourced Model Predictive Control (MPC) for a linear system with input constraints. In our cloud-based architecture, a client sends her private states to the cloud who performs the MPC computation and returns the control inputs. In order to guarantee that the cloud can perform this computation without obtaining anything about the client's private data, we employ a partially homomorphic cryptosystem. We propose protocols for two cloud-MPC architectures motivated by the current developments in the Internet of Things: a client-server architecture and a two-server architecture. In the first case, a control input for the system is privately computed by the cloud server, with the assistance of the client. In the second case, the control input is privately computed by two independent, non-colluding servers, with no additional requirements from the client. We prove that the proposed protocols preserve the privacy of the client's data and of the resulting control input. Furthermore, we compute bounds on the errors introduced by encryption. We present numerical simulations for the two architectures and discuss the trade-off between communication, MPC performance and privacy.



I Introduction

The increase in the number of connected devices, as well as their reduction in size and resources, has driven a growth in the use of cloud-based services, in which a centralized, powerful server offers on-demand storage, processing and delivery capabilities to users. With the development of communication-efficient algorithms, outsourcing computations to the cloud has become very convenient. However, issues regarding the privacy of the shared data arise, as users have no control over the actions of the cloud, which can leak or abuse the data it receives.

Model Predictive Control (MPC) is a powerful scheme that is successfully deployed in practice [1] for systems of varying dimension and architecture, including cloud platforms. In competitive scenarios, such as energy generation in the power grid, domestic scenarios, such as heating control in smart houses, or time-sensitive scenarios, such as traffic control, the control scheme should come with privacy guarantees that protect against eavesdroppers or an untrustworthy cloud. For instance, in smart houses, client-server setups can be envisioned in which a local trusted computer aggregates the measurements from the sensors but does not store the model and specifications, and depends on a server to compute the control input or reference. The server can also possess other information, such as the weather. In a heating application, the parameters of the system, i.e., the energy consumption model of the house, can be known by the server, but the measurements and how much the owner wants to consume should be private. In traffic control, the drivers are expected to share their locations, which should remain private, but are not expected to contribute to the computation. Hence, the locations are collected and processed at a single server's level, e.g., in a two-server setup, which then sends the result back to the cars or to traffic lights.

Although much effort has been dedicated in this direction, a universally secure scheme that is able to perform locally, at the cloud level, any given functionality on the users’ data has not been developed yet [2]. For a single user and functionalities that can be described by boolean functions, fully homomorphic encryption (FHE) [3, 4] guarantees privacy, but at high storage and complexity requirements [5]. For multiple users, the concept of functional privacy is required, which can be attained by functional encryption [6], which was developed only for limited functionalities. More tractable solutions that involve multiple interactions between the participating parties to ensure the confidentiality of the users’ data have been proposed. In client-server computations, where the users have a trusted machine, called the client, that performs computations of smaller intensity than the server, we mention partially homomorphic encryption (PHE) [7] and differential privacy (DP) [8]. In two-server computations, in which the users share their data to two non-colluding servers, the following solutions are available: secret sharing [9, 10], garbled circuits [11, 12], Goldreich-Micali-Wigderson protocol [13], PHE [14].

I-A Contributions

In this paper, we discuss the implicit MPC computation for a linear system with input constraints, where we privately compute a control input, while maintaining the privacy of the state, using a cryptosystem that is partially homomorphic, i.e., supports additions of encrypted data. In the first case we consider, the control input is privately computed by a server, with the help of the client. In the second case, the computation is performed by two non-colluding servers. The convergence of the state trajectory to the reference is public knowledge, so it is crucial not to reveal anything else about the state and other sensitive quantities. Therefore, we use a privacy model stipulating that no computationally efficient algorithm run by the cloud can infer anything about the private data; in other words, an adversary knows no more about the private data than a random guess. Although this model is very strict, it thoroughly characterizes the loss of information.

This work explores fundamental issues of privacy in control: the trade-off between computation, communication, performance and privacy. We present two main contributions: proposing two privacy-preserving protocols for MPC and evaluating the errors induced by the encryption.

I-B Related work

In control systems, ensuring the privacy of the measurements and control inputs from eavesdroppers and from the controller has so far been tackled with differential privacy, homomorphic encryption and transformation methods. Kalman filtering with DP was addressed in [15], current trajectory hiding in [16], linear distributed control in [17], and distributed MPC in [18]. The idea of encrypted controllers was introduced in [19] and [20], using PHE, and in [21], using FHE. Kalman filtering with PHE was further explored in [22]. Optimization problems with DP are addressed in [23, 24], and with PHE in [25, 26, 27, 28]. Many works propose privacy through transformation methods that use multiplicative masking. While the computational efficiency of such methods is desirable, their privacy cannot be rigorously quantified, as required by our privacy model, since the distribution of the masked values is not uniform [29].

Recent work in [30] has tackled the problem of privately computing the input for a constrained linear system using explicit MPC, in a client-server setup. There, the client performs the computationally intensive trajectory localization and sends the result to the server, which then evaluates the corresponding affine control law on the encrypted state using PHE. Although explicit MPC has the advantage of computing the parametric control laws offline, the evaluation of the search tree at the cloud's level is intractable when the number of nodes is large, since all nodes have to be evaluated in order not to reveal the polyhedron in which the state lies, and comparison cannot be performed locally on encrypted data. Furthermore, the binary search in explicit MPC is intensive and requires the client to store the entire characterization of the polyhedra, which we would like to avoid. Taking this into consideration, we focus on implicit MPC.

The performance degradation of a linear controller due to encryption is analyzed in [31]. In our work, we investigate the performance degradation of the nonlinear control law obtained from MPC.

II Problem setup

We consider a discrete-time linear time-invariant system:

    x(t+1) = A x(t) + B u(t),                                              (1)

with state x(t) ∈ ℝ^n and control input u(t) ∈ ℝ^m. The optimal control receding horizon problem with constraints on the states and inputs can be written as:

    min_{u_0,…,u_{N−1}}  x_N^T P x_N + Σ_{k=0}^{N−1} ( x_k^T Q x_k + u_k^T R u_k )
    s.t.  x_{k+1} = A x_k + B u_k,  x_k ∈ 𝒳,  u_k ∈ 𝒰,  k = 0,…,N−1,  x_0 = x(t),    (2)

where N is the length of the horizon and P, Q, R are cost matrices. For reasons related to error bounding, explained in Section VI, in this paper we consider input-constrained systems: 𝒳 = ℝ^n, 𝒰 = {u : u_min ≤ u ≤ u_max}, and impose stability without a terminal state constraint, but with appropriately chosen costs and horizon such that the closed-loop system has robust performance with respect to the bounded errors due to encryption, which will be described in Section VI. A survey on the conditions for stability of MPC is given in [32].

Through straightforward manipulations, (2) can be written as a quadratic program (see details on obtaining the matrices H and F in [33, Ch. 8, 11]) in the variable U = [u_0^T, …, u_{N−1}^T]^T:

    min_U  (1/2) U^T H U + U^T F x(t)   s.t.  U ∈ 𝒰 × … × 𝒰.                          (3)

For simplicity, we keep the same notation 𝒰 for the augmented constraint set. After obtaining the optimal solution U*, the first m components of U* are applied as input to the system (1): u(t) = [U*]_{1:m}. This problem easily extends to the case of following a reference.

II-A Solution without privacy requirements

The constraint set 𝒰 is a hyperbox, so the projection step required for solving (3) has a simple closed-form solution, and the optimization problem can be efficiently solved with the projected Fast Gradient Method (FGM) [34], given in Algorithm 1. The objective function is strongly convex, since H ≻ 0; therefore we can use the constant step sizes 1/L and β = (√κ − 1)/(√κ + 1), where L is the maximum eigenvalue and κ the condition number of H. Warm starting can be used at subsequent time steps of the receding horizon problem by using part of the previous solution to construct a feasible initial iterate of the new optimization problem.


Algorithm 1: Projected Fast Gradient Descent

1:Input: x(t), feasible initial iterate u_0, step sizes 1/L and β, iteration count K
2:y_0 ← u_0
3:for k = 0, …, K−1 do
4:     s_k ← y_k − (1/L)(H y_k + F x(t))
5:     u_{k+1} ← Proj_𝒰(s_k)
6:     y_{k+1} ← u_{k+1} + β(u_{k+1} − u_k)
7:end for
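The projected FGM above can be sketched on a toy input-constrained QP. This is a sketch, not the paper's implementation: the function and variable names are ours, and the constant step sizes are the standard strong-convexity choices.

```python
import numpy as np

def projected_fgm(H, f, lo, hi, K):
    """Projected FGM for min 0.5 z'Hz + f'z  s.t.  lo <= z <= hi.

    Constant step sizes: 1/L with L = lambda_max(H), and momentum
    beta = (sqrt(kappa)-1)/(sqrt(kappa)+1), kappa the condition number of H.
    """
    eigs = np.linalg.eigvalsh(H)
    mu, L = eigs[0], eigs[-1]
    kappa = L / mu
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)
    u = np.clip(np.zeros_like(f), lo, hi)        # feasible cold start
    y = u.copy()
    for _ in range(K):                           # fixed K: no stopping criterion
        grad = H @ y + f
        u_next = np.clip(y - grad / L, lo, hi)   # gradient step + box projection
        y = u_next + beta * (u_next - u)         # Nesterov momentum
        u = u_next
    return u

# Toy problem: min 0.5 z'Hz + f'z over the box [-1, 1]^2
H = np.array([[2.0, 0.5], [0.5, 1.0]])
f = np.array([-3.0, 1.0])
z = projected_fgm(H, f, -1.0, 1.0, K=200)
```

With these illustrative values the unconstrained minimizer lies outside the box, and the iterates converge to the boundary point where the KKT conditions of the box-constrained problem hold.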

 

II-B Privacy objectives

The insecure cloud-MPC problem is depicted in Figure 1. The system's constant parameters are public, motivated by the fact that the parameters are intrinsic to the system and hardware and could be guessed or identified; however, the measurements, control inputs and constraints should remain private. The goal of this work is to devise private cloud-outsourced versions of Algorithm 1 such that the client obtains the control input for system (1) with only a minimal amount of computation. The cloud (consisting of either one or two servers) should not infer anything beyond what was known prior to the computation about the measurements, the control inputs and the constraints. We tolerate semi-honest servers, meaning that they correctly follow the steps of the protocol but may store the transcript of the exchanged messages and process the received data to try to learn more information than allowed.

Fig. 1: Insecure MPC: the system model, horizon and costs are public. The state, control input and input constraints are privacy-sensitive. The red lines represent channels that can be eavesdropped.

To formalize the privacy objectives, we introduce the privacy definitions that we want our protocols to satisfy, described in [35, Ch. 3] and [36, Ch. 7]. In what follows, {0,1}* denotes a sequence of bits of unspecified length. Given a countable index set I, an ensemble X = {X_i}_{i∈I}, indexed by I, is a sequence of random variables X_i, for all i ∈ I.

Definition 1

The ensembles X = {X_n}_{n∈ℕ} and Y = {Y_n}_{n∈ℕ} are statistically indistinguishable, denoted X ≡_s Y, if for every positive polynomial p and all sufficiently large n, the following holds, where α ranges over {0,1}*:

    Σ_α | Pr[X_n = α] − Pr[Y_n = α] | < 1/p(n).

Two ensembles are called computationally indistinguishable if no efficient algorithm can distinguish between them. Computational indistinguishability is a weaker notion than statistical indistinguishability.

Definition 2

The ensembles X = {X_n}_{n∈ℕ} and Y = {Y_n}_{n∈ℕ} are computationally indistinguishable, denoted X ≡_c Y, if for every probabilistic polynomial-time algorithm D, called the distinguisher, every positive polynomial p, and all sufficiently large n, the following holds:

    | Pr[D(X_n, 1^n) = 1] − Pr[D(Y_n, 1^n) = 1] | < 1/p(n).

The definition of two-party privacy says that a protocol privately computes the functionality it implements if all the information obtained by a party after the execution of the protocol, including a record of its intermediate computations, can be obtained solely from that party's inputs and outputs.

Definition 3

Let f = (f_1, f_2) : ({0,1}*)^2 → ({0,1}*)^2 be a functionality, with f_i its i-th component, i = 1, 2. Let Π be a two-party protocol for computing f. The view of the i-th party during an execution of Π on the inputs (x_1, x_2), denoted view_i^Π(x_1, x_2), is (x_i, coins_i, m_1, …, m_t), where coins_i represents the outcome of the i-th party's internal coin tosses and m_j represents the j-th message it has received. For a deterministic functionality f, we say that Π privately computes f if there exist probabilistic polynomial-time algorithms, called simulators, denoted S_i, such that:

    {S_i(x_i, f_i(x_1, x_2))} ≡_c {view_i^Π(x_1, x_2)},  i = 1, 2.

When the privacy is one-sided, i.e., only the part of the protocol executed by party 2 must not reveal any information, the above equation has to be satisfied only for i = 2.

The purpose of the paper is to design protocols with the functionality of Algorithm 1 that satisfy Definition 3. To this end, we use the encryption scheme defined in Section III. Furthermore, we discuss in Section VI how we connect the domain of the inputs in Definition 3 with the domain of real numbers needed for the MPC problem. In Sections IV and V, we describe two private cloud-MPC solutions that present a trade-off between the computational effort at the client and the total time required to compute the solution, which is analyzed in Section VI.

The MPC literature has focused on reducing the computational effort by computing a suboptimal solution to implicit MPC [37, 38]. Such time optimizations, i.e., stopping criteria, reveal information about the private data, such as how far the initial point is from the optimum or the difference between consecutive iterates. Therefore, in this work, we consider a given fixed number of iterations K.

III Partially homomorphic cryptosystem

Partially homomorphic encryption schemes can support either additions between encrypted data, such as Paillier [7], Goldwasser-Micali [39] and DGK [40], or multiplications between encrypted data, such as El Gamal [41] and unpadded RSA [42].

In this paper, we use the Paillier cryptosystem [7], which is an asymmetric additively homomorphic encryption scheme. The message space for the Paillier scheme is Z_N, where N is a large integer that is the product of two prime numbers p and q. The pair of keys corresponding to this cryptosystem is (pk, sk), where the public key is pk = (N, g), with g ∈ Z*_{N²} having order a nonzero multiple of N, and the secret key is derived from the factors p and q.

For a message m ∈ Z_N, called the plaintext, the Paillier encryption primitive is defined as:

    E(m) = g^m r^N mod N²,

where r is a random element of Z*_N.

The encrypted messages are called ciphertexts.

A probabilistic encryption scheme, i.e., one that uses random numbers in the encryption primitive, does not preserve the order from the plaintext space to the ciphertext space.

Intuitively, the additively homomorphic property states that there exists an operator ⊕ defined on the space of encrypted messages such that:

    E(m_1) ⊕ E(m_2) = E(m_1 + m_2),

where the equality holds in modular arithmetic, w.r.t. the corresponding moduli (N² for ciphertexts, N for plaintexts); for the Paillier scheme, ⊕ is multiplication modulo N². Formally, the decryption primitive is a homomorphism between the group of ciphertexts with the operator ⊕ and the group of plaintexts with addition.

The scheme also supports subtraction between ciphertexts, and multiplication between a plaintext and an encrypted message, obtained by adding the encrypted message to itself the corresponding (integer) number of times:

    c ⊗ E(m) = E(m) ⊕ … ⊕ E(m) = E(cm).

We will use the same notation to denote encryptions, additions and multiplications involving vectors and matrices, applied element-wise.
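These homomorphic operations can be illustrated with a toy Paillier implementation. This is a sketch for intuition only: it uses the common textbook simplification g = N + 1, and the tiny primes offer no security whatsoever.

```python
import math
import random

# Toy Paillier keypair; real deployments use primes of 1024+ bits.
p, q = 293, 433
N, N2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, N)                  # valid decryption helper when g = N + 1

def encrypt(m):
    """E(m) = g^m * r^N mod N^2, with g = N + 1 and random r in Z*_N."""
    r = random.randrange(1, N)
    while math.gcd(r, N) != 1:
        r = random.randrange(1, N)
    return (pow(N + 1, m, N2) * pow(r, N, N2)) % N2

def decrypt(c):
    """m = L(c^lam mod N^2) * mu mod N, where L(x) = (x - 1) // N."""
    return ((pow(c, lam, N2) - 1) // N) * mu % N

a, b = encrypt(20), encrypt(22)
assert decrypt(a * b % N2) == 42      # ciphertext product adds plaintexts
assert decrypt(pow(a, 3, N2)) == 60   # exponentiation = plaintext multiplication
c1, c2 = encrypt(7), encrypt(7)       # fresh randomness: ciphertexts differ w.h.p.
assert decrypt(c1) == decrypt(c2) == 7
```

The last two lines illustrate the probabilistic (order-destroying) nature of the scheme: two encryptions of the same message are different ciphertexts, yet both decrypt correctly.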

Proving privacy in the semi-honest model of a protocol that makes use of cryptosystems involves the concept of semantic security. Under the assumption of decisional composite residuosity [7], the Paillier cryptosystem is semantically secure and has indistinguishable encryptions, which, in essence, means that an adversary cannot distinguish between the ciphertext and a ciphertext based on the messages and  [36, Ch. 5].

Definition 4

An encryption scheme with encryption primitive E is semantically secure if for every probabilistic polynomial-time algorithm A there exists a probabilistic polynomial-time algorithm A′ such that for every two polynomially bounded functions f, h : {0,1}* → {0,1}*, for any probability ensemble {X_n}_{n∈ℕ}, for any positive polynomial p and sufficiently large n:

    Pr[A(E(X_n), h(X_n), 1^n) = f(X_n)] < Pr[A′(h(X_n), 1^n) = f(X_n)] + 1/p(n).

Cloud-based Linear Quadratic Regulator: We provide a simple example of how the Paillier encryption can be used in a private control application. If the problem is unconstrained, i.e., 𝒰 = ℝ^m, a stabilizing controller can be computed as a linear quadratic regulator [33, Ch. 8]. Such a controller can be computed by a single server. The client sends the encrypted state to the server, which recursively computes, in plaintext, the control gain and the solution of the Discrete Riccati Equations.

The server obtains the encrypted control input, computed homomorphically from the encrypted state, and returns it to the client, which decrypts it and applies it to the system.

To summarize, PHE allows a party that does not have the private key to perform linear operations on encrypted integer data. For instance, a cloud-based Linear Quadratic Controller can be computed entirely by one server, because the control action is linear in the state. Nonlinear operations are not supported within this cryptosystem, but can be achieved with communication between the party that has the encrypted data and the party that has the private key, and we will use this in Sections IV and V.
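The Riccati recursion mentioned above can be sketched as follows; the model, cost values and function names are illustrative (ours), not taken from the paper.

```python
import numpy as np

def riccati_gains(A, B, Q, R, P_final, horizon):
    """Backward recursion of the Discrete Riccati Equation, entirely in
    plaintext since the model and costs are public; yields u_k = -K_k x_k."""
    P = P_final
    gains = []
    for _ in range(horizon):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ A - A.T @ P @ B @ K
        gains.append(K)
    return gains[::-1], P

# Illustrative double-integrator model
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[1.0]])
gains, P = riccati_gains(A, B, Q, R, np.eye(2), horizon=50)
K = gains[0]
# Since u = -K x is linear in x, a server holding only encryptions of x can
# form the encrypted input via plaintext-by-ciphertext multiplications and
# ciphertext additions, exactly as described in the text.
```

The closed-loop matrix A − BK is stable for this controllable pair with positive-definite costs, which is what makes the single-server evaluation meaningful.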

IV Client-Server architecture

To be able to use the Paillier encryption, we need to represent the messages on a finite set of integers, parametrized by N, i.e., each message is an element of Z_N. Usually, the values in the lower part of the range are interpreted as positive, the values in the upper part as negative, and the band in between allows for overflow detection. In this section and Section V, we consider a fixed-point representation of the values and perform implicit multiplication steps to obtain integers, and division steps to retrieve the true values. We analyze the implications of the fixed-point representation on the MPC solution in Section VI.

Notation: Given a real quantity x, we write x̄ for the corresponding quantity in fixed-point representation with one sign bit, l_i integer bits and l_f fractional bits.
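A minimal sketch of this encoding into Z_N (the helper names and the parameters LF, LI are ours; here we use the simple N/2 split between positive and negative residues, while a stricter split that reserves a middle band for overflow detection works the same way):

```python
N = 2 ** 512            # stand-in for a Paillier modulus
LF, LI = 16, 32         # fractional and integer bits

def encode(x):
    """Real value -> signed fixed point -> element of Z_N."""
    z = round(x * (1 << LF))                       # implicit multiply by 2**LF
    assert abs(z) < (1 << (LI + LF)), "overflow: increase LI"
    return z % N                                   # negatives wrap to the top

def decode(m):
    """Element of Z_N -> real value, reading large residues as negatives."""
    z = m if m < N // 2 else m - N
    return z / (1 << LF)                           # implicit divide by 2**LF

assert decode(encode(-3.25)) == -3.25
# Additions of encodings mirror additions of the underlying reals, which is
# exactly what the additively homomorphic layer operates on.
assert decode((encode(1.5) + encode(-2.25)) % N) == -0.75
```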

We introduce a client-server model, depicted in Figure 2. Protocol 2 is an interactive protocol that privately computes the control input for the client while maintaining the privacy of the state. The Paillier encryption is not order-preserving, so the projection operation cannot be performed locally by the server. Hence, the server sends the encrypted iterate to the client for projection. The client then projects, encrypts the feasible iterate and sends it back to the cloud.

Fig. 2: Private client-server MPC setup for a plant.

We drop the bar from the fixed-point iterates in order not to burden the notation.


Protocol 2: Encrypted projected Fast Gradient Descent in a client-server architecture

1:: : , cold-start,
2::
3:: Encrypt and send to
4:if cold-start then
5:     : ; :
6:else
7:     : ; :
8:end if
9::
10:for k = 0, …, K−1 do
11:     : and send it to
12:     : Decrypt and truncate to fractional bits
13:     : Projection on
14:     : Encrypt and send to
15:     :
16:end for
17:: Decrypt and output

 

Theorem 1

Protocol 2 achieves privacy as in Definition 3 with respect to a semi-honest server.

The initial value of the iterate does not give any information to the server about the result, as the final result is encrypted and the number of iterations is a priori fixed. The view of the server, as in Definition 3, is composed of the server's inputs, the received messages, which are all encrypted, and no output. We construct a simulator that replaces the messages with random encryptions of corresponding length. Due to the semantic security (see Definition 4) of the Paillier cryptosystem, proved in [7], the view of the simulator is computationally indistinguishable from the view of the server.

V Two-server architecture

Although in Protocol 2 the client needs to store and process substantially less data than the server, the computational requirements might still be too stringent for large problem dimensions. In such a case, we outsource the problem to two servers, and only require the client to encrypt its state and send it to one server, and to decrypt the received result. In this setup, depicted in Figure 3, the existence of two non-colluding servers is assumed.

In Figure 3 and in Protocol 3, we use distinct notation for messages encrypted with the two public keys. The reason we use two pairs of keys is so that the client and the support server do not have the same private key and do not need to interact.

Fig. 3: Private two-server MPC setup for a plant.

As before, we need an interactive protocol to achieve the projection. We use the DGK comparison protocol, proposed in [40, 43]: given two encrypted values of a fixed bit length, after the protocol the server holding the private key obtains a comparison bit, without learning anything about the inputs, and the other server learns nothing about the bit. We augment this protocol with a step before the comparison in which the first server randomizes the order of the two values to be compared, so that the bit reveals nothing about how it relates to the inputs. Furthermore, by performing a blinded exchange, the first server obtains the minimum (respectively, maximum) of the two inputs, without either server knowing the result. This procedure is performed in lines 11–16 of Protocol 3. More details can be found in [44].

The comparison protocol works with inputs represented on a fixed number of bits. The variables we compare are the results of additions and multiplications, which can increase the number of bits; thus, we need to ensure that they are represented on the required number of bits before inputting them to the comparison protocol. This introduces an extra step in line 10, in which the first server communicates with the second in order to obtain the truncation of the comparison inputs: the first server adds noise to the encrypted value and sends it to the second, which decrypts it, truncates the result and sends it back. The first server then subtracts the truncated noise.
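The blinded truncation of line 10 can be sketched on plain integers; the encryption layer is elided (the arithmetic performed by the two servers is identical), and the function name and parameters are ours.

```python
import secrets

def blinded_truncate(v, drop_bits, value_bits=32, noise_bits=100):
    """S1 holds v only in encrypted form; to drop `drop_bits` low bits it
    blinds v additively, lets S2 decrypt and truncate, then removes the
    truncated noise from the (re-encrypted) result."""
    r = secrets.randbits(value_bits + noise_bits)   # S1: statistical blinding
    blinded = v + r                                  # sent to S2 under encryption
    truncated_blinded = blinded >> drop_bits         # S2: decrypt and truncate
    return truncated_blinded - (r >> drop_bits)      # S1: unblind the result

v = 0b1011_0110_1100
t = blinded_truncate(v, 4)
# t equals v >> 4 up to a possible +1 carry from the blinded addition
assert t in (v >> 4, (v >> 4) + 1)
```

The one-bit discrepancy comes from the carry that the noise may push across the truncation boundary; it is absorbed by the round-off error analysis of Section VI.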

In order to guarantee that the decrypting server does not find out the private values after decryption, the other server adds a sufficiently large random noise to the private data. The random numbers in lines 13 and 19 are chosen uniformly from a range λ bits larger than the data, which ensures statistical indistinguishability between the sum of the random number and the private value, and a random number of equivalent length [45], where λ is the statistical security parameter.
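Writing λ for the statistical security parameter (the name is ours), the choice of noise range can be justified with a short exact calculation: the statistical distance between the noise alone and the blinded value is at most 2^(−λ). A toy check with tiny parameters:

```python
from fractions import Fraction

def blinding_distance(l, lam):
    """Exact total-variation distance between r and a + r for the worst-case
    l-bit value a, with r uniform on [0, 2**(l + lam)). Tiny parameters only."""
    a = (1 << l) - 1                 # worst-case private value
    n = 1 << (l + lam)
    # a + r is uniform on [a, a + n); it differs from uniform on [0, n)
    # exactly on the 2a non-overlapping points, each of probability mass 1/n.
    return Fraction(a, n)

assert blinding_distance(8, 4) == Fraction(255, 4096)
assert blinding_distance(8, 4) < Fraction(1, 2) ** 4   # below 2**-lam
```

With λ around 100 bits, as suggested in [45], the distance is negligible, which is what makes the blinded messages simulatable by uniform randomness.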


Protocol 3: Encrypted projected Fast Gradient Descent in a two-server architecture

1:
2::
3:: Encrypt and send to
4:if cold-start then
5:     : ; :
6:else
7:     : ; :
8:end if
9::
10:for k = 0, …, K−1 do
11:     :
12:     : truncate
13:     : randomize
14:     : DGK s.t. obtains
15:     : Pick and send to
16:     : Send back and if or if
17:     :
18:     : Redo 11–15 to get
19:     :
20:end for
21:: Pick and send to
22:: Decrypt, encrypt with and send to :
23:: and send it to
24:: Decrypt and output

 

Theorem 2

Protocol 3 achieves privacy as in Definition 3, as long as the two semi-honest servers do not collude.

The view of the first server is composed of its inputs and the exchanged messages, and no output. All the messages the first server receives are encrypted (the same holds for the DGK subprotocol). Furthermore, in line 14, an encryption of zero is added to the quantity it receives, such that the encryption is re-randomized and cannot be recognized. Due to the semantic security of the cryptosystems, the view of the first server is computationally indistinguishable from the view of a simulator that follows the same steps but replaces the incoming messages by random encryptions of corresponding length.

The view of the second server is composed of its inputs and the exchanged messages, and no output. Apart from the comparison bits, the incoming messages are always blinded by noise that has at least λ bits more than the private data being sent. For λ chosen appropriately large (e.g., 100 bits [45]), a blinded private value is statistically indistinguishable from a value chosen uniformly at random from a range λ bits larger. In the DGK subprotocol, a similar blinding is performed; see [46].

Crucially, the noise selected at each iteration is different. Hence, the second server cannot extract any information by combining messages from multiple iterations, as they are always blinded by different, sufficiently large noise. Moreover, the randomization step in line 11 ensures that nothing can be inferred from the comparison bits, as the order of the inputs is unknown. Thus, we construct a simulator that follows the same steps as the second server but, instead of the received messages, randomly generates values of appropriate length, corresponding to the blinded private values, and random bits, corresponding to the comparison bits. The view of such a simulator is computationally indistinguishable from the view of the second server.

Remark 1

One can expand Protocols 2 and 3 over multiple time steps, such that the initial iterate is obtained from the previous iteration rather than given as input, and formally prove their privacy. The fact that the state converges to a neighborhood of the origin is public knowledge and is not revealed by the execution of the protocol. A more detailed proof that explicitly constructs the simulators can be found in [44].

Through communication, encryption and statistical blinding, the two servers can privately compute nonlinear operations. However, this causes an increase in the computation time due to the extra encryptions and decryptions and communication rounds, as will be pointed out in Section VII.

VI Fixed-point arithmetic MPC

The values that are encrypted, added to or multiplied with encrypted values have to be integers. We consider fixed-point representations with one sign bit, l_i integer bits and l_f fractional bits, and multiply them by 2^{l_f} to obtain integers.

Working with fixed-point representations can lead to overflow, quantization and arithmetic round-off errors. Thus, we want to compute the deviation between the fixed-point solution and the optimal solution of Algorithm 1. Bounds on this deviation can be used in an offline step, prior to the online computation, to choose a fixed-point precision appropriate for the performance of the system.

Consider the following assumption:

Assumption 1

The number of fractional bits l_f and the margin constant below are chosen large enough such that:

  1. The fixed-point precision solution is still feasible.

  2. The eigenvalues of the fixed-point representation of the cost matrix are contained in a margin-enlarged version of the original spectrum. The margin constant is required in order to rule out the possibility that, due to fixed-point arithmetic errors, the maximum eigenvalue of the relevant iteration matrix exceeds 1.

  3. The fixed-point representation of the step size is such that the FGM still converges.

Item (i) ensures that the feasibility of the fixed-point precision solution is preserved, item (ii) ensures that the strong convexity of the fixed-point objective function still holds, and item (iii) ensures that the fixed-point step size is such that the FGM converges.

Overflow errors: Bounds on the infinity norm of the fixed-point dynamic quantities of interest in Algorithm 1 were derived in [47] for each iteration; they depend on a bounded set containing the iterates and on the radius of that set w.r.t. the infinity norm. From these bounds, we select the number of integer bits l_i such that there is no overflow.
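Given such an infinity-norm radius, the integer-bit count can be chosen mechanically; this small helper is ours, not taken from [47].

```python
import math

def integer_bits(radius):
    """Smallest l_i such that any integer v with |v| <= radius fits in a
    signed fixed-point word with one sign bit and l_i integer bits."""
    return max(1, math.ceil(math.log2(radius + 1)))

assert integer_bits(20) == 5    # covers magnitudes up to 2**5 - 1 = 31
assert integer_bits(31) == 5
assert integer_bits(32) == 6
```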

VI-A Difference between real and fixed-point solution

We want to determine a bound on the error between the fixed-point precision solution and the real solution of the MPC problem (3). The total error is composed of the error induced by having fixed-point coefficients and variables in the optimization problem, and of the round-off errors. Specifically, denote by U_K the solution in exact arithmetic of the MPC problem (3) obtained after K iterations of Algorithm 1. Furthermore, denote by Ū_K the solution obtained after K iterations but with the problem coefficients replaced by their fixed-point representations. Finally, denote by Û_K the solution of Protocols 2 and 3 after K iterations, where the iterates also have fixed-point representation, i.e., truncations are performed. By the triangle inequality, the difference between the solution obtained on the encrypted data and the nominal solution of the implicit MPC problem (3) after K iterations is bounded as:

    ‖Û_K − U_K‖ ≤ ‖Û_K − Ū_K‖ + ‖Ū_K − U_K‖.

VI-A1 Quantization errors

We will use the following observation, defining the quantization errors of the coefficients, to investigate the quantization error bounds:

Consider problem (3) where the coefficients are replaced by the fixed-point representations of the matrices, of the vector and of the constraint set, but the iterates are otherwise real-valued. Now, consider iteration k of the projected FGM. The errors induced by quantization of the coefficients between the original iterates and the approximation iterates are:

(4)

where we used shorthand notation for the quantization errors of the individual coefficients.

The error between the projected iterates is reduced by the projection onto the hyperbox. Hence, to represent this in (4), we multiply by a diagonal matrix with positive elements at most one.

From (4), we derive a recursive iteration that characterizes the error of the primal iterate, which we can write as a linear system:

(5)

We choose this representation in order to obtain, in Theorem 3, a relevant error bound that shrinks to zero as the number of fractional bits grows. In the following, we find an upper bound on the error.

Theorem 3

Under Assumption 1, the system defined by (5) is bounded. Furthermore, the norm of the error between the primal iterates of the original problem and of the problem with quantized coefficients is bounded by:

where the radius of the compact constraint set is taken w.r.t. the 2-norm.

The internal stability of the system follows from the fact that the system matrix has spectral radius less than one, as proven in Lemma 1 in [47]; the same holds for its fixed-point counterpart. Since we want to give a bound on the error in terms of computable values, we express the bounds in terms of the quantities of the original problem rather than their fixed-point representations.

From (5), one can obtain the following expression for the errors at time and , for :

and the first term vanishes as the number of iterations grows. Multiplying this expression by the appropriate factor yields the stated bound.

Subsequently, for any :

One can eliminate the initial error and its effect by choosing, in both the exact and the fixed-point-coefficient FGM algorithms, the initial iterate to be representable with the available fractional bits. Therefore, only the persistent noise counts.

Remark 2

In primal-dual algorithms, the maximum values of the dual variables corresponding to the complicating constraints cannot be bounded a priori, i.e., we cannot give overflow or coefficient quantization error bounds. This justifies our focus on a problem with only simple input constraints. The work in [48] considers the bound on the dual variables as a parameter that can be tuned by the user.

VI-A2 Arithmetic round-off errors

Let us now investigate the error between the solution of the previous problem and the solution of the fixed-point FGM corresponding to Protocols 2 and 3. The encrypted values do not necessarily maintain the same number of bits after operations, so we consider round-off errors where we perform truncations. This happens in line 10 of both protocols. In this case, we obtain results similar to [47], where the quantization errors were not analyzed, i.e., as if the nominal coefficients of the problem were represented with l_f fractional bits from the problem formulation. Consider iteration k of the projected FGM. The errors due to round-off between the primal iterates of the two solutions are:

(6)

Again, the projection onto the hyperbox reduces the error, so the corresponding factor is a diagonal matrix with positive elements less than one. For Protocol 2, the round-off error due to truncation is bounded by one unit in the last fractional place. The encrypted truncation step in Protocol 3 introduces an extra term of the same magnitude due to blinding.

From (6), we can derive a recursive iteration that characterizes the error of the primal iterate, which we can write as a linear system, with the same system matrix as before:

(7)
Theorem 4

Under Assumption 1, the system defined by (7) is bounded. Furthermore, the norm of the error of the primal iterate is bounded by: