Private and Secure Distributed Matrix Multiplication with Flexible Communication Load

09/01/2019 ∙ by Malihe Aliasgari, et al. ∙ New Jersey Institute of Technology ∙ King's College London

Large matrix multiplications are central to large-scale machine learning applications. These operations are often carried out on a distributed computing platform with a master server and multiple workers in the cloud operating in parallel. For such distributed platforms, it has been recently shown that coding over the input data matrices can reduce the computational delay, yielding a trade-off between recovery threshold, i.e., the number of workers required to recover the matrix product, and communication load, i.e., the total amount of data to be downloaded from the workers. In this paper, in addition to exact recovery requirements, we impose security and privacy constraints on the data matrices, and study the recovery threshold as a function of the communication load. We first assume that both matrices contain private information and that workers can collude to eavesdrop on the content of these data matrices. For this problem, we introduce a novel class of secure codes, referred to as secure generalized PolyDot (SGPD) codes, that generalize state-of-the-art non-secure codes for matrix multiplication. SGPD codes allow a flexible trade-off between recovery threshold and communication load for a fixed maximum number of colluding workers while providing perfect secrecy for the two data matrices. We then assume that one of the data matrices is taken from a public set known to all the workers. In this setup, the identity of the matrix of interest should be kept private from the workers. For this model, we present a variant of generalized PolyDot codes that can guarantee both secrecy of one matrix and privacy for the identity of the other matrix for the case of no colluding servers.


I Introduction

I-A Motivation and Problem Definition

At the core of many signal processing and machine learning applications are tensor operations, most notably large matrix multiplications [2]. In the presence of practically sized data sets, such operations are typically carried out using distributed computing platforms with a master server and multiple workers that can operate in parallel over distinct parts of the data set. The master server plays the role of the parameter server, distributing data to the workers and periodically reconciling their internal states [3]. Workers are commercial off-the-shelf servers that are characterized by possible temporary failures and delays [4].

Straggling workers can affect the computation latency by orders of magnitude, e.g., [5, 6]. While current distributed computing platforms conventionally handle straggling servers by means of replication of computing tasks [7], recent work has shown that encoding the input data can help reduce the computation latency. More generally, coding is able to control the trade-off between computational delay and communication load between workers and master server [8, 9, 10, 11, 12, 13, 14, 15, 16, 17]. Furthermore, stochastic coding can help keep both input and output data secure from the workers, assuming that the latter are honest, i.e., that they carry out the prescribed protocol, but curious [18, 19, 20, 21, 22, 23, 24, 25]. This paper contributes to this line of work by investigating the trade-off between computational delay and communication load as a function of the privacy level.

As illustrated in Figs. 1 and 2, we focus on the basic problem of computing a matrix multiplication AB in a distributed computing system of workers, each of which can process only a fraction of matrices A and B, respectively. In the first setup under study, illustrated in Fig. 1, both matrices A and B are to be kept private from the workers. Here, three performance criteria are of interest:

  • the recovery threshold, that is, the number of workers that need to complete their task before the master server can recover the product AB;

  • the communication load between workers and master server, i.e., the amount of information to be downloaded from the workers;

  • the maximum number of colluding servers for which perfect secrecy of both data matrices A and B is still guaranteed.

In the second setup of interest, shown in Fig. 2, only matrix A is private, while matrix B is selected from a public data set. In this case, apart from the security constraint on A, we only impose a privacy constraint on the identity of the specific matrix of interest. The criteria of interest are still the recovery threshold and the communication load, and we simplify the problem by assuming that the workers do not collude. This paper focuses on the design of coding and computing techniques for both problems.

I-B Related Work

In order to put our contribution in perspective, we briefly review prior related work. Consider first solutions that provide no security guarantees for the problem in Fig. 1. As a direct extension of [8], a first approach is to use product codes that apply separate maximum distance separable (MDS) codes to encode the two matrices [26]. The recovery threshold of this scheme is improved by [9], which introduces polynomial codes. The construction in [9] is proved to be optimal under the assumption that minimal communication is allowed between workers and master server. In [15], MatDot codes are introduced, resulting in a lower recovery threshold at the expense of a larger communication load. The construction in [13] bridges the gap between polynomial and MatDot codes and presents PolyDot codes, yielding a trade-off between recovery threshold and communication load. An extension of this scheme, termed generalized PolyDot (GPD) codes, improves on the recovery threshold of PolyDot codes [14]; the same result is independently obtained by the construction in [27].

Much less work has been done in the literature for the case in which security constraints are factored in for the problem of Fig. 1. In [19], Lagrange coding is presented, which achieves the minimum recovery threshold for multilinear functions by generalizing MatDot codes. In [18, 25], coded schemes have been used to develop multi-party computation techniques that calculate arbitrary polynomials of massive matrices while preserving the security of the data matrices. In [20, 21, 23], a reduction of the communication load is obtained by extending polynomial codes. While these works focus on minimizing either the recovery threshold or the communication load, the trade-off between these two fundamental quantities has not been addressed in the open literature to the best of our knowledge. A new class of secure distributed matrix multiplication codes and its capacity are studied in [28].

The problem in Fig. 2 is related to private information retrieval (PIR), introduced in [29] and widely studied in recent years, e.g., [30, 31, 32, 33, 34, 35, 36, 37, 38, 39]. In the PIR problem, the master server wishes to retrieve a message within some library from a set of distributed databases, each of which stores all the messages. This should be done without revealing any information about which message is being retrieved to any individual worker, hence ensuring the privacy of the index of the selected message. In [37] and [38], the PIR setup was studied for the problem of distributed matrix multiplication illustrated in Fig. 2, which imposes PIR guarantees on the index of the desired matrix within a public library. In [37], a coding strategy is proposed that combines the PIR scheme for non-colluding servers [29] with polynomial codes [9]. In [38], the authors introduce a related approach for this problem, and show that it outperforms the scheme proposed in [37] in terms of upload and download cost. The code design in [38] focuses on the minimization of the communication load, and does not explore the trade-off between this metric and the recovery threshold.

I-C Main Contribution

In this paper, we first present a novel class of secure computation codes, referred to as secure GPD (SGPD) codes, for the setup in Fig. 1. SGPD codes generalize GPD codes to operate at a flexible communication load level. This yields a new achievable trade-off between recovery threshold and communication load as a function of a prescribed number of colluding workers. In the process, we also introduce a novel perspective on distributed computing codes based on the signal processing concepts of convolution and z-transform. SGPD codes were first introduced in the conference version of this paper [1]. Then, SGPD codes are modified to offer a novel solution for the scenario in Fig. 2. This is done through concatenation with the PIR code in [37], which ensures both secrecy of the input matrix A and privacy of the identity of the desired matrix in the library when the workers do not collude. The resulting codes are referred to as private and secure GPD (PSGPD) codes. They generalize the approach in [38], enabling a trade-off between (upload) communication load and recovery threshold.

I-D Organization

The rest of the paper is organized as follows. In Section II, we present the system models for secure matrix multiplication (Fig. 1) in Section II-C and for private and secure matrix multiplication (Fig. 2) in Section II-D, respectively. In Section III, we propose an intuitive interpretation of the GPD code introduced in [15]. Using z-transforms, Section IV proposes a novel extension of the GPD code by imposing a security constraint on the data matrices and deriving the resulting trade-off between recovery threshold and communication load. Likewise, in Section V we address the setup in Fig. 2, again with respect to the trade-off between these two quantities. The paper is concluded in Section VI.

II Problem Statement

II-A Notation

Throughout the paper, we denote a matrix with an upper-case boldface letter (e.g., A), and lower-case boldface letters indicate a vector or a sequence of matrices (e.g., a). Furthermore, a calligraphic font refers to a set (e.g., 𝒜). GF(q) represents the Galois field with cardinality q. We denote by ℕ the set of all positive integers, and, for some n ∈ ℕ, [n] denotes the set {1, 2, …, n}. For any real number x, ⌊x⌋ represents the largest integer not exceeding x. The function H(·) represents the entropy of its argument, and I(·;·) denotes the mutual information of its arguments.

Fig. 1: Secure matrix multiplication: the master server encodes both input matrices A and B, to be kept secure from the workers, together with random key matrices, in order to define the computational tasks of the slave servers or workers. The workers may fail or straggle, and they are honest but curious, with colluding subsets of workers of bounded size. The master server must be able to decode the product AB from the outputs of a subset of servers, whose size defines the recovery threshold.

II-B System Model

As illustrated in Figs. 1 and 2, we consider a distributed computing system with a master server and a number of slave servers, or workers. The master server is interested in computing securely the matrix product AB of two data matrices A and B. The matrices have i.i.d. uniformly distributed entries from a sufficiently large finite field GF(q). More precisely, we will consider two scenarios. In the first, both matrices A and B are available at the master server and contain confidential data that should be kept secure from the workers (see Fig. 1). In the second, only matrix A contains confidential information, and there is a public set of matrices from which the master node wishes to compute the product with the matrix of some desired index. The index must be kept private from the workers (see Fig. 2). In the following, we first describe the system model for the setup in Fig. 1, referred to as secure matrix multiplication, followed by the setup for the model in Fig. 2, referred to as private and secure matrix multiplication.

II-C Secure Matrix Multiplication

For the scenario in Fig. 1, the workers receive information on matrices A and B from the master server; they process this information, and they respond to the master server, which finally recovers the product AB with minimal computational effort. Due to communication and complexity constraints, each worker can receive only a limited number of symbols from each of the two matrices. The workers are honest but curious. Accordingly, we impose the secrecy constraint that, even if a bounded number of workers collude, the workers cannot obtain any information about either matrix A or B based on the data received from the master server.

To keep the data secure and to leverage possible computational redundancy at the workers, the master server sends encoded versions of the input matrices to the workers, in accordance with the above-mentioned communication and complexity constraints. Specifically, it produces the encoded matrices for each worker from the data matrix A and from a random matrix, whose dimensions are defined below, via an encoding function

(1)

The resulting entries in the output of this function are then sent to the corresponding worker. Likewise, the master server computes the encoded matrices for each worker from the data matrix B and from a second random matrix, using an encoding function

(2)

The resulting entries are then sent to the corresponding worker. The random matrices consist of i.i.d. uniformly distributed entries from the field GF(q). The security constraint imposes the condition

(3)

for all colluding subsets of workers of size at most equal to the security level, where the random matrices serve as random keys in order to meet the security constraint (3) [40].
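As a concrete illustration of how the random key matrices in (1) and (2) act as one-time pads, the following sketch shows a deliberately simplified special case with a single curious worker and a toy prime field; the variable names and parameters are our own choices for illustration, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(1)
q = 97                     # toy prime field GF(q); the paper assumes a large field

A = rng.integers(0, q, (2, 2))    # confidential data matrix
R = rng.integers(0, q, A.shape)   # random key matrix, uniform over GF(q)

def share(x):
    """Shamir-style share p_A(x) = A + R*x (mod q) sent to the worker at point x."""
    return (A + R * x) % q

# For any fixed x != 0, R*x is uniform over GF(q), so a single share
# A + R*x acts as a one-time pad and reveals nothing about A.
s1, s2 = share(1), share(2)

# Two evaluations eliminate the key: 2*p_A(1) - p_A(2) = A (mod q).
recovered = (2 * s1 - s2) % q
assert np.array_equal(recovered, A)
```

The same mechanism, with more key blocks at higher polynomial degrees, underlies the secrecy guarantee against larger colluding sets.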

Each worker computes the product of the two encoded sub-matrices it has received. The master server collects the outputs of a subset of workers. It then applies a decoding function

(4)

Note that correct decoding translates into the condition

(5)

A coding and decoding strategy that satisfies conditions (3) and (5) is said to be feasible.

For given problem parameters, the performance of a coding and decoding scheme is measured by the triple of recovery threshold, communication load, and security level, where the communication load is defined as

(6)

that is, the total dimension of the product matrices computed by the workers used for decoding. Note that condition (5) implies a lower bound on the minimum recovery threshold. Furthermore, the communication load is lower bounded by the size of the product AB.

Fig. 2: Private and secure matrix multiplication: the master server encodes the input matrix A, to be kept secret from the workers, and generates an encoded matrix for each worker. It also sends to each worker a query, computed as a function of the index of the desired product, which is to be kept private from the workers; the library of candidate matrices is available at all workers. The non-colluding workers may fail or straggle, and they are honest but curious. The master server must be able to decode the desired product from the outputs of a subset of servers, whose size defines the recovery threshold.

II-D Private and Secure Matrix Multiplication

In this subsection, we discuss the private and secure matrix multiplication problem illustrated in Fig. 2. In this setup, the master server wishes to compute the product of a confidential input matrix A with a matrix from a set of public matrices, while keeping the index of the matrix of interest private from the workers.

Similar to the secure model in Fig. 1, we consider a distributed computing system with a master server and honest but curious workers. The master server holds a confidential data matrix A. Each worker has access to a library consisting of distinct public matrices of equal dimensions. As above, all matrices contain data symbols chosen i.i.d. and uniformly from a sufficiently large finite field GF(q). The master server is interested in computing the matrix product of the data matrix A and of one of the library matrices, identified by a given index. This should be done while keeping the data matrix A secret from the workers, in the same sense as in the scenario of Fig. 1, while also ensuring that the index is kept secret from the workers.

To do so, as in the PIR problem [32, 33], the master server generates query vectors as a function of the desired index and sends to each worker its corresponding query vector. We assume that the workers do not collude. Extensions to colluding workers are possible and are left for future work. We note that, when the input matrix A is an identity matrix, the setup reduces to the PIR problem.
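To make the reduction to PIR concrete, the following toy sketch illustrates the classical two-server additive PIR idea of [29] with non-colluding workers; it is our own simplified illustration, not the construction used later in the paper, and all names and dimensions are chosen for the example:

```python
import numpy as np

rng = np.random.default_rng(4)
q = 97                                  # arithmetic over a toy field GF(q)
L, s, n = 3, 2, 2                       # library of L matrices, each s x n
library = rng.integers(0, q, (L, s, n))
theta = 1                               # private index of the desired matrix

# Two non-colluding workers, each holding the full library:
q1 = rng.integers(0, q, L)                     # uniform query: reveals nothing
q2 = (q1 + np.eye(L, dtype=int)[theta]) % q    # also marginally uniform

# Each worker returns the query-weighted sum of the library matrices.
ans1 = np.tensordot(q1, library, axes=1) % q
ans2 = np.tensordot(q2, library, axes=1) % q

# The difference of the two answers isolates the desired matrix.
B_theta = (ans2 - ans1) % q
assert np.array_equal(B_theta, library[theta])
```

Since each query vector is marginally uniform over GF(q)^L, neither worker alone learns anything about the index theta.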

To keep the data matrix A secure from the workers, the master server sends each worker an encoded matrix, which is a possibly random function of the index (and, through it, of the query), of the data matrix A, and of a random key matrix.

Upon receiving its query, each worker uses it to derive a matrix from the library by means of an encoding function

(7)

We emphasize that, unlike the setup considered in Fig. 1, the content of the desired library matrix is not secure from the workers, since the library is public. Each worker then computes the product of the two encoded matrices and sends it to the master server. The master server collects the outputs of a subset of workers. It then applies a decoding function, as in (4), in order to retrieve the desired product.

To guarantee secrecy of the input matrix A, in a manner similar to (3), we have the constraint

(8)

for all workers. Following the PIR formulation of [37], in order to ensure the privacy of the index, the information available at each worker should be statistically indistinguishable across all possible values of the index. Mathematically, for any two index values and for all workers, we have the condition

(9)

that is, the joint distribution of the variables available at a worker should be the same for any pair of index values. Finally, the correct decoding requirement is defined as in (5), that is

(10)

A coding and decoding strategy that satisfies conditions (8), (9), and (10) is said to be feasible. For given problem parameters, the performance is measured by the pair of recovery threshold and communication load, with the communication load defined as in (6).

III Background: Generalized PolyDot Code without Security Constraint

In this section, we consider the system model shown in Fig. 1 and review the GPD construction first proposed in [15] and later improved in [27, 14] for the special case of no secrecy constraint. In the process, we propose a novel intuitive interpretation of GPD encoding and decoding based on the distributed computation of samples of convolutions via z-transforms.

We start by recalling that the GPD coding scheme achieves the best currently known trade-off between recovery threshold and communication load in the absence of a security constraint. The entangled polynomial codes of [27] have the same properties in this regard. The GPD codes also achieve the optimal recovery threshold among all linear coding strategies in special cases of the matrix splitting parameters, and they minimize the recovery threshold for the minimum communication load [9, 27].

The GPD code splits the data matrices A and B both horizontally and vertically as

(11)

The splitting parameters can be set arbitrarily, subject to compatibility with the matrix dimensions. Note that polynomial codes do not split along the inner dimension, while MatDot codes split only along the inner dimension [13]. All sub-matrices of A have equal dimensions, and likewise for B.
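As a quick sanity check on the block splitting in (11), the following snippet (with toy dimensions and variable names of our own choosing) partitions A and B into grids of sub-matrices and verifies that each block of the product is a sum of products of sub-matrices along the inner dimension:

```python
import numpy as np

rng = np.random.default_rng(5)
t, s, d = 2, 2, 2                     # grid sizes for the splits (toy values)
A = rng.integers(0, 5, (4, 6))        # split into a t x s grid of 2 x 3 blocks
B = rng.integers(0, 5, (6, 4))        # split into an s x d grid of 3 x 2 blocks

A_blk = [np.split(row, s, axis=1) for row in np.split(A, t, axis=0)]
B_blk = [np.split(row, d, axis=1) for row in np.split(B, s, axis=0)]

# Block (i, j) of C = A @ B is the sum over the inner index k of A_ik @ B_kj.
i, j = 0, 1
C_ij = sum(A_blk[i][k] @ B_blk[k][j] for k in range(s))
assert np.array_equal(C_ij, (A @ B)[i * 2:(i + 1) * 2, j * 2:(j + 1) * 2])
```

Each such block computation is what the code distributes across workers in encoded form.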

Fig. 3: Construction of the time sequences used to define the generalized PolyDot (GPD) code. The dashed lines indicate all-zero block sequences. Each solid arrow shows a distinct row of A or a column of B, respectively.

The GPD code computes each block of the product AB in a distributed fashion. This is done by means of polynomial encoding and polynomial interpolation. As we review next, the computation of each block of AB can be interpreted as the evaluation of the middle sample of the convolution between two block sequences, one obtained from the rows of A and one from the columns of B. The computation is carried out distributively in the frequency domain by using z-transforms, with different workers being assigned distinct samples in the frequency domain.

To elaborate, define the block sequence obtained by concatenating the rows of A: pictorially, this sequence is obtained from the matrix A by reading its blocks in left-to-right, top-to-bottom order, as seen in Fig. 3. We also introduce a longer time block sequence as

(12)

obtained by inserting all-zero block sequences. This sequence is derived from the matrix B by following the bottom-to-top, left-to-right order shown in Fig. 3 and by adding an all-zero block sequence between any two columns of the matrix B.

In the frequency domain, the z-transforms of the two sequences are obtained as

(13)
(14)

respectively. The master server evaluates the polynomials (13) and (14) at distinct non-zero points, one per worker, and sends the corresponding linearly encoded matrices to each server. The encoding functions are hence given by the polynomial evaluations (13) and (14). Each server computes the product of its two encoded matrices and sends it to the master server. The master server computes the inverse z-transform from the received products, obtaining the convolution of the two block sequences.

From this convolution, the master server is able to compute all the desired blocks of AB by reading the middle samples of the individual convolutions between each row of A and each column of B. Note that, in particular, the zero block subsequences added to the second sequence ensure that no interference from the other convolutions affects the middle sample of any individual convolution.
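The convolution view above can be checked numerically. The sketch below uses scalar "blocks" and our own variable names; the exponent placement follows the description above and is one consistent choice, not necessarily the paper's exact indexing. It builds the two coefficient sequences, convolves them, and reads each block of A @ B off the appropriate sample:

```python
import numpy as np

# GPD-style exponents: C = A @ B recovered from one polynomial product.
t, s, d = 2, 3, 2                      # A is t x s, B is s x d (scalar "blocks")
rng = np.random.default_rng(2)
A = rng.integers(0, 5, (t, s))
B = rng.integers(0, 5, (s, d))

# First sequence: blocks of A read left-to-right, top-to-bottom.
pa = np.zeros(t * s, dtype=int)
for i in range(t):
    for k in range(s):
        pa[k + i * s] = A[i, k]

# Second sequence: each column of B read bottom-to-top, columns separated
# by s*(t-1) zeros so that neighbouring convolutions do not interfere.
pb = np.zeros(s - 1 + (d - 1) * s * t + 1, dtype=int)
for j in range(d):
    for k in range(s):
        pb[s - 1 - k + j * s * t] = B[k, j]

c = np.convolve(pa, pb)                # coefficients of the product polynomial
C = np.array([[c[s - 1 + i * s + j * s * t] for j in range(d)]
              for i in range(t)])
assert np.array_equal(C, A @ B)
```

Removing the zero separators in `pb` would let adjacent convolutions overlap, corrupting the middle samples; this is exactly the interference the padding prevents.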

To carry out the inverse transform, the master server needs to collect as many values as there are samples in the convolved sequence, yielding the recovery threshold

(15)

Equivalently, in terms of the underlying polynomial interpretation, the master server needs to collect a number of evaluations of the product polynomial equal to its degree plus one. The complexity of this computation is characterized in [13]. Furthermore, the communication load is given as

(16)

where the multiplying factor is the size of each product matrix returned by a worker.

IV Secure PolyDot Code

In this section, we propose a novel extension of the GPD code that is able to ensure the secrecy constraint for any number of colluding workers. We also derive the corresponding achievable set of triples of recovery threshold, communication load, and security level. As we will discuss, the projection of this set onto the plane corresponding to no security constraint includes the set of pairs in (15) and (16) obtained by the GPD code [14]. The proposed secure GPD (SGPD) code augments matrices A and B by adding random block matrices to the inputs, in a manner similar to prior works [18, 19, 20, 21, 23]. As we will see, a direct application of the GPD codes to these augmented matrices is suboptimal.

In contrast, we propose a novel way to construct the two time block sequences from the augmented matrices that enables the definition of a more efficient code by means of the z-transform approach discussed in the previous section. To this end, we follow the design criterion of decreasing the recovery threshold for a given communication load. Based on the discussion in the previous section, this goal can be realized by decreasing the length of the convolved sequence, which can in turn be ensured by reducing the length of one sequence for a given length of the other. We accomplish this objective by (i) adaptively appending rows or columns with random elements to matrix A, and, correspondingly, columns or rows to B, which can reduce the recovery threshold; and (ii) modifying the zero padding procedure (see Fig. 3) for the construction of the second sequence. In order to account for point (i), we consider separately two complementary cases, treated in the next two subsections.

Fig. 4: Construction of the time block sequences in (20) and (21) used to define the SGPD code for the first case. The dashed lines indicate all-zero block sequences.

IV-A Secure Generalized PolyDot Code: The First Case

As illustrated in Fig. 4, in the first case we augment the input matrices A and B by adding

(17)

random row and column blocks to matrices A and B, respectively. Accordingly, the augmented block matrix derived from A is obtained as

(18)

while the augmented matrix derived from B is obtained as

(19)

In (18) and (19), if the divisibility condition on the number of appended blocks holds, all added block matrices are generated with i.i.d. uniform random elements in GF(q). Otherwise, the last few added matrices in (18), with right-to-left ordering in the last row, and in (19), with top-to-bottom ordering in the last column, are all-zero block matrices.

As illustrated in Fig. 4, in the SGPD scheme, the first block sequence is defined in the same way as in the conventional GPD code, yielding

(20)

where each segment is a row of the augmented version of A. We also define the second time block sequence as

(21)

where all-zero block sequences are interleaved with the columns of the augmented version of B, including the columns of the appended random matrix. The key novel idea of this construction is that no zero matrices are introduced between the columns of the random matrix. As shown in Theorem 1 below, this construction allows the master server to recover all the desired submatrices of AB from the middle samples of the individual convolutions (see Fig. 5 for an illustration).

Theorem 1.

For a given security level, the proposed SGPD code achieves the recovery threshold

(22)

and the communication load (16), for any valid choice of the integer splitting parameters.

Proof.

The z-transforms of the two sequences are given respectively as

(23)
(24)

The master server evaluates (23) and (24) at distinct non-zero points, which define the encoding functions, and sends both encoded matrices to the corresponding worker. Each worker performs the multiplication of its two encoded matrices and sends the result back to the master server. To reconstruct all blocks of the product AB, the master server carries out a polynomial interpolation, or equivalently, it computes the inverse z-transform, upon receiving a number of multiplication results at least equal to the length of the convolved sequence. As we detail next, each desired block of the product AB can be seen to equal a specific sample of the convolution of the two sequences. An illustration can be found in Fig. 5.

To see this, we first note that, by the properties of GPD codes, each desired block of AB is the coefficient of a distinct monomial in the product polynomial. Note that this holds since the data-dependent parts of the polynomials (23) and (24) are defined as in the GPD code. We now need to show that no other contribution to these monomials arises from the cross-products involving the random blocks. Direct inspection of the exponent ranges shows that the data-times-key terms, as well as the key-times-key terms, do not include the desired exponent values.

In order to recover the convolution, the master server needs to collect a number of values of the product polynomial equal to the length of the convolved sequence, which can be computed from its degree as

(25)

This implies the recovery threshold in (22). The communication load in (16) follows from the fact that each worker returns a product matrix of fixed size.

The security constraint (3) can be proved in a manner similar to [20] by the following steps:

(26)

where (a) follows from the definition of the encoding functions, since the encoded matrices are deterministic functions of the data matrices and of the random key matrices; (b) follows from (23) and (24), since from the polynomial evaluations in (23) and (24) the unknowns can be recovered when the data-dependent coefficients are known, given a sufficient number of evaluations; (c) follows since the random matrices have independent uniformly distributed entries; (d) follows by upper bounding the joint entropy by the sum of the individual entropies; and (e) follows from an argument similar to (c). Hence, the proposed scheme is information-theoretically secure. ∎
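To make the theorem tangible, the following minimal numerical sketch works out a secure MatDot-style special case: a split along the inner dimension only, with one random key block appended to each input polynomial, i.e., a single curious worker. The names, parameters, and the use of real arithmetic instead of a finite field are our simplifications; over the reals the padding does not give perfect secrecy, which is why the actual scheme works over GF(q).

```python
import numpy as np

rng = np.random.default_rng(3)
m, s, n = 2, 4, 2
A = rng.integers(0, 7, (m, s)).astype(float)
B = rng.integers(0, 7, (s, n)).astype(float)
A0, A1 = np.split(A, 2, axis=1)        # split along the inner dimension, p = 2
B0, B1 = np.split(B, 2, axis=0)
R = rng.standard_normal(A0.shape)      # random key blocks (one per input)
S = rng.standard_normal(B0.shape)

pA = lambda x: A0 + A1 * x + R * x**2  # key appended at the highest degree
pB = lambda x: B1 + B0 * x + S * x**2

# deg(pA * pB) = 4, so 2(p + T) - 1 = 5 workers suffice: the recovery
# threshold grows with the security level T, as in (22).
xs = np.arange(1.0, 6.0)
prods = np.stack([(pA(x) @ pB(x)).reshape(-1) for x in xs])
coeffs = np.linalg.solve(np.vander(xs, 5, increasing=True), prods)
C = coeffs[1].reshape(m, n)            # coeff of x^1 is A0 @ B0 + A1 @ B1
assert np.allclose(C, A @ B)
```

The key-dependent terms only contribute to exponents 2 and above, so the coefficient of x^1 is exactly A @ B, mirroring the exponent-range argument in the proof.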

Fig. 5: Outcome of the computation for an example choice of the parameters. Dashed blue stems with filled markers represent the overall convolution. Individual convolutions are shown in different colors with square markers. Contributions from one or both random matrices are shown as red crosses. The desired submatrices are seen to equal the corresponding samples of the overall sequence, associated with the center points of the individual convolutions.
Remark 1.

Note that, in this case, a direct application of the GPD construction in Fig. 3 would yield the larger recovery threshold

(27)
Fig. 6: Construction of the time block sequences in (31) and (32) used to define the secure generalized PolyDot (SGPD) code for the second case. The solid lines and the dashed lines indicate columns of the augmented matrix and all-zero block sequences, respectively.

IV-B Secure Generalized PolyDot Code: The Second Case

As illustrated in Fig. 6, in the second case, we instead augment the input matrices A and B by adding