A Secretive Coded Caching for Shared Cache Systems using PDAs

10/21/2021
by Elizabath Peter, et al.
Indian Institute of Science

This paper considers the secretive coded caching problem with shared caches in which no user must have access to the files that it did not demand. In a shared cache network, the users are served by a smaller number of helper caches and each user is connected to exactly one helper cache. To ensure the secrecy constraint in shared cache networks, each user is required to have an individual cache of at least unit file size. For this setting, a secretive coded caching scheme was proposed recently in the literature (Secretive Coded Caching with Shared Caches, in IEEE Communications Letters, 2021), and it requires a subpacketization level which is in the exponential order of the number of helper caches. By utilizing the PDA constructions, we propose a procedure to obtain new secretive coded caching schemes for shared caches with reduced subpacketization levels. We also show that the existing secretive coded caching scheme for shared caches can be recovered using our procedure. Furthermore, we derive a lower bound on the secretive transmission rate using cut-set arguments and demonstrate the order-optimality of the proposed secretive coded caching scheme.



I Introduction

The advent of smart devices, accompanied by the rise of on-demand streaming services and content-based applications, has led to a dramatic increase in wireless data traffic over the last two decades. Coded caching has been proposed as a promising technique to reduce the traffic congestion experienced during peak hours by exploiting the memory units distributed across the network. The idea of coded caching was first introduced in the work of Maddah-Ali and Niesen [1], which emphasized the benefits and necessity of jointly designing the storage and delivery policies in content delivery networks. The setting considered in [1] is that of a single server having access to a library of equal-length files and connected to users through an error-free shared link. Each user is equipped with a dedicated cache of size files. The caches are populated with portions of the file contents during off-peak times, without knowledge of the users' future demands; this is called the placement phase. The delivery phase happens at peak times, during which the users inform the server of their demands, and the server aims to satisfy the users' demands with the minimum transmission load over the shared link. Each user recovers its demanded file using the received messages and its cache contents. The sum of the lengths of all transmitted messages, normalized by the file length, is defined as the rate of the coded caching scheme. The objective of any coded caching problem is to jointly design the placement and delivery phases such that the rate required to satisfy the users' demands is minimized. The coded caching scheme in [1], referred to as the MN scheme henceforth, was shown to be optimal under the constraint of uncoded placement when in [2], [3]. In [2], the MN scheme was modified to obtain another scheme that is optimal for the case as well.
The coded caching approach has been extended to a variety of settings, including decentralized caching [4], multi-access networks [5], shared cache networks [6] and many more. In [7], for the same setting considered in [1], an additional constraint was incorporated, which ensures that no user can obtain any information about the database files other than its demanded file, either from the cache contents or from the server transmissions. This setup is referred to as private or secretive coded caching in the literature [7], [8]; we adopt the latter terminology in this work. In [7], an achievable secretive coded caching scheme was proposed for both the centralized and decentralized settings. The scheme in [7] also guarantees secure delivery against external eavesdroppers; the secure delivery condition was addressed separately in [9]. The secretive coded caching problem was then extended to other settings, including shared cache networks [10], device-to-device networks [11], combination networks [12] and dedicated cache networks with colluding users [13]. We consider the problem of secretive coded caching with shared caches introduced in [10]. The shared cache network introduced in [6] consists of a server storing equal-length files, connected to users with the assistance of helper caches, as shown in Fig. 1. Each user has access to only one cache, and each cache can serve an arbitrary number of users. To ensure the secrecy condition in the shared cache network, each user is additionally required to have a dedicated cache of size at least one unit of file, and this is the setting considered in [10]. The centralized secretive coded caching scheme presented in [7] uses the idea of secret sharing [14], [15] and is derived from the MN scheme. Hence, the scheme requires a subpacketization level (the number of smaller parts into which a file is split) which is exponential in .
Therefore, the scheme requires splitting finite-length files into an exponential number of packets, resulting in a scenario where the overhead bits involved in each transmission outnumber the data bits, which limits the practical applicability of the scheme. In [16], Yan et al. showed that combinatorial structures called Placement Delivery Arrays (PDAs) can be used to design coded caching schemes with low subpacketization levels for dedicated cache networks. For the dedicated cache setup, secretive coded caching schemes with reduced subpacketization levels were obtained from PDAs in [17]. The exponentially growing subpacketization level with respect to the number of caches is pervasive in shared cache systems as well, as is evident from the secretive coded caching scheme proposed for shared cache networks in [10], where the required subpacketization level is exponential in . Even for moderate network sizes, the subpacketization level required by the scheme in [10] turns out to be high. Since shared cache networks capture more realistic settings, and since maintaining the confidentiality of the data is essential in several applications such as paid subscription services, it is necessary to look for practically realizable secretive coded caching schemes for shared caches. In [18], PDAs were used to derive non-secretive coded caching schemes for shared cache networks with lower subpacketization levels than the optimal scheme in [6]. In this work, we identify new secretive coded caching schemes for shared caches using PDAs, with lower subpacketization requirements than the scheme in [10]. Further, we characterize the performance of our scheme by deriving a cut-set based lower bound for the shared cache setting after incorporating the secrecy condition.

I-A Contributions

In this work, we study secretive coded caching in shared cache networks. Our contributions are summarized below:

  • A procedure is proposed to obtain new secretive coded caching schemes for shared caches using PDAs. The advantage of our procedure is that it results in schemes with a lower subpacketization level than the corresponding scheme in [10] (Section V).

  • A lower bound on the optimal rate-memory trade-off of secretive coded caching with shared caches is derived using cut-set based arguments, and the multiplicative gap between the achievable rate and the lower bound is characterized (Section VI).

  • We also show that the secretive coded caching scheme in [10] can be recovered using a PDA. Hence, our procedure subsumes the scheme in [10] as a special case (Section V).

The rest of the paper is organized as follows. In Section II, we briefly discuss some of the topics that are relevant for our scheme description. We describe the problem setup and present the main results in Sections III and IV, respectively. In Section V, we describe the proposed scheme, and in Section VI, the lower bound and the order optimality of the scheme are presented. Section VII summarizes our results.

Notations: For a positive integer , denotes the set . For any set , denotes the cardinality of . Binomial coefficients are denoted by , where and is zero for . Bold uppercase and lowercase letters denote matrices and vectors, respectively. For a vector , denotes the vector consisting of the elements of at the positions specified by the elements of the set . The columns of an matrix are denoted by , . An identity matrix of size is denoted as . The finite field with elements is denoted by .

Fig. 1: Problem setting for a shared cache network under secrecy constraint.

II Preliminaries

In this section, we briefly review PDAs and secret sharing schemes which are required for describing our scheme.

II-A Placement Delivery Array (PDA)

Definition 1.

([16]) For positive integers and , an array composed of a specific symbol '*' and positive integers is called a placement delivery array (PDA) if it satisfies the following three conditions:
C1. The symbol '*' appears times in each column.
C2. Each integer occurs at least once in the array.
C3. For any two distinct entries and , is an integer only if

  (a) , , i.e., they lie in distinct rows and distinct columns, and

  (b) , i.e., the corresponding 2x2 sub-array formed by the two rows and two columns must be of one of the following forms (where s denotes the common integer):

       [ s  * ]          [ *  s ]
       [ *  s ]    or    [ s  * ]
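As a concrete illustration (ours, not from the paper), conditions C1-C3 can be checked mechanically. The Python sketch below verifies a candidate array and shows a small valid PDA corresponding to the MN scheme with three users and normalized cache size 1/3; the function name `is_pda` and the list-of-lists representation are our choices.

```python
# Illustrative checker (ours, not from the paper) for conditions C1-C3
# of Definition 1. '*' is the special symbol of the PDA.

def is_pda(A):
    F = len(A)       # number of rows (packets)
    K = len(A[0])    # number of columns (users)
    # C1: '*' appears the same number of times (Z) in every column
    star_counts = {sum(1 for f in range(F) if A[f][k] == '*') for k in range(K)}
    if len(star_counts) != 1:
        return False
    # Collect the positions of each integer entry
    pos = {}
    for f in range(F):
        for k in range(K):
            if A[f][k] != '*':
                pos.setdefault(A[f][k], []).append((f, k))
    # C2 (each integer occurs at least once) holds for every key of `pos`.
    # C3: two occurrences of the same integer must lie in distinct rows and
    # columns, and the two "cross" entries of the 2x2 sub-array must be '*'.
    for cells in pos.values():
        for i in range(len(cells)):
            for j in range(i + 1, len(cells)):
                (f1, k1), (f2, k2) = cells[i], cells[j]
                if f1 == f2 or k1 == k2:
                    return False
                if A[f1][k2] != '*' or A[f2][k1] != '*':
                    return False
    return True

# A small valid PDA (K = 3, F = 3, Z = 1, S = 3), corresponding to the
# MN scheme with 3 users and normalized cache size 1/3:
A = [['*', 1, 2],
     [1, '*', 3],
     [2, 3, '*']]
```

Running `is_pda(A)` returns True; violating any condition, for instance placing two equal integers in the same row, makes the check fail.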

Every PDA corresponds to a coded caching scheme for a dedicated cache network with parameters and , as stated in Lemma 1.

Lemma 1.

([16]) For a given PDA , a coded caching scheme can be obtained with subpacketization level and using Algorithm 1. For any distinct demand vector , the demands of all the users are met with a rate, .

1:procedure Placement(A; W_1, …, W_N)
2:     Split each file W_n, n ∈ [N], into F packets: W_n = {W_{n,f} : f ∈ [F]}
3:     for k ∈ [K] do
4:          Z_k ← {W_{n,f} : a_{f,k} = *, n ∈ [N]}
5:     end for
6:end procedure
7:procedure Delivery(A; d)
8:     for s ∈ [S] do
9:         Server sends ⊕_{a_{f,k} = s} W_{d_k, f}
10:     end for
11:end procedure
Algorithm 1 Coded caching scheme based on PDA [16]

In a PDA, the rows represent packets and the columns represent users. If the entry in row f and column k is '*', then user k has access to the f-th packet of all the files. The contents placed in the k-th user's cache are denoted by Z_k in Algorithm 1. If the entry is an integer, then user k does not have access to the f-th packet of any of the files. Condition C1 guarantees that all users have access to some packets of all the files. According to the delivery procedure in Algorithm 1, the server sends a linear combination of the requested packets indicated by each integer in the PDA. Therefore, condition C2 implies that the number of messages transmitted by the server is exactly S, and the rate achieved is S/F. Condition C3 ensures decodability.
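To make the roles of the three conditions concrete, here is a toy end-to-end simulation (our illustrative sketch, not the authors' code) of the PDA-based scheme: placement from the '*' entries, delivery as one XOR per distinct integer, and decoding at each user. Packets are small integers XORed bitwise; all names are hypothetical.

```python
# Toy simulation (our sketch) of the PDA-based scheme of Algorithm 1.
# Packets are small integers; XOR plays the role of field addition.

A = [['*', 1, 2],
     [1, '*', 3],
     [2, 3, '*']]              # a (3, 3, 1, 3) PDA: K = 3 users, F = 3
F, K, N = len(A), len(A[0]), 3
files = [[10 * n + f for f in range(F)] for n in range(N)]  # toy packets

# Placement: user k caches packet f of every file iff A[f][k] == '*'
cache = [{(n, f) for n in range(N) for f in range(F) if A[f][k] == '*'}
         for k in range(K)]

d = [0, 1, 2]                  # demand vector (all demands distinct)

# Delivery: one XOR per distinct integer s, over all (f, k) with A[f][k] = s
tx = {}
for f in range(F):
    for k in range(K):
        if A[f][k] != '*':
            tx[A[f][k]] = tx.get(A[f][k], 0) ^ files[d[k]][f]

def decode(k, f):
    # User k wants packet f; it XORs out the interfering packets, all of
    # which it caches by condition C3 (their rows carry '*' in column k).
    s, val = A[f][k], tx[A[f][k]]
    for f2 in range(F):
        for k2 in range(K):
            if A[f2][k2] == s and (f2, k2) != (f, k):
                val ^= files[d[k2]][f2]
    return val

recovered = [[files[d[k]][f] if A[f][k] == '*' else decode(k, f)
              for f in range(F)] for k in range(K)]
```

Every user recovers all F packets of its demanded file from its cached packets and the S = 3 transmissions, matching the rate S/F = 1.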

II-B Secret Sharing Schemes

The secretive coded caching schemes proposed so far in the literature rely on non-perfect secret sharing schemes, and we utilize the same in our scheme. The primary idea behind a non-perfect secret sharing scheme is to encode the secret in such a way that accessing a subset of the shares does not reveal any information about the secret, while accessing all the shares enables its complete recovery. The formal definition of a non-perfect secret sharing scheme is given below.

Definition 2.

([14]) For a secret with size bits and , an non-perfect secret sharing scheme generates equal-sized shares such that accessing any shares does not reveal any information about the secret, which can be completely reconstructed from all the shares. That is,

(1a)
(1b)

In a non-perfect secret sharing scheme, the size of each share should be at least bits [7]. For large enough , there exist non-perfect secret sharing schemes in which the size of each share equals bits.
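For intuition, a minimal non-perfect secret sharing scheme can be built from XOR alone. The sketch below (our example, not the paper's MDS-based construction) implements a (2, 3) scheme over bytes: any two shares are jointly independent of the secret, while all three recover it, and each share has the same size as the secret, matching the bound with n - m = 1.

```python
# A minimal (m, n) = (2, 3) non-perfect secret sharing scheme over bytes,
# built from XOR alone (our illustration; the paper uses an MDS-based one).
import secrets

def share(secret: bytes):
    # Two uniform random pads; the third share mixes both pads with the
    # secret, so any two shares together are independent of the secret.
    r1 = secrets.token_bytes(len(secret))
    r2 = secrets.token_bytes(len(secret))
    s3 = bytes(a ^ b ^ c for a, b, c in zip(r1, r2, secret))
    return [r1, r2, s3]

def reconstruct(shares):
    # All three shares together cancel the pads and reveal the secret.
    s1, s2, s3 = shares
    return bytes(a ^ b ^ c for a, b, c in zip(s1, s2, s3))
```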

III Problem Setup

We consider a shared cache network as illustrated in Fig. 1. There is a central server with a library of independent files , each of size bits and uniformly distributed over . The server is connected to users through an error-free broadcast link, and there are helper caches, each of size equal to files. Each user is connected to one helper cache, and there is no limit on the number of users served by each helper cache. Further, each user has a dedicated cache of size files. The network operates in four phases, as in [10]:

  1. Helper Cache Placement Phase: Let the contents stored in the helper cache be denoted as , where . The server fills each of the helper caches with functions of the library files and some randomness of appropriate size, such that

    (2)

    Equation (2) implies that no user can retrieve any information about any of the files from the cache contents it gets access to. The placement is carried out without knowledge of the users' future demands or of their association to the caches, and it satisfies the memory constraint at each helper cache. Let denote the contents stored in all the helper caches.

  2. User-to-cache Association Phase: In this phase, each user gets connected to one of the helper caches, and the set of users assigned to cache is denoted as . The overall user-to-cache association is represented as . These disjoint sets together form a partition of the set of users, and this association of users to helper caches is independent of the cached contents and the subsequent demands. For any user-to-cache association , the association profile describes the number of users accessing each cache. Therefore, , where and . Without loss of generality, assume that and that each is an ordered set. Each user in is indexed as . Several user-to-cache associations result in the same ; therefore, each represents a class of . Let the helper cache accessed by any user be denoted as . For any two users and , if , then users and access the same cache.

  3. User Cache Placement Phase: Once is known to the server, there is an additional phase in which the server fills each user's dedicated cache with random keys satisfying the memory constraint. The contents stored in the user's cache are denoted as , and denotes the set of all users' dedicated cache contents. A user having access to and should not get any information about . That is,

    (3)

  4. Delivery Phase: In this phase, each user demands one of the files. The indices of the demanded files are denoted by random variables. Let be a random variable denoting the user's demand. Then, is a set of independent random variables, each uniformly distributed over the set . Let be a realization of . Upon receiving the demand vector , the server makes a transmission of size bits over the shared link to the users, where is a function of the association profile , , and . Each user must be able to decode its demanded file using the transmission and its available cache contents, and should not obtain any information about the remaining files. That is,

    (4)
    (5)

For a given association profile , the worst-case rate corresponds to . We aim to minimize the worst-case rate under the decodability and secrecy conditions mentioned in (4) and (5), respectively.

Definition 3.

For the above shared cache setting, a memory-rate pair is said to be secretively achievable if there exists a scheme for the memory point that satisfies the decodability condition in (4) and the secrecy condition in (5), with a rate less than or equal to for every possible realization of . The optimal rate-memory trade-off under the secrecy condition is defined as

IV Main Results

Before presenting the main results, we first discuss the relevance of the dedicated user cache in our setting and show that the user cache must have a size of at least one file to ensure secrecy in any achievable coded caching scheme for a shared cache system. In a shared cache network, several users share the same cache contents; hence, multicasting opportunities cannot be created amongst the users accessing the same helper cache. Consider a user connected to the helper cache . The transmissions that are useful for this user can be decoded by the remaining users in . Therefore, to ensure secrecy of the file content that this user has requested against the other users in , the transmissions need to be encrypted using one-time pads that are known only to this user and unknown to the other users in . To store these random keys, each user needs a dedicated memory unit in addition to the helper cache that it is accessing. As mentioned earlier, each user cache has the capacity to store files, and the condition needs to be satisfied to achieve a secretive coded caching scheme for shared caches. The formal proof is given below. Consider a cache to which more than one user is connected, say . Let be a demand vector in which only the users in demand a file, and let be the corresponding transmission made by the server. Choose a user . Then,

(6a)
(6b)
(6c)
(6d)

where (6a) follows from (4), (6b) and (6c) follow from the chain rule of mutual information, and (6d) follows from (5). Thus, we obtain . It is sufficient to consider as unity, since users' individual caches are used only for storing the random keys that encrypt those transmissions in which the user is involved. Therefore, in our further discussion, we fix , as in [10], and the shared caching problem described in Section III is henceforth referred to as the shared caching problem. The following theorem presents a secretive coded caching scheme for shared caches obtained using PDAs.

Theorem 1.

For a given shared caching problem with an association profile , a secretive coded caching scheme with sub-packetization level can be derived from a PDA satisfying . The secretively achievable worst-case rate is obtained as

(7)

where, , .

The scheme that achieves the performance in (7) is presented in Section V. Note that the rate varies with the association profile for a given shared caching problem. For a uniform profile, the rate is minimum, and as the profile becomes more skewed, the rate increases. This follows from the description of the scheme.

Corollary 1.

For a uniform association profile, , the worst-case rate becomes

(8)
Proof.

When the profile is uniform, , . Thus, (7) reduces to the expression in (8). ∎

The following theorem provides an information-theoretic lower bound on the rate achievable by any secretive coded caching scheme for shared cache networks.

Theorem 2.

For any , and , the achievable secretive rate for a shared cache system is lower bounded by

(9)

The proof is given in Section VI. The lower bound expression in (9) has a parameter , which indicates the number of users under consideration. The term corresponds to the cache to which the user is connected. In our setting, is fixed as unity, and we define as .

Corollary 2.

For any , and , the achievable secretive rate is lower bounded by

(10)
Proof.

The lower bound in (10) follows directly from (9) after letting . ∎

When , the server generates a set of independent random keys , each uniformly distributed over . Then in the user cache placement phase, the key is placed in the user’s cache. That is, . In the delivery phase, the server transmits , to satisfy the demands of the users. Thus, we obtain . It is straightforward to see that the conditions in (4) and (5) are satisfied by the above transmissions. Therefore, for . The following theorem demonstrates the order-optimality of the obtained scheme.
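The corner case just described admits a very short implementation: each user's cache holds only a one-time pad, and the server unicasts each demanded file masked with the corresponding pad, so no user learns anything about an undemanded file. A hedged Python sketch, with function names of our own choosing:

```python
# Sketch (names ours) of the zero-helper-memory corner case: each user
# caches only a one-time pad, and the server sends each demanded file
# XORed with that user's pad.
import secrets

def place_and_deliver(files, demands):
    # One fresh pad per user (its dedicated cache content), one masked
    # transmission per user.
    pads = [secrets.token_bytes(len(files[0])) for _ in demands]
    txs = [bytes(a ^ b for a, b in zip(files[dk], pads[k]))
           for k, dk in enumerate(demands)]
    return pads, txs

def decode(pad, tx):
    # XOR with the cached pad recovers the demanded file.
    return bytes(a ^ b for a, b in zip(tx, pad))
```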

Theorem 3.

For and , if , the rate achieved by the secretive coded caching scheme obtained from PDAs is within a multiplicative factor of the optimal rate, where the factor depends only on the system parameters. That is,

(11)

The proof is given in Section VI.

V Secretive Coded Caching Scheme for Shared Caches using PDAs

In this section, we present a procedure to obtain secretive coded caching schemes for shared caches using PDAs. Consider the shared cache network shown in Fig. 1. For the given shared caching problem, choose a PDA such that . The four phases involved are described below.

V-a Helper Cache Placement Phase

The server first splits each file in into non-overlapping subfiles, each of size bits. Then, each file is encoded using a non-perfect secret sharing scheme. The shares of the file are denoted by , where and , . Let denote the set of shares of all the files. In the PDA , the rows represent the shares and the columns represent the helper caches. The placement of the shares in the helper caches is defined by the symbol '*' in the corresponding column. That is,

(12)

By condition C1 of Definition 1, each helper cache stores some shares of all the files such that the memory constraint is satisfied.

V-B User-to-cache Association Phase

In this phase, each of the users gets connected to one of the helper caches. Once the user-to-cache association and the profile are known, construct an array of size from as described in lines - of Algorithm 2. The array is a generalized placement delivery array, defined in [18]. In , the numerical entries are ordered pairs, which come from a subset of , where . Each column in corresponds to a user, and the symbol '*' represents the shares that are available to that user. Thus, each user gets access to some shares of all the files, but no information is gained about any of the files from these shares. This follows from the secret sharing scheme that we employ.

V-C User Cache Placement Phase

To ensure the secrecy constraint while retrieving the demanded file, some keys need to be stored privately in each user's cache, which are essential for encrypting the transmissions. Since each user needs to decode the remaining shares of its demanded file, it has to store independently and uniformly generated random keys of size . Thus, bits, which is in accordance with the memory constraint assumed for the user cache. The key used to encrypt a particular transmission is stored in the caches of all the users involved in that transmission. Hence, the number of distinct keys that the server generates is the same as the number of distinct ordered pairs present in , and each key is indexed by an ordered pair (line of Algorithm 2). As mentioned above, each user stores the keys indexed by the ordered pairs present in its column (described in lines - of Algorithm 2).

1:procedure Construction of ()
2:     Obtain , .
3:     Construct ,
4:     
5:     for  do
6:         if  then
7:              for  do
8:                  
9:                  for  do
10:                       if   then
11:                           .
12:                           .
13:                       end if
14:                  end for
15:                  
16:              end for
17:         end if
18:     end for
19:end procedure
20:procedure User Cache Placement()
21:     For every distinct in , server generates an independent random key , uniformly distributed over .
22:     for  do
23:         for  do
24:              
25:              
26:         end for
27:     end for
28:end procedure
Algorithm 2 User Cache Placement and Construction of from the PDA

Input: , ,

1:for  do
2:     for  do
3:         if  exists then
4:              Server sends
5:         end if
6:     end for
7:end for
Algorithm 3 Delivery Procedure

V-D Delivery Phase

In the delivery phase, the users inform their demands to the server. Let be a demand vector, and consider the worst-case scenario where all the demands are distinct. The server transmits a message corresponding to every distinct ordered pair in . Let denote the number of times occurs in . Assume . Then, the sub-array formed by the rows and the columns is equivalent to a scaled identity matrix up to row or column permutations [18], as shown in (13).

(13)

Each user has the key and access to the shares wanted by the other users involved in the same transmission. Hence, the server transmits a message of the form for every distinct in . The delivery procedure is described in Algorithm 3.

V-E Decoding

The decoding of the shares from the transmissions follows directly from (13). Each user has access to shares and obtains the remaining shares of its desired file from the transmissions, by using the helper cache contents and the keys that are stored privately in its cache. Hence, each user can retrieve its demanded file from its shares as mentioned in Definition 2.

V-F Proof of Secrecy

Consider a user and its accessible cache contents and . According to the placement procedures described in Sections V-A and V-C, the helper cache contents consist of some shares of all the files, and the user cache is constituted by independently and uniformly generated random keys which are used for one-time padding. By virtue of the secret sharing scheme that is used, the shares of a file do not reveal any information about it, and the shares of one file are independent of those of the others. Therefore, having access to all the shares of one file does not convey any information about the other files. Thus, we obtain:

(14)
(15)

V-G Calculation of Rate

Now, we calculate the required transmission rate in the worst-case scenario. According to the delivery procedure summarized in Algorithm 3, there is a transmission corresponding to every distinct in , and each transmission is of size bits. For each , the value that takes differs, as it depends on the association profile . Assume appears times in the PDA and let be the set of column indices in which occurs. Then, the number of transmissions in which is involved equals the number of users connected to the most populated cache amongst the above set of helper caches. The most populated cache in the above set corresponds to the minimum of , as the helper caches are labelled in non-increasing order of the number of users accessing them. Thus, we obtain the normalized rate as

where is defined as the minimum column index in which appears in the PDA . This concludes the proof of Theorem 1.
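The rate count above can be sketched programmatically. Assuming a PDA given as a list of rows with '*' entries and integers, and a profile list `L` in non-increasing order (all names here are ours), the following sums the occupancy of the most populated cache per distinct integer and normalizes by the subpacketization:

```python
# Illustrative computation (our names) of the rate expression derived
# above: each distinct integer s contributes L[min column index of s]
# transmissions, with caches labelled in non-increasing occupancy order.
from fractions import Fraction

def secretive_rate(A, L):
    F = len(A)
    min_col = {}
    for row in A:
        for k, s in enumerate(row):
            if s != '*':
                min_col[s] = min(min_col.get(s, k), k)
    return Fraction(sum(L[c] for c in min_col.values()), F)

A = [['*', 1, 2],
     [1, '*', 3],
     [2, 3, '*']]
L = [2, 1, 1]   # 4 users: two on the first cache, one on each of the others
```

With this toy PDA, the skewed profile [2, 1, 1] gives rate 5/3 while the uniform profile [1, 1, 1] gives rate 1, illustrating how skew increases the rate.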

V-H Example: shared caching problem with .

Consider a setting with a server having access to files, each of size bits. The server is connected to users, each possessing a cache of size file. There are helper caches, each of size equal to files. For this setting, we start with the PDA given in (16), which satisfies the condition .

(16)

Each file is split into subfiles , each of size bits. Then, each file is encoded using a non-perfect secret sharing scheme. To generate the shares of a file, first form a column vector comprised of the subfiles and independent random keys , each uniformly distributed over . Then, pre-multiply it with the parity check matrix of an MDS code over , where is sufficiently large so that the MDS code exists. In this example, we consider a Cauchy matrix over . Thus, the four shares of the file are obtained as follows:
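The share-generation step can be sketched as follows. This is our illustrative reconstruction over the toy field GF(11) (the paper works over a sufficiently large field): the subfiles and fresh random keys are stacked into a vector and multiplied by a Cauchy matrix, whose MDS property underlies the secrecy argument; reconstruction from all shares is by Gaussian elimination. All names and field sizes here are our choices.

```python
# Our sketch of share generation over GF(11): stack subfiles and random
# keys, multiply by a Cauchy matrix, and invert to recover.
import random

p = 11

def cauchy(xs, ys):
    # C[i][j] = 1 / (x_i - y_j) mod p; with distinct x's, distinct y's and
    # x_i != y_j, every square submatrix is invertible (the MDS property).
    return [[pow((x - y) % p, p - 2, p) for y in ys] for x in xs]

def mat_vec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % p for row in M]

def solve(M, b):
    # Gauss-Jordan elimination mod p: recovers the input vector from shares.
    n = len(M)
    aug = [row[:] + [b[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        inv = pow(aug[col][col], p - 2, p)
        aug[col] = [a * inv % p for a in aug[col]]
        for r in range(n):
            if r != col and aug[r][col]:
                f = aug[r][col]
                aug[r] = [(a - f * c) % p for a, c in zip(aug[r], aug[col])]
    return [row[n] for row in aug]

subfiles = [3, 7]                                # two subfiles of one file
keys = [random.randrange(p) for _ in range(2)]   # two uniform random keys
C = cauchy([1, 2, 3, 4], [5, 6, 7, 8])           # 4x4 Cauchy matrix, GF(11)
shares = mat_vec(C, subfiles + keys)             # the four shares
```

Inverting the system from all four shares returns the subfiles and keys exactly; the uniform keys are what keep any proper subset of shares uninformative in the MDS-based construction.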

The contents stored in each helper cache are:

Let the user-to-cache association be with . The generalized PDA is obtained as given in (17). Each user has access to shares of each file, but the helper cache contents do not reveal any information about the files due to the non-perfect secret sharing encoding.

(17)
Fig. 2: Rate-memory trade-off of a shared cache network with , , ; panels (a) and (b) correspond to two different association profiles.

Corresponding to each distinct ordered pair in , the server generates an independent random key, uniformly distributed over and indexed by that ordered pair. The keys stored in each user's cache are:

In the delivery phase, each user requests a file from the server. Then, the messages transmitted are as follows: