I Introduction
The advent of smart devices, accompanied by the rise of on-demand streaming services and content-based applications, has led to a dramatic increase in wireless data traffic over the last two decades. Coded caching has been proposed as a promising technique to reduce the traffic congestion experienced during peak hours by exploiting the memory units distributed across the network. The idea of coded caching was first introduced in the work of Maddah-Ali and Niesen [1], which emphasized the benefits and need of the joint design of storage and delivery policies in content delivery networks. In [1], the setting considered is that of a single server having access to a library of N equal-length files and connected to K users through an error-free shared link. Each user is equipped with a dedicated cache of size M files. The caches are populated with portions of the file contents during off-peak times without knowing the future demands of the users; this is called the placement phase. The delivery phase happens at peak times, during which the users inform their demands to the server, and the server aims to satisfy the users' demands with minimum transmission load over the shared link. Each user recovers its demanded file using the received messages and its cache contents. The sum of the transmitted messages' lengths normalized with respect to the file length is defined as the rate of the coded caching scheme. The objective of any coded caching problem is to jointly design the placement and delivery phases such that the rate required to satisfy the users' demands is minimum. The coded caching scheme in [1], which is referred to as the MN scheme henceforth, was shown to be optimal under the constraint of uncoded placement when N ≥ K in [2], [3]. In [2], the MN scheme was modified to obtain another scheme that is optimal for the case N < K as well.
The coded caching approach has been extended to a variety of settings that include decentralized caching [4], multi-access networks [5], shared cache networks [6] and many more. In [7], for the same setting considered in [1], an additional constraint was incorporated, which ensures that no user can obtain any information about the database files other than its demanded file, either from the cache contents or from the server transmissions. This setup is referred to as private or secretive coded caching in the literature [7], [8]. We adopt the latter terminology in this work. In [7], an achievable secretive coded caching scheme was proposed for both centralized and decentralized settings. The scheme in [7] also guarantees secure delivery against external eavesdroppers. The secure delivery condition was addressed separately in [9]. The secretive coded caching problem was then extended to other settings that include shared cache networks [10], device-to-device networks [11], combination networks [12] and collusion among users in dedicated cache networks [13]. We consider the problem of secretive coded caching with shared caches introduced in [10]. The shared cache network introduced in [6] consists of a server with N equal-length files connected to K users with the assistance of Λ helper caches, as shown in Fig. 1. Each user has access to only one cache, and each cache can serve an arbitrary number of users. But to ensure the secrecy condition in the shared cache network, each user is required to have a dedicated cache of size at least one unit of file, and this is the setting considered in [10]. The centralized secretive coded caching scheme presented in [7] uses the idea of secret sharing [14], [15] and is derived from the MN scheme. Hence, the scheme requires a subpacketization level (the number of smaller parts into which a file is split) that grows exponentially with the number of users K.
Therefore, the scheme requires splitting finite-length files into an exponential number of packets, resulting in a scenario where the overhead bits involved in each transmission outnumber the data bits present in it, which limits the practical applicability of the scheme. In [16], Yan et al. showed that combinatorial structures called Placement Delivery Arrays (PDAs) can be used to design coded caching schemes with low subpacketization levels for dedicated cache networks. For the dedicated cache setup, secretive coded caching schemes with reduced subpacketization levels were obtained from PDAs in [17]. The exponentially growing subpacketization level with respect to the number of caches is pervasive in shared cache systems as well, as is evident from the secretive coded caching scheme proposed for shared cache networks in [10], where the required subpacketization level grows exponentially with the number of helper caches. Even for moderate network sizes, the subpacketization level required by the scheme in [10] turns out to be high. Since shared cache networks capture more realistic settings, and maintaining the confidentiality of the data is essential in several applications such as paid subscription services, it is necessary to look for practically realizable secretive coded caching schemes for shared caches. In [18], PDAs were used to derive non-secretive coded caching schemes for shared cache networks having lower subpacketization levels than the optimal scheme in [6]. In this work, we identify new secretive coded caching schemes for shared caches using PDAs, having lower subpacketization requirements than the scheme in [10]. Further, we characterize the performance of our scheme by deriving a cut-set based lower bound for the shared cache setting after incorporating the secrecy condition.
I-A Contributions
In this work, we study the secretive coded caching problem in shared cache networks. Our contributions are summarized below:

A lower bound on the optimal rate-memory tradeoff of secretive coded caching with shared caches is derived using cut-set based arguments, and the multiplicative gap between the achievable rate and the lower bound is characterized (Section VI).
The rest of the paper is organized as follows. In Section II, we briefly discuss some of the topics that are relevant for our scheme description. We describe the problem setup and present the main results in Sections III and IV, respectively. In Section V, we describe the proposed scheme, and in Section VI, the lower bound and the order optimality of the scheme are presented. Section VII summarizes our results.
Notations: For a positive integer n, [n] denotes the set {1, 2, ..., n}. For any set A, |A| denotes the cardinality of A. Binomial coefficients are denoted by (n choose k) = n!/(k!(n-k)!), which is taken to be zero for n < k. Bold uppercase and lowercase letters denote matrices and vectors, respectively. For a vector x, x_A denotes the vector consisting of the elements of x at the positions specified by the elements of the set A. The j-th column of an m × n matrix A is denoted by A_j, j ∈ [n]. An identity matrix of size n is denoted as I_n. The finite field with q elements is denoted by F_q.
II Preliminaries
In this section, we briefly review PDAs and secret sharing schemes which are required for describing our scheme.
II-A Placement Delivery Array (PDA)
Definition 1.
([16]) For positive integers K, F, Z and S, an F × K array P = [p_{j,k}], j ∈ [F], k ∈ [K], composed of a specific symbol ★ and positive integers from [S], is called a (K, F, Z, S) placement delivery array (PDA) if it satisfies the following three conditions:
C1. The symbol ★ appears Z times in each column.
C2. Each integer in [S] occurs at least once in the array.
C3. For any two distinct entries p_{j1,k1} and p_{j2,k2}, p_{j1,k1} = p_{j2,k2} = s is an integer only if
(a) j1 ≠ j2, k1 ≠ k2, i.e., they lie in distinct rows and distinct columns, and
(b) p_{j1,k2} = p_{j2,k1} = ★, i.e., the corresponding 2 × 2 subarray formed by the rows j1, j2 and the columns k1, k2 must be of the following form:
[ s ★ ; ★ s ] or [ ★ s ; s ★ ]
Every (K, F, Z, S) PDA corresponds to a coded caching scheme for a dedicated cache network with K users and cache memory ratio M/N = Z/F, as described in Lemma 1.
Lemma 1.
([16]) For a given (K, F, Z, S) PDA P, a coded caching scheme can be obtained with subpacketization level F and M/N = Z/F using Algorithm 1. For any demand vector d, the demands of all the users are met with a rate R = S/F.
In a (K, F, Z, S) PDA P, the rows represent packets and the columns represent users. For any j ∈ [F] and k ∈ [K], if p_{j,k} = ★, then the user k has access to the j-th packet of all the files. The contents placed in the k-th user's cache are denoted by Z_k in Algorithm 1. If p_{j,k} = s is an integer, then the user k does not have access to the j-th packet of any of the files. Condition C1 guarantees that all users have access to the same number Z of packets of every file. According to the delivery procedure in Algorithm 1, the server sends a linear combination of the requested packets indicated by each integer in the PDA. Therefore, condition C2 implies that the number of messages transmitted by the server is exactly S, and the rate achieved is S/F. Condition C3 ensures decodability.
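To make conditions C1-C3 concrete, the following Python sketch checks them for a small array. The 3 × 3 example is the familiar (3, 3, 1, 3) PDA corresponding to the MN scheme for three users; the function and variable names are ours, not from [16].

```python
from itertools import combinations

STAR = "*"

def is_pda(P):
    """Check conditions C1-C3 of Definition 1 for an F x K array P
    whose entries are STAR or positive integers."""
    F, K = len(P), len(P[0])
    # C1: STAR appears the same number of times (Z) in every column.
    star_counts = {sum(1 for j in range(F) if P[j][k] == STAR) for k in range(K)}
    if len(star_counts) != 1:
        return False
    # C2: the integer entries form the set {1, ..., S}, each occurring at least once.
    ints = {P[j][k] for j in range(F) for k in range(K)} - {STAR}
    if ints != set(range(1, len(ints) + 1)):
        return False
    # C3: equal integers must lie in distinct rows and columns, and the
    # 2 x 2 subarray they define must have STARs on the anti-diagonal.
    cells = [(j, k) for j in range(F) for k in range(K) if P[j][k] != STAR]
    for (j1, k1), (j2, k2) in combinations(cells, 2):
        if P[j1][k1] == P[j2][k2]:
            if j1 == j2 or k1 == k2:
                return False
            if P[j1][k2] != STAR or P[j2][k1] != STAR:
                return False
    return True

# The (3, 3, 1, 3) PDA of the MN scheme for K = 3 users, M/N = 1/3.
P = [[STAR, 1, 2],
     [1, STAR, 3],
     [2, 3, STAR]]
```

For this P, user 1 caches packet 1 of every file, and the integer 1 indicates a single coded transmission serving users 1 and 2 simultaneously.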
II-B Secret Sharing Schemes
The secretive coded caching schemes proposed so far in the literature rely on non-perfect secret sharing schemes, and we utilize the same in our scheme. The primary idea behind a non-perfect secret sharing scheme is to encode the secret in such a way that accessing a subset of shares does not reveal any information about the secret, and only accessing all the shares enables complete recovery of the secret. The formal definition of a non-perfect secret sharing scheme is given below.
Definition 2.
([14]) For a secret W with size B bits and m < n, an (m, n) non-perfect secret sharing scheme generates n equal-sized shares S_1, S_2, ..., S_n such that accessing any m shares does not reveal any information about the secret W, and W can be completely reconstructed from all the n shares, i.e.,
(1a) I(W; S) = 0, ∀ S ⊆ {S_1, S_2, ..., S_n}, |S| ≤ m,
(1b) H(W | S_1, S_2, ..., S_n) = 0.
In an (m, n) non-perfect secret sharing scheme, the size of each share should be at least B/(n − m) bits [7]. For large enough B, there exist (m, n) non-perfect secret sharing schemes with the size of each share being equal to B/(n − m) bits.
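As an illustrative sketch of an (m, n) non-perfect secret sharing scheme, the code below works symbol-wise over a small prime field and multiplies the vector of n − m secret symbols and m uniform random keys by an n × n Cauchy matrix. Since every square submatrix of a Cauchy matrix is invertible, any m shares are statistically independent of the secret, while all n shares determine it. This construction is our own simplification, not the exact one used in [14] or [7].

```python
import random

P = 257  # prime field size; subfiles, keys and shares are symbols in GF(P)

def inv(a):
    # modular inverse in GF(P) via Fermat's little theorem
    return pow(a, P - 2, P)

def cauchy(n):
    # n x n Cauchy matrix over GF(P) with entries 1/(x_i + y_j); every
    # square submatrix is invertible, which yields the secrecy property.
    xs, ys = range(1, n + 1), range(n + 1, 2 * n + 1)
    return [[inv((x + y) % P) for y in ys] for x in xs]

def matvec(M, v):
    return [sum(a * b for a, b in zip(row, v)) % P for row in M]

def share(secret, m, n):
    """(m, n) non-perfect secret sharing: the secret is a list of n - m
    field symbols; append m uniform keys and multiply by the Cauchy matrix."""
    assert len(secret) == n - m
    keys = [random.randrange(P) for _ in range(m)]
    return matvec(cauchy(n), secret + keys)

def reconstruct(shares, m, n):
    """Recover the secret from all n shares by solving the linear system
    (Gauss-Jordan elimination over GF(P))."""
    M = [row[:] + [s] for row, s in zip(cauchy(n), shares)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col])
        M[col], M[piv] = M[piv], M[col]
        f = inv(M[col][col])
        M[col] = [a * f % P for a in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                g = M[r][col]
                M[r] = [(a - g * b) % P for a, b in zip(M[r], M[col])]
    return [M[r][n] for r in range(n - m)]
```

Each share is one field symbol per n − m secret symbols, matching the B/(n − m) share size stated above.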
III Problem Setup
We consider a shared cache network as illustrated in Fig. 1. There is a central server with a library of N independent files W_1, W_2, ..., W_N, each of size B bits and uniformly distributed over [2^B]. The server is connected to K users through an error-free broadcast link, and there are Λ helper caches, each of size equal to M files. Each user gets connected to one helper cache, and there is no limit on the number of users served by each helper cache. Further, each user has a dedicated cache of size M_u files. The network operates in four phases as in [10]:

Helper Cache Placement phase: Let the contents stored in the λ-th helper cache be denoted as Z_λ, where λ ∈ [Λ]. The server fills each of the helper caches with functions of the library files and some randomness of appropriate size, such that
(2) I(W_1, W_2, ..., W_N; Z_λ) = 0, ∀ λ ∈ [Λ].
Equation (2) implies that no user is able to retrieve any information regarding any of the files from the cache contents that it gets access to. The placement is carried out without knowing the future demands of the users and their association to the caches; it also satisfies the memory constraint at each helper cache. Let Z = {Z_1, Z_2, ..., Z_Λ} denote the contents stored in all the helper caches.

User-to-cache association phase: In this phase, each user gets connected to one of the helper caches, and the set of users assigned to cache λ is denoted as U_λ. The overall user-to-cache association is represented as U = {U_1, U_2, ..., U_Λ}. All these disjoint sets together form a partition of the set of users, and this association of users to helper caches is independent of the cached contents and the subsequent demands. For any user-to-cache association U, the association profile L = (L_1, L_2, ..., L_Λ) describes the number of users accessing each cache. Therefore, L_λ = |U_λ|, where λ ∈ [Λ] and Σ_{λ ∈ [Λ]} L_λ = K. Without loss of generality, assume L_1 ≥ L_2 ≥ ... ≥ L_Λ and each U_λ to be an ordered set. The j-th user in U_λ is indexed as λ_j. Several user-to-cache associations result in the same L. Therefore, each L represents a class of associations U. Let the helper cache accessed by any user k be denoted as λ(k). For any two users k and k', if λ(k) = λ(k'), then the users k and k' are accessing the same cache.

User Cache Placement phase: Once U is known to the server, there is an additional phase where the server fills each user's dedicated cache with random keys satisfying the memory constraint. The contents stored in the k-th user's cache are denoted as Z_k^u, and Z^u = {Z_1^u, Z_2^u, ..., Z_K^u} denotes the set of all users' dedicated cache contents. User k, having access to Z_{λ(k)} and Z_k^u, should not get any information about the files. That is,
(3) I(W_1, W_2, ..., W_N; Z_{λ(k)}, Z_k^u) = 0, ∀ k ∈ [K].
Delivery Phase: In this phase, each user demands one of the N files. The indices of the demanded files are denoted by random variables. Let D_k be a random variable denoting the k-th user's demand. Then, D = (D_1, D_2, ..., D_K) is a set of independent random variables, each uniformly distributed over the set [N]. Let d = (d_1, d_2, ..., d_K) be a realization of D. Upon receiving the demand vector d, the server makes a transmission X of size R·B bits over the shared link to the users, where X is a function of the association profile L, the demands d, and the cache contents Z and Z^u. Each user k must be able to decode its demanded file W_{d_k} using the transmission and its available cache contents Z_{λ(k)} and Z_k^u, and should not obtain any information about the remaining files. That is,
(4) H(W_{d_k} | X, Z_{λ(k)}, Z_k^u, d) = 0, ∀ k ∈ [K],
(5) I(W_{[N] \ {d_k}}; X, Z_{λ(k)}, Z_k^u) = 0, ∀ k ∈ [K].
For a given association profile L, the worst-case rate corresponds to the demand vectors in which all the users' demands are distinct. We aim to minimize the worst-case rate under the decodability and secrecy conditions mentioned in (4) and (5), respectively.
Definition 3.
For the above shared cache setting, a memory-rate pair (M, R) is said to be secretively achievable if there exists a scheme for the memory point M that satisfies the decodability condition in (4) and the secrecy condition in (5) with a rate less than or equal to R for every possible realization of the demands. The optimal rate-memory tradeoff under the secrecy condition is defined as
R*(M) = inf{R : (M, R) is secretively achievable}.
IV Main Results
Before presenting the main results, we first discuss the relevance of the dedicated user cache in our setting and show that the user cache must have a size of at least one file to ensure secrecy in any achievable coded caching scheme for a shared cache system. In a shared cache network, several users share the same cache contents; hence multicasting opportunities cannot be created amongst the users accessing the same helper cache. Consider a user k connected to the helper cache λ(k). The transmissions that are useful for the user k can be decoded by the remaining users in U_{λ(k)}. Therefore, to ensure secrecy for the file content that user k has requested against the users in U_{λ(k)} \ {k}, the transmissions need to be encrypted using one-time pads that are known only to user k and unknown to the other users in U_{λ(k)}. To store these random keys, each user needs a dedicated memory unit in addition to the helper cache that it is accessing. As mentioned earlier, each user cache has a capacity to store M_u files, and the condition M_u ≥ 1 needs to be satisfied to achieve a secretive coded caching scheme for shared caches. The formal proof of this is given below. Consider a cache λ which has more than one user connected to it, i.e., |U_λ| ≥ 2. Let d be a demand vector where only the users in U_λ demand a file, each a distinct one, and let X be the corresponding transmission made by the server. Choose a user k ∈ U_λ and another user k' ∈ U_λ \ {k}. Then,
(6a) B = H(W_{d_k}) = I(W_{d_k}; X, Z_λ, Z_k^u)
(6b) = I(W_{d_k}; X, Z_λ) + I(W_{d_k}; Z_k^u | X, Z_λ)
(6c) ≤ I(W_{d_k}; X, Z_λ, Z_{k'}^u) + H(Z_k^u)
(6d) = H(Z_k^u),
where (6a) follows from (4), (6b) and (6c) follow from the chain rule and standard properties of mutual information, and (6d) follows from (5) applied to user k', whose demand differs from d_k. Thus, we obtain H(Z_k^u) ≥ B, i.e., M_u ≥ 1. It is sufficient to consider M_u as unity, as the users' individual caches are used only for storing the random keys that encrypt those transmissions in which the user is involved. Therefore, in our further discussion, we fix M_u = 1, as taken in [10], and the shared caching problem described in Section III is referred to as the (N, K, Λ, M) shared caching problem henceforth. The following theorem presents a secretive coded caching scheme for shared caches obtained using PDAs.
Theorem 1.
For a given (N, K, Λ, M) shared caching problem with an association profile L, a secretive coded caching scheme with subpacketization level F − Z can be derived from a (Λ, F, Z, S) PDA satisfying Z/(F − Z) = M/N. The secretively achievable worst-case rate is obtained as
(7) R = (1/(F − Z)) Σ_{s ∈ [S]} L_{φ(s)},
where φ(s) = min{λ ∈ [Λ] : s appears in the λ-th column of P}, s ∈ [S].
The scheme that achieves the performance in (7) is presented in Section V. Note that the rate varies according to the association profile L for a given (N, K, Λ, M) shared caching problem. For a uniform profile, the rate R is minimum, and as the profile becomes more and more skewed, the rate increases. This follows from the description of the scheme.
Corollary 1.
For a uniform association profile, L = (K/Λ, K/Λ, ..., K/Λ), the worst-case rate becomes
(8) R = (K S) / (Λ (F − Z)).
The following theorem provides an information-theoretic lower bound on the rate achievable by any secretive coded caching scheme for shared cache networks.
Theorem 2.
For any N, K, Λ and M, the achievable secretive rate for a shared cache system is lower bounded by
(9)
The proof is given in Section VI. The lower bound expression in (9) has a parameter l, which indicates the number of users under consideration, together with a term corresponding to the cache to which each such user is connected. In our setting, M_u is fixed as unity, and R*(M) is defined accordingly.
Corollary 2.
For any N, K, Λ and M, the achievable secretive rate is lower bounded by
(10) 
Proof.
The lower bound in (10) follows directly from (9) after specializing the parameter l. ∎
When M = 0, the server generates a set of K independent random keys {T_k}_{k ∈ [K]}, each uniformly distributed over [2^B]. Then, in the user cache placement phase, the key T_k is placed in the k-th user's cache. That is, Z_k^u = T_k, ∀ k ∈ [K]. In the delivery phase, the server transmits W_{d_k} ⊕ T_k, ∀ k ∈ [K], to satisfy the demands of the users. Thus, we obtain R = K. It is straightforward to see that the conditions in (4) and (5) are satisfied by the above transmissions. Therefore, R*(M) ≤ K for M = 0. The following theorem demonstrates the order-optimality of the obtained scheme.
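The M = 0 scheme just described is plain per-user one-time padding; a minimal sketch follows, with file contents modelled as B-bit integers (function names are ours):

```python
import secrets

B = 128  # file size in bits

def serve(files, demands):
    """M = 0 secretive scheme: place an independent uniform key in each
    user's dedicated cache, then one-time-pad each demanded file with
    that user's key.  Returns (per-user keys, broadcast transmissions)."""
    keys = [secrets.randbits(B) for _ in demands]       # user cache placement
    x = [files[d] ^ t for d, t in zip(demands, keys)]   # delivery: W_{d_k} XOR T_k
    return keys, x

def decode(k, keys, x):
    # user k removes its own pad; every other transmission stays uniform to it
    return x[k] ^ keys[k]
```

One transmission of one file length per user gives exactly the rate R = K claimed above.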
Theorem 3.
For any N, K, Λ and M satisfying certain conditions, the rate achieved by the secretive coded caching scheme obtained from PDAs is within a multiplicative factor of the optimal rate, where the factor depends only on the system parameters, i.e.,
(11) 
The proof is given in Section VI.
V Secretive Coded Caching Scheme for Shared Caches using PDAs
In this section, we present a procedure to obtain secretive coded caching schemes for shared caches using PDAs. Consider the shared cache network shown in Fig. 1. For the given (N, K, Λ, M) shared caching problem, choose a (Λ, F, Z, S) PDA P = [p_{j,λ}] such that Z/(F − Z) = M/N. The four phases involved are described below.
V-A Helper Cache Placement Phase
The server first splits each file W_n, n ∈ [N], into F − Z non-overlapping subfiles such that each subfile is of size B/(F − Z) bits. Then, each file is encoded using a (Z, F) non-perfect secret sharing scheme. The shares of the file W_n are denoted by S_{n,j}, where n ∈ [N] and j ∈ [F], each share being of size B/(F − Z) bits. Let S denote the set of shares of all the files. In the PDA P, the rows represent the shares and the columns represent the helper caches. The placement of the shares in the helper caches is defined by the symbol '★' in the corresponding column. That is,
(12) Z_λ = {S_{n,j} : p_{j,λ} = ★, n ∈ [N]}, ∀ λ ∈ [Λ].
By Condition C1 in Definition 1, each helper cache stores Z shares of all the files, so that the memory constraint N·Z·B/(F − Z) = M·B bits is satisfied.
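The placement rule in (12) is easy to mechanize. The sketch below lists, per helper cache, which share indices of each file it stores, using the (3, 3, 1, 3) PDA from Section II merely as a stand-in for P (the actual PDA and the values of N, Λ, F, Z depend on the given problem):

```python
STAR = "*"

def helper_cache_contents(P, N):
    """Placement rule (12): helper cache lam stores the j-th share of
    every file n exactly when P[j][lam] == STAR.  Returns, per cache,
    the set of (file index n, share index j) pairs it holds."""
    F, Lam = len(P), len(P[0])
    return [{(n, j) for n in range(N) for j in range(F) if P[j][lam] == STAR}
            for lam in range(Lam)]

# Stand-in (Lam, F, Z, S) = (3, 3, 1, 3) PDA; with N files each cache ends up
# holding Z = 1 share of every file, matching the memory constraint.
P = [[STAR, 1, 2],
     [1, STAR, 3],
     [2, 3, STAR]]
caches = helper_cache_contents(P, N=2)
```

Here every cache stores N·Z = 2 shares, i.e., Z shares of each of the N = 2 files, as Condition C1 guarantees.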
V-B User-to-cache Association Phase
In this phase, each one of the K users gets connected to one of the Λ helper caches. Once the user-to-cache association U and the profile L are known, construct an array P_U of size F × K from P as described in Algorithm 2. The array P_U is a generalized placement delivery array as defined in [18]. In P_U, the numerical entries are ordered pairs, which come from a subset of [S] × [L_1]. Each column in P_U corresponds to a user, and the symbol '★' represents the shares that are available to that user through its helper cache. Thus, each user gets access to some shares of all the files, but no information is gained about any of the files from these shares. This follows from the secret sharing scheme that we employ.
V-C User Cache Placement Phase
To ensure the secrecy constraint while retrieving the demanded file, some keys need to be stored privately in each user's cache, which are essential for encrypting the transmissions. Since each user wants to decode the remaining F − Z shares of its demanded file, it needs to store F − Z independently and uniformly generated random keys, each of size B/(F − Z) bits. Thus, the user cache stores (F − Z)·B/(F − Z) = B bits, which is in accordance with the memory constraint M_u = 1 assumed for the user cache. The key which is used to encrypt a particular transmission will be stored in the caches of all those users which are involved in that transmission. Hence, the number of distinct keys that the server generates is the same as the number of distinct ordered pairs present in P_U, and each key is indexed by an ordered pair (as described in Algorithm 2). As mentioned above, each user k stores the keys indexed by the ordered pairs present in the k-th column of P_U (as described in Algorithm 2).
V-D Delivery Phase
In the delivery phase, the users inform their demands to the server. Let d be a demand vector. Consider the worst-case scenario where all the demands are distinct. The server transmits a message corresponding to every distinct ordered pair (s, r) in P_U. Let ν(s, r) denote the number of times (s, r) occurs in P_U. Then, the subarray of P_U formed by the rows and the columns in which (s, r) occurs is equivalent to a scaled identity matrix up to row or column permutations [18], as shown in (13).
(13)
Each user involved in the transmission corresponding to (s, r) has the key K_{(s,r)} and the shares wanted by the other users involved in it. Hence, the server transmits a message of the form X_{(s,r)} = K_{(s,r)} ⊕ (XOR of the shares demanded by the involved users) for every distinct (s, r) in P_U. The delivery procedure is described in Algorithm 3.
V-E Decoding
The decoding of the shares from the transmissions follows directly from (13). Each user has access to Z shares of its desired file and obtains the remaining F − Z shares from the transmissions, by using the helper cache contents and the keys that are stored privately in its cache. Hence, each user can retrieve its demanded file from its F shares, as mentioned in Definition 2.
V-F Proof of Secrecy
Consider a user k and its accessible cache contents Z_{λ(k)} and Z_k^u. According to the placement procedures described in Section V-A and Section V-C, the helper cache contents consist of some shares of all the files, and Z_k^u is constituted by independently and uniformly generated random keys which are used for one-time padding. By virtue of the secret sharing scheme that is used, the accessible shares of a file do not reveal any information about it, and the shares of one file are independent of those of the other files. Therefore, having access to all the shares of one file does not convey any information about the other files either. Thus, we obtain:
(14)  
(15) 
V-G Calculation of Rate
Now, we calculate the required transmission rate in the worst-case scenario. According to the delivery procedure summarized in Algorithm 3, there is a transmission corresponding to every distinct ordered pair (s, r) in P_U, and each transmission is of size B/(F − Z) bits. For each s ∈ [S], the number of associated transmissions differs, as it depends on the association profile L. Assume the integer s appears g_s times in the PDA P, and let C_s be the set of column indices in which s occurs. Then, the number of transmissions in which s is involved equals the number of users connected to the most populated cache amongst the above set of helper caches. The most populated cache in the above set corresponds to the minimum index in C_s, as it is assumed that the helper caches are labelled in a non-increasing order of the number of users accessing them. Thus, we obtain the normalized rate as
R = (1/(F − Z)) Σ_{s ∈ [S]} L_{φ(s)},
where φ(s) is defined as the minimum column index in which s appears in the PDA P. This concludes the proof of Theorem 1.
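Under the notation above, the rate expression can be evaluated mechanically. The sketch below computes R for a sample PDA and a non-increasing profile; the sample values and names are ours, chosen only for illustration:

```python
def secretive_rate(P, profile, Z):
    """Worst-case rate R = (1/(F - Z)) * sum over s of L_{phi(s)}, where
    phi(s) is the first (most heavily loaded) cache column in which the
    integer s appears; `profile` must be sorted in non-increasing order."""
    F, K = len(P), len(P[0])
    ints = {e for row in P for e in row if e != "*"}
    total = 0
    for s in ints:
        # phi(s): minimum column index containing the integer s
        phi = min(k for k in range(K) if any(P[j][k] == s for j in range(F)))
        total += profile[phi]
    return total / (F - Z)

# Sample (Lam, F, Z, S) = (3, 3, 1, 3) PDA with profile L = (2, 2, 1):
P = [["*", 1, 2],
     [1, "*", 3],
     [2, 3, "*"]]
```

For this example each integer's first occurrence lands in a cache serving two users, giving R = (2 + 2 + 2)/2 = 3 file units.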
V-H Example: a shared caching problem.
Consider a setting with a server having access to N files W_1, ..., W_N, each of size B bits. The server is connected to K users, each possessing a dedicated cache of size 1 file. There are Λ helper caches, each of size equal to M files. For this setting, we start with a PDA P given in (16), which satisfies the condition Z/(F − Z) = M/N.
(16) 
Each file is split into F − Z subfiles, each of size B/(F − Z) bits. Then, each file is encoded using a (Z, F) non-perfect secret sharing scheme. To generate the shares of a file, first form a column vector comprised of its F − Z subfiles and Z independent random keys, each uniformly distributed over the corresponding field. Then, pre-multiply it with the parity check matrix of an MDS code over F_q, where q is sufficiently large such that the MDS code exists. In this example, we consider a Cauchy matrix over F_q. Thus, the four shares of the file W_n, n ∈ [N], are obtained as follows:
The contents stored in each helper cache are:
Let the user-to-cache association be U with profile L. The generalized PDA P_U is obtained as given in (17). Each user has access to Z shares of each file, but the helper cache contents do not reveal any information about the files due to the non-perfect secret sharing encoding.
(17) 
Corresponding to each distinct ordered pair in P_U, the server generates an independent random key, uniformly distributed over the corresponding field and indexed by that ordered pair. The keys stored in each user's cache are:
In the delivery phase, each user requests a file from the server. Then, the messages transmitted are as follows: