Multi-access Coded Caching Schemes From Cross Resolvable Designs

05/28/2020 · Digvijay Katyal et al. · Indian Institute of Science

We present a novel caching and coded delivery scheme for a multi-access network where multiple users can have access to the same cache (shared cache) and any cache can assist multiple users. This scheme is obtained from resolvable designs satisfying certain conditions, which we call cross resolvable designs. To be able to compare multi-access coded schemes with different numbers of users, we normalize the rate of a scheme by the number of users it serves. Based on this per-user rate, we show that our scheme performs better than the well-known Maddah-Ali-Niesen (MaN) scheme and the recently proposed SPE scheme ("Multi-access coded caching: gains beyond cache-redundancy" by Serbetci, Parrinello and Elia). It is shown that the resolvable designs from affine planes are cross resolvable designs, and that our scheme based on these performs better than the MaN scheme for large memory sizes. The exact size beyond which our performance is better is also presented. The SPE scheme considers only the cases where the product of the number of users and the normalized cache size is 2, whereas the proposed scheme allows different choices depending on the choice of the cross resolvable design.


I Introduction

Caching techniques help reduce data transmission during times of high network congestion by prefetching parts of popularly demanded content into the memories of end users. The seminal work of [1] provided a coded delivery scheme which performs within a constant factor of the information-theoretic optimum for all values of the problem parameters.

The idea of a placement delivery array (PDA) to represent the placement and delivery phases of a coded caching problem first appeared in [2]. PDAs can represent any coded caching problem with symmetric prefetching, and the popular Maddah-Ali-Niesen scheme was shown to be a special case. Since then, many different PDA constructions achieving low subpacketization levels have been put forth [3].

Recently, placement delivery arrays have found applications in different variants of coded caching, such as device-to-device (D2D) networks, via D2D placement delivery arrays (DPDAs), and combination networks, via combinational PDAs (C-PDAs) [4, 5].

Most of the works on coded caching consider scenarios where each user has its own dedicated cache. However, in a variety of settings, such as cellular networks, multiple users may share a single cache, or users may connect to multiple caches whose coverage areas overlap. The possibility of users accessing more than one cache was first addressed in [6]. This was motivated by upcoming heterogeneous cellular architectures, which will contain a dense deployment of wireless access points with small coverage and relatively large data rates, in addition to sparse cellular base stations with large coverage areas and small data rates. Placing caches at local access points can significantly reduce the base station transmission rate, with each user able to access content at multiple access points along with the base station broadcast. The work in [6] considered a shared-link broadcast channel where each user is assisted by a fixed number of caches (with a cyclic wrap-around) and each cache serves the same number of users. The authors called this the multi-access coded caching problem and derived an achievable rate and an information-theoretic lower bound which differ by a multiplicative gap that scales linearly in one of the problem parameters. Later, in [7, 9], new bounds on the optimal rate-memory trade-off were derived for the same problem, along with a new achievable rate for the general multi-access setup that is order-wise better than the rate in [6]. The authors focus on a special case, provide a general lower bound on the optimal rate, and establish an order-optimal memory-rate trade-off under the restriction of uncoded placement. For a few special cases, the exact optimal uncoded memory-rate trade-off is derived.

The work of Serbetci et al. [8] gives yet another class of caching and coded delivery schemes for the multi-access setup, where each user in a shared-link broadcast channel is connected to several caches (with a cyclic wrap-around) and each cache serves the same number of users. This was the first instance where this scenario was analyzed in terms of worst-case delivery time; when the number of files in the server database is greater than or equal to the number of users, the proposed scheme achieves a larger gain than in [1].

In [11], the authors consider the shared-cache scenario where multiple users share the same cache memory and each user is connected to only one cache. The setup is a shared-link network with helper caches, where each cache assists an arbitrary number of distinct users and each user is assigned to a single cache. For this setup, the authors identify the fundamental limits under the assumption of uncoded placement for any possible user-to-cache association profile and derive the exact optimal worst-case delivery time.

In [12], the authors propose a coded placement scheme for the setup where the users share the end caches and show that it outperforms the scheme in [11]. In this scheme both coded and uncoded data are placed in the caches, taking into account the users' connectivity pattern. For a two-cache system the authors provide an explicit characterization of the gain from coded placement, and the scheme is then extended to systems with more caches, where the optimal parameters of the caching scheme are obtained by solving a linear program.

The schemes mentioned so far in the context of shared-cache/multi-access scenarios consider the special framework with cyclic wrap-around, ensuring that the intersection of all the caches that a user is connected to is empty. In [13], the authors address a system model involving a cache-sharing strategy where the set of users served by any two caches is no longer empty: a server is connected to the users through a shared link, and a pair of users shares two caches.

Figure 1: Problem setup for multi-access coded caching with K users and b caches, each user being connected to z caches.

Various combinatorial designs have been used in different setups of coded caching [14, 16, 17]. To the best of our knowledge, this is the first work that uses designs for multi-access coded caching.

I-A Multi-access Coded Caching - System Model

Fig. 1 shows a multi-access coded caching system with a unique server storing N files, each of unit size. There are K users in the network, connected to the server via an error-free shared link. There are b helper caches, each of size M files, and each user has access to z out of the b helper caches. It is assumed that each user has an unlimited-capacity link to each cache it is connected to.

There are two phases: the placement phase and the delivery phase. During the placement phase, carried out during off-peak hours, certain parts of each file are stored in each cache. During peak hours each user demands a file, and the server broadcasts coded transmissions such that each user can recover its demand by combining the received transmissions with what is stored in the caches it has access to. This is the delivery phase. The coded caching problem is to jointly design the placement and the delivery so as to minimize the number of transmissions needed to satisfy the demands of all the users. The amount of transmission, measured in units of files, is called the rate or the delivery time. The subpacketization level is the number of packets into which each file is divided. The coding gain is defined as the number of users benefited by each transmission.

I-B The Maddah-Ali-Niesen (MaN) Coded Caching Scheme

The framework of the seminal paper [1] considers a network with K users, each equipped with a memory of size M, and N files of very large size, among which each user is equally likely to demand any one file. The rate achieved is R(M) = K (1 - M/N) / (1 + KM/N).

The factor 1 + KM/N, which was originally called the global caching gain, is also known as the coding gain or the degrees of freedom (DoF). We refer to this scheme as the MaN scheme henceforth. This original setup can be viewed as a special case of the setup in Fig. 1 with b = K and z = 1, which corresponds to each user having a dedicated cache of its own.
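For concreteness, the well-known MaN rate and gain expressions above can be evaluated as in the following Python sketch; the function names and the sample parameters (K = 6, N = 6, M = 2) are ours and purely illustrative.

```python
# A minimal sketch (not taken from the paper's text) of the MaN rate
# R(M) = K * (1 - M/N) / (1 + K*M/N) and the corresponding coding gain,
# used later when comparing schemes.

def man_rate(K: int, M: float, N: int) -> float:
    """Worst-case rate of the MaN scheme for K users, cache size M, N files."""
    return K * (1 - M / N) / (1 + K * M / N)

def man_coding_gain(K: int, M: float, N: int) -> float:
    """Coding gain (DoF): number of users served per transmission."""
    return 1 + K * M / N

if __name__ == "__main__":
    # Illustrative parameters: K = 6 users, N = 6 files, M = 2 files per cache.
    print(man_rate(6, 2, 6))         # 4/3
    print(man_coding_gain(6, 2, 6))  # 3.0
```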

I-C Serbetci-Parrinello-Elia (SPE) Multi-access Coded Caching Scheme

In [8], a network consisting of K users connected via an error-free shared link to a server storing N files is considered. Each user in the network can access z out of the K helper caches, each of size M units of file. The setup of this scheme, which we henceforth refer to as the SPE scheme, can be considered a special case of the setup shown in Fig. 1 in which each user is associated with z consecutive caches (with a cyclic wrap-around).

In [8], the authors focus on the special case KM/N = 2 and provide a scheme whose coding gain exceeds the MaN coding gain. Also, for a special case in which each user has access to an integer number of caches of a suitable normalized size, an optimal rate, together with the corresponding degrees of freedom (DoF), i.e., the number of users served at a time, is reported.

I-D Comparing different multi-access coded caching schemes

In any multi-access coded caching problem, the design parameters are the number of files N, the number of users K, the number of caches b, the memory size M of each cache (in files), and the number of caches z that a user has access to. For two multi-access schemes the number of users may differ, depending on the cache-user topology, even when the other parameters are the same, so comparing two such schemes directly in terms of rate may be misleading. Therefore, to compare our multi-access scheme with other existing schemes, we normalize the rate by the number of users supported, i.e., we use the rate per user (per-user rate). The lower the per-user rate for a given memory size, the better the scheme.
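A minimal sketch of this comparison metric follows; the helper name and the example numbers are hypothetical and only illustrate that a scheme with a higher raw rate can still have a lower per-user rate.

```python
# Illustrative helper (our notation, not from the paper) for the per-user
# rate used throughout the comparisons: divide the worst-case rate R by the
# number of users K that the scheme serves.

def per_user_rate(R: float, K: int) -> float:
    return R / K

if __name__ == "__main__":
    # Two hypothetical schemes over the same caches but different user counts:
    # scheme A serves 12 users at rate 3, scheme B serves 48 users at rate 8.
    print(per_user_rate(3, 12))  # 0.25
    print(per_user_rate(8, 48))  # ~0.167 -> lower per-user rate is better
```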

I-E Contributions

The contributions of this paper may be summarized as follows:

  • A subclass of resolvable designs (called cross resolvable designs) is identified, using which new classes of multi-access coded caching schemes are presented.

  • To be able to compare multi-access coded schemes with different numbers of users, we normalize the rate of a scheme by the number of users served. Based on this per-user rate we show that our scheme performs better than the MaN scheme and the SPE scheme for several cross resolvable designs.

  • It is shown that the resolvable designs from affine planes [15] are cross resolvable designs, and that our scheme based on them performs better than the MaN scheme for large memory sizes. The exact size beyond which our performance is better is also presented.

  • The SPE scheme [8] considers only the cases where the product of the number of users and the normalized cache size is 2 (KM/N = 2), while the proposed scheme allows different choices of this product depending on the choice of the cross resolvable design.

The paper is organized as follows. Section II describes all the details related to resolvable designs and defines a subclass of resolvable designs termed in this paper cross resolvable designs (CRDs). Our proposed scheme associated with CRDs is described in Section III. Comparison of the performance of our scheme with the MaN and SPE schemes constitutes Section IV. Concluding remarks constitute Section V, and the proof of correctness of our delivery algorithm is given in the Appendix.

II Cross Resolvable Designs

We use a class of combinatorial designs called resolvable designs[15] to specify placement in the caches.

Definition 1

[3] A design is a pair (X, A) such that

  • X is a finite set of elements called points, and

  • A is a collection of nonempty subsets of X called blocks, where each block contains the same number of points.

Definition 2

[3] A parallel class in a design (X, A) is a subset of disjoint blocks from A whose union is X. A partition of A into several parallel classes is called a resolution, and (X, A) is said to be a resolvable design if A has at least one resolution.

Example 1

Consider a block design specified as follows.

It can be observed that this design is resolvable with the following parallel classes.

Note that in the above example the parallel classes form a partition of A. If we instead take A = {{1, 2}, {1, 3}, {3, 4}, {2, 4}}, we get another resolvable design with two parallel classes.
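The defining property of a parallel class is easy to check programmatically. The following Python sketch does so for a design matching the two-parallel-class example above, under the assumption that the point set is {1, 2, 3, 4}.

```python
# A small sketch that checks the defining property of a parallel class:
# its blocks are pairwise disjoint and their union is the point set X.
# The design below mirrors the two-parallel-class example in the text
# (taking X = {1, 2, 3, 4} is an assumption on our part).

from itertools import combinations

def is_parallel_class(X, blocks):
    union = set().union(*blocks)
    pairwise_disjoint = all(not (set(a) & set(b)) for a, b in combinations(blocks, 2))
    return pairwise_disjoint and union == set(X)

X = {1, 2, 3, 4}
P1 = [{1, 2}, {3, 4}]
P2 = [{1, 3}, {2, 4}]

print(is_parallel_class(X, P1))  # True
print(is_parallel_class(X, P2))  # True
```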

Example 2

Consider a block design specified as follows.

It can be observed that this design is resolvable with the following parallel classes.

For a given resolvable design (X, A), if |X| = v, |A| = b, the block size is k, and the number of parallel classes is r, then there are exactly b/r blocks in each parallel class. Since the blocks in each parallel class are disjoint and their union is X, the number of blocks in each parallel class is b/r = v/k.

II-A Cross Resolvable Design (CRD)

Definition 3 (Cross Intersection Number)

For any resolvable design (X, A) with r parallel classes, the cross intersection number μ_q, where 2 ≤ q ≤ r, is defined as the cardinality of the intersection of q blocks drawn from any q distinct parallel classes, provided that this value remains the same (equal to μ_q) for all possible such choices of blocks.
For instance, in Example 1, μ_2 = 1, as the intersection of any 2 blocks drawn from 2 distinct parallel classes always contains exactly one point. But we cannot define μ_3, as the cardinality of the intersection of 3 blocks drawn from 3 distinct parallel classes takes values from the set {0, 1}.

Definition 4 (Cross Resolvable Design)

For any resolvable design (X, A), if there exists at least one q ≥ 2 such that the cross intersection number μ_q exists, then the resolvable design is said to be a Cross Resolvable Design (CRD). For a CRD, the maximum value of q for which μ_q exists is called the Cross Resolution Number (CRN) of that CRD. A CRD whose CRN equals the number of parallel classes r is called a Maximal Cross Resolvable Design (MCRD).

Note that the resolvable design in Example 2 is not a CRD, as μ_2 does not exist.
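The existence of μ_q can likewise be checked by brute force. The sketch below (our helper, not from the paper) returns μ_q when it exists and None otherwise, and reproduces the situation described for Example 1 under the assumption that its point set is {1, 2, 3, 4}.

```python
# Sketch: compute the q-cross intersection number of a resolvable design
# given as a list of parallel classes. Returns mu_q if the intersection size
# is the same for every choice of q blocks from q distinct parallel classes,
# and None otherwise (i.e., mu_q does not exist).

from itertools import combinations, product

def cross_intersection_number(parallel_classes, q):
    sizes = set()
    for classes in combinations(parallel_classes, q):
        for blocks in product(*classes):
            inter = set(blocks[0]).intersection(*map(set, blocks[1:]))
            sizes.add(len(inter))
    return sizes.pop() if len(sizes) == 1 else None

# A design mirroring Example 1 (point set assumed to be {1, 2, 3, 4}):
P1 = [{1, 2}, {3, 4}]
P2 = [{1, 3}, {2, 4}]
P3 = [{1, 4}, {2, 3}]

print(cross_intersection_number([P1, P2, P3], 2))  # 1    -> mu_2 exists
print(cross_intersection_number([P1, P2, P3], 3))  # None -> mu_3 does not exist
```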

Example 3

For the resolvable design with

the parallel classes are

It is easy to verify that .

Example 4

For the resolvable design with

the parallel classes are

In this case = 2 and = 1.

Example 5

Consider the resolvable design with

The parallel classes are

We have = 3.

Example 6

Consider the resolvable design with

The parallel classes are

Here and , does not exist.

From Example 6 one can see that, for a CRD, μ_q need not exist for every q up to the number of parallel classes.

Lemma 1

For any given CRD with v points, block size k, and r parallel classes, and for any cross intersection number μ_q with 2 ≤ q ≤ r that exists, we have μ_q = k^q / v^(q-1).

Proof:

From any q parallel classes, let us choose one block from each parallel class, denoted B_1, B_2, ..., B_q, and let X' = B_1 ∩ B_2 ∩ ... ∩ B_q.

From the definition of a cross resolvable design, |X'| = μ_q.

It is also easy to see that every point of X lies in exactly one block of each parallel class, and hence in exactly one such intersection X' as the chosen blocks vary; the (v/k)^q possible intersections therefore partition X. Now,

μ_q (v/k)^q = v, which gives μ_q = k^q / v^(q-1),

since the number of blocks in each parallel class is v/k.
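The counting identity used in the proof can be verified numerically; the following sketch (ours) checks it on the small CRD used earlier and prints the predicted value of μ_2.

```python
# Numerical sanity check of the counting argument behind Lemma 1:
# every point lies in exactly one block of each parallel class, so summing
# |B_1 ∩ ... ∩ B_q| over all choices of one block per class gives v, and
# hence mu_q * (v/k)^q = v, i.e., mu_q = k^q / v^(q-1) whenever mu_q exists.

from itertools import combinations, product

def check_lemma1(X, parallel_classes, q):
    v = len(X)
    k = len(next(iter(parallel_classes[0])))
    for classes in combinations(parallel_classes, q):
        total = sum(len(set.intersection(*map(set, blocks)))
                    for blocks in product(*classes))
        assert total == v                    # counting identity
    return k ** q / v ** (q - 1)             # predicted mu_q

X = {1, 2, 3, 4}
P1 = [{1, 2}, {3, 4}]; P2 = [{1, 3}, {2, 4}]; P3 = [{1, 4}, {2, 3}]
print(check_lemma1(X, [P1, P2, P3], 2))      # 1.0, matching mu_2 = 1
```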

III Proposed Scheme

Given a cross resolvable design (X, A) with v points, r parallel classes, b blocks of size k each, and b/r = v/k blocks in each parallel class, we choose some z ≤ r such that μ_z exists. Assuming some ordering on the blocks of each parallel class, each cache is associated with one block of the design. We associate with this design a coded caching problem with K = (r choose z)(v/k)^z users, N files in the server database, b caches, a fraction k/v of each file stored at each cache, and subpacketization level v. A user is connected to z distinct caches such that these caches correspond to blocks from z distinct parallel classes; accordingly, each user is identified with a z-sized set of cache indices drawn from z distinct parallel classes.

III-A Placement Phase

In the placement phase, we split each file into v non-overlapping subfiles of equal size, one for each point of X.

The placement is as follows: the set of indices of the subfiles stored in a cache is the corresponding block of the design. We assume symmetric batch prefetching, i.e., each cache stores, for every one of the N files, exactly the subfiles indexed by the points of its block.

Therefore the total number of subfiles of each file in any cache equals the block size k of the resolvable design, i.e., M/N = k/v.
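A placement sketch under the notational assumptions made above (subfiles indexed by points, caches indexed by blocks); the data layout and names are ours.

```python
# Placement sketch: each file is split into v subfiles indexed by the points
# of the design, and the cache built from a block B stores, for every file,
# exactly the subfiles indexed by the points of B (symmetric batch prefetching).

def place(files, blocks):
    """files: dict name -> dict point -> subfile; blocks: list of point sets."""
    caches = []
    for B in blocks:
        caches.append({(name, x): subfiles[x]
                       for name, subfiles in files.items() for x in B})
    return caches

# Toy example with v = 4 points and 2 files, each split into 4 subfiles.
X = [1, 2, 3, 4]
files = {"W1": {x: f"W1_{x}" for x in X}, "W2": {x: f"W2_{x}" for x in X}}
caches = place(files, [{1, 2}, {3, 4}, {1, 3}, {2, 4}])
print(sorted(caches[0]))   # cache 0 holds subfiles indexed by block {1, 2}
```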

Let M_u denote the size of the memory, in units of files, that a user has access to. The subfiles a user can access are those indexed by the union of the z blocks it is connected to, where these blocks come from z distinct parallel classes. Since μ_z exists and, by Lemma 1, equals k^z / v^(z-1) ≥ 1, these z blocks share at least one common point, so their union contains strictly fewer than zk points. It follows that for a cross resolvable design M_u < zM. All the cases considered in [8, 10] correspond to M_u = zM, i.e., the size of the memory that a user has access to is an integer multiple of the size of a cache. From this it follows that the cases considered in [8] do not intersect with the cases considered in this paper.

Lemma 2

The number of users having access to any particular subfile is exactly (r choose z)[(v/k)^z - (v/k - 1)^z].

Proof:

There are (r choose z) possible ways of choosing z parallel classes out of the r parallel classes. Fix some point x ∈ X. In each parallel class there are (v/k - 1) blocks that do not contain x, so among the users associated with a fixed choice of z parallel classes there are (v/k - 1)^z users that do not have access to the subfile indexed by x. Hence the number of such users that do have access to it is (v/k)^z - (v/k - 1)^z. Taking all possible combinations of z parallel classes, we obtain (r choose z)[(v/k)^z - (v/k - 1)^z] users.

Lemma 3

The number of users having access to a subfile of a specific cache is exactly (r - 1 choose z - 1)(v/k)^(z-1).

Proof:

Any user is connected to blocks from z distinct parallel classes. We fix a subfile accessible to the user by fixing a cache accessible to the user; fixing a cache also fixes a parallel class. The other z - 1 parallel classes can then be chosen in (r - 1 choose z - 1) ways, and since there are v/k blocks in each of these parallel classes, the total number of such users is (r - 1 choose z - 1)(v/k)^(z-1).
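The counts in Lemmas 2 and 3 can be verified by brute force on a small CRD; the following sketch (ours) enumerates the users as z-sets of blocks from z distinct parallel classes and compares the empirical counts with the stated formulas.

```python
# Empirical check of the counting in Lemmas 2 and 3 for a small CRD:
# users are z-sets of blocks taken from z distinct parallel classes; we count
# how many of them can access a given subfile index x, and how many are
# connected to a given cache.

from itertools import combinations, product
from math import comb

def users(parallel_classes, z):
    for classes in combinations(parallel_classes, z):
        for blocks in product(*classes):
            yield [set(b) for b in blocks]

P1 = [{1, 2}, {3, 4}]; P2 = [{1, 3}, {2, 4}]; P3 = [{1, 4}, {2, 3}]
classes, z, v, k = [P1, P2, P3], 2, 4, 2
r, c = len(classes), v // k                     # c = blocks per parallel class

x = 1                                           # a subfile index (point)
have_x = sum(any(x in B for B in u) for u in users(classes, z))
print(have_x, comb(r, z) * (c**z - (c - 1)**z))         # Lemma 2: both 9

B0 = {1, 2}                                     # a specific cache (block)
touch_B0 = sum(B0 in u for u in users(classes, z))
print(touch_B0, comb(r - 1, z - 1) * c**(z - 1))        # Lemma 3: both 4
```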

III-B Delivery Phase

For delivery, we first arrange the users in lexicographic order of their indices, establishing a one-to-one correspondence with the set {1, 2, ..., K}. At the beginning of the delivery phase, each user requests one of the N files; the corresponding demand vector is denoted by d = (d_1, d_2, ..., d_K). To derive an upper bound on the required transmission rate, we focus on the worst-case scenario, i.e., each user requests a distinct file. The delivery steps are presented in Algorithm 1.

1:for  to  do
2:     Choose a set of out of parallel classes
3:     which is different from the sets chosen before.
4:     Let this set be
5:     for  to  do
6:         Choose a pair of blocks from each of the
7:         parallel classes . This set of
8:         blocks must be different from the ones chosen
9:         before. Let the chosen set be
10:         where, .
11:         There are users corresponding to the blocks
12:         chosen above. Denoting this set of user indices by
13:          we have
14:         Calculate: Calling the user connected to the
15:         set of caches ,
16:         where to be
17:         the user calculate the set as
18:         where We have . Let
19:         Calculate as above for all the users in
20:         Transmit: Now do the following transmissions
21:         Note that there are transmissions for
22:     end for
23:end for
Algorithm 1 Algorithm for Delivery
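Since several quantities in Algorithm 1 are parameterized by μ_z, the following hedged Python sketch illustrates only the z = 2, μ_2 = 1 special case (e.g., CRDs from affine planes), as we read the algorithm: for every pair of parallel classes and every pair of blocks from each, one XOR serves the four corresponding users.

```python
# Hedged sketch of the delivery idea for z = 2 and mu_2 = 1 (our reading of
# Algorithm 1 for this special case; names are ours). For each pair of
# parallel classes and each pair of blocks from each class, the four users
# (B1,B2), (B1,B2'), (B1',B2), (B1',B2') are served by a single XOR: each
# user already caches three of the four subfiles and recovers the fourth.

from itertools import combinations

def delivery_z2(parallel_classes, demand):
    """demand maps a user (frozenset of 2 blocks) to its requested file name.
    Returns, per transmission, the list of (file, subfile-point) pairs XORed."""
    transmissions = []
    for Pi, Pj in combinations(parallel_classes, 2):
        for B1, B1p in combinations(Pi, 2):
            for B2, B2p in combinations(Pj, 2):
                group = [(B1, B2, B1p, B2p), (B1, B2p, B1p, B2),
                         (B1p, B2, B1, B2p), (B1p, B2p, B1, B2)]
                xor = []
                for ba, bb, ca, cb in group:
                    user = frozenset([frozenset(ba), frozenset(bb)])
                    (point,) = set(ca) & set(cb)   # mu_2 = 1: a single point
                    xor.append((demand[user], point))
                transmissions.append(xor)
    return transmissions

# Toy run on the three-parallel-class design used earlier.
P1 = [{1, 2}, {3, 4}]; P2 = [{1, 3}, {2, 4}]; P3 = [{1, 4}, {2, 3}]
all_users = [(A1, A2) for Pi, Pj in combinations([P1, P2, P3], 2)
             for A1 in Pi for A2 in Pj]
demand = {frozenset([frozenset(A1), frozenset(A2)]): f"W{i}"
          for i, (A1, A2) in enumerate(all_users)}
print(len(all_users), len(delivery_z2([P1, P2, P3], demand)))  # 12 users, 3 XORs
```

In this special case each of the four users in a group is missing exactly the subfile indexed by the single point shared by the two blocks it is not connected to, and it has cached the other three subfiles appearing in the XOR, so it can decode; this is consistent with the correctness argument given in the Appendix.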

The proof of correctness of Algorithm 1 is given in the Appendix. We have

Theorem 1

For N files and K users, each having access to z caches of size M in the considered caching system, with N ≥ K and distinct demands by the users, the proposed scheme achieves the rate given by

Proof:

The first for loop of the delivery algorithm runs (r choose z) times, the second for loop runs ((v/k) choose 2)^z times, and the transmit step produces a fixed number of transmissions in each iteration of the second loop. Multiplying these counts gives the total number of transmissions, and since the subpacketization level is v, the result follows from the definition of rate.

Lemma 4

The number of users benefited in each transmission, known in the literature as the coding gain, is given by

Proof:

From the second for loop of the delivery algorithm, it can be observed how many users correspond to each chosen set of blocks and hence benefit from each transmission; the coding gain then follows by definition.

IV Performance Comparison

For both the MaN and the SPE schemes, the number of users and the number of caches are the same, whereas for the schemes proposed in this paper they need not be. So, when we compare our scheme with the MaN or the SPE scheme, we do so keeping the number of caches equal and the number of caches a user can access equal.

IV-A Comparison of our schemes obtained from CRDs from Affine Planes

In this subsection we focus on the schemes obtained using the resolvable designs from affine planes [15], which are CRDs, and compare the resulting schemes with the MaN scheme. Such CRDs exist for all n, where n is a prime or a prime power. For a given n, the CRD resulting from an affine plane has v = n^2 points, k = n points in each block, b = n(n + 1) blocks, and r = n + 1 parallel classes. It is known that any two blocks drawn from different parallel classes always intersect in exactly one point [15], i.e., μ_2 = 1.

For the multi-access coded caching scheme obtained from this CRD with z = 2 and N files, we have K = ((n + 1) choose 2) n^2 users and b = n(n + 1) caches, each cache storing a fraction 1/n of each file, with subpacketization level n^2.
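For completeness, the affine-plane CRD can be constructed explicitly; the following sketch (a standard construction, not taken verbatim from the paper) builds AG(2, n) for a prime n (the prime-power case would require finite-field arithmetic, omitted here) and confirms μ_2 = 1.

```python
# Construction sketch of the resolvable design from the affine plane AG(2, n)
# for a prime n: the points are Z_n x Z_n, the blocks are the lines
# y = m*x + c (one parallel class per slope m) plus the vertical lines x = c,
# giving n + 1 parallel classes of n disjoint blocks each. Any two lines with
# different slopes meet in exactly one point, so mu_2 = 1.

from itertools import combinations, product

def affine_plane_crd(n):
    classes = []
    for m in range(n):                       # parallel class of slope m
        classes.append([frozenset((x, (m * x + c) % n) for x in range(n))
                        for c in range(n)])
    classes.append([frozenset((c, y) for y in range(n))   # vertical lines
                    for c in range(n)])
    return classes

n = 3
classes = affine_plane_crd(n)
sizes = {len(set(B1) & set(B2))
         for Pi, Pj in combinations(classes, 2)
         for B1, B2 in product(Pi, Pj)}
print(len(classes), sizes)   # 4 parallel classes, {1} -> mu_2 = 1
```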

IV-A1 Comparison with the MaN Scheme

Since KM/N = n + 1 is an integer for K = n(n + 1) users and M/N = 1/n, we have the corresponding MaN scheme with N files, n(n + 1) users, and a fraction 1/n of each file stored at each user's cache. The two schemes are compared, keeping the number of caches and the fraction of each file at each cache equal, in Table I. It is seen that our scheme performs better than the MaN scheme in terms of the number of users supported and the subpacketization level, at the cost of increased rate and decreased gain. Since the fraction of each file at each cache is the same in both schemes, in Fig. 2 the per-user rate is plotted against M/N, from which it is clear that our proposed scheme performs better than the MaN scheme for large cache sizes (approximately 0.3 onwards). For smaller cache sizes the MaN scheme performs better.

Parameters MaN Scheme Proposed Scheme
Number of Caches
Fraction of each file
at each cache
Number of Users
Subpacketization level
Rate
Gain
Table I: Comparison between the MaN and proposed schemes for the class of CRDs from affine planes, where n is a prime or prime power.
Figure 2: Performance comparison between the MaN scheme and the proposed scheme for the class of cross resolvable designs derived from affine planes, where n is a prime or prime power.

IV-A2 Comparison with the SPE scheme

For the subpacketization value in [8], which depends on the number of users and the number of caches a user has access to, to be an integer, we consider the SPE scheme with N files, K users, a given fraction of each file stored at each cache, and each user having access to exactly z caches; this fixes the corresponding subpacketization level and yields a comparable SPE scheme. The comparison is given in Table II and shown in Fig. 3. The rate expression in [8] is involved, so for comparison with the proposed scheme we plot the rate versus the number of users for the two schemes in Fig. 3. It is seen that our scheme outperforms the SPE scheme in the number of users allowed, the subpacketization level, the rate, and the gain, at the cost of an increase in the fraction of each file stored in each cache. In Fig. 4 we plot the per-user rate against the fraction of each file stored at each cache and see that our scheme performs better than the SPE scheme for small cache sizes and matches its performance at large cache sizes.

Parameters SPE Scheme Proposed Scheme
Number of Caches
Number of Caches a
user has access to
Number of Users
Fraction of each file
at each cache
Fraction of each file each
user has access to
Subpacketization level
Rate (R) See Fig. 3 See Fig.3
Gain between and
Table II: Comparison between the SPE and proposed schemes for the class of CRDs from affine planes, where n is a prime or prime power.
Figure 3: Rate for the schemes in Table II
Figure 4: Per user rate for the codes in Table II

IV-B Two examples outperforming the MaN scheme

In this subsection, we present two instances of our scheme using CRDs (not from affine planes) which outperform the MaN scheme in all aspects, namely rate, gain, and subpacketization level, simultaneously. The first instance is obtained using the CRD given in Example 3 and the second from the CRD given in Example 4. The performance of these two schemes in comparison with the comparable MaN schemes is presented in Table III and Table IV, respectively. The performance improvement of our scheme for these two instances is shown pictorially in Fig. 5.

Figure 5: Pictorial representation of Table III and Table IV
Parameters MaN Scheme Proposed Scheme
Number of Caches
Number of Caches a
user has access to
Number of Users
Subpacketization level
Fraction of each file
at each cache
Fraction of each file each
user has access to
Rate
Gain
Table III: Comparison between MaN and our scheme corresponding to Example 3
Parameters MaN Scheme Proposed Scheme
Number of Caches
Number of Caches a
user has access to
Number of Users
Subpacketization level
Fraction of each file
at each cache
Fraction of each file each
user has access to
Rate
Gain
Table IV: Comparison between MaN and our scheme corresponding to Example 4

IV-C Comparison with SPE scheme

In this subsection, we show some examples (not necessarily from affine planes) comparing the two schemes.

Example 7

Consider the resolvable design with parameters and specified as follows.

The parallel classes are

In Table V we compare our scheme with the SPE scheme keeping the number of caches and the number of caches a user has access to the same.

Parameters SPE Scheme Proposed Scheme
Number of Caches
Number of Caches a
user has access to
Number of Users
Subpacketization level
Fraction of each file
at each cache
Fraction of each file each
user has access to
Rate
Gain
Table V: Comparison between SPE and proposed scheme for parameters mentioned in Example 7

In Table VI we compare our scheme obtained from the CRD of Example 4 with the comparable SPE scheme.

Parameters SPE Scheme Proposed Scheme
Number of Caches
Number of Caches a
user has access to
Number of Users
Subpacketization level
Fraction of each file
at each cache
Fraction of each file each
user has access to
Rate
Per User Rate
Gain
Table VI: Comparison between SPE and proposed scheme.

The comparisons between the SPE scheme and the proposed scheme in Tables V and VI show that more users can be supported in a multi-access setup, and that certain choices of cross resolvable designs can yield better subpacketization levels and even better gains than the comparable SPE scheme, at the cost of increased storage in each cache.

V Discussion

We have identified a special class of resolvable designs, called cross resolvable designs, which lead to multi-access coded caching schemes. While combinatorial designs have been used in the literature for coded caching problems, ours is the first work to use them for multi-access coded caching. Our results indicate that using CRDs in multi-access setups can help attain gains beyond cache redundancy at low subpacketization levels while supporting a large number of users. Our scheme can outperform the MaN scheme in terms of per-user rate, gain, and subpacketization simultaneously, and it can perform better than the SPE scheme in terms of the number of users supported, per-user rate, and subpacketization level, which are important design parameters for any coded caching scheme. Our scheme also supports a wide range of choices of KM/N, as opposed to the SPE scheme. We have shown that the schemes presented in this paper, using resolvable designs from affine planes, perform better than the MaN scheme for large memory sizes under the per-user-rate metric. This is the only class of resolvable designs that we could identify to be cross resolvable. It will be interesting to construct or identify new cross resolvable designs and study the performance of the resulting multi-access coded caching schemes.

Acknowledgment

This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan.

Proof of Correctness of the delivery algorithm

The proof of correctness of the delivery phase given by Algorithm 1 is provided by the following sequence of three lemmas.

Lemma 5

Let the set of indices of the subfiles accessible to a user be as determined by the placement. Consider the transmission corresponding to a chosen set of blocks and a user in the corresponding group, as in Algorithm 1. Then the following equality holds.

(1)
Proof:

In Algorithm 1, consider a chosen combination of blocks (caches) and a user in the corresponding group that has access to z of these blocks. The sequence of equations (2) to (10) on the next page constitutes the proof of (1).