Due to the increasing number of multimedia applications such as video on demand, tremendous growth in data consumption has been observed in recent years. The seminal work of Maddah-Ali and Niesen showed that jointly designing content placement and delivery, known as coded caching, significantly reduces the required content delivery rate. In practice, however, the subpacketization level of this scheme is not feasible, since it grows exponentially with the number of users. This started the quest for coded caching schemes with practical subpacketization levels, and a wide variety of schemes were found, using different constructions of placement delivery arrays, line graphs of bipartite graphs, linear block codes, block designs, etc.
In parallel, this technique of jointly designing placement and delivery was explored in other types of networks, such as device-to-device (D2D) networks, combination networks, and networks with shared caches. The case of networks with shared caches is particularly interesting: since users can share the caches, from a practical perspective it helps in the efficient utilization of memory. Coded caching in networks where demands are non-uniform, i.e., files have different popularities, has also been well studied. However, we limit ourselves to the case where all files are equipopular and the demands are distinct.
An interesting and practical network scenario is a multi-access network, where multiple users can access the same cache and a single user can access multiple caches. The scenario considered in several of these works has $K$ users and $K$ caches and uses a "sliding window" approach, where user $k$ accesses caches $k, k+1, \ldots, k+z-1$ for some $z$, with a cyclic wraparound to preserve symmetry. Here, $z$ is called the cache access degree, i.e., the number of caches a user has access to.
In the following subsection we provide a brief survey of the schemes that are known in the literature of multi-access networks.
Known schemes in multi-access
I-1 Hachem-Karamchandani-Diggavi (HKD) Scheme 
Multi-access setups were introduced in the work of Hachem, Karamchandani, and Diggavi, which considered caching and delivery in a decentralized setting with multi-access and multi-level popularities. The authors give rate-memory trade-offs for a multi-level access model (content is divided into discrete levels based on popularity, and users are required to connect to a certain number of access points based on the popularity of the file they have requested) with multi-user setups (a user can access multiple caches). In the case of a multi-user, multi-access model with a single-level caching system, with $N$ files, $K$ caches, users grouped with access degree $z$ such that $z$ divides $K$, and a cache memory of $M$, under the decentralized assumption considered in the paper, the achievable rate is given by
Also, when $z$ does not divide $K$, four times the above expression can be achieved.
Henceforth we refer to this scheme as the HKD scheme.
I-2 Reddy-Karamchandani (RK) Scheme 
The scheme proposed by Reddy and Karamchandani supports a multi-access setup with $K$ users and $K$ caches, with each user connected to $z$ consecutive caches in a cyclic manner. The rate for this scheme is given by the expression
A generic lower bound on the optimal rate for any such multi-access setup, under the restriction of uncoded placement, is derived as
The rate achieved by this scheme is proved to be order optimal, with a bounded multiplicative gap between the achievable rate and the lower bound. The scheme is also found to be optimal in some special cases. Hereafter, we refer to this scheme as the RK scheme.
I-3 Serbetci-Parinello-Elia (SPE) Scheme
The work of Serbetci, Parrinello, and Elia deals with the same problem setup as in the previous cases and provides two new schemes which can serve, on average, more users at a time than the cache redundancy, and for a special case the achieved gain is proved to be optimal under uncoded cache placement. The subpacketization of the general scheme must be a positive integer, which imposes divisibility conditions on the system parameters. We refer to this scheme as the SPE scheme.
I-4 Scheme using Cross Resolvable Designs (CRD) 
In the CRD work, the authors develop a multi-access coded caching scheme from a specific type of resolvable designs called cross resolvable designs. The number of users supported in this scheme is higher than in the other existing schemes for the same number of caches and for practically realizable subpacketization levels. To compare the performance with other schemes, the authors introduce the notion of per user rate (rate per user), obtained by normalizing the rate $R$ by the number of users $K$ supported, i.e., the rate per user is $R/K$. We refer to this scheme as the CRD scheme.
I-5 Cheng-Liang-Wan-Zhang-Caire (CLWZC) Scheme 
The work of Cheng, Liang, Wan, Zhang, and Caire proposes a novel transformation approach to extend the Maddah-Ali–Niesen scheme to multi-access caching systems, such that the load expression of the MAN scheme remains achievable even when $z$ does not divide $K$. This work considers only the multi-access setup with cyclic wraparound, where each user has access to $z$ neighboring cache nodes. The rate expression for this scheme is the same as that for the centralized HKD scheme. The subpacketization required is , where .
I-A Our Contributions
In this work, a coded caching scheme for multi-access networks is proposed in which the number of users is much larger than the number of caches. The proposed scheme is an extension of the MAN scheme, and the MAN scheme can be obtained as a special case. The performance of this scheme is compared with the existing works. To compare the performance of different multi-access schemes with different numbers of users for the same number of caches, the per user rate introduced for the CRD scheme is used.
Notations: For a positive integer $n$, the set $\{1, 2, \ldots, n\}$ is denoted as $[n]$. $|A|$ denotes the cardinality of the set $A$.
II The Proposed Scheme
Our problem setup is as shown in Fig. 1. Let $K$ denote the number of users in the network, connected via an error-free shared link to a server storing $N$ files, each of unit size. Each user can access a unique set of $z$ caches ($z$ being the cache access degree) out of $C$ caches, each capable of storing $M$ files. $Z_c$ denotes the content in cache $c$, and we assume that each user has an unlimited-capacity link to the caches that it is connected to.
Let $[C]$ denote the set of caches. Since a user has access to a unique subset of $z$ caches, every user can be uniquely represented by a $z$-sized subset of $[C]$. From now on, we denote a user by the $z$-sized subset $U$ of caches it is connected to. Let $d_U$, where $U \subseteq [C]$ with $|U| = z$, denote the demand of the user connected to the set of caches represented by $U$. Hence, the maximum number of users in this scheme is $\binom{C}{z}$. Our scheme works for any number of users as long as each user is associated with a unique subset of caches, i.e., no two users access the same set of caches. For concreteness we assume that $K = \binom{C}{z}$. The scheme works in two phases described below:
Let $t$, $1 \le t \le C - z$, be an integer. The server divides each file $W_n$, $n \in [N]$, into $\binom{C}{t}$ subfiles $W_{n,T}$, indexed by the $t$-sized subsets $T$ of $[C]$, and in cache $c$ places the content given by
$$Z_c = \{W_{n,T} : n \in [N],\ T \subseteq [C],\ |T| = t,\ c \in T\}.$$
It is seen that after the placement, the size of the content stored in each cache is equal to $N\binom{C-1}{t-1}\big/\binom{C}{t} = Nt/C = M$, so that $M/N = t/C$.
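The per-cache memory claim can be checked by enumeration. The sketch below assumes the $t$-subset placement described above and uses hypothetical parameters $C = 6$, $t = 2$; it counts the subfiles stored in cache 1 and compares the fraction with $t/C$.

```python
from itertools import combinations
from math import comb
from fractions import Fraction

C, t = 6, 2  # hypothetical cache count and parameter t

# Subfile indices are the t-sized subsets of the cache set [C].
subsets = list(combinations(range(1, C + 1), t))

# Cache c stores every subfile whose index set contains c; take cache 1.
cache1 = [T for T in subsets if 1 in T]

# Fraction of each file stored per cache: C(C-1, t-1)/C(C, t) = t/C.
frac = Fraction(len(cache1), len(subsets))
assert frac == Fraction(comb(C - 1, t - 1), comb(C, t)) == Fraction(t, C)
print(frac)  # 1/3
```

The same count holds for every cache by symmetry, since each cache appears in the same number of $t$-subsets.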
Note that a user may find the same subfiles of a file in more than one cache it has access to. Let $|Z_U|$ denote the size of the content that a user $U$ has access to. By inclusion–exclusion over the $z$ distinct caches in $U$, the size of $\bigcup_{c \in U} Z_c$ simplifies to
$$|Z_U| = N\left(1 - \binom{C-z}{t}\Big/\binom{C}{t}\right),$$
since a user misses exactly those subfiles indexed by $t$-sized subsets disjoint from $U$.
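The fraction of each file accessible to a user, $1 - \binom{C-z}{t}/\binom{C}{t}$, can be sanity-checked by direct enumeration. A minimal sketch, assuming the subset-indexed placement above and hypothetical parameters $C = 6$, $z = 2$, $t = 2$:

```python
from itertools import combinations
from math import comb
from fractions import Fraction

C, z, t = 6, 2, 2  # hypothetical parameters: caches, access degree, t

subsets = list(combinations(range(1, C + 1), t))
U = (1, 2)  # a user, identified with the z-subset of caches it reaches

# A user sees subfile T iff some cache in U stores it, i.e. T ∩ U ≠ ∅.
accessible = [T for T in subsets if set(T) & set(U)]

frac = Fraction(len(accessible), len(subsets))
# Inclusion–exclusion collapses to 1 - C(C-z, t)/C(C, t).
assert frac == 1 - Fraction(comb(C - z, t), comb(C, t))
print(frac)  # 3/5
```

Any other $z$-subset $U$ gives the same fraction, since only the size of $U$ matters in the count.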
For each subset $S \subseteq [C]$ of cardinality $t + z$, the server transmits
$$\bigoplus_{U \subset S,\ |U| = z} W_{d_U,\, S \setminus U}.$$
Now we prove that, with the above delivery scheme, every user is able to decode the file it demands.
First, we note that when $T \cap U = \emptyset$, the user $U$ does not have access to any cache storing the subfile indexed by $T$. A user $U$ has access to a subfile indexed by $T$ iff $T \cap U \neq \emptyset$. This is because, when $T \cap U \neq \emptyset$, there exists a cache $c \in T \cap U$ through which $U$ can access the subfile indexed by $T$.
Consider the delivery algorithm mentioned above. We now argue that each user can successfully recover its requested file. Consider the transmission by the server
$$\bigoplus_{U' \subset S,\ |U'| = z} W_{d_{U'},\, S \setminus U'}$$
corresponding to the subset $S$ of caches, with $|S| = t + z$. Consider a user $U \subset S$. The user already has access to the subfiles $W_{d_{U'},\, S \setminus U'}$ for any other user $U' \subset S$, since $U \not\subseteq U'$ implies $U \cap (S \setminus U') \neq \emptyset$. Hence it can retrieve the subfile $W_{d_U,\, S \setminus U}$ from the transmission corresponding to $S$. Likewise, from all such transmissions corresponding to subsets $S \supset U$ with $|S| = t + z$, user $U$ gets all the missing subfiles of $W_{d_U}$ sent over the shared link. Since this is true for every such subset $S$, any user can recover all its missing subfiles.
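The full placement–delivery–decoding chain can be simulated end to end on toy parameters. The sketch below assumes hypothetical values $C = 4$, $z = 2$, $t = 1$, uses one-byte stand-ins for subfiles, forms one XOR per $(t+z)$-sized subset $S$, and checks that every user recovers its demanded file.

```python
from itertools import combinations

C, z, t = 4, 2, 1                        # hypothetical toy parameters
caches = range(1, C + 1)
users = list(combinations(caches, z))    # each user = a distinct z-subset of caches
N = len(users)                           # one file per user, all demands distinct
tsets = list(combinations(caches, t))    # subfile indices: t-subsets of [C]

# One-byte stand-ins for subfile W_{n,T} of file n.
sub = {(n, T): bytes([(17 * n + sum(T)) % 256]) for n in range(N) for T in tsets}
demand = {U: i for i, U in enumerate(users)}   # user U requests file d_U = i

def xor(blocks):
    out = bytes(len(blocks[0]))          # all-zero block of the right length
    for blk in blocks:
        out = bytes(x ^ y for x, y in zip(out, blk))
    return out

# Delivery: one transmission per (t+z)-sized subset S of caches.
transmissions = {}
for S in combinations(caches, t + z):
    terms = [sub[(demand[U], tuple(sorted(set(S) - set(U))))]
             for U in combinations(S, z)]
    transmissions[S] = xor(terms)

# Decoding: user U reads subfile T from its caches when T ∩ U ≠ ∅; for the rest
# it cancels the interfering terms (all readable from its caches, since
# (S \ V) ∩ U ≠ ∅ for V ≠ U) from each transmission with U ⊂ S.
for U in users:
    got = {T: sub[(demand[U], T)] for T in tsets if set(T) & set(U)}
    for S, tx in transmissions.items():
        if set(U) <= set(S):
            interference = [sub[(demand[V], tuple(sorted(set(S) - set(V))))]
                            for V in combinations(S, z) if V != U]
            got[tuple(sorted(set(S) - set(U)))] = xor([tx] + interference)
    assert got == {T: sub[(demand[U], T)] for T in tsets}
print("all", N, "users decode")
```

The internal assertion fails if any user misses a subfile, so running the script verifies the decodability argument for these parameters.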
For every integer $t$ with $1 \le t \le C - z$, the above placement and delivery results in a multi-access scheme with $C$ caches, access degree $z$, $K = \binom{C}{z}$ users, subpacketization $\binom{C}{t}$, coding gain $\binom{t+z}{z}$, and rate $R = \binom{C}{t+z}\big/\binom{C}{t}$.
According to the proposed scheme, the size of each subpacket is $1\big/\binom{C}{t}$. The total number of transmissions is the total number of choices of $S$, i.e., $\binom{C}{t+z}$. The coding gain, defined as the total number of users benefited by each transmission, is the number of $z$-sized subsets (users) we can choose from a particular $S$. Thus the rate and the coding gain in this scheme are
$$R = \binom{C}{t+z}\Big/\binom{C}{t}, \qquad g = \binom{t+z}{z}.$$
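The rate and coding-gain expressions can be wrapped in two small helpers; the hypothetical parameters $C = 8$, $z = 2$, $t = 3$ below are chosen for illustration only. As a consistency check, setting $z = 1$ (each user owns one cache) recovers the MAN rate $(K - t)/(t + 1)$ with $K = C$ users.

```python
from math import comb
from fractions import Fraction

def rate(C, z, t):
    # One transmission per (t+z)-subset, each of size 1/C(C, t) of a file.
    return Fraction(comb(C, t + z), comb(C, t))

def coding_gain(z, t):
    # Users served per transmission: the z-subsets of a (t+z)-subset.
    return comb(t + z, z)

# Hypothetical parameters: C = 8 caches, access degree z = 2, t = 3.
C, z, t = 8, 2, 3
print(rate(C, z, t), coding_gain(z, t))   # 1 10

# Special case z = 1 recovers the MAN rate (K - t)/(t + 1) with K = C users.
assert rate(C, 1, t) == Fraction(C - t, t + 1)
```

Exact rational arithmetic via `Fraction` avoids floating-point comparison issues when checking such identities.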
The proposed scheme can be designed for any integer $t$, $1 \le t \le C - z$, and the rate points in between can be achieved through memory sharing. So the proposed scheme exists for any multi-access network with $N$ files, $C$ caches equipped with memories of size $M = Nt/C$ file units each, and access degree $z$.
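Memory sharing between the two neighbouring integer points can be sketched as follows: to serve a memory ratio $M/N$ strictly between $t/C$ and $(t+1)/C$, an $a$-fraction of every file is cached with parameter $t$ and the rest with $t+1$, giving the corresponding convex combination of the two rates. All parameter values below are hypothetical illustrations.

```python
from fractions import Fraction
from math import comb

def point(C, z, t):
    """(M/N, R) pair achieved by the proposed scheme at integer parameter t."""
    return Fraction(t, C), Fraction(comb(C, t + z), comb(C, t))

def memory_share(C, z, M_over_N):
    # Convex combination of the neighbouring integer points t and t+1:
    # an a-fraction of every file is placed with parameter t, the rest with t+1.
    t = int(M_over_N * C)                  # largest integer point below M/N
    (m0, r0), (m1, r1) = point(C, z, t), point(C, z, t + 1)
    a = (m1 - M_over_N) / (m1 - m0)        # weight on the t-point
    return a * r0 + (1 - a) * r1

# Hypothetical network: C = 8 caches, z = 2, M/N = 5/16 (between t = 2 and t = 3).
print(memory_share(8, 2, Fraction(5, 16)))   # 7/4
```

Here $M/N = 5/16$ sits midway between $2/8$ and $3/8$, so the achieved rate is the midpoint of $R(t{=}2) = 5/2$ and $R(t{=}3) = 1$.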
Note that in this setup, since every user is associated with a unique subset of caches, even if some users leave the network, the transmissions involving the remaining users continue to hold. The only constraint is that any user in the network should be uniquely associated with a $z$-sized subset of caches. So our setup is dynamic in the sense that users can join and leave the system whenever they want, provided the association with the caches is unique. $K = \binom{C}{z}$ is simply the maximum number of users, associated with distinct subsets of caches, that can be supported by the described placement and delivery scheme.
In all the multi-access schemes in the literature, except the one derived from cross resolvable designs, the caches connected to any user store disjoint contents. As a consequence, in those schemes the rate becomes zero when $zM/N = 1$, since a user then gets all the files from the caches connected to it. However, in those schemes it is possible to enforce the constraint of making caches store disjoint contents only because the number of users in the network equals the number of caches. In the setup considered here, since the number of users is more than the number of caches, the same subfiles can be stored in multiple caches connected to a user, and hence the rate does not become zero when $zM/N = 1$. Though this redundancy does appear like a waste of memory, since the number of users supported in the network is large, multicasting opportunities increase, resulting in low per user rates.
In the proposed scheme, the contents placed in the caches connected to any user are disjoint when $t = 1$, since in this special case the contents placed in the caches themselves are disjoint. It can be noted that $t = 1$ is the only case when the contents accessible to a user become disjoint, unlike the existing schemes in the literature, where the caches connected to a user always hold disjoint contents, with the exception of the CRD coded caching scheme.
In the examples below, we assume that the users are numbered by lexicographically ordered subsets. For example, if $C = 4$ and $z = 2$, then User 1 corresponds to the subset $\{1,2\}$, User 2 to $\{1,3\}$, User 3 to $\{1,4\}$, User 4 to $\{2,3\}$, User 5 to $\{2,4\}$, and User 6 to $\{3,4\}$. The request vector $d$ denotes the tuple containing the demands of the users. The following three examples correspond to three different parameter settings.
Consider a multi-access setup with $C = 4$ caches and cache access degree $z = 2$. We allow all possible combinations of access to caches, so the number of users is $K = \binom{4}{2} = 6$. For $t = 1$ the subpacketization is $\binom{4}{1} = 4$. The subfiles are $W_{n,\{1\}}$, $W_{n,\{2\}}$, $W_{n,\{3\}}$, $W_{n,\{4\}}$. The cache placement is:
Let the request vector be . The transmissions are:
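For parameters consistent with the four subfiles listed above ($C = 4$, $z = 2$, $t = 1$, an assumption for this sketch), the four server transmissions, one per $3$-sized subset $S$, can be generated symbolically. The demands $d_U$ are left generic since the request vector is not reproduced here.

```python
from itertools import combinations

C, z, t = 4, 2, 1   # assumed parameters matching the example above
lines = []
for S in combinations(range(1, C + 1), t + z):
    # One XOR term per user U ⊂ S, carrying subfile index S \ U.
    terms = ["W(d{%s},{%s})" % (",".join(map(str, U)),
                                ",".join(map(str, sorted(set(S) - set(U)))))
             for U in combinations(S, z)]
    lines.append(" ⊕ ".join(terms))
print("\n".join(lines))
```

Each printed line serves $\binom{t+z}{z} = 3$ users, and each user appears in exactly two of the four transmissions, recovering its two missing subfiles.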
Consider a multi-access setup with caches and cache access degree . The number of users is and the subpacketization is . The subfiles are , , , , , , , , , . The cache placement is:
Let the request vector be . The transmissions are:
Consider a multi-access setup with caches and cache access degree . The number of users is and the subpacketization is . The subfiles are , , , , , , , , , . The cache placement is:
Let the request vector be . The transmissions are:
III Performance Analysis
In this section we compare the performance of the proposed scheme with the schemes available in the literature for multi-access coded caching.
III-A Comparison with the HKD Scheme
In Fig. (a), the proposed scheme and the HKD scheme are compared in terms of per user rate with respect to , for , and for different values of . The expression for the rate of the centralized equivalent of the HKD scheme is given in (1). Note that the access degree should divide the number of caches for the centralized HKD scheme to exist. The number of users in the HKD scheme is determined by these parameters. Fig. (b) depicts the variation of per user rate with respect to for the case when . From Fig. (a), it is seen that the proposed scheme achieves a lower per user rate than the HKD scheme when . For the case in Fig. (b), it is seen that the rate of the HKD scheme becomes zero, while the per user rate of the proposed scheme is slightly higher. However, this is expected due to the cyclic wraparound topology considered in the HKD scheme and the placement in the caches, such that a user gets all the files when .
For the centralized equivalent of the HKD scheme, the subpacketization is given by
In Fig. (a), the variation of subpacketization with respect to is studied, keeping the access degree constant for both schemes, and the comparison is made for different values of . In Fig. (b), the variation of subpacketization with respect to is studied, keeping constant for both schemes, and the comparison is made for different values of . It is seen that the subpacketization levels of the proposed scheme are significantly lower than those of the HKD scheme.
III-B Comparison with the SPE Scheme
In Fig. 4, the proposed scheme and the SPE scheme are studied in terms of per user rate with respect to , keeping the access degree the same for both schemes. The expression for the rate of the SPE scheme is given in Theorem 1 of . For the per user rate, it is normalized by the number of users (for the SPE scheme, the number of users equals the number of caches).
Since the SPE scheme exists only for certain parameter values, only these specific points are plotted for comparison. From Fig. 4, it is seen that the proposed scheme supports lower per user rates.
The next comparison is in terms of subpacketization. In Fig. 5, the variation of subpacketization for the two schemes with respect to the number of users is studied, for different values of . It is seen that, apart from supporting a large number of users, lower subpacketization levels are obtained with the proposed scheme when compared to the SPE scheme.
III-C Comparison with the RK Scheme
In this subsection, the proposed scheme is compared with the RK scheme by varying and noticing its effect on per user rate. For the RK scheme, the normalized lower bound on the rate is plotted. The expression for the lower bound is taken from Theorem 3 in . Since this lower bound is valid for , the comparison in Fig. 6 holds only for . Also, in the RK scheme the number of users equals the number of caches. The schemes are compared for different values of . The per user rate of the proposed scheme is found to be better than that of the RK scheme for the range shown in Fig. (a).
In Fig. (a), the two schemes are compared keeping $M/N$ constant, for different values of . In Fig. (b), the two schemes are compared keeping the access degree the same in both schemes, for different values of . From these plots it can be concluded that significantly lower subpacketization levels can be attained with the proposed scheme when compared with the RK scheme.
III-D Comparison with the CRD Scheme
The scheme derived from the cross resolvable designs arising from affine planes is compared with the proposed scheme in this subsection. The rate per user for the CRD scheme from affine planes is a function of $q$, where $q$ is a prime or prime power. It can be seen from Fig. 8 that the rate per user of the proposed scheme is significantly less than that of the scheme derived from CRDs.
For the multi-access scheme from CRDs derived from affine planes, the system parameters are functions of $q$, where $q$ is a prime or prime power. In Fig. (a), the number of caches and the memory fraction $M/N$ are kept the same for both schemes, and the two schemes are compared in terms of subpacketization levels. Since the parameters of the CRD scheme from affine planes depend only on $q$, the subpacketization values are plotted with respect to $q$. In Fig. (b), the subpacketization levels are plotted with respect to the number of users. From Fig. (a) and Fig. (b), it can be concluded that the proposed scheme does not perform well in terms of subpacketization levels when compared with the CRD scheme. But this is explained by the fact that the CRD scheme loses out in rate and gains in subpacketization. Also, the number of users supported in the proposed scheme is more than that of the CRD scheme.
III-E Comparison with the Cheng-Liang-Wan-Zhang-Caire (CLWZC) Scheme
In this subsection, the proposed scheme is compared with the CLWZC scheme by varying and noticing its effect on per user rate. The expression for the rate of the CLWZC scheme is taken from Theorem 1 in . The schemes are compared for different values of . The per user rate of the proposed scheme is found to be better than that of the CLWZC scheme for the range shown in Fig. (a).
In Fig. (b), the two schemes are compared keeping $M/N$ constant, for different values of . In Fig. (a), the two schemes are compared keeping the access degree the same in both schemes, for different values of . From these plots it can be seen that significantly lower subpacketization levels can be attained with the proposed scheme when compared with the CLWZC scheme.
In Table I, the proposed scheme is compared with the CLWZC scheme. It can be observed that, for the same access degree, the proposed scheme achieves a lower subpacketization level and supports a larger number of users than the CLWZC scheme, for the same fraction $M/N$ of each file at each cache, the same fraction of each file each user has access to, and the same rate $R$.
|Parameters|CLWZC Scheme|Proposed Scheme|
|Number of Caches| | |
|Number of Users| | |
|Rate (R)|1|1|
We conclude that the proposed scheme is better than most existing schemes in the literature, since the number of users supported is very large at low subpacketization levels. In the regimes studied above, the rate per user achieved is lower than that of the existing schemes. The proposed scheme exists for any multi-access network with $N$ files, $C$ caches equipped with memories of size $M$ file units each, and access degree $z$. The scheme can be designed for any integer $t$, and the rate points in between can be achieved through memory sharing, allowing convenience in designing.
This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan.
-  M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
-  Q. Yan, M. Cheng, X. Tang and Q. Chen, “On the Placement Delivery Array Design for Centralized Coded Caching Scheme," in IEEE Transactions on Information Theory, vol. 63, no. 9, pp. 5821-5833, Sept. 2017.
-  P. Krishnan, “Coded Caching via Line Graphs of Bipartite Graphs," 2018 IEEE Information Theory Workshop (ITW), Guangzhou, 2018, pp. 1-5
-  L. Tang and A. Ramamoorthy, “Coded Caching Schemes With Reduced Subpacketization From Linear Block Codes," in IEEE Transactions on Information Theory, vol. 64, no. 4, pp. 3099-3120, April 2018.
-  S. Agrawal, K. V. Sushena Sree and P. Krishnan, “Coded Caching based on Combinatorial Designs," 2019 IEEE International Symposium on Information Theory (ISIT), Paris, France, 2019, pp. 1227-1231
-  J. Wang, M. Cheng, Q. Yan and X. Tang, “Placement Delivery Array Design for Coded Caching Scheme in D2D Networks," in IEEE Transactions on Communications, vol. 67, no. 5, pp. 3388-3395, May 2019.
-  Q. Yan, M. Wigger and S. Yang, “Placement Delivery Array Design for Combination Networks with Edge Caching," 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, 2018, pp. 1555-1559.
-  B. Asadi and L. Ong, "Centralized Caching with Shared Caches in Heterogeneous Cellular Networks," 2019 IEEE 20th International Workshop on Signal Processing Advances in Wireless Communications (SPAWC), Cannes, France, 2019, pp. 1-5.
-  E. Parrinello, A. Unsal, and P. Elia, “Coded caching with shared caches: Fundamental limits with uncoded prefetching," arXiv preprint arXiv:1809.09422, 2018.
-  A. M. Ibrahim, A. A. Zewail and A. Yener, “Coded Placement for Systems with Shared Caches," ICC 2019 - 2019 IEEE International Conference on Communications (ICC), Shanghai, China, 2019, pp. 1-6.
-  U. Niesen and M. A. Maddah-Ali, “Coded Caching With Nonuniform Demands," in IEEE Transactions on Information Theory, vol. 63, no. 2, pp. 1146-1158, Feb. 2017
-  J. Hachem, N. Karamchandani, and S. N. Diggavi, “Coded caching for multi-level popularity and access," in IEEE Transactions on Information Theory, vol. 63, no. 5, pp. 3108-3141, May 2017.
-  K. S. Reddy and N. Karamchandani, “On the Exact Rate-Memory Trade-off for Multi-access Coded Caching with Uncoded Placement," 2018 International Conference on Signal Processing and Communications (SPCOM), Bangalore, India, 2018, pp. 1-5.
-  K. S. Reddy and N. Karamchandani, “Rate-memory trade-off for multi-access coded caching with uncoded placement,"in IEEE International Symposium on Information Theory (ISIT),2019
-  K. S. Reddy and N. Karamchandani, “Rate-Memory Trade-off for Multi-Access Coded Caching With Uncoded Placement," in IEEE Transactions on Communications, vol. 68, no. 6, pp. 3261-3274, June 2020
-  B. Serbetci, E. Parrinello and P. Elia, “Multi-access coded caching: gains beyond cache-redundancy," 2019 IEEE Information Theory Workshop (ITW), Visby, Gotland, 2019.
-  D. Katyal, P. Nayak M. and B. S. Rajan, “Multi-access Coded Caching Schemes From Cross Resolvable Designs," available on arXiv:2005.13731 [cs.IT]; accepted for publication in IEEE Transactions on Communications.
-  M. Cheng, D. Liang, K. Wan, M. Zhang, and G. Caire, “A Novel Transformation Approach of Shared-link Coded Caching Schemes for Multiaccess Networks," available on arXiv:2012.04483v2 [cs.IT].