Decentralized and Online Coded Caching with Shared Caches: Fundamental Limits with Uncoded Prefetching

01/23/2021
by   Elizabath Peter, et al.

The decentralized coded caching scheme, introduced by Maddah-Ali and Niesen, assumes that the caches are filled without any coordination. This work presents a decentralized coded caching scheme (under the assumption of uncoded placement) for a shared-cache network, where each cache serves multiple users. Each user has access to exactly one cache, and the number of caches is at most the number of users. For this setting, we derive the optimal worst-case delivery time for any user-to-cache association profile, where each such profile describes the number of users served by each cache. Optimality is shown using an index-coding-based converse. Further, we improve the delivery scheme to accommodate redundant demands, and we propose an optimal linear error-correcting delivery scheme for the worst-case demand scenario.

Next, we consider the Least Recently Sent (LRS) online coded caching scheme, in which the caches are updated based on the sequence of demands made by the users. A cache update occurs whenever a demanded file is not already partially cached at the users: the least recently sent file is replaced with the new file. The least recently sent file, however, need not be unique. In that case, some ordering of the partially cached files is required, since centralized coordination does not exist and cannot be assumed. If each user instead evicted one of the least recently sent files at random, the caches would no longer hold the same content, and the next delivery phase would not serve its purpose. We therefore suggest a modification of the scheme that incorporates a fixed ordering of the files. Finally, all of the above results with shared caches are extended to the online setting.
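The deterministic tie-breaking idea behind the suggested LRS modification can be illustrated with a minimal sketch. The class name, the round counter, and the use of file identifiers as the fixed tie-breaking order are illustrative assumptions, not the paper's exact construction; a real coded caching scheme stores random bit-subsets of each file, which is abstracted here to whole file identifiers.

```python
# Minimal sketch of an LRS-style cache update with deterministic
# tie-breaking. Assumption: "last sent" is tracked per delivery round,
# and ties are broken by a fixed ordering on file ids, so every cache
# evicts the same file without any coordination.

class LRSCache:
    def __init__(self, capacity):
        self.capacity = capacity  # number of files that can be partially cached
        self.cached = {}          # file id -> round in which it was last sent
        self.round = 0            # delivery-round counter

    def serve(self, demanded):
        """Process one delivery round serving a set of demanded file ids."""
        self.round += 1
        for f in sorted(set(demanded)):
            if f in self.cached:
                # File already partially cached: refresh its 'last sent' round.
                self.cached[f] = self.round
            else:
                # Demanded file not partially cached: evict the least
                # recently sent file. Ties on the round number are broken
                # by the fixed ordering on file ids (the min over the
                # (round, id) pair), making the eviction deterministic.
                if len(self.cached) >= self.capacity:
                    victim = min(self.cached,
                                 key=lambda g: (self.cached[g], g))
                    del self.cached[victim]
                self.cached[f] = self.round
```

Because every cache applies the same `(round, id)` ordering, all caches evict the same victim, which is exactly what a random eviction rule would fail to guarantee.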
