Cache-Aided Modulation for Heterogeneous Coded Caching over a Gaussian Broadcast Channel

Coded caching is an information-theoretic scheme that reduces peak-hour traffic by partially prefetching files into the users' local storage during off-peak hours. This paper considers heterogeneous decentralized caching systems, where both the users' caches and the content library files may have distinct sizes. The server communicates with the users through a Gaussian broadcast channel. The main contribution of this paper is a novel modulation strategy that maps the multicast messages generated in the coded caching delivery phase to the symbols of a signal constellation, such that users can leverage their cached content to demodulate the desired symbols with higher reliability. For the sake of simplicity, in this paper we focus only on uncoded modulation and symbol-by-symbol error probability. However, our scheme, in conjunction with multilevel coded modulation, can be extended to channel coding over larger block lengths.






I. Introduction

Coded caching, originally proposed by Maddah-Ali and Niesen (MAN) in their seminal work [maddah2014fundamental], provides an additional coded multicast gain compared to conventional uncoded caching. In the MAN model, a server has access to a library of $N$ files and is connected to $K$ users through an error-free shared link of unit capacity. Each user is equipped with a cache able to store the equivalent of $M$ files. The MAN coded caching scheme consists of two phases: placement and delivery. During the placement phase, users partially store files from the library in their cache memories; of course, the placement is agnostic of the future user demands. After the user demands are revealed, the server broadcasts a sequence of multicast messages to the users. Such messages are computed as a function of the user demands, of the library files, and of the user cache contents, such that after receiving the multicast messages all users obtain their requested file with zero error probability (or vanishing error probability in the limit of large file size). The placement can be done in several manners, distinguished by two characteristics: coded vs. uncoded, and centralized vs. decentralized. Uncoded placement refers to the fact that segments of the library files are stored directly in the caches, rather than more general functions thereof. It is known that for the MAN shared-link scenario uncoded placement is optimal within a factor of 2 [exactrateuncoded]; therefore, in this paper we consider only uncoded placement. A coded caching system is called centralized [maddah2014fundamental] if the server assigns the file segments to the users as a function of the number of users in the system. In contrast, in the decentralized case [decMAN], each user individually and independently of the others fills up its cache with bits from the library files, without knowing how many other users are in the system or which segments have already been cached by the other users.
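For intuition, the decentralized placement described above can be sketched in a few lines of Python (a toy sketch; the file names, sizes, and the helper `decentralized_placement` are our own illustration, not the paper's notation):

```python
import random

def decentralized_placement(file_sizes, m, seed=None):
    """Decentralized uncoded placement: a user independently caches
    each library bit with probability m, with no coordination with
    (or knowledge of) the other users in the system."""
    rng = random.Random(seed)
    cache = set()
    for name, size in file_sizes.items():
        for bit in range(size):
            if rng.random() < m:
                cache.add((name, bit))  # cache bit `bit` of file `name`
    return cache

# Each user fills its cache independently of how many users exist.
library = {"A": 1000, "B": 1000}   # two files of 1000 bits each
cache1 = decentralized_placement(library, m=0.5, seed=1)
cache2 = decentralized_placement(library, m=0.5, seed=2)
```

On average each user stores a fraction `m` of every file, but the cached bit sets of different users overlap only partially, which is exactly what the delivery phase exploits.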
A vast class of decentralized caching placements consists of random independent caching: the set of library bits $Z_k$ cached by each user $k$ is a random variable distributed according to some distribution independent of the number of users $K$, and the $Z_k$ are statistically independent across users.

In practice, it may be more realistic to consider the case where users and files have distinct sizes (heterogeneous caching systems). In [zeropadding], the authors proposed a decentralized coded caching scheme with varying cache sizes by applying zero-padding to subfiles of different lengths, so that they can be encoded into a joint multicast message. Coded caching with distinct file sizes and uncoded placement was originally investigated in [zhangfirst], where users could request a file multiple times. The authors proposed a caching scheme for different file sizes based on random independent caching, where the bits of each file are cached independently at random with a probability proportional to the file size. Further improvements on heterogeneous caching can be found in [zhangclosinggap, yenerd2d, OptimalDecFile]. A common point of the existing heterogeneous caching schemes is that the delivery phases are based on the clique-covering method, which is a direct extension of the MAN delivery. (Each transmitted multicast message in the delivery phase is a binary sum of a set of subfiles and is useful to a subset of users; each such user knows all subfiles in the sum except one, so it can decode the remaining subfile.)
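The clique-covering delivery with zero-padding can be illustrated with a minimal sketch (the helper `xor_with_zero_padding` and the subfile values are hypothetical):

```python
def xor_with_zero_padding(subfiles):
    """Form one clique-covering multicast message from subfiles of
    distinct lengths (heterogeneous caching): shorter subfiles are
    zero-padded to the longest one, then all are XORed bitwise.
    Each target user caches every subfile in the sum except its own,
    so it can recover the missing one by XORing out what it knows."""
    length = max(len(s) for s in subfiles)
    padded = [s + [0] * (length - len(s)) for s in subfiles]
    msg = [0] * length
    for s in padded:
        msg = [a ^ b for a, b in zip(msg, s)]
    return msg

# Three subfiles of distinct sizes destined to three different users.
s1, s2, s3 = [1, 0, 1, 1], [0, 1], [1, 1, 0]
msg = xor_with_zero_padding([s1, s2, s3])
```

A user that has cached `s2` and `s3` recovers `s1` by XORing the message with the zero-padded versions of `s2` and `s3`. Note the padded positions carry no new information, which is precisely the redundancy our modulation scheme exploits.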

In this paper, we consider the implementation of a heterogeneous decentralized coded caching system over a Gaussian broadcast channel, which is a more realistic model for the physical layer than the error-free capacitated shared link. Our main focus is to map the coded packets generated in the caching delivery phase to a signal constellation. (This modulation-with-side-information strategy was originally proposed in [latticeindexcoding] for index coding, where the authors considered how to modulate index codes. The relationship between index coding and coded caching was discussed in [maddah2014fundamental, ontheoptimality]; the main difference is that the stored content of each user is fixed in the index coding problem, while the cache of each user can be designed in the caching problem, such that the "worst" cache configurations can be avoided.) In heterogeneous caching systems, the subfiles in each multicast message generated by a clique-covering method may have distinct sizes, i.e., there is some inherent redundancy in each multicast message. The idea is to leverage this redundancy in the modulation/demodulation step, such that the average symbol error rate of the users can be reduced. Besides introducing this novel caching-modulation problem, our main contributions are:

  • We propose a novel modulation/demodulation strategy in which users leverage their cached content to demodulate the received symbols more reliably.

  • We show that the set-partitioning labelling proposed in [multilevel, ungerboeckpart1, ungerboeckpart2, Forney] is optimal (i.e., it maximizes the minimum distance) for our modulation-with-side-information strategy.

  • We prove that the proposed cache-aided modulation scheme outperforms the conventional modulation scheme with zero padding.
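To give intuition for the second contribution, the following toy sketch (our own, on a one-dimensional PAM constellation; the paper considers QAM/PSK) shows the key property of Ungerboeck-style set partitioning: each partition level doubles the minimum distance within the resulting subsets.

```python
def set_partition_min_distances(points):
    """Set partitioning of a one-dimensional (PAM) constellation:
    at each level, split every subset into two by taking alternating
    points, which doubles the intra-subset minimum distance.
    Returns the minimum distance at each partition level."""
    def min_dist(pts):
        pts = sorted(pts)
        return min(b - a for a, b in zip(pts, pts[1:]))

    subsets = [sorted(points)]
    dists = [min_dist(points)]
    while len(subsets[0]) > 2:
        new = []
        for s in subsets:
            new.append(s[0::2])   # even-indexed points of the subset
            new.append(s[1::2])   # odd-indexed points of the subset
        subsets = new
        dists.append(min_dist(subsets[0]))
    return dists

# 8-PAM: the minimum distance doubles at every partition level.
print(set_partition_min_distances([-7, -5, -3, -1, 1, 3, 5, 7]))
```

A user whose side information pins down the first label bits is effectively demodulating over one of these subsets, hence at a larger minimum distance.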

II. System Model and Problem Setting

II-A. System Model

We consider a content delivery system where a server has access to a library of $N$ independent files with distinct sizes. For each $n \in [N]$, file $W_n$ has $F_n$ bits, where $F = \sum_{n=1}^{N} F_n$ is the total library size in bits. The server (e.g., a wireless base station) transmits a signal $x$ to the $K$ users; user $k$ receives $y_k = \sqrt{\mathrm{SNR}_k}\, x + z_k$, where $z_k$ is the Additive White Gaussian Noise (AWGN) at the $k$-th receiver, with unit power, $x$ is also normalized to have unit power, and $\mathrm{SNR}_k$ denotes the receiver Signal-to-Noise Ratio (SNR). Without loss of generality, we adopt the standard complex baseband discrete-time model; since we focus on symbol-by-symbol demodulation, we can omit the discrete time index and simply write $y_k = \sqrt{\mathrm{SNR}_k}\, x + z_k$ for a generic symbol at user $k$'s receiver, and use $\mathbf{x}$ and $\mathbf{y}_k$ to denote the whole transmit and receive sequences over many symbols. Each user $k \in [K]$ has a cache memory of size $M_k$ bits. We define the normalized cache size as $m_k = M_k / F$. The users have different cache sizes; without loss of generality, $m_1 \leq m_2 \leq \cdots \leq m_K$.
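One channel use of this broadcast model can be simulated as follows (a real-valued toy sketch; the paper uses the complex baseband model, and the function name and SNR values are our own):

```python
import math
import random

def awgn_broadcast(x, snrs, seed=0):
    """One channel use of the Gaussian broadcast channel: user k
    receives y_k = sqrt(SNR_k) * x + z_k, where x has unit power
    and z_k is unit-power Gaussian noise (real-valued for simplicity)."""
    rng = random.Random(seed)
    return [math.sqrt(snr) * x + rng.gauss(0.0, 1.0) for snr in snrs]

# Three users with increasing SNR observe the same transmitted symbol.
ys = awgn_broadcast(x=1.0, snrs=[1.0, 4.0, 16.0])
```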

The caching system comprises a placement and a delivery phase. In the placement phase, users store content from the library in a decentralized manner, without any knowledge of the demands. We define $\phi_k$ as the caching function of user $k$, which maps the library to the cache content $Z_k = \phi_k(W_1, \ldots, W_N)$ of user $k$; the content of all caches is denoted by $\mathbf{Z} = (Z_1, \ldots, Z_K)$. In the delivery phase, each user requests one file from the library. We denote the file demanded by user $k$ by $d_k$, and the demands of all users by $\mathbf{d} = (d_1, \ldots, d_K)$. Given $(\mathbf{d}, \mathbf{Z})$, the server sends the codeword $\mathbf{x} \in \mathcal{X}^T$, where $\mathcal{X}$ is a signal constellation (i.e., a discrete set of points in the complex signal space) and $T$ is the broadcast codeword length in terms of constellation symbols. (In this paper, for the sake of simplicity, we assume the modulation is uncoded and consider classical QAM/PSK signal constellations.) Upon receiving $\mathbf{y}_k$, user $k$ needs to decode $W_{d_k}$ from $\mathbf{y}_k$ and $Z_k$.
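The benefit of demodulating with cache side information can be seen in a toy sketch (our own illustration on 4-PAM with an alternating least-significant label bit, not the paper's exact scheme): a user whose cache reveals one label bit searches only half of the constellation, whose minimum distance is twice that of the full constellation.

```python
def demodulate(y, points):
    """Minimum-distance demodulation (ML for AWGN) over a candidate set."""
    return min(points, key=lambda p: abs(y - p))

# 4-PAM with an alternating least-significant label bit, as produced
# by one level of set partitioning (assumed labelling for illustration).
points = [-3, -1, 1, 3]
lsb = {-3: 0, -1: 1, 1: 0, 3: 1}

tx = 1                       # transmitted point; its label LSB is 0
y = tx + 1.2                 # received symbol after a large noise hit
blind = demodulate(y, points)                              # no side info
aided = demodulate(y, [p for p in points if lsb[p] == 0])  # LSB known
```

Here the blind demodulator decides `3` (a symbol error), while the cache-aided demodulator, restricted to the subset `{-3, 1}` of minimum distance 4, still recovers the transmitted point `1`.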

Given the cache sizes $(M_1, \ldots, M_K)$ of the users and the file sizes $(F_1, \ldots, F_N)$, we design a shared-link caching scheme that fills the users' caches in the placement phase and generates the broadcast messages, of some total size in bits, in the delivery phase.