An Optimal Linear Error Correcting Delivery Scheme for Coded Caching with Shared Caches

In the classical coded caching setting, each user has one dedicated cache. This was generalized to a shared cache setting, and the exact expression for the worst case rate was derived in [E. Parrinello, A. Unsal, P. Elia, "Fundamental Limits of Caching in Heterogeneous Networks with Uncoded Prefetching," available on arXiv:1811.06247 [cs.IT], Nov. 2018]. For this setting, an optimal linear error correcting delivery scheme is proposed and an expression for the peak rate is established. Furthermore, a new delivery scheme is proposed, which gives an improved rate for the case when the demands are not distinct.


I Introduction

The technique of coded caching introduced in [1] helps in reducing the peak traffic experienced by networks. This is achieved by making a part of the content locally available at the users during non-peak periods. In [1], it is shown that apart from the local caching gain obtained by placing contents at user caches before the demands are revealed, a global caching gain can be obtained by coded transmissions. The scheme in [1] is a centralized coded caching scheme, where all users are linked to a single fixed server. Since then, there have been many extensions, such as the decentralized scheme [2], non-uniform demands [3] and online coded caching [4].

A coded caching scheme involves two phases: a placement phase and a delivery phase. In the placement phase, or prefetching phase, each user can fill its local cache memory using the entire database. During this phase there is no bandwidth constraint, as the network is not congested, and the only constraint is the memory. The delivery phase is carried out once the users reveal their demands. During the delivery phase only the server has access to the file database, and the constraint is the bandwidth, as the network is congested in this phase. During the placement phase, parts of the files have to be judiciously cached at each user in such a way that the rate of transmission is reduced during the delivery phase. The prefetching can be done with or without coding. If no coding of parts of files is done during prefetching, the prefetching scheme is referred to as uncoded prefetching [1, 5]. If coding is done during the prefetching stage, then the prefetching scheme is referred to as coded prefetching [6, 7, 8, 9].

An extension of the coded caching problem involving heterogeneous networks is considered in [10], where multiple users share a common cache. Each user has access to a helper cache, which is potentially accessed by multiple users. The scheme introduced in [10] is referred to as the Shared Cache (SC) scheme throughout this paper. The corresponding prefetching and delivery schemes are referred to as the SC prefetching scheme and the SC delivery scheme respectively. In addition to the cache placement and delivery phases, there is an intermediate step: the user-to-cache association phase. The exact expression for the rate in this scenario, under the assumption of uncoded placement, is derived in [10]. That rate expression assumes the worst case demand, in which all the files are demanded. In our work, a new delivery scheme is proposed for the non-distinct demand case, which provides an improved rate compared to the SC scheme (Section V).

The error correcting coded caching scheme was introduced in [11, 12]. In this setup, the delivery phase is assumed to be error prone and the placement is assumed to be error free. A similar model, in which the delivery phase takes place over a packet erasure broadcast channel, was considered in [13]. In this work, we consider shared cache systems in which the delivery phase is error prone. An error correcting delivery scheme has to be designed so that each user can decode its demand even in the presence of a given number of transmission errors. In our work, an optimal error correcting delivery scheme is proposed for the worst case demand in the shared cache system.

The main contributions of this paper are as follows:

  • An optimal linear error correcting delivery scheme for coded caching problems with SC prefetching is proposed using techniques from index coding (Section III and Section IV).

  • For the error correcting delivery scheme for coded caching problems with SC prefetching, a closed-form expression for the peak rate is established (Section IV).

  • A new delivery scheme for SC prefetching, valid for all demand cases and having an improved rate compared to the scheme in [10], is proposed (Section V).

In this paper, $\mathbb{F}_q$ denotes the finite field with $q$ elements, where $q$ is a power of a prime, and $\mathbb{F}_q^*$ denotes the set of all non-zero elements of $\mathbb{F}_q$. For any integer $n$, let $[n]$ denote the set $\{1, 2, \ldots, n\}$. For a matrix $A$, $A_i$ denotes its $i$th row. Also, $\binom{n}{k} \triangleq \frac{n!}{k!(n-k)!}$, and $\binom{n}{k} = 0$ if $k > n$. The lower convex envelope of the points $(i, f(i))$, $i \in \{0\} \cup [n]$, for some natural number $n$ is denoted by $\mathrm{Conv}(f(i))$.

A linear $[n, k, d]_q$ code $\mathcal{C}$ over $\mathbb{F}_q$ is a $k$-dimensional subspace of $\mathbb{F}_q^n$ with minimum Hamming distance $d$. A matrix $G$ of size $k \times n$ whose rows are linearly independent codewords of $\mathcal{C}$ is called a generator matrix of $\mathcal{C}$. A linear code can thus be represented using its generator matrix as $\mathcal{C} = \{\mathbf{x}G : \mathbf{x} \in \mathbb{F}_q^k\}$. Let $N_q[k, d]$ denote the length of the shortest linear code over $\mathbb{F}_q$ which has dimension $k$ and minimum distance $d$.
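As a minimal illustration of the generator matrix representation and of the quantity $N_q[k, d]$, the following Python sketch (illustrative only, not taken from the paper) encodes messages with the binary $[7, 4, 3]$ Hamming code and verifies its minimum distance; since no binary linear code of length 6 with dimension 4 achieves distance 3, we have $N_2[4, 3] = 7$.

    import numpy as np

    # Generator matrix of the binary [7, 4, 3] Hamming code (systematic form).
    G = np.array([[1, 0, 0, 0, 1, 1, 0],
                  [0, 1, 0, 0, 1, 0, 1],
                  [0, 0, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def encode(msg, G):
        """Encode a length-k message vector over GF(2) as x * G."""
        return np.mod(msg @ G, 2)

    # Enumerate all codewords and check the minimum Hamming distance.
    codewords = [encode(np.array(list(np.binary_repr(i, 4)), dtype=int), G) for i in range(16)]
    d_min = min(int(c.sum()) for c in codewords if c.any())  # linear code: d_min = min nonzero weight
    print(d_min)  # prints 3, consistent with N_2[4, 3] = 7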

II Preliminaries and Background

To obtain the main results of this paper, we use results on error correcting index coding problems [14]. In this section we recall some of these results and also review the concept of the error correcting coded caching scheme [11]. Furthermore, we review the SC placement and delivery schemes [10].

II-A Index Coding Problem

The index coding problem with side information was introduced in [15]. A single source has $n$ messages $x_1, x_2, \ldots, x_n$, where $x_i \in \mathbb{F}_q$. There are $m$ receivers $R_1, R_2, \ldots, R_m$. Each receiver possesses a subset of the messages as side information. Let $\mathcal{K}_i \subseteq [n]$ denote the set of indices of the messages belonging to the side information of receiver $R_i$. The map $f : [m] \to [n]$ assigns to each receiver the index of the message demanded by it; receiver $R_i$ demands the message $x_{f(i)}$, where $f(i) \notin \mathcal{K}_i$ [14]. The source knows the side information available to each receiver and has to satisfy the demand of every receiver in the minimum number of transmissions. An instance of the index coding problem can be completely characterized by a side information hypergraph [16]. Given an instance of the index coding problem, finding the best scalar linear binary index code is equivalent to finding the min-rank of the side information hypergraph [14], which is known to be an NP-hard problem in general [17, 18, 19].

An index coding problem with $m$ receivers and $n$ messages can be represented by a hypergraph $\mathcal{H}(\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = [n]$ is the set of vertices and $\mathcal{E}$ is the set of hyperedges [16]. Vertex $i$ represents the message $x_i$ and each hyperedge represents a receiver. In [14], the min-rank of a hypergraph $\mathcal{H}$ over $\mathbb{F}_q$ is defined as

$$\kappa(\mathcal{H}) \triangleq \min\left\{\mathrm{rank}_q\big(\{\mathbf{v}_i + \mathbf{e}_{f(i)}\}_{i \in [m]}\big) : \mathbf{v}_i \in \mathbb{F}_q^n, \ \mathrm{supp}(\mathbf{v}_i) \subseteq \mathcal{K}_i\right\},$$

where $\mathbf{e}_j$ denotes the $j$th standard basis vector of $\mathbb{F}_q^n$ and the support of a vector $\mathbf{u} \in \mathbb{F}_q^n$ is defined to be the set $\mathrm{supp}(\mathbf{u}) = \{i \in [n] : u_i \neq 0\}$. The min-rank defined above is the smallest length of a scalar linear index code for the problem. A linear index code of length $\ell$ can be expressed as $\mathbf{c} = \mathbf{x}L$, where $L$ is an $n \times \ell$ matrix over $\mathbb{F}_q$ and $\mathbf{x} = (x_1, x_2, \ldots, x_n)$. The matrix $L$ is said to be the matrix corresponding to the index code.
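The following Python sketch illustrates scalar linear index coding on a toy instance (three binary messages, each receiver demanding one message and having the other two as side information); the encoding matrix $L$ and the instance are chosen only for illustration and are not from the paper.

    import numpy as np

    # Toy index coding instance (hypothetical): receiver i demands x_i, knows the other two.
    x = np.array([1, 0, 1])                 # message vector over GF(2)
    L = np.array([[1], [1], [1]])           # n x ell encoding matrix (ell = 1 transmission)
    c = np.mod(x @ L, 2)                    # broadcast codeword c = x L

    # Receiver 1 decodes x_1 from c and its side information {x_2, x_3}.
    side_info = {1: x[1], 2: x[2]}
    x1_hat = np.mod(c[0] - (L[1, 0] * side_info[1] + L[2, 0] * side_info[2]), 2)
    assert x1_hat == x[0]
    print("decoded x_1 =", int(x1_hat))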

Let $G = (V, E)$ be an undirected graph; a subset of vertices $S \subseteq V$ is called an independent set if $\{u, v\} \notin E$ for all $u, v \in S$. The size of a largest independent set in the graph $G$ is called the independence number of $G$. Dau et al. in [14] extended the notion of independence number to the directed hypergraph corresponding to an index coding problem. For each receiver $i \in [m]$, define the sets

$$\mathcal{Y}_i \triangleq [n] \setminus \big(\{f(i)\} \cup \mathcal{K}_i\big) \quad \text{and} \quad \mathcal{J}_i(\mathcal{H}) \triangleq \big\{\{f(i)\} \cup Y : Y \subseteq \mathcal{Y}_i\big\},$$

and let $\mathcal{J}(\mathcal{H}) \triangleq \bigcup_{i \in [m]} \mathcal{J}_i(\mathcal{H})$. A subset $H$ of $[n]$ is called a generalized independent set in $\mathcal{H}$ if every nonempty subset of $H$ belongs to $\mathcal{J}(\mathcal{H})$. The size of a largest generalized independent set in $\mathcal{H}$ is called the generalized independence number and is denoted by $\alpha(\mathcal{H})$. It is proved in [11] that for any index coding problem,

$$\alpha(\mathcal{H}) \;\leq\; \kappa(\mathcal{H}). \qquad (1)$$

The quantities $\alpha(\mathcal{H})$ and $\kappa(\mathcal{H})$ determine the bounds on the optimal length of error correcting index codes. The error correcting index coding problem with side information was defined in [14]. An index code is said to correct $\delta$ errors if, after at most $\delta$ of the received transmissions are in error, each receiver is still able to decode its demand. A $\delta$-error correcting index code is referred to as a $\delta$-ECIC. An optimal linear $\delta$-ECIC over $\mathbb{F}_q$ is a linear $\delta$-ECIC over $\mathbb{F}_q$ of the smallest possible length $\mathcal{N}_q[\mathcal{H}, \delta]$. Lower and upper bounds on $\mathcal{N}_q[\mathcal{H}, \delta]$ were established in [14]. The lower bound is known as the $\alpha$-bound and the upper bound is known as the $\kappa$-bound. The length of an optimal linear $\delta$-ECIC over $\mathbb{F}_q$ satisfies

$$N_q[\alpha(\mathcal{H}), 2\delta + 1] \;\leq\; \mathcal{N}_q[\mathcal{H}, \delta] \;\leq\; N_q[\kappa(\mathcal{H}), 2\delta + 1]. \qquad (2)$$

The $\kappa$-bound is achieved by concatenating an optimal linear classical error correcting code with an optimal linear index code. Thus, for any index coding problem, if $\alpha(\mathcal{H})$ is the same as $\kappa(\mathcal{H})$, then the concatenation scheme gives optimal error correcting index codes [20, 21, 22, 23].
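The following Python sketch illustrates the concatenation idea on the same toy instance, using the binary length-3 repetition code as the classical error correcting code (an illustrative choice only; the constructions in the paper would use an optimal code of length $N_q[\kappa(\mathcal{H}), 2\delta + 1]$).

    import numpy as np

    # Minimal sketch (toy parameters): index code first, then a classical code on its output.
    x = np.array([1, 0, 1])            # messages
    L = np.array([[1], [1], [1]])      # index code for the toy problem above
    c = np.mod(x @ L, 2)               # index-coded symbols, length kappa = 1

    tx = np.repeat(c, 3)               # classical encoding: [3, 1, 3] repetition per symbol
    tx_err = tx.copy()
    tx_err[1] ^= 1                     # channel introduces one error

    # Receivers first correct the error (majority vote), then decode the index code as before.
    c_hat = np.array([int(tx_err[i:i + 3].sum() > 1) for i in range(0, len(tx_err), 3)])
    assert np.array_equal(c_hat, c)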

II-B Error Correcting Coded Caching Scheme

The error correcting coded caching scheme was proposed in [11]. The server is connected to $K$ users through a shared link which is error prone. The server has access to $N$ files $W_1, W_2, \ldots, W_N$, each of size $F$ bits. Every user has an isolated cache with a memory of $MF$ bits, where $M \in [0, N]$. A prefetching scheme is denoted by $\mathcal{M}$. During the delivery phase, only the server has access to the database. Every user demands one of the $N$ files. The demand vector is denoted by $\mathbf{d} = (d_1, d_2, \ldots, d_K)$, where $d_i$ is the index of the file demanded by user $i$. The number of distinct files requested in $\mathbf{d}$ is denoted by $N_e(\mathbf{d})$. During the delivery phase, the server, informed of the demand $\mathbf{d}$, transmits a function of $W_1, W_2, \ldots, W_N$ over the shared link. Using the cache contents and the transmitted data, each user needs to reconstruct its requested file even if $\delta$ of the transmissions are in error.

For the $\delta$-error correcting coded caching problem, a communication rate $R$ is said to be achievable for demand $\mathbf{d}$ if there exists a transmission of $RF$ bits such that every user is able to recover its desired file even after at most $\delta$ of the transmissions are in error. Let $R(\mathbf{d}, M, \delta)$ denote the minimum achievable rate for a given $\mathbf{d}$, $M$ and $\delta$. The average rate $\bar{R}(M, \delta)$ is defined as the expected minimum rate given $M$ and $\delta$ under a uniformly random demand. Thus

$$\bar{R}(M, \delta) = \mathbb{E}_{\mathbf{d}}\left[R(\mathbf{d}, M, \delta)\right].$$

The average rate $\bar{R}(M, \delta)$ depends on the prefetching scheme $\mathcal{M}$. The minimum average rate $\bar{R}^{*}(M, \delta)$ is the minimum of $\bar{R}(M, \delta)$ over all possible prefetching schemes $\mathcal{M}$. The rate-memory trade-off for the average rate consists of finding the minimum average rate for different memory constraints $M$. Another quantity of interest is the peak rate, denoted by $R_{wc}(M, \delta)$, which is defined as $R_{wc}(M, \delta) \triangleq \max_{\mathbf{d}} R(\mathbf{d}, M, \delta)$. The minimum peak rate is defined as $R^{*}_{wc}(M, \delta) \triangleq \min_{\mathcal{M}} R_{wc}(M, \delta)$.

II-C Shared Cache Scheme

The coded caching system with shared caches [10] is described as follows. There are $N$ files, $K$ users and $\Lambda \leq K$ caches, with the normalized memory of each cache being $\gamma = M/N$. The parameter $t$ is defined to be $t \triangleq \Lambda\gamma = \Lambda M / N$. Each cache $\lambda \in [\Lambda]$ is assigned to a set of users $\mathcal{U}_\lambda$, and all these disjoint sets,

$$\mathcal{U} = \{\mathcal{U}_1, \mathcal{U}_2, \ldots, \mathcal{U}_\Lambda\},$$

form a partition of the set of users $[K]$, describing the overall association of the users to the caches. For any given $\mathcal{U}$, we consider the association profile

$$\mathcal{L} = (\mathcal{L}_1, \mathcal{L}_2, \ldots, \mathcal{L}_\Lambda),$$

where $\mathcal{L}_s$ is the number of users assigned to the $s$th most populated helper node/cache, so that $\mathcal{L}_1 \geq \mathcal{L}_2 \geq \cdots \geq \mathcal{L}_\Lambda$.

1) SC Prefetching Phase: Each file $W_n$, $n \in [N]$, is split into $\binom{\Lambda}{t}$ disjoint subfiles $W_{n,\mathcal{T}}$, one for each $\mathcal{T} \subseteq [\Lambda]$ with $|\mathcal{T}| = t$, and then each cache stores a fraction $\gamma = t/\Lambda$ of each file. For instance, the $\lambda$th cache stores the subfiles in the set $\mathcal{Z}_\lambda = \{W_{n,\mathcal{T}} : \lambda \in \mathcal{T}, \ n \in [N]\}$. This prefetching scheme is denoted by $\mathcal{M}_{SC}$.
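A minimal Python sketch of this placement, with illustrative parameters (assumed here only for the example), is given below; subfiles are represented symbolically by labels.

    from itertools import combinations

    # Assumed toy parameters: N files, Lambda caches, t = Lambda*M/N.
    N, Lambda, t = 4, 4, 2

    # Split each file n into C(Lambda, t) subfiles W_{n, T}, indexed by t-subsets T of [Lambda].
    subfiles = {(n, T): f"W_{n},{set(T)}"
                for n in range(1, N + 1)
                for T in combinations(range(1, Lambda + 1), t)}

    # Cache lambda stores every subfile whose index set T contains lambda.
    cache = {lam: [label for (n, T), label in subfiles.items() if lam in T]
             for lam in range(1, Lambda + 1)}

    # Each cache holds C(Lambda-1, t-1) subfiles of every file, i.e. a fraction t/Lambda = M/N.
    assert len(cache[1]) == N * len(list(combinations(range(1, Lambda), t - 1)))
    print(cache[1][:3])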

2) SC Delivery Phase: Without loss of generality, assume $\mathcal{L}_1 \geq \mathcal{L}_2 \geq \cdots \geq \mathcal{L}_\Lambda$ (any other case can be handled by a simple relabeling of the caches), and let $\mathbf{d}$ denote the demand vector. With a slight abuse of notation, each $\mathcal{U}_\lambda$ denotes an ordered vector describing the users associated to cache $\lambda$, with $\mathcal{U}_\lambda(j)$ being the $j$th user in the set $\mathcal{U}_\lambda$. The delivery phase consists of $\mathcal{L}_1$ rounds, where round $j \in [\mathcal{L}_1]$ serves the users

$$\mathcal{R}_j = \{\mathcal{U}_\lambda(j) : \lambda \in [\Lambda], \ |\mathcal{U}_\lambda| \geq j\}.$$

For each round $j$, the sets $\mathcal{Q} \subseteq [\Lambda]$ of size $t+1$ are considered, and for each such set $\mathcal{Q}$, the set of receiving users is

$$\chi_{\mathcal{Q},j} = \{\mathcal{U}_\lambda(j) : \lambda \in \mathcal{Q}, \ |\mathcal{U}_\lambda| \geq j\}.$$

If $\chi_{\mathcal{Q},j} \neq \emptyset$, the server transmits

$$\bigoplus_{\lambda \in \mathcal{Q} \,:\, |\mathcal{U}_\lambda| \geq j} W_{d_{\mathcal{U}_\lambda(j)},\, \mathcal{Q} \setminus \{\lambda\}}.$$

If $\chi_{\mathcal{Q},j} = \emptyset$, there is no transmission. Decoding is possible for each user from these transmissions [10]. The optimal worst case rate for the SC scheme is obtained in [10] as

$$R^{*}(\mathcal{L}) = \frac{\sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t}}{\binom{\Lambda}{t}}$$

at points $t = \frac{\Lambda M}{N} \in \{0, 1, \ldots, \Lambda\}$, with the lower convex envelope of these points for intermediate values of $M$.
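The following Python sketch generates the SC delivery transmissions symbolically for a small illustrative instance ($\Lambda = 4$, $t = 1$, profile $\mathcal{L} = (2, 1, 1, 0)$, all parameters assumed for illustration); the number of generated transmissions matches $\sum_{s} \mathcal{L}_s \binom{\Lambda - s}{t}$.

    from itertools import combinations

    Lambda, t = 4, 1
    users_per_cache = {1: ["u1", "u2"], 2: ["u3"], 3: ["u4"], 4: []}   # profile L = (2, 1, 1, 0)
    demand = {"u1": 1, "u2": 2, "u3": 3, "u4": 4}                      # file index demanded by each user

    transmissions = []
    num_rounds = max(len(u) for u in users_per_cache.values())
    for j in range(num_rounds):                                        # round j serves the j-th user of each cache
        for Q in combinations(range(1, Lambda + 1), t + 1):            # all (t+1)-subsets of caches
            active = [lam for lam in Q if len(users_per_cache[lam]) > j]
            if not active:                                              # empty receiving set: no transmission
                continue
            xor_terms = []
            for lam in active:
                rest = tuple(x for x in Q if x != lam)                  # Q \ {lam}: caches that cache this subfile
                user = users_per_cache[lam][j]
                xor_terms.append(f"W_{demand[user]},{rest}")
            transmissions.append(" + ".join(xor_terms))

    print(len(transmissions))   # 9 = 2*C(3,1) + 1*C(2,1) + 1*C(1,1), as in the rate expression
    for tx in transmissions[:3]:
        print(tx)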

For a fixed prefetching and a fixed demand $\mathbf{d}$, the delivery phase of a coded caching problem is an index coding problem [1]. In fact, for a fixed prefetching, a coded caching scheme consists of $N^K$ parallel index coding problems, one for each of the possible user demands. Thus, finding the minimum achievable rate for a given demand $\mathbf{d}$ is equivalent to finding the min-rank of the index coding problem induced by the demand $\mathbf{d}$.

Consider the SC prefetching scheme $\mathcal{M}_{SC}$. The index coding problem induced by the demand $\mathbf{d}$ for SC prefetching is denoted by $\mathcal{I}(\mathbf{d})$. Each subfile corresponds to a message in the index coding problem. The corresponding generalized independence number and min-rank are denoted by $\alpha(\mathcal{I}(\mathbf{d}))$ and $\kappa(\mathcal{I}(\mathbf{d}))$ respectively.

III Generalized Independence Number for $\mathcal{I}(\mathbf{d}_{wc})$

In this section we find a closed form expression for the generalized independence number of the index coding problem for the case when all the files are demanded. We denote the worst case demand vector by $\mathbf{d}_{wc}$; hence our aim is to find an expression for $\alpha(\mathcal{I}(\mathbf{d}_{wc}))$. In $\mathcal{I}(\mathbf{d}_{wc})$, each subfile corresponds to a message. The side information sets of all the receivers in the index coding problem are completely decided by the placement scheme in [10]. We assume a unicast index coding problem for convenience (if there is a receiver demanding multiple messages, we split that receiver into multiple receivers, each demanding one message); hence every message in $\mathcal{I}(\mathbf{d}_{wc})$ is demanded by exactly one receiver. From the delivery scheme and the expression for the rate in [10], we get an upper bound for $\kappa(\mathcal{I}(\mathbf{d}_{wc}))$ as

$$\kappa(\mathcal{I}(\mathbf{d}_{wc})) \;\leq\; \sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t}. \qquad (3)$$

In the proof of the theorem below, we give a technique to find a generalized independent set for $\mathcal{I}(\mathbf{d}_{wc})$ by picking messages into the set in a structured way. Using this, we get a matching lower bound for the generalized independence number, $\alpha(\mathcal{I}(\mathbf{d}_{wc})) \geq \sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t}$. From this and (1), we conclude that $\alpha(\mathcal{I}(\mathbf{d}_{wc})) = \kappa(\mathcal{I}(\mathbf{d}_{wc}))$.

Theorem 1

For the index coding problems $\mathcal{I}(\mathbf{d}_{wc})$, i.e., for the case when all the files are demanded, we have

$$\alpha(\mathcal{I}(\mathbf{d}_{wc})) = \kappa(\mathcal{I}(\mathbf{d}_{wc})) = \sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t}.$$

Proof:

We construct a set $B$ whose elements are messages of $\mathcal{I}(\mathbf{d}_{wc})$ such that the set of indices of the messages in $B$ forms a generalized independent set. The set $B$ is constructed as

$$B = \left\{W_{d_k, \mathcal{T}} : k \in [K], \ \mathcal{T} \subseteq \{\lambda_k + 1, \lambda_k + 2, \ldots, \Lambda\}, \ |\mathcal{T}| = t\right\},$$

where $\lambda_k$ represents the cache to which the user $k$ demanding the file $W_{d_k}$ is associated. For instance, if user $k$ is connected to the $\lambda$th cache, then $\lambda_k = \lambda$. Let $B_I$ be the set of indices of the messages in $B$. The claim is that $B_I$ is a generalized independent set. Each message in $B$ is demanded by exactly one receiver; hence all the subsets of $B_I$ of size one are present in $\mathcal{J}(\mathcal{I}(\mathbf{d}_{wc}))$. Now consider any set $S \subseteq B_I$ with $|S| \geq 2$, and consider the message $W_{d_k, \mathcal{T}}$ indexed in $S$ for which the associated cache index $\lambda_k$ is the smallest. The receiver demanding this message does not have any other message indexed in $S$ as side information, since every other message $W_{d_{k'}, \mathcal{T}'}$ indexed in $S$ satisfies $\lambda_k \notin \mathcal{T}'$. Thus the indices of the remaining messages of $S$ lie in $\mathcal{Y}_i$ for this receiver $i$, and hence $S \in \mathcal{J}(\mathcal{I}(\mathbf{d}_{wc}))$. Thus any nonempty subset of $B_I$ lies in $\mathcal{J}(\mathcal{I}(\mathbf{d}_{wc}))$. Since $B_I$ is a generalized independent set, we have $\alpha(\mathcal{I}(\mathbf{d}_{wc})) \geq |B|$.

The number of messages of the form $W_{d_k, \mathcal{T}}$ which are present in $B$ for a given user $k$ is $\binom{\Lambda - \lambda_k}{t}$. Hence, of the files demanded by the users associated to the $s$th cache, the number of subfiles, or equivalently messages, which are picked into the set $B$ is $\mathcal{L}_s \binom{\Lambda - s}{t}$. Since $\binom{\Lambda - s}{t}$ is defined to be zero if $\Lambda - s < t$, the limits of the summation need only be taken from $s = 1$ to $\Lambda - t$. Thus

$$|B| = \sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t}.$$

Hence $\alpha(\mathcal{I}(\mathbf{d}_{wc})) \geq \sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t}$, and from (1) and (3), the statement of the theorem follows.
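The counting in the proof can be checked numerically; the following Python sketch constructs the set $B$ for a small illustrative instance (assumed toy parameters) and compares $|B|$ with the closed-form expression.

    from itertools import combinations
    from math import comb

    # Assumed toy instance: caches labeled so that L_1 >= L_2 >= ... ; here L = (2, 1, 1, 0).
    Lambda, t, K = 4, 1, 4
    cache_of_user = {1: 1, 2: 1, 3: 2, 4: 3}
    profile = [2, 1, 1, 0]

    # B: for each user k, pick subfiles W_{d_k, T} with T a t-subset of {lambda_k + 1, ..., Lambda}.
    B = [(k, T) for k, lam in cache_of_user.items()
         for T in combinations(range(lam + 1, Lambda + 1), t)]

    closed_form = sum(L_s * comb(Lambda - s, t) for s, L_s in enumerate(profile, start=1))
    assert len(B) == closed_form            # 2*C(3,1) + 1*C(2,1) + 1*C(1,1) = 9
    print(len(B), closed_form)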

Example 1

Consider a scenario with , and . In the placement phase, each file is first split into equally-sized subfiles, and then each cache stores . For example, cache 1 stores the subfiles . In the cache assignment, users and are assigned to caches and respectively, so that the association profile is . Without loss of generality, we assume that the demand vector is .

We consider the index coding problem $\mathcal{I}(\mathbf{d})$. Each of the subfiles corresponds to a message in the index coding problem. Hence, for this example, the corresponding $\mathcal{I}(\mathbf{d})$ will have 48 messages and 48 receivers (each user demanding more than one message is split into multiple receivers demanding one message each). We construct a set $B$, whose elements are messages of $\mathcal{I}(\mathbf{d})$, such that the set of indices of the messages in $B$ forms a generalized independent set. The set $B$ for this case can be constructed as in the proof of Theorem 1.

Hence $\alpha(\mathcal{I}(\mathbf{d})) \geq 11$. From the transmission scheme in [10], there are 11 transmissions which satisfy the demands of all the users, so $\kappa(\mathcal{I}(\mathbf{d})) \leq 11$. Thus, from (1), we have $\alpha(\mathcal{I}(\mathbf{d})) = \kappa(\mathcal{I}(\mathbf{d})) = 11$ for this case.

IV Optimal Error Correcting Delivery Scheme for SC Prefetching for Worst Case Demand

For the worst case demand, we proved in Theorem 1 that $\alpha(\mathcal{I}(\mathbf{d}_{wc})) = \kappa(\mathcal{I}(\mathbf{d}_{wc}))$. Hence, for this case, the optimal linear error correcting delivery scheme can be constructed by concatenating the worst case delivery scheme in [10] with an optimal classical error correcting code which corrects the required number of errors. Based on this, we give an expression for the worst case rate for SC prefetching in the theorem below.

Theorem 2

For a shared cache system with the SC prefetching scheme, the optimal peak rate for correcting $\delta$ transmission errors is

$$R_{wc}(M, \delta) = \frac{N_q\!\left[\,\sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t},\; 2\delta + 1\,\right]}{\binom{\Lambda}{t}}$$

at points $t = \frac{\Lambda M}{N} \in \{0, 1, \ldots, \Lambda\}$.

Proof:

From Theorem 1, we have $\alpha(\mathcal{I}(\mathbf{d}_{wc})) = \kappa(\mathcal{I}(\mathbf{d}_{wc})) = \sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t}$ for the worst case demand. Thus, from (2), the $\alpha$- and $\kappa$-bounds become equal for such index coding problems. The optimal length, or equivalently the optimal number of transmissions required for correcting $\delta$ errors in those index coding problems, is thus $N_q\!\left[\sum_{s=1}^{\Lambda - t} \mathcal{L}_s \binom{\Lambda - s}{t},\, 2\delta + 1\right]$. Since each file is split into $\binom{\Lambda}{t}$ subfiles, normalizing by $\binom{\Lambda}{t}$ gives the rate, and the statement of the theorem follows.

Since the $\alpha$- and $\kappa$-bounds meet for $\mathcal{I}(\mathbf{d}_{wc})$, the optimal linear error correcting delivery scheme here is the concatenation of the SC delivery scheme with an optimal classical error correcting code which corrects $\delta$ errors. Decoding can be done using the syndrome decoding procedure for error correcting index codes proposed in [14]. We now give an example in which we construct an optimal error correcting delivery scheme for a coded caching problem with SC prefetching.

Example 2

Consider the coded caching problem with shared caches which we considered in Example 1. For this problem we know that the $\alpha$- and $\kappa$-bounds meet, and hence the concatenation scheme is optimal. For this case, the SC delivery scheme is as follows. There are 3 rounds, with each round serving the following sets of users: . In the first round, the server transmits the symbols and . In the second round, the transmissions are and . The transmissions in the third round are and . If we need to correct one error, we need to concatenate the SC transmission scheme with a classical error correcting code of optimal length. From [24], we have $N_2[11, 3] = 15$. Hence the optimal concatenation can be done with a $[15, 11, 3]_2$ code.
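As a quick sanity check of the code length used above, the sphere-packing (Hamming) bound rules out a binary $[14, 11, 3]$ code, so the $[15, 11, 3]$ Hamming code is indeed the shortest choice; a small Python sketch:

    from math import comb

    def hamming_bound_ok(n, k, d):
        """Check whether an (n, k, d) binary code is consistent with the sphere-packing bound."""
        t = (d - 1) // 2
        return 2**k * sum(comb(n, i) for i in range(t + 1)) <= 2**n

    print(hamming_bound_ok(14, 11, 3))   # False: length 14 is impossible for k = 11, d = 3
    print(hamming_bound_ok(15, 11, 3))   # True: consistent with the [15, 11, 3] Hamming code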

V Improvement on SC Scheme for Non-distinct Demands

In this section, we consider the case when the demands are non-distinct. We give a delivery scheme which has a clear advantage over the scheme in [10]. We also give an expression for the achievable rate for any demand vector, which reduces to the rate expression of [10] for the worst case demand. Before formally describing the proposed delivery scheme, we demonstrate its main ideas through a motivating example.

V-A Motivating Example

Consider the same system which we explained in Example 1. The placement scheme and user assignments are the same as in Example 1. We assume here that the demand vector is . Thus, here . Before the delivery scheme starts, we eliminate some demands which are redundant. If multiple users connected to the same cache demand the same file, the delivery scheme needs to satisfy the demand of only one of them; the others automatically obtain what they want. Hence we can eliminate the repeated demands among the users which are connected to the same cache. Thus, in the example, we can modify the association profile as and . After this, the delivery scheme proceeds in rounds as in [10], but with a modification. Delivery takes place in 3 rounds, with each round respectively serving the following sets of users: and . In the first round, the server transmits

Here the decoding is done as in [5]. For instance, user 1, upon receiving , can decode using the helper cache contents and . Similarly using other transmissions, other subfiles can be decoded. In the second round, we have the following set of transmissions:

In the last round the server serves user 3 with three more transmissions given by:

Hence there are a total of 9 transmissions, which means that the rate achieved is . This is a smaller rate compared to the rate achieved by the scheme in [10].

V-B General Delivery Phase

We follow the assumptions and most of the notation of [10] to describe the scheme. Let the demand vector be $\mathbf{d}$ and let the number of distinct files requested be $N_e(\mathbf{d})$. We use the notation $N_e(\mathcal{U}_\lambda)$ for the number of distinct files demanded by the users in $\mathcal{U}_\lambda$. Within each $\mathcal{U}_\lambda$, we need to consider, and satisfy the demands of, only users which request distinct files; any other user in $\mathcal{U}_\lambda$ can get its requested file from the same transmissions. Hence, before the delivery starts, we eliminate the users with repeated demands from each $\mathcal{U}_\lambda$. After eliminating such users, let the modified association profile be $\mathcal{L}' = (\mathcal{L}'_1, \ldots, \mathcal{L}'_\Lambda)$. The set of remaining users associated to cache $\lambda$ is denoted by $\mathcal{U}'_\lambda$. Moreover, let $\mathcal{L}'_\lambda = |\mathcal{U}'_\lambda| = N_e(\mathcal{U}_\lambda)$. Without loss of generality, we assume that $\mathcal{L}'_1 \geq \mathcal{L}'_2 \geq \cdots \geq \mathcal{L}'_\Lambda$. The delivery phase consists of $\mathcal{L}'_1$ rounds, where round $j \in [\mathcal{L}'_1]$ serves the users

$$\mathcal{R}'_j = \{\mathcal{U}'_\lambda(j) : \lambda \in [\Lambda], \ |\mathcal{U}'_\lambda| \geq j\},$$

where $\mathcal{U}'_\lambda(j)$ is the $j$th user in the set $\mathcal{U}'_\lambda$. Let the number of distinct files demanded by the users in $\mathcal{R}'_j$ be $N_e(\mathcal{R}'_j)$. For each round $j$, the server selects a subset of users, denoted by $\mathcal{B}_j \subseteq \mathcal{R}'_j$, that requests $N_e(\mathcal{R}'_j)$ different files. These users are considered as leaders. For each round $j$, we create the sets $\mathcal{Q} \subseteq [\Lambda]$ of size $t+1$, and for each set $\mathcal{Q}$ which contains the cache of at least one leader in $\mathcal{B}_j$, we pick the set of receiving users as

$$\chi_{\mathcal{Q},j} = \{\mathcal{U}'_\lambda(j) : \lambda \in \mathcal{Q}, \ |\mathcal{U}'_\lambda| \geq j\}.$$

If $\chi_{\mathcal{Q},j} \neq \emptyset$, the server transmits

$$\bigoplus_{\lambda \in \mathcal{Q} \,:\, |\mathcal{U}'_\lambda| \geq j} W_{d_{\mathcal{U}'_\lambda(j)},\, \mathcal{Q} \setminus \{\lambda\}}.$$

If $\chi_{\mathcal{Q},j} = \emptyset$, there is no transmission. Since this transmission scheme uses the scheme in [5] in each round, the decoding at each receiver is ensured. The theorem below gives an expression for the rate of this scheme.
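The demand-reduction step can be sketched as follows (illustrative Python, with hypothetical users and demands): within each cache group, only the first user requesting each file is retained, and the modified profile $\mathcal{L}'$ is read off from the reduced groups.

    # Assumed toy input: non-distinct demands, repeated within cache groups.
    users_per_cache = {1: ["u1", "u2", "u3"], 2: ["u4", "u5"], 3: ["u6"]}
    demand = {"u1": 1, "u2": 1, "u3": 2, "u4": 3, "u5": 3, "u6": 1}

    reduced = {}
    for lam, users in users_per_cache.items():
        seen, kept = set(), []
        for u in users:
            if demand[u] not in seen:          # keep only the first user per (cache, file) pair
                seen.add(demand[u])
                kept.append(u)
        reduced[lam] = kept

    profile = sorted((len(u) for u in reduced.values()), reverse=True)   # modified profile L'
    print(reduced)     # {1: ['u1', 'u3'], 2: ['u4'], 3: ['u6']}
    print(profile)     # [2, 1, 1]; the delivery then runs L'_1 = 2 rounds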

Theorem 3

For coded caching problems with the SC prefetching scheme,

at points

Proof:

Since $|\mathcal{Q}| = t + 1$, there can be a total of $\binom{\Lambda}{t+1}$ sets $\mathcal{Q}$ in each round. Some of these sets have an empty set of receiving users, and transmissions are made only for the sets which satisfy the condition described in Section V-B; counting the transmissions made in each round and summing over the $\mathcal{L}'_1$ rounds gives the total number of transmissions. Since each file is split into $\binom{\Lambda}{t}$ subfiles, the statement of the theorem follows.

V-C Generalized Independence Number

In this subsection, we find a bound for the generalized independence number of the index coding problems $\mathcal{I}(\mathbf{d})$, which covers even the case of non-distinct demands. From the rate expression in Theorem 3, we have an upper bound for $\kappa(\mathcal{I}(\mathbf{d}))$ given by

(4)

The theorem below gives a lower bound for $\alpha(\mathcal{I}(\mathbf{d}))$.

Theorem 4

For the index coding problems $\mathcal{I}(\mathbf{d})$,

$$\alpha(\mathcal{I}(\mathbf{d})) \;\geq\; \sum_{s=1}^{\Lambda - t} \mathcal{L}'_s \binom{\Lambda - s}{t}. \qquad (5)$$
Proof:

The modified association profile is $\mathcal{L}' = (\mathcal{L}'_1, \ldots, \mathcal{L}'_\Lambda)$. Applying the construction in the proof of Theorem 1 to the reduced set of users gives a generalized independent set of size $\sum_{s=1}^{\Lambda - t} \mathcal{L}'_s \binom{\Lambda - s}{t}$, and hence the theorem follows from Theorem 1.

Since the expressions in (4) and (5) are different, the equality of $\alpha(\mathcal{I}(\mathbf{d}))$ and $\kappa(\mathcal{I}(\mathbf{d}))$ cannot be guaranteed in general. There are, however, cases when they become equal. In such cases, an optimal error correcting delivery scheme is obtained by concatenating the delivery scheme proposed in Section V-B with an optimal error correcting code. This is illustrated in detail in the following example.

Example 3

Consider a shared cache system with , . Hence the parameter . Consider a uniform association profile . Each file is divided into subfiles and . We know that if all the files are demanded, the number of transmissions required by the SC delivery scheme is , from (3). We assume that only files are demanded, and let the demand vector be . We use the delivery scheme proposed in Section V-B as follows. We first need to remove the repeated demands from each of the caches. Thus the modified association profile will be , and the corresponding modified demand vector will be . There will be rounds of transmissions. The first round serves the users in and the second round serves the users in . The transmissions in the first round are

The transmissions in the second round are given as

Hence there are transmissions. Hence for the index coding problem , we have

To find a lower bound for $\alpha(\mathcal{I}(\mathbf{d}))$, we construct the set $B$ as in the proof of Theorem 1. We obtain the set as follows:

From this, we get a matching lower bound on $\alpha(\mathcal{I}(\mathbf{d}))$. Thus, for this case, the $\alpha$- and $\kappa$-bounds meet. Hence the optimal linear error correcting delivery scheme for this case is to concatenate the improved scheme of Section V-B with an optimal linear error correcting code. For instance, suppose that we want to correct one transmission error. The required optimal code length can be obtained from [24], and the concatenation with such a linear code gives an optimal linear error correcting delivery scheme.

Assume now that only 5 files are demanded and the demand vector is . Here, since there is no repeated demand within any cache, there is no user to be eliminated. The transmissions are carried out in 3 rounds. The users served in the three rounds are given by

The transmissions in each round are done according to the improved scheme. The transmissions in the first round are:

The transmissions in the second round are:

The transmissions in the third round are:

Hence there are 9 transmissions. Thus, for the index coding problem $\mathcal{I}(\mathbf{d})$, we have $\kappa(\mathcal{I}(\mathbf{d})) \leq 9$.

The set $B$ is constructed for this case as

From this, we get a lower bound on $\alpha(\mathcal{I}(\mathbf{d}))$ that is smaller than 9. Thus, for this case, we cannot conclude that the $\alpha$- and $\kappa$-bounds meet. Hence the concatenation scheme may not be optimal.

VI Conclusion

We considered the SC scheme and, for the worst case demand, proved that for all the corresponding index coding problems the $\alpha$- and $\kappa$-bounds meet. This makes the concatenation of the SC delivery scheme with an optimal classical error correcting code, correcting the required number of errors, optimal. Moreover, for the case of non-distinct demands, we proposed an improved scheme which has a clear advantage over the scheme in [10].

Acknowledgment

This work was supported partly by the Science and Engineering Research Board (SERB) of Department of Science and Technology (DST), Government of India, through J.C. Bose National Fellowship to B. Sundar Rajan.

References

  • [1] M. A. Maddah-Ali and U. Niesen, “Fundamental limits of caching,” IEEE Trans. Inf. Theory, vol. 60, no. 5, pp. 2856–2867, May 2014.
  • [2] M. A. Maddah-Ali and U. Niesen, “Decentralized coded caching attains order-optimal memory-rate tradeoff,” IEEE/ACM Trans. Networking, vol. 23, no. 4, pp. 1029–1040, Aug. 2015.
  • [3] U. Niesen and M. A. Maddah-Ali, “Coded caching with nonuniform demands,” IEEE Trans. Inf. Theory, vol. 63, no. 2, pp. 1146–1158, Feb. 2017.
  • [4] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, “Online coded caching,” IEEE/ACM Trans. Networking, vol. 24, no. 2, pp. 836–845, Apr. 2016.
  • [5] Q. Yu, M. A. Maddah-Ali, and A. S. Avestimehr, “The exact rate-memory tradeoff for caching with uncoded prefetching," in Proc. IEEE Int. Symp. Inf. Theory (ISIT), Aachen, Germany, Jun. 2017, pp. 1613–1617.
  • [6] Z. Chen, P. Fan, and K. B. Letaief, “Fundamental limits of caching: improved bounds for users with small buffers," IET Communications, vol. 10, no. 17, pp. 2315-2318, Nov. 2016.
  • [7] J. Gómez-Vilardebó, “Fundamental Limits of Caching: Improved Rate-Memory Tradeoff With Coded Prefetching," in IEEE Transactions on Communications, vol. 66, no. 10, pp. 4488-4497, Oct. 2018.
  • [8] C. Tian and J. Chen, “Caching and delivery via interference elimination," IEEE Trans. on Information Theory, vol. 64, no. 3, pp. 1548-1560, 2018.
  • [9] K. Zhang and C. Tian, "From Uncoded Prefetching to Coded Prefetching in Coded Caching Systems," 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, 2018, pp. 2087-2091.
  • [10] E. Parrinello, A. Unsal, P. Elia, “Fundamental Limits of Caching in Heterogeneous Networks with Uncoded Prefetching," available on arXiv:1811.06247 [cs.IT], Nov. 2018.
  • [11] N. S. Karat, A. Thomas and B. S. Rajan, “Optimal Error Correcting Delivery Scheme for Coded Caching with Symmetric Batch Prefetching," 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, 2018, pp. 2092-2096.
  • [12] N. S. Karat, A. Thomas and B. S. Rajan,“Optimal Error Correcting Delivery Scheme for an Optimal Coded Caching Scheme with Small Buffers," 2018 IEEE International Symposium on Information Theory (ISIT), Vail, CO, 2018, pp. 1710-1714.
  • [13] S. S. Bidokhti, M. A. Wigger and R. Timo, “Noisy broadcast networks with receiver caching," arXiv:1605.02317v1 [cs.IT], May 2016.
  • [14] S. H. Dau, V. Skachek, and Y. M. Chee, “Error correction for index coding with side information,” IEEE Trans. Inf. Theory, vol. 59, no. 3, pp. 1517–1531, Mar. 2013.
  • [15] Y. Birk and T. Kol, “Coding-on-demand by an informed source (ISCOD) for efficient broadcast of different supplemental data to caching clients," IEEE Trans. Inf. Theory, vol. 52, no. 6, pp. 2825-2830, Jun. 2006.
  • [16] N. Alon, A. Hassidim, E. Lubetzky, U. Stav, and A. Weinstein, “Broadcasting with side information," in Proc. 49th Annu. IEEE Symp. Found. Comput. Sci., Oct. 2008, pp. 823-832.
  • [17] Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, “Index coding with side information," in Proc. 47th Annu. IEEE Symp. Found. Comput. Sci. (FOCS), Oct. 2006, pp. 197-206.
  • [18] R. Peeters, “Orthogonal representations over finite fields and the chromatic number of graphs,” Combinatorica, vol. 16, no. 3, pp. 417–431, 1996.
  • [19] S. H. Dau, V. Skachek and Y. M. Chee, “Optimal Index Codes With Near-Extreme Rates," in IEEE Transactions on Information Theory, vol. 60, no. 3, pp. 1515-1527, March 2014.
  • [20] S. Samuel and B. S. Rajan, “Optimal linear error-correcting index codes for single-prior index-coding with side information,” in Proc. 2017 IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, Mar. 2017, pp. 1–6.
  • [21] N. S. Karat and B. S. Rajan, “Optimal linear error correcting index codes for some index coding problems,” in Proc. 2017 IEEE Wireless Communications and Networking Conference (WCNC), San Francisco, CA, USA, Mar. 2017, pp. 1–6.
  • [22] S. Samuel, N. S. Karat, and B. S. Rajan, “Optimal linear error correcting index codes for some generalized index-coding problems," in Proc. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC), Montreal, Canada, Oct. 2017.
  • [23] N. S. Karat, S. Samuel, and B. S. Rajan, “Optimal error correcting index codes for some generalized index coding problems," in IEEE Transactions on Communications early access: doi: 10.1109/TCOMM.2018.2878566.
  • [24] M. Grassl, “Bounds on the minimum distance of linear codes and quantum codes,” Online available at http://www.codetables.de, 2007.