I Introduction
In recent times, many service providers allow users to access and store data remotely to avoid overwhelming the limited storage capacity of single users. This leads naturally to the design of large distributed storage systems that reliably store data while minimizing the redundancy necessary to deal with server failures.
The use of erasure-correcting codes together with network coding techniques for distributed storage systems, initiated in [1], has become popular since these so-called regenerating codes achieve the optimal tradeoff between the required repair bandwidth and storage overhead. For a standard erasure code of length $n$, dimension $k$, and minimum distance $d$, any $d-1$ failures can be repaired by contacting at most $n-d+1$ other nodes. In addition to this property, and at the cost of the failure tolerance, regenerating codes also enable efficient repair of failed nodes. This was long thought to be in contrast to traditional maximum distance separable (MDS) codes, which have to reconstruct the whole file in order to repair a single node. However, [2, 3] showed that this claim is not true in general, namely some MDS codes can also be repaired efficiently. Nevertheless, the number of nodes contacted for repair is a bottleneck for the system efficiency. To reduce the repair network traffic, [4] and later [5] introduced the notion of locality, allowing the repair of a single failure to be done by contacting only $r$ nodes with $r < k$. Erasure codes satisfying this requirement are called locally repairable (or recoverable) codes.
A natural extension was presented in [6], [7], where the authors defined the $(r, \delta)$ locality for the information symbols to allow $\delta - 1$ failures to still be corrected locally. This requirement can be extended to all symbols without differentiating between the information symbols and the parity symbols. In this paper, we will focus only on all-symbol locality and therefore drop the specification. Other extensions of the locality property include codes with availability [8], sequential repair of several erasures [9], cooperative repair [10], local repair on graphs [11], and many others.
Abundant literature has been devoted to understanding the best possible parameters of LRCs and to providing optimal constructions. The authors of [4] gave the first tradeoff between the parameters $n$, $k$, $d$, and $r$ by showing that the minimum distance of an LRC code with locality $r$ is bounded as follows:
$$d \le n - k - \left\lceil \frac{k}{r} \right\rceil + 2. \qquad (1)$$
This bound was extended in [6] for any LRC code with locality $(r, \delta)$:

$$d \le n - k + 1 - \left( \left\lceil \frac{k}{r} \right\rceil - 1 \right)(\delta - 1). \qquad (2)$$
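For concreteness, both Singleton-type bounds are easy to evaluate numerically. The following Python sketch (an illustration, with arbitrarily chosen parameters) implements the right-hand side of the $(r, \delta)$ bound (2), which reduces to the classical bound (1) when $\delta = 2$.

```python
from math import ceil

def singleton_lrc(n: int, k: int, r: int, delta: int = 2) -> int:
    """Singleton-type upper bound on the minimum distance of an LRC with
    locality (r, delta); delta = 2 recovers the classical locality-r bound (1)."""
    return n - k + 1 - (ceil(k / r) - 1) * (delta - 1)

# Illustrative parameters n = 16, k = 8, r = 4:
print(singleton_lrc(16, 8, 4))      # 8, bound (1)
print(singleton_lrc(16, 8, 4, 3))   # 7, bound (2) with delta = 3
```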
The two bounds have been proven to be tight for large alphabet size with constructions provided in [5, 6, 7, 12, 15, 16, 17, 18, 19, 20, 21, 22]. For a summary on various bounds for LRCs, see [23].
The pioneering work done in [24] improves on the bound (1) by including a dependence on the alphabet size in the bound; that is, for any LRC code with locality $r$ over the alphabet $Q$ with $|Q| = q$, we have
$$k \le \min_{t \in \mathbb{Z}_+} \left[ t r + k_q^{(\mathrm{opt})}\big(n - t(r+1),\, d\big) \right], \qquad (3)$$
where $k_q^{(\mathrm{opt})}(n, d)$ is the maximal dimension of a code over $Q$ of length $n$ and minimum distance $d$. This has led to further constructions of optimal LRCs over small alphabets, for example in [25, 26].
Recently, the authors of [27] proposed the first alphabet-dependent bounds on LRC codes with locality $(r, \delta)$, using an upper bound on the cardinality of the repair sets given their size and local minimum distance, with the extra requirement that the upper bound be a log-convex function of the size. The global bound is as follows:
(4) 
They also derived a linear-programming bound for LRCs with locality $(r, \delta)$ under the extra assumption that the repair sets are disjoint. Finally, in [28], the authors presented a Singleton-type bound for binary linear LRCs. This bound uses the local dimension of a repair set instead of the parameter $r$, together with a more precise understanding of the intersection between two repair sets. As such, the work in this paper generalizes these two ideas.
While so far we made no distinction between nonlinear and linear codes, the next results are only valid for linear codes. In [29], Griesmer proved the existence of a residual code for any binary linear code (over $\mathbb{F}_2$), i.e., a code obtained by a restriction with certain specific parameters. He then derived a bound on the length of the code given the dimension and the minimum distance. The two results were later extended to an arbitrary field in [30]. We present here the latter version. For any $[n, k, d]_q$ linear code $C$ over $\mathbb{F}_q$, there exists $\mathrm{Res}(C)$, a restriction of $C$ called the residual code of $C$, such that $\mathrm{Res}(C)$ has parameters $[\,n - d,\ k - 1,\ d' \ge \lceil d/q \rceil\,]$. By recursively taking residual codes, the authors of [30] obtained the following bound on the length, known as the Griesmer bound and denoted here by $g_q(k, d)$:
$$n \ge \sum_{i=0}^{k-1} \left\lceil \frac{d}{q^i} \right\rceil =: g_q(k, d). \qquad (5)$$
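For concreteness, the right-hand side of (5) can be evaluated directly. The following Python sketch (an illustration, not part of the original text) computes the Griesmer bound and checks it against two binary codes that meet it with equality.

```python
from math import ceil

def griesmer_bound(q: int, k: int, d: int) -> int:
    """Griesmer lower bound on the length n of an [n, k, d] linear code over F_q."""
    return sum(ceil(d / q**i) for i in range(k))

# The binary [7, 4, 3] Hamming code meets the bound: g_2(4, 3) = 3 + 2 + 1 + 1 = 7.
print(griesmer_bound(2, 4, 3))  # 7
# So does the binary [7, 3, 4] Simplex code: g_2(3, 4) = 4 + 2 + 1 = 7.
print(griesmer_bound(2, 3, 4))  # 7
```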
I-A Our contributions
In this paper, we first highlight the differences between the initial motivation for introducing the notion of locality in [4, 5] and the definition of $(r, \delta)$ locality given in [6], where the authors decided to constrain the size of the repair sets. We show, through some examples, how the definition is imprecise regarding the number of nodes contacted during the repair process when the alphabet size is fixed. To remedy this, we introduce a new definition for locality, called dimension-locality, and compare it to the first definition.
Then, we focus on linear LRCs and derive a new alphabet-dependent bound in the spirit of the bound (3) for linear codes with dimension-locality, using the repair sets and chains of consecutive residual codes. Given the definition of dimension-locality, this bound also applies to linear LRCs with locality $(r, \delta)$ by using a weaker bound on the dimension of the repair sets. As a corollary of our results, we also obtain a new Singleton-type bound that better reflects the actual dimension of the repair sets. Furthermore, the new bound can be used to obtain the straightforward extension of the bound (3) for $(r, \delta)$ locality as well as the bound (2), which shows that our bound is always at least as good as these bounds.
Next, we derive the asymptotic formulas of the new bound and the new Singleton-type bound as $n \to \infty$ to obtain bounds on the tradeoff between the rate and the relative minimum distance. We also use these formulas to compare our bounds to the bounds in [27] (Eq. (4) and (12) here). We show that there are cases where the new asymptotic Singleton-type bound is better than, equal to, or worse than the asymptotic version of the bound (4). The comparison with our main bound (8) is more direct, as we can prove that there is always an interval in the relative minimum distance where the new bound is strictly better than the bounds from [27]. Moreover, the improvement is quite significant, since our bound benefits from the locality-unaware bounds on the rate-distance tradeoff. As an example, Figure 1 displays the comparison between the known bounds for linear LRCs with locality over the binary field and the new bound (13), where we use the McEliece-Rodemich-Rumsey-Welch (MRRW) bound as the intrinsic bound on the rate. Finally, we prove the achievability of the new bounds by studying the locality of Simplex codes and providing sporadic optimal examples.
The rest of the paper is organized as follows. In Section II, we discuss the relation between the initial motivation that led to the introduction of locality and the definition given in [6]. Then, we define the notion of dimension-locality and compare it to the initial definition of locality. In Section III, we derive a new bound for linear LRCs with dimension-locality and extend it to linear LRCs with locality $(r, \delta)$. We also obtain a new Singleton-type bound for these codes. In Section IV, we first prove that our bound is always at least as good as the straightforward extension of the bound (3) for $(r, \delta)$ locality and the bound (2). Then, we derive the asymptotic formulas for the new bounds and use them for the comparison to the bound (4). While the comparison with the new Singleton-type bound depends on the parameters of the codes, we prove that our bound always beats the bound (4) for large relative minimum distance. Finally, in Section V, we provide a family of codes that achieves our bounds by studying the locality of Simplex codes.
II Mathematical preliminaries and locality revisited
We denote the set $\{1, \ldots, n\}$ by $[n]$ and the set of all subsets of $[n]$ by $2^{[n]}$. The set of all positive integers, including $0$, is denoted by $\mathbb{Z}_{\ge 0}$. For a length-$n$ vector $x$ and a set $S \subseteq [n]$, the vector $x_S$ denotes the restriction of the vector $x$ to the coordinates in the set $S$. A linear code of length $n$, dimension $k$, and minimum distance $d$ over $\mathbb{F}_q$ is denoted by $[n, k, d]_q$, and a generator matrix for $C$ is $G = (g_1, \ldots, g_n)$, where $g_i$ is a column vector for $i \in [n]$. The number of codewords in $C$ is the cardinality of $C$, $|C| = q^k$. The shortening of a code $C$ to the set of coordinates $S \subseteq [n]$ is defined by $\{c_S : c \in C,\ c_{[n] \setminus S} = 0\}$, and the restriction of the code $C$ to $S$ is defined by $C_S = \{c_S : c \in C\}$. For convenience, we call the codes obtained by a restriction restricted codes. For an $[n, k, d]_q$ linear code $C$, if $C$ meets the Singleton bound, i.e., if $d = n - k + 1$, then $C$ is called a maximum distance separable (MDS) code.
To measure the dimension of restricted linear codes, or more generally, the amount of information contained in the restriction of an arbitrary code, we use the notion of an entropy function on the subsets of $[n]$, where $n$ is the length of the code. We state it here for quasi-uniform codes over the alphabet $Q$. Quasi-uniform codes are a general class of error-correcting codes, defined by the property that, for any subset of coordinates, all nonempty fibers of the projection onto these coordinates have the same size. As such, the class of quasi-uniform codes contains all linear codes, group codes, and almost affine codes. We refer to [32] for more information about quasi-uniform codes and [33] for the entropy function on these codes.
Definition 1.
Let $C$ be a quasi-uniform code of length $n$ over the alphabet $Q$ with $|Q| = q$, and let $S \subseteq [n]$. The entropy associated to $C$ is the function $H_C : 2^{[n]} \to \mathbb{R}$ with $H_C(S) = \log_q |C_S|$, where $C_S$ denotes the restriction of $C$ to $S$.
For ease of notation, if the underlying code $C$ is clear, we drop the specification to $C$ and simply write $H$. For linear codes, this function measures exactly the dimension of the restricted codes: for a subset $S \subseteq [n]$, $H(S)$ is equal to the rank of the submatrix of $G$ formed by the columns $g_i$ with $i \in S$, or equivalently to the rank of $S$ in the associated matroid of $C$. As such, it has the following standard properties.
Proposition 1.
Let $C$ be a quasi-uniform code of length $n$ over the alphabet $Q$ and $H$ the entropy function associated to $C$. For $S, T \subseteq [n]$, we have

1. $0 \le H(S) \le |S|$,

2. if $S \subseteq T$, then $H(S) \le H(T)$,

3. $H(S \cup T) + H(S \cap T) \le H(S) + H(T)$.
The entropy function also behaves nicely for restricted codes. Let $S \subseteq [n]$ and $C_S$ the restriction of $C$ to the set $S$. Then, for $T \subseteq S$, we have $H_{C_S}(T) = H_C(T)$.
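As a concrete illustration for linear codes (an added sketch, not from the original text), the entropy of a subset of coordinates is simply the rank over $\mathbb{F}_2$ of the corresponding generator columns. The example below uses the binary $[7, 3, 4]$ Simplex code, whose generator columns are all nonzero vectors of $\mathbb{F}_2^3$.

```python
def gf2_rank(vectors):
    """Rank over GF(2) of a list of vectors, each encoded as an integer bitmask."""
    basis = []
    for v in vectors:
        for b in basis:          # reduce v by the current basis, largest pivot first
            v = min(v, v ^ b)
        if v:
            basis.append(v)
            basis.sort(reverse=True)
    return len(basis)

# Generator matrix of the binary [7, 3, 4] Simplex code: its columns are
# all 7 nonzero vectors of F_2^3, encoded as the integers 1..7.
G = list(range(1, 8))

def H(S):
    """Entropy of the restriction = GF(2)-rank of the columns indexed by S (1-based)."""
    return gf2_rank([G[i - 1] for i in S])

print(H({1, 2}))             # 2
print(H({1, 2, 3}))          # 2, since column 3 = column 1 XOR column 2
print(H(set(range(1, 8))))   # 3, the full dimension
```

In particular, one can check the submodularity property of Proposition 1 on any pair of subsets.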
Finally, we define a closure operation on the subsets of $[n]$ for linear codes.
Definition 2.
Let $C$ be an $[n, k, d]_q$ linear code and $S \subseteq [n]$. The closure operator $\mathrm{cl} : 2^{[n]} \to 2^{[n]}$ is defined by $\mathrm{cl}(S) = \{\, i \in [n] : H(S \cup \{i\}) = H(S) \,\}$.
One can think of the closure operator via the generator matrix $G$ of $C$: $\mathrm{cl}(S)$ is the set of all columns in $G$ contained in the linear span of the columns indexed by $S$.
The following list summarizes the main notions used throughout the paper. The formal definition of some of them will only appear later in the document.

- linear code of length $n$, dimension $k$, and minimum distance $d$
- LRC code with locality $(r, \delta)$, where $\delta$ is the local minimum distance
- LRC code with dimension-locality
- Griesmer bound on the length
- bound on the dimension
- log-convex bound on the dimension
- log-convex bound on the cardinality
- relative minimum distance
- res-chain: chain of consecutive residual codes
II-A Definition of locality and relation with the number of nodes contacted for repair
In this part, we explain how the definition of $(r, \delta)$ locality diverges from the initial motivation for introducing the notion of locality, and we state a new definition of locality called dimension-locality. As mentioned in the introduction, [4] and then [5] introduced the notion of locality to reduce the repair traffic by designing storage codes such that one failure can be repaired by contacting only a small number of nodes in the storage system. A natural extension of the above definition is to allow multiple erasures to be corrected locally while still accessing fewer nodes than $k$. For this, we need the local sets of nodes to have a minimum distance of at least $\delta$ so that up to $\delta - 1$ erasures can be repaired locally. The definition presented in [6] is the following.
Definition 3.
An $[n, k, d]_q$ linear code $C$ has all-symbol locality $(r, \delta)$ if for every code symbol $i \in [n]$ there exists a set $R_i \subseteq [n]$ such that

1. $i \in R_i$,

2. $|R_i| \le r + \delta - 1$,

3. the minimum distance of the restriction of $C$ to the set $R_i$ is at least $\delta$.

We refer to $C$ as an $(n, k, d, r, \delta)$ LRC code.
With this definition, any $\delta - 1$ coordinates of a repair set are determined by the values of the remaining coordinates, thus enabling local repair by contacting at most $r$ other nodes. The problem with Definition 3 is that it implicitly requires the repair sets to be MDS codes in order for $r$ to be the dimension of the local sets. In other words, if a repair set does not carry an MDS code, then the number of nodes needed to repair any $\delta - 1$ failures in it is strictly less than $r$. Thus, Definition 3 diverges from the initial goal of controlling precisely the number of nodes contacted during the repair process when considering non-MDS repair sets.
This observation is particularly relevant when the field size is fixed and the locality parameters are too large for MDS codes to exist. For example, if we consider binary codes and require that $\delta \ge 3$ in order to correct more than one erasure locally, then none of the nontrivial repair sets can be MDS, and $r$ is no longer the local dimension. We illustrate this phenomenon with a concrete example.
Example 1.
Let be the binary linear code given by the following generator matrix,
We define the three repair sets by their corresponding columns in $G$. Every repair set has the same size, minimum distance, and dimension. Thus, $C$ is a binary linear LRC code. However, the local dimension is strictly smaller than $r$, and we can repair up to two failures by contacting fewer nodes than the locality parameter suggests.
To precisely keep track of the number of nodes contacted during the repair process, we propose a slightly different definition for locally repairable codes tolerating multiple erasures locally, where we replace the condition on the size by a condition on the dimension.
Definition 4.
An $[n, k, d]_q$ linear code $C$ has all-symbol dimension-locality $(\kappa, \delta)$ if for every code symbol $i \in [n]$ there exists a set $R_i \subseteq [n]$ such that

1. $i \in R_i$,

2. $H(R_i) \le \kappa$,

3. the minimum distance of the restriction of $C$ to the set $R_i$ is at least $\delta$.

We refer to $C$ as an $(n, k, d, \kappa, \delta)$ LRC code.
With this definition, we regain the fact that every coordinate can be recovered by contacting at most $\kappa$ other coordinates, where $\kappa$ bounds the local dimension, and this number can be made tight. When we do not restrict the field size, optimal repair sets will be MDS codes with size equal to $\kappa + \delta - 1$, and thus both definitions coincide. Definition 4 is also better suited to non-MDS repair sets when the field size is fixed, as $\kappa$ still measures the local dimension, while $r$ represents partially the size and partially the dimension. On top of that, the new definition allows more flexibility in the size of the repair sets, which can be smaller or bigger than $r + \delta - 1$, since Definition 4 only constrains the local dimension from above and the local minimum distance from below.
Obviously, every code with locality $(r, \delta)$ is a code with dimension-locality $(r, \delta)$. We can replace $r$ by a bound on the dimension given the size and the minimum distance in order to also take into account non-MDS repair sets. Let $\kappa$ be the maximal dimension obtained by such a bound. Then, every code with locality $(r, \delta)$ is a code with dimension-locality $(\kappa, \delta)$. The problem is that $\kappa$ might not be tight, i.e., there may be no repair set $R$ such that $H(R) = \kappa$, which goes against the purpose of the new definition. This is illustrated in the following example.
Example 2.
Let $C$ be the binary linear code obtained by the direct sum of two and one binary linear codes. If we choose the repair sets to be the three codes in the direct sum, we obtain . The maximal size is given by the last repair set, which has size . Therefore, and $C$ is an LRC code. The maximal dimension of a repair set is , so $C$ is an LRC code as well. However, every bound on the dimension is at least , since the Hamming code has parameters . Thus, it is impossible to obtain the true minimal dimension from the parameters of Definition 3.
The last example also demonstrates how we can use the flexibility in the size obtained from Definition 4 to keep the dimension-locality parameters intact while achieving a code whose length is not divisible by any of the sizes of the repair sets.
III Bounds for dimension-locality and locality
In this section, we study the structure of linear codes with dimension-locality and derive a bound on their parameters. Following the general framework of [24], we construct a set with a large size and a small dimension using a detailed analysis of the repair sets based on the work done in [29] and [30]. This yields a bound of the form of the bound (3) that handles both MDS and non-MDS repair sets. Then, we extend our bound to linear codes with locality $(r, \delta)$. Finally, using a weaker estimation of our results, we derive a new Singleton-type bound for LRC codes. We start by presenting the new bound for codes with dimension-locality, where the local dimension is bounded from above and the Griesmer bound is used to control the size of the repair sets.
Theorem 1.
Let be a linear LRC code over . Then we have
(6) 
where such that .
Proof.
Proof is given in the appendix. ∎
In order to prove this bound, we need a better understanding of the bound (3) and of the implications of having a non-MDS repair set. The bound (3) relies mainly on two results. The first result is a construction of a set with an upper bound on the dimension and a lower bound on the size. The second result is a shortening argument that governs the second term in the bound. The latter result is reproduced here with a slight rephrasing.
Lemma 1 ([24], Lemma 2).
Let $C$ be an $[n, k, d]_q$ linear code over $\mathbb{F}_q$ and $S \subseteq [n]$ such that $H(S) < k$. Then the code obtained from $C$ by shortening at the coordinates in $S$ has parameters $[\,n - |S|,\ \ge k - H(S),\ \ge d\,]$.
Regarding the first result, the technique used to construct large sets relies on taking the union of repair sets. If two repair sets happen to intersect, which reduces both the entropy and the size of the union, a correction is performed by adding arbitrary elements to the union. The main difficulty in extending this technique to non-MDS repair sets is to deal with the intersection of the repair sets and find the appropriate correction. Indeed, the intersection of two repair sets can now have a size strictly larger than its entropy (take for example two of the repair sets in Example 1). Thus, it is no longer possible to correct their union by an arbitrary set, since the result might exceed the upper bound on the entropy.
In order to correct the intersection, the main idea is to create a set using consecutive residual codes. As mentioned in the introduction, for any $[n, k, d]_q$ linear code $C$ over $\mathbb{F}_q$, there exists $\mathrm{Res}(C)$, a restriction of $C$ called the residual code of $C$, such that $\mathrm{Res}(C)$ has parameters $[\,n - d,\ k - 1,\ d' \ge \lceil d/q \rceil\,]$. We define the sequence of consecutive residual codes as a chain of subsets of $[n]$.
Definition 5.
Let $C$ be an $[n, k, d]_q$ linear code over $\mathbb{F}_q$. The res-chain of $C$ is a sequence of sets $S_1 \supseteq S_2 \supseteq \cdots$ with $S_1 = [n]$, constructed recursively so that $C_{S_{i+1}}$ is a residual code of $C_{S_i}$.
This definition is well-defined since, by the proof of the corresponding theorem in [30], the residual code of $C$ is constructed by restricting $C$ to a well-chosen set of coordinates. Therefore, we can interpret the recursive residual code chain as a sequence of sets in $2^{[n]}$. Furthermore, as the dimension of the residual code is one less than the dimension of the code, the chain has length $k$ and, for all $1 \le i \le k$, there is a set $S_i$ in the res-chain of $C$ such that $H(S_i) = k - i + 1$. Finally, by a recursive argument, if $S_i$ is a set in the res-chain of $C$, then the minimum distance of $C_{S_i}$ is bounded from below by $\lceil d / q^{i-1} \rceil$.
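The guaranteed parameters along a res-chain follow directly from the residual-code statement: each step replaces $[n, k, d]$ by $[\,n - d,\ k - 1,\ \ge \lceil d/q \rceil\,]$. The following sketch (an illustration) tabulates these guarantees, starting from the binary $[7, 3, 4]$ Simplex code, whose residual codes are again Simplex codes.

```python
from math import ceil

def reschain_parameters(q, n, k, d):
    """Guaranteed (length, dimension, distance) triples along a chain of
    consecutive residual codes of an [n, k, d] linear code over F_q."""
    chain = [(n, k, d)]
    while k > 1:
        n, k, d = n - d, k - 1, ceil(d / q)
        chain.append((n, k, d))
    return chain

print(reschain_parameters(2, 7, 3, 4))  # [(7, 3, 4), (3, 2, 2), (1, 1, 1)]
```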
We will now present two lemmas that are used to prove Theorem 1. The first lemma states how to correct a set when adding a repair set would exceed the remaining entropy.
Lemma 2.
Let be a linear LRC code over . Let be such that and an integer with . If there exists a repair set such that , then, there exists with such that

,

.
Proof.
Proof is given in the appendix. ∎
Using the above lemma, we can prove the following second lemma, which represents the challenging part of proving the new bound.
Lemma 3.
Let be a linear LRC code over and such that and . Then there exists with such that

,

.
Proof.
Proof is given in the appendix. ∎
Looking at the proofs, we can see that if all the repair sets are disjoint and have maximal dimension, then Lemma 3 follows directly, since no correction is needed. If the repair sets intersect each other or have dimension less than the maximum, we gain a little margin in the entropy of the union that can be used to perform a correction. We then use the chain of residual codes to get a set that, when added to the union, increases the entropy by exactly the amount left. The last trick is to evaluate both the size of the union and the set in the res-chain using the Griesmer bound. First, it is a bound on the size where the minimum distance plays a more important role compared to the dimension, which fits the lower bound on the local minimum distance for codes with dimension-locality. Secondly, the Griesmer bound has the nice property that $g_q(k, d) = d + g_q(k - 1, \lceil d/q \rceil)$. The first term in the sum can be used to get a lower bound on the size of the repair set minus its intersection, while the second term, under some conditions, gives a lower bound on the size of a particular set in the res-chain of a repair set. Thus, this relation is really useful when we add the extra set to correct the union. Finally, the Griesmer bound is also consistent with our construction based on residual codes.
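The recursive property of the Griesmer bound invoked above, $g_q(k, d) = d + g_q(k - 1, \lceil d/q \rceil)$, can be spot-checked numerically:

```python
from math import ceil

def g(q, k, d):
    """Griesmer bound on the length of an [n, k, d] linear code over F_q."""
    return sum(ceil(d / q**i) for i in range(k))

# Verify g_q(k, d) = d + g_q(k - 1, ceil(d / q)) over a range of parameters;
# this is the identity relating a code to its residual code.
for q in (2, 3, 4):
    for k in range(2, 8):
        for d in range(1, 40):
            assert g(q, k, d) == d + g(q, k - 1, ceil(d / q))
print("recursion verified")
```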
As a corollary of Theorem 1, we can force the dimension to be a multiple of the local dimension in order to obtain a bound that resembles the original one.
Corollary 1.
Let be a linear LRC code over . Then we have
(7) 
Although the wider range of parameters makes the bound (6) theoretically better than this bound, the two show similar experimental results. More precisely, we randomly generated feasible parameters for LRC codes over the binary field, and in all cases the bound (7) was equal to the bound (6). One possible justification is that, for two consecutive values of the dimension inside the minimum in (6), the length in the second term decreases by the largest value when the dimension is a multiple of the local dimension. Therefore, the optimum would always be attained when the dimension is a multiple of the local dimension. However, a formal proof is not possible due to the unknown intrinsic bound on the dimension.
III-A New bounds for locality
As already explained in Section II, to get a bound on LRCs with locality $(r, \delta)$ instead of dimension-locality, we can estimate the local dimension via an upper bound on the maximal dimension given the size $r + \delta - 1$ and the minimum distance $\delta$. Since we never used the fact that this estimate is actually attained, our previous results apply directly to codes with locality $(r, \delta)$ via the estimated dimension. Therefore, we get the following new bound.
Theorem 2.
Let be a linear LRC code over and the upper bound on the local dimension. Then
(8) 
where such that .
It is really important to estimate the size in the shortened part of the bound via the Griesmer bound instead of replacing it by the maximal size $r + \delta - 1$. The reason is that $r + \delta - 1$ is an upper bound on the size, while we need something of the form of a lower bound. However, what we need is not exactly a lower bound either, since the dimension of a repair set can be lower than the estimated local dimension. We present a small counterexample.
Example 3.
Let be the binary linear code given by the following generator matrix
$C$ is an LRC code with obvious repair sets. Estimating the size using the Griesmer bound yields . However, there are no sets of dimension less than and size greater than , since every set of size already has dimension equal to . The problem here is that the repair set of size , which gives the upper bound , has minimum distance strictly greater than .
Using Lemma 3, we can also derive a Singleton-type bound that takes into account repair sets that are not MDS.
Theorem 3.
Let be a linear LRC code over and the upper bound on the local dimension. Then
(9) 
where .
IV Analysis and comparisons
This section is devoted to the comparison between our bounds and the previously known bounds for LRC codes. In the first part, we show that the bound (8) leads to the straightforward extension of the bound (3) for locality $(r, \delta)$, and that the bound (9) leads to the Singleton-type bound (2) when the field size is sufficiently large. In the second part, we derive the asymptotic formulas of the bounds (8) and (9) to obtain bounds on the tradeoff between the rate and the relative minimum distance for fixed locality. This also enables a cleaner comparison between the new bounds and the bound (4) from [27]. Notice that we will not compare our bounds to the linear-programming bound derived in [27], since it is not possible to derive an asymptotic formula from it and we do not assume that the repair sets are disjoint.
Our results show that the comparison between the new asymptotic Singleton-type bound and the asymptotic version of the bound (4) depends on the performance of the Griesmer bound compared to the log-convex bounds, as we can find examples where the new bound is better than, equal to, or worse than the bound (4). By using the Plotkin bound as the intrinsic bound in (8), we prove that the bound (8) is always better than the bound (4) for large relative minimum distances.
Corollary 2.
Let be a linear LRC code over . Then
(10) 
To the best of our knowledge, although this extension of the original bound to locality $(r, \delta)$ is straightforward, it has not previously appeared in the literature.
Proof.
Let be the upper bound on the local dimension. We want to show that for all with , there is a set with and . For fixed , define such that . By the same arguments as in the proof of Theorem 1, there exists a set such that . It remains to show that . First, we have since . Now, is an integer, so . Using the fact that the Griesmer bound is greater than or equal to the Singleton bound, i.e., , we have
Hence using Lemma 1 with this approximation on the size, we obtain the desired bound on . ∎
Now we prove that the new Singletontype bound can be used to obtain the bound (2).
Proof.
This shows that the bounds (8) and (9) are at least as good as the bounds (10) and (2) respectively. Furthermore, we can see that the bounds (8) and (9) improve on the previous bounds when or when . The latter case is of particular interest for small alphabets. For example, when considering binary LRC codes, the new bounds are already better than the bound (2) for all .
IV-A Asymptotic regime
For the rest of this section, we look at the asymptotic regime where $n \to \infty$. Let $R = k/n$ be the rate of the code and $\theta = d/n$ the relative minimum distance. Usually, the relative minimum distance is denoted by $\delta$, but here we reserve $\delta$ for the local minimum distance. The goal is to obtain bounds on the tradeoff between the rate and the relative minimum distance when the locality is fixed and $n \to \infty$. This also makes the comparison to the bound (4) easier.
We start with the Singleton-type bound (9). Dividing the bound (9) by $n$ and letting $n \to \infty$, its asymptotic formula is as follows:
(11) 
Following the same method, we can derive the asymptotic version of the bound (4). For ease of reading, we reproduce the bound here: for any LRC over with locality and a bound on the cardinality of a code which is log-convex in and such that , we have
Its asymptotic version is therefore:
(12) 
The following table summarizes the asymptotic formulas for the Singleton-type bounds under different locality assumptions. Notice that only the last three are truly comparable, since they share the same locality assumptions. Looking at the table, we can see how the locality assumption reduces the rate by the fraction of the local dimension over the local size.
Following the method in [24], we can derive the asymptotic formula for the bound (8). Define . By dividing the bound (8) by , we obtain its asymptotic version
(13) 
We can now compare the asymptotic formulas (11), (13), and (12). Notice that for linear codes, is a bound on the dimension of the local sets. From now on, we denote by the bound . By definition, gives a valid upper bound on the dimension in Theorem 2. However, it might happen that the best bound on the dimension is not log-convex, and hence . In particular, the Griesmer bound on the cardinality is not a log-convex function of the length, as demonstrated next.
Recall that a positive function $B(n)$ of the integer argument $n$ is called log-convex if $B(n)^2 \le B(n-1)\,B(n+1)$ for any $n$ in the support of $B$. For any linear code over $\mathbb{F}_q$, the Griesmer bound on the dimension given $n$ and $d$ is obtained by taking the maximal $k$ such that the Griesmer bound on the length does not exceed $n$. Thus, it gives a bound on the cardinality, $q^k$. Let us consider the parameters , and . Then, we obtain
Hence, and the Griesmer bound on the cardinality is not log-convex. Therefore, there is no obvious answer to the comparison between the bounds (11) and (12), since we need to compare and , where both the numerator and the denominator of the former are smaller than or equal to their respective counterparts in the latter.
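The failure of log-convexity can also be exhibited by direct search. The following sketch (with illustrative parameters $q = 2$ and $d = 5$, not necessarily those considered in the text) looks for a length $n$ at which the Griesmer cardinality bound $B(n) = q^{k(n)}$ violates $B(n)^2 \le B(n-1)\,B(n+1)$.

```python
from math import ceil

def griesmer_length(q, k, d):
    """Griesmer bound on the length of an [n, k, d] linear code over F_q."""
    return sum(ceil(d / q**i) for i in range(k))

def griesmer_dim(q, n, d):
    """Largest k with griesmer_length(q, k, d) <= n: the Griesmer bound on the dimension."""
    k = 0
    while griesmer_length(q, k + 1, d) <= n:
        k += 1
    return k

q, d = 2, 5
# Restrict to n > d + 1 so that the neighboring lengths stay in the support of the bound.
for n in range(d + 2, 20):
    ks = [griesmer_dim(q, m, d) for m in (n - 1, n, n + 1)]
    if 2 * ks[1] > ks[0] + ks[2]:  # compare exponents: B(n)^2 > B(n-1) * B(n+1)
        print(n, ks)  # prints: 8 [1, 2, 2]
```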
To be more specific, it mainly depends on the performance of the Griesmer bound compared to the log-convex bounds. For example, if there exists a log-convex bound such that but , then the bound (12) is strictly better than the new Singleton-type bound. This is illustrated in Figure 2, which displays the rate-distance tradeoff for binary codes with locality . To evaluate the local dimension, we use the Hamming bound as a log-convex bound to get , which is optimal. The Griesmer bound gives . Hence, the green line representing the bound (12) is better than the orange line displaying the bound (11).
On the other hand, if for all log-convex bounds, then (11) is strictly better than (12) because we have . Since it is impossible to give a proper example, due to the fact that we would need to prove it for all log-convex bounds, we restrict the comparison here to the three bounds proven to be log-convex in [27], namely the Singleton, Hamming, and Plotkin bounds. Let be a linear LRC code with locality . The Singleton bound gives an upper bound on the local dimension of 12 and the Hamming bound gives a bound of 7. The Plotkin bound is not applicable here since . Now, the Griesmer bound on the dimension gives an upper bound of and is therefore better than the Hamming bound. As we can see in Figure 3, which displays the asymptotic bounds for binary codes with locality , the bound (11) in orange is always better than the bound (12) in green.
Finally, if and , the two bounds become the same. This happens for example in Figure 1 where the locality is and both the Griesmer and the Plotkin bound give and .
Nonetheless, these are just special cases of the comparison between the bounds (11) and (12), and the final comparison needs to be done case by case, since it depends on three quantities that are impossible to compute theoretically: the best upper bound on the dimension, the best log-convex upper bound, and the performance of the Griesmer bound with respect to both.
The comparison between the bound (12) and the bound (13) is more straightforward. Since the latter is at least as good as (11), we automatically get that the bound (13) is stronger than the bound (12) whenever the new Singleton-type bound is stronger than or equal to the bound (12).
Furthermore, we will see that the bound (13) is always stronger than the bound (12) for large relative minimum distances, i.e., there is a threshold value such that, for all relative minimum distances above it, the bound (13) is better than the bound (12).
To prove this, we use the asymptotic Plotkin bound, given for a $q$-ary alphabet by $R \le 1 - \frac{q}{q - 1}\,\theta$ for $0 \le \theta \le \frac{q - 1}{q}$, where $R$ denotes the rate and $\theta$ the relative minimum distance.
Combining it with the bound (13) and solving the optimization problem yields the following bound on the rate
(14) 
We can now state our claim formally.
Proposition 3.
Proof.
Proof is given in the appendix. ∎
The proof follows from the fact that the bound (14), using the asymptotic Plotkin bound, and the bound (12) are two lines with different slopes, and that the bound (14) becomes equal to the latter when the relative minimum distance is large enough. Thus, the two lines intersect exactly at the threshold value, and the bound (14) is better than (12) for relative minimum distances strictly greater than this threshold.
Finally, any bound on the rate that improves on the asymptotic Plotkin bound will thus increase the size of the interval where the bound (14) is better than the bound (12). In particular, this is true for the rate-distance bound given in [31], which is the best known upper bound for binary codes. The MRRW bound is as follows:

$$R \le h\!\left(\frac{1}{2} - \sqrt{\theta\,(1 - \theta)}\right), \qquad (15)$$

where $\theta$ is the relative minimum distance and $h(x) = -x \log_2 x - (1 - x)\log_2(1 - x)$ is the binary entropy function.
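Numerically, the MRRW bound improves on the asymptotic Plotkin bound for binary codes at intermediate relative distances, as the following sketch (an illustration) shows:

```python
from math import log2, sqrt

def h2(x):
    """Binary entropy function."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * log2(x) - (1 - x) * log2(1 - x)

def mrrw1(theta):
    """First MRRW upper bound on the rate of binary codes with relative distance theta."""
    return h2(0.5 - sqrt(theta * (1 - theta)))

def plotkin(theta):
    """Asymptotic Plotkin upper bound for binary codes."""
    return max(0.0, 1 - 2 * theta)

for theta in (0.1, 0.2, 0.3, 0.4):
    print(f"theta = {theta}: MRRW = {mrrw1(theta):.4f}, Plotkin = {plotkin(theta):.4f}")
```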
We can thus replace the asymptotic Plotkin bound by the MRRW bound in (13) and, by numerically solving the optimization problem, we obtain the red curve in Figures 1, 2, and 3. We see that the bound (13) combined with the MRRW bound improves significantly on the bound (12), even when the Griesmer bound is not equal to the maximal local size.
V Achievability results
Several constructions of codes achieving the bound (2) already exist, for example in [6, 16, 17, 21, 22]. Many of these constructions require the alphabet size to be exponential in the code length. Since the bound (8) approaches the bound (2) for large alphabets, the bound (8) is indeed tight in this regime. We show in this section, by considering the family of Simplex codes, that the bound (8) is also tight for some parameter values for every fixed field size, in particular small ones.
Definition 6.
Let $k \geq 1$ and let $G$ be a $k \times \frac{q^k - 1}{q - 1}$ matrix over $\mathbb{F}_q$ whose columns are nonzero and pairwise linearly independent. The code with $G$ as a generator matrix is called a $q$-ary Simplex code with parameters $\left[\frac{q^k - 1}{q - 1}, k, q^{k-1}\right]$.
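As an illustration of this definition in the binary case, a Simplex generator matrix can be built by taking all nonzero vectors of $\mathbb{F}_2^k$ as columns (a minimal sketch; the function name is ours):

```python
import itertools

def simplex_generator_matrix(k):
    # Generator matrix of the binary Simplex code: its columns are
    # all 2^k - 1 nonzero vectors of F_2^k. Over F_2, distinct nonzero
    # columns are automatically pairwise linearly independent.
    cols = [v for v in itertools.product([0, 1], repeat=k) if any(v)]
    return [[col[i] for col in cols] for i in range(k)]

G = simplex_generator_matrix(3)  # a 3 x 7 generator matrix
```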
Since Simplex codes are known to achieve the Griesmer bound, they achieve the bound (8) via the Griesmer bound. Therefore, the locality parameters do not influence the optimality of the code. This is in fact true in general: if a code already achieves a bound on the code size without locality constraints and has a certain locality, then it is an optimal locally repairable code for these locality parameters.
Let us study the locality of the Simplex code. Simplex codes were already considered as locally repairable codes in [24] and in [25], where the authors used them to construct new LRCs. Here, we want to derive the locality for larger locality parameters.
The first thing to notice is that, for every coordinate, there exists a nonzero codeword vanishing on that coordinate. Indeed, it is enough to take two different codewords that are not multiples of each other and subtract them in an appropriate manner to obtain the desired codeword. Since every codeword of the Simplex code has the same weight, the residual code associated to such a codeword is again a Simplex code of smaller dimension. Therefore, by recursion, every coordinate is contained in a Simplex code obtained by a restriction of the original code. Since the minimum distance of a $q$-ary Simplex code of dimension $k$ is $q^{k-1}$, this guarantees the required local minimum distance. Hence the Simplex code has dimension locality for all admissible local dimensions.
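The constant-weight property underlying this recursion can be checked numerically for small binary Simplex codes (a minimal sketch; the function name is ours):

```python
import itertools

def simplex_codewords(k):
    # Enumerate all 2^k codewords of the binary Simplex code of dimension k:
    # the codeword for message m has one coordinate <m, c> (mod 2) per
    # nonzero column c in F_2^k.
    cols = [v for v in itertools.product([0, 1], repeat=k) if any(v)]
    return [tuple(sum(mi * ci for mi, ci in zip(m, c)) % 2 for c in cols)
            for m in itertools.product([0, 1], repeat=k)]

# Every nonzero codeword should have the same weight, namely 2^(k-1).
k = 4
weights = {sum(w) for w in simplex_codewords(k) if any(w)}
```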
To get examples that achieve the bounds (8) and (9) in a less obvious manner, we will prove that all the examples presented in this paper are optimal. Let us start with Example 1, where the code is a binary LRC. By using the Plotkin bound, we can bound the size of the local codes, and we can then compute the bound (9):
Hence, the code of Example 1 is optimal.
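The binary Plotkin bound used in these optimality checks can be evaluated mechanically, via $A_2(n, d) \leq 2\lfloor d/(2d - n) \rfloor$ for even $d$ with $2d > n$, together with the reduction $A_2(n, d) = A_2(n+1, d+1)$ for odd $d$ (a minimal sketch; our helper, not from the paper, and the specific example parameters are not reproduced here):

```python
def plotkin_bound(n, d):
    # Upper bound on A_2(n, d), the maximal size of a binary code
    # of length n and minimum distance d, via the Plotkin bound.
    if d % 2 == 1:
        # For odd d, A_2(n, d) = A_2(n + 1, d + 1).
        return plotkin_bound(n + 1, d + 1)
    if 2 * d > n:
        return 2 * (d // (2 * d - n))
    raise ValueError("the Plotkin bound requires 2d > n")
```

For instance, `plotkin_bound(5, 3)` and `plotkin_bound(9, 6)` both evaluate to 4, matching the known optimal sizes of these binary codes.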
In Example 2, we presented a binary code. Since the purpose of this example is to illustrate that the two parameters might not be equal, it is necessary here to use the bound (6) instead of (8). By using the Plotkin bound for the local code, we have
Hence, this code reaches the bound (6).
Finally, consider the binary code in Example 3. By using the Plotkin bound, we can bound the local parameters and then compute the bound (8). We have
Hence, the code is an optimal LRC.
Interestingly enough, to prove the optimality of both codes from Examples 2 and 3, we used a set of the same dimension in the bounds (6) and (8). However, neither of the codes reaches the Singleton-type bound (9) obtained via a set of the same dimension. This is because the bound carries an extra dependency on the field size. Indeed, there is no binary code with these parameters, while there already exists an MDS code with these parameters over a larger field.
References
 [1] A. Dimakis, P. B. Godfrey, Y. Wu, M. J. Wainwright, and K. Ramchandran, “Network coding for distributed storage systems,” IEEE Transactions on Information Theory, vol. 56, no. 9, pp. 4539–4551, 2010.
 [2] V. Guruswami and M. Wootters, “Repairing Reed-Solomon codes,” IEEE Transactions on Information Theory, vol. 63, pp. 5684–5698, 2016.
 [3] A. S. Rawat, I. Tamo, V. Guruswami, and K. Efremenko, “MDS code constructions with small subpacketization and near-optimal repair bandwidth,” IEEE Transactions on Information Theory, vol. 64, pp. 6506–6525, 2017.
 [4] P. Gopalan, C. Huang, H. Simitci, and S. Yekhanin, “On the locality of codeword symbols,” IEEE Transactions on Information Theory, vol. 58, no. 11, pp. 6925–6934, 2012.
 [5] D. Papailiopoulos and A. Dimakis, “Locally repairable codes,” in International Symposium on Information Theory. IEEE, 2012, pp. 2771–2775.
 [6] N. Prakash, G. M. Kamath, V. Lalitha, and P. V. Kumar, “Optimal linear codes with a local-error-correction property,” in International Symposium on Information Theory. IEEE, 2012, pp. 2776–2780.
 [7] G. M. Kamath, N. Prakash, V. Lalitha, and P. V. Kumar, “Codes with local regeneration,” 2013 Information Theory and Applications Workshop (ITA), pp. 1–5, 2013.
 [8] A. Wang and Z. Zhang, “Repair locality with multiple erasure tolerance,” IEEE Transactions on Information Theory, vol. 60, no. 11, pp. 6979–6987, 2014.
 [9] N. Prakash, V. Lalitha, and P. V. Kumar, “Codes with locality for two erasures,” 2014 IEEE International Symposium on Information Theory, pp. 1962–1966, 2014.
 [10] A. S. Rawat, A. Mazumdar, and S. Vishwanath, “Cooperative local repair in distributed storage,” EURASIP J. Adv. Sig. Proc., vol. 2015, p. 107, 2015.
 [11] A. Mazumdar, “Storage capacity of repairable networks,” IEEE Transactions on Information Theory, vol. 61, pp. 5810–5821, 2015.
 [12] A. S. Rawat, D. S. Papailiopoulos, A. G. Dimakis, and S. Vishwanath, “Locality and availability in distributed storage,” IEEE Transactions on Information Theory, vol. 62, no. 8, pp. 4481–4493, 2016.
 [13] I. Tamo, A. Barg, and A. Frolov, “Bounds on the parameters of locally recoverable codes,” IEEE Transactions on Information Theory, vol. 62, no. 6, pp. 3070–3083, 2016.
 [14] P. Huang, E. Yaakobi, H. Uchikawa, and P. H. Siegel, “Binary linear locally repairable codes,” IEEE Transactions on Information Theory, vol. 62, pp. 5296–5315, 2016.
 [15] C. Huang, M. Chen, and J. Lin, “Pyramid codes: Flexible schemes to trade space for access efficiency in reliable data storage systems,” in International Symposium on Network Computation and Applications. IEEE, 2007, pp. 79–86.
 [16] G. M. Kamath, N. Prakash, V. Lalitha, P. V. Kumar, N. Silberstein, A. S. Rawat, O. O. Koyluoglu, and S. Vishwanath, “Explicit MBR allsymbol locality codes,” 2013 IEEE International Symposium on Information Theory, pp. 504–508, 2013.
 [17] A. S. Rawat, O. O. Koyluoglu, N. Silberstein, and S. Vishwanath, “Optimal locally repairable and secure codes for distributed storage systems,” IEEE Transactions on Information Theory, vol. 60, pp. 212–236, 2014.
 [18] I. Tamo, D. Papailiopoulos, and A. Dimakis, “Optimal locally repairable codes and connections to matroid theory,” IEEE Transactions on Information Theory, vol. 62, pp. 6661–6671, 2016.
 [19] I. Tamo and A. Barg, “A family of optimal locally recoverable codes,” IEEE Transactions on Information Theory, vol. 60, no. 8, pp. 4661–4676, 2014.
 [20] S. Goparaju and A. R. Calderbank, “Binary cyclic codes that are locally repairable,” 2014 IEEE International Symposium on Information Theory, pp. 676–680, 2014.
 [21] T. Ernvall, T. Westerbäck, R. Freij-Hollanti, and C. Hollanti, “Constructions and properties of linear locally repairable codes,” IEEE Transactions on Information Theory, vol. 62, pp. 5296–5315, 2016.
 [22] T. Westerbäck, R. Freij-Hollanti, T. Ernvall, and C. Hollanti, “On the combinatorics of locally repairable codes via matroid theory,” IEEE Transactions on Information Theory, vol. 62, pp. 5296–5315, 2016.
 [23] R. Freij-Hollanti, C. Hollanti, and T. Westerbäck, “Matroid theory and storage codes: bounds and constructions,” 2017, arXiv: 1704.0400.
 [24] V. Cadambe and A. Mazumdar, “An upper bound on the size of locally recoverable codes,” in International Symposium on Network Coding, 2013, pp. 1–5.
 [25] N. Silberstein and A. Zeh, “Anticode-based locally repairable codes with high availability,” Des. Codes Cryptography, vol. 86, pp. 419–445, 2018.
 [26] A. Zeh and E. Yaakobi, “Optimal linear and cyclic locally repairable codes over small fields,” 2015 IEEE Information Theory Workshop (ITW), pp. 1–5, 2015.
 [27] A. Agarwal, A. Barg, S. Hu, A. Mazumdar, and I. Tamo, “Combinatorial alphabet-dependent bounds for locally recoverable codes,” IEEE Transactions on Information Theory, vol. 64, pp. 3481–3492, 2018.
 [28] M. Grezet, R. Freij-Hollanti, T. Westerbäck, and C. Hollanti, “Bounds on binary locally repairable codes tolerating multiple erasures,” in The International Zurich Seminar on Information and Communication (IZS 2018) Proceedings. ETH Zürich, 2018, pp. 103–107.
 [29] J. H. Griesmer, “A bound for error-correcting codes,” IBM Journal of Research and Development, vol. 4, pp. 532–542, 1960.
 [30] G. Solomon and J. J. Stiffler, “Algebraically punctured cyclic codes,” Information and Control, vol. 8, pp. 170–179, 1965.
 [31] R. J. McEliece, E. R. Rodemich, H. Rumsey, and L. R. Welch, “New upper bounds on the rate of a code via the Delsarte-MacWilliams inequalities,” IEEE Trans. Information Theory, vol. 23, pp. 157–166, 1977.
 [32] T. Chan, A. J. Grant, and T. Britz, “Properties of quasi-uniform codes,” 2010 IEEE International Symposium on Information Theory, pp. 1153–1157, 2010.
 [33] T. Westerbäck, M. Grezet, R. Freij-Hollanti, and C. Hollanti, “On the polymatroidal structure of quasi-uniform codes with applications to heterogeneous distributed storage,” in International Symposium on Mathematical Theory of Networks and Systems, 2018.