A theoretical analysis of the error correction capability of LDPC and MDPC codes under parallel bit-flipping decoding

10/01/2019 ∙ by Paolo Santini, et al. ∙ UnivPM

Iterative decoders used for decoding low-density parity-check (LDPC) and moderate-density parity-check (MDPC) codes are not characterized by a deterministic decoding radius, and their error rate performance is usually assessed through intensive Monte Carlo simulations. However, several applications, like code-based cryptography, need guaranteed low values of the error rate, which are infeasible to assess through simulations, thus requiring the development of theoretical models for the error rate of these codes under iterative decoding. Some models of this type already exist, but become computationally intractable for parameters of practical interest. Other approaches attempt to approximate the code ensemble behaviour through assumptions which, however, are hard to verify for a specific code and can hardly be tested at very low error rates. In this paper we propose a theoretical analysis of the error correction capability of LDPC and MDPC codes under a single-iteration parallel bit-flipping decoder that does not require any assumption. This allows us to derive a theoretical bound on the error rate of such a decoding algorithm, which hence results in a guaranteed error correction capability for any single code. We show an example of application of the new bound in the context of code-based cryptography, where guaranteed error rates are needed to achieve some strong security notions.


I Introduction

It is recognized that avoiding short cycles in the Tanner graph representation of the parity-check matrix of an LDPC code is important, because small girths have an adverse impact on the error rate performance of iterative decoders [2, 3]. Nevertheless, there are some applications in which the adoption of codes with small girth is unavoidable. For example, this is the case of the LDPC and MDPC codes used for code-based post-quantum cryptography [4, 5]. In this context, which is attracting increasing interest from the scientific community due to the NIST standardization initiative for post-quantum cryptosystems [6], the structure of the parity-check matrix is mainly dictated by security issues. This yields unavoidable short cycles in the relevant codes. Moreover, in these systems the sparse parity-check matrix of an LDPC or MDPC code is used as a secret key, and is hence designed at random, thus often yielding a large number of short cycles.

Contrary to bounded-distance decoders, the iterative decoders commonly used for LDPC and MDPC codes are not characterized by a deterministic decoding radius. This implies the existence of a residual error rate that is difficult to model theoretically, and is hence usually assessed through Monte Carlo simulations. Nevertheless, there are applications in which extremely low error rates are required. One such case is again in the area of code-based cryptography, where extremely small error rates are required to avoid some types of attacks [7, 8, 9, 10]. Obviously, such low values of the error rate are infeasible to assess through numerical simulations.

At the same time, low-complexity iterative decoders are important in many applications in which high throughput has to be achieved. The best known decoder of this type is Gallager's BF decoder [11]. Starting from its basic principle, several variants of BF have been proposed; among them, in this paper we focus our attention on so-called parallel BF. Roughly speaking, the parallel BF algorithm operates as follows. At each iteration, all parity checks are computed: all bits involved in a number of unsatisfied parity-check equations exceeding some suitably chosen threshold are flipped, and the syndrome is updated accordingly. The procedure is iterated until a null syndrome is obtained or a maximum number of iterations is reached. Following a more general approach than [12], where parallel BF was introduced, we consider a threshold that is not fixed, but rather depends on some features of the code under consideration.
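For concreteness, the following Python sketch implements one plausible reading of the procedure just described, assuming a dense 0/1 numpy representation of the parity-check matrix; the fixed threshold b and the iteration cap max_iter are illustrative parameters of our choosing, not values prescribed by [12].

    import numpy as np

    def parallel_bf_decode(H, y, b, max_iter=10):
        """One possible realization of parallel BF decoding.

        H: (r x n) binary parity-check matrix (0/1 integer entries)
        y: length-n binary received vector
        b: flipping threshold (number of unsatisfied checks)
        """
        x = y.copy()
        for _ in range(max_iter):
            s = (H @ x) % 2              # syndrome, one bit per check
            if not s.any():              # null syndrome: success
                return x, True
            upc = s @ H                  # unsatisfied checks per bit
            x = (x + (upc >= b)) % 2     # flip all bits at/above threshold
        return x, False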

Based on the above considerations, an important research challenge is represented by the development of analytical tools able to foresee the number of errors that an iterative decoder can correct. A vast body of literature exists on this subject [13, 14, 15, 16, 17], which makes it possible to determine lower and upper bounds on the guaranteed error correction capability of a code. Many of these approaches rely on expander graph arguments [15, 16], whose application, however, is known to be NP-hard [18] and is feasible only in a limited number of cases and under specific constraints. Moreover, the bounds these methods provide are often loose, particularly in the case of small girth.

In this paper, we first make some considerations on the guaranteed error correction capability of LDPC and MDPC codes with small girth, which generalize the approach proposed in [19, 1]. In [19], in particular, a majority-logic decoder is considered and it is shown that its error correction capability depends on the maximum number of positions in which any two columns of the code parity-check matrix overlap. This allows deriving conditions under which a single iteration of this decoder corrects all errors up to a given weight. These results are extended in [1], where a more general decoder is considered and tighter bounds are derived. We show that such bounds are indeed tight if the girth of the considered codes is small.

Moreover, we provide an upper bound on the error rate of LDPC and MDPC codes for the first iteration of BF decoding which, differently from that in [1], does not rely on any specific assumption. We remark that some lower and upper bounds on the error rate under BF decoding are also proposed in [20], but their computation requires pre-processing of all possible initial error patterns with weight up to a certain value; thus, the approach quickly becomes infeasible as the error probability of the channel decreases or as error patterns with too large a weight have to be considered. The same remark holds for the approaches proposed in [21, 22], which allow estimating the error rate of LDPC codes under BF decoding. Our approach, instead, is fully analytical, and does not require any preliminary simulation.

The paper is organized as follows. In Section II we introduce the notation used throughout the paper and recall some basic notions of LDPC and MDPC codes. In Section III we discuss the error correction capability of codes with small girth under BF decoding. In Section IV we provide an upper bound on the error rate of LDPC and MDPC codes under BF decoding. In Section V we show an application of the derived bounds to code-based cryptography. Finally, we draw some conclusions in Section VI.

II Notation and definitions

We use capital letters to denote sets, adopting calligraphic fonts for sets of vectors. The cardinality of a set (or ) is denoted as (or ). Given a set , we use to express the fact that is randomly extracted among all the elements of , and the same notation is used for sets of vectors. Let be a function with domain and codomain ; then, we use to denote the image of .

The binary Galois field is denoted as . We use small bold letters to denote vectors, and capital bold letters to denote matrices. Given a matrix , its entry at position is denoted as and its -th column is denoted as . Given a vector , we refer to its -th entry as . Given a set , we have . The AND, OR and ex-OR operations are denoted as , and , respectively. The Hamming weight and the support of any vector are denoted as and , respectively. The set of integers between and is denoted as . We denote the set of all binary vectors of length and Hamming weight as .

II-A LDPC and MDPC codes

A binary LDPC code is the null space of a binary parity-check matrix containing a small number of ones compared to the total number of entries. Denoting the code blocklength as and the code dimension as , has rows and columns. The syndrome of a binary vector is defined as , where denotes transposition and the product is performed over . Any codeword belonging to the code defined by has an all-zero syndrome. The -th column and -th row of have weight and , respectively. The code is said to be -regular if each column of contains exactly ones and each row contains exactly ones. Regular LDPC codes are generally characterized by , whereas regular MDPC codes are characterized by . Regardless of such a distinction, these two families of codes have quite similar properties and the analysis that will be developed in the following is equally valid for both of them.

Definition 1

Given a matrix , let denote its -th column; then, the adjacency matrix of , denoted as , is the matrix whose element in each position equals the number of overlapping ones between the two corresponding columns of the matrix.

The adjacency matrix is commonly employed in graph theory: given an undirected multigraph with nodes, the adjacency matrix can be defined as the matrix whose element in position is equal to the number of edges connecting nodes and . Obviously, starting from a parity-check matrix , we can construct a graph (which is not bipartite and hence differs from the Tanner graph [23] of the code) with nodes, such that the -th and the -th node are connected by edges.
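In a dense numpy representation, the adjacency matrix can be obtained with a single integer matrix product, as in the sketch below; zeroing the diagonal is our convention, adopted because only the off-diagonal overlaps enter the bounds derived next.

    import numpy as np

    def adjacency_matrix(H):
        """Adjacency matrix of H: the (i, j) entry counts the positions
        in which columns i and j of H both contain a one (the product
        is taken over the integers, not over GF(2))."""
        A = H.T @ H
        np.fill_diagonal(A, 0)   # our convention: ignore self-overlaps
        return A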

II-B Bit flipping decoding

Let us describe a general version of the parallel BF algorithm focusing on a single iteration. Decoder inputs are a syndrome and a vector of integers , such that , . For each , the number of unsatisfied parity-check equations involving the -th bit is computed; we denote such a number as . The decoder considers as “error affected” all the bits for which and, thus, returns as output a vector with support . So, has the meaning of a decision threshold for the -th bit. Decoding is successful if . An interesting special case considered next is that in which , which boils down to a majority-logic decoder when .
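A minimal sketch of this single-iteration, syndrome-domain decoder follows; the per-bit thresholds are passed as a vector, matching the description above.

    import numpy as np

    def bf_one_iteration(H, s, b):
        """Single parallel BF iteration in the syndrome domain.

        H: (r x n) binary parity-check matrix
        s: length-r syndrome of the unknown error vector
        b: length-n vector of per-bit decision thresholds

        Returns the estimated error vector: bit j is declared in error
        when at least b[j] of its parity checks are unsatisfied."""
        upc = s @ H                      # unsatisfied checks per bit
        return (upc >= b).astype(int)

    # Decoding succeeds when the returned vector equals the actual
    # error vector, i.e., when the decoder output leaves a null
    # residual syndrome and no miscorrection has occurred.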

III Guaranteed error correction capability of bit flipping

Let us provide some preliminary definitions taken from [1], with some adaptations.

Definition 2

Given , let us consider the rows of indexed by and put them into a matrix . Following [1], we define as the -th partial parity-check matrix. The -th column of is denoted as . We also define

where is a set containing the indexes of columns of , except for the -th. We call the maximum column intersection of order , and denote as , the quantity defined as

When , we call the maximum column intersection and, for simplicity, we denote it as ; it is easy to see that corresponds to the maximum number of set positions in which two columns of overlap. We remark that, if the code has girth larger than 4, then the supports of any two columns intersect in at most one position, and thus the maximum column intersection is at most 1.

The above notions can be easily related to the entries of the adjacency matrix. For instance, the weight of the -th column of the -th partial parity-check matrix is equal to the -th element of the matrix , , and the maximum column intersection corresponds to the largest entry of . For a code with girth larger than 4, the adjacency matrix is a binary matrix.

Definition 3

Given and the corresponding adjacency matrix , we denote as the vector formed by the elements of the -th row of , except for the -th one. We define as the sum of the largest entries of . We then define the maximum column union of order , denoted as , the quantity

(1)
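Both quantities are directly computable from the adjacency matrix; a short sketch follows, using the zero-diagonal convention of the earlier adjacency_matrix sketch so that the diagonal never contributes.

    import numpy as np

    def max_column_intersection(A):
        """Largest off-diagonal entry of the adjacency matrix."""
        B = A.copy()
        np.fill_diagonal(B, 0)
        return int(B.max())

    def max_column_union(A, t):
        """Maximum column union of order t: for each row of A (with the
        diagonal entry excluded), sum the t largest entries, then take
        the maximum over all rows."""
        best = 0
        for i in range(A.shape[0]):
            row = np.delete(A[i], i)         # drop the diagonal entry
            top = np.sort(row)[::-1][:t]     # t largest entries
            best = max(best, int(top.sum()))
        return best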

III-A Bounds on the error correction capability

The following theorem from [19] shows that the error correction capability of a code decoded with a majority-logic decoder is related to the maximum column intersection.

Theorem 1

[19] Consider a code defined by a parity-check matrix for which every column has weight at least and whose maximum column intersection is . Majority-logic decoding on this matrix allows the correction of all error vectors with weight , where .

Corollary 1

Consider a code with girth larger than 4, defined by a parity-check matrix for which every column has weight at least . Majority-logic decoding on this matrix allows the correction of all error vectors with weight , where .

Proof:

It is a straightforward consequence of the fact that, if the girth is larger than 4, the maximum column intersection is 1. ∎

As mentioned in the Introduction, these results are generalized in [1], where it is shown that the guaranteed error correction capability under BF decoding can actually be expressed by taking into account the interplay of more than two columns, that is, assuming .

Theorem 2

[1] Let us consider a code defined by a parity-check matrix in which every column has weight at least . Let be an integer such that

Then a BF decoder with variable decoding thresholds

(or fixed decoding threshold ) corrects all the error vectors of weight in one iteration.

If we denote by the largest integer such that Theorem 2 is satisfied, and assume that , then Theorem 2 allows correction of all the error vectors with weight smaller than or equal to .

Let us now specialize Theorem 2 to -regular codes with girth . When , the weight of the columns of any partial parity-check matrix is either or . In particular, any partial parity-check matrix contains all-zero columns and columns with weight . As any partial parity-check matrix has rows, it follows that

which is obtained by considering different columns. Then, according to Theorem 2, we have that

with threshold if is even (corresponding to a majority-logic decoder), and if is odd. In other words, when , Theorem 1 and Theorem 2 express the same error correction capability, with Theorem 2 giving an additional choice of the decision threshold when is odd. When , instead, as proved in [1], the bound given in Theorem 2 is never smaller than that given in Theorem 1, which means that the new bound is tighter.

As discussed above, Theorem 2 guarantees correction of all error vectors up to a given weight only if is a non-decreasing function for all . This assumption is reasonable for sparse parity-check matrices, but it may not be verified for any choice of ; thus, we state the following theorem, based on the adjacency matrix , which does not rely on any assumption. Theorem 3 provides an upper bound on the error correction capability that is smaller than or equal to the one given by Theorem 2, but larger than or equal to the one given by Theorem 1.

Theorem 3

Let us consider a code defined by a parity-check matrix in which every column has weight at least . Let be an integer , where is the largest integer such that

(2)

Then a BF decoder with decoding thresholds

(3)

corrects all the error vectors of weight smaller than or equal to in one iteration.

Proof:

Let denote the number of unsatisfied parity-check equations in which the -th bit participates, and denote the weight of the -th column in . Let us denote by the error vector; if , then we have

(4)

In the same way, when the -th bit is error free, that is, , we have

(5)

Clearly, one iteration of BF decoding can correct any error vector of weight if, , there exists a value of such that

(6)

Inserting (4) and (5) into (6), we obtain

(7)

which implies

(8)

According to (7), any guarantees that all bits such that are characterized by values of that never exceed and, thus, are not flipped; conversely, all bits such that are characterized by values of larger than or equal to , and thus are flipped. ∎

III-B Comparison with previous approaches

Let us compare our bounds on the error correction capability with those in [15]. We remark that our bounds refer to a single decoding iteration, whereas those in [15] refer to an unspecified number of decoding iterations. Despite this, as shown in the following, for small values of our bounds are tighter than those in [15].

Theorem 4

[15] For a code defined by a parity-check matrix with girth in which every column has weight , BF decoding with decoding threshold allows correction of all error patterns of weight less than

(9)

For , and , the bounds on the error correction capability computed according to (9) are , and , respectively. So, for , (9) is useless. On the contrary, the error correction capability given by Theorem 2 is not null on condition that , that is, being by definition, if . So, contrary to (9), as long as does not contain repeated columns, Theorem 2 guarantees a significant error correction capability after just one decoding iteration. Several examples are reported in [1], where it is also shown that even the values resulting from Theorem 3 (which, we recall, are more conservative than those from Theorem 2) are often significantly larger than those obtained from Theorem 1.

For , we have and the error correction capability given by Theorem 2 coincides with that given by Theorem 3, resulting in . Notice that the previous inequality, which compares the error correction capability given in Theorem 3 (left-hand side) and that resulting from (9) (right-hand side), holds with equality only for and . To be more explicit, the gap between the correction capability foreseen by Theorem 2 and that obtained through (9) becomes larger and larger for increasing , which is particularly relevant in view of the application to code-based cryptography, where may assume relatively large values.

For , Theorem 2 and Theorem 3 result in , whereas (9) results in . So, since , the bounds are the same for odd values of , whereas the bound we provide in Theorem 2 and Theorem 3 is larger by than that given in (9) for even values of .

Finally, for , the bounds given by (9) are always larger than those given by Theorem 2 and Theorem 3.

So, based on the above considerations, we can conclude that the major impact of the present analysis and, similarly, of the analysis in [19, 1], occurs for codes with and .

IV Analysis of the decoding failure probability for the first iteration of BF decoding

In this section we derive a conservative bound for the decoding failure probability, denoted as (note that the decoding failure probability coincides with the expected value of the FER), of the first and only iteration of a BF decoder, with decoding thresholds , applied on a syndrome , where . Having a fixed number of errors () is a scenario of interest in code-based cryptography, in which encryption is performed by intentionally corrupting a codeword with a constant number of errors. Nevertheless, once the decoder performance has been characterized for a given number of errors, it is easy to extend such a characterization to channel models (like the BSC) in which the distribution of the number of errors is known. In fact, a BSC with crossover probability can be straightforwardly studied by considering that the probability that the channel introduces exactly errors is equal to . So, denoting the error vector after the first iteration as , the decoding failure probability over the BSC can be computed as

(10)

where can be upper bounded through the method we describe in this section. For the sake of brevity, from now on we only focus on the case in which the number of errors is fixed.
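As a sketch of how (10) can be evaluated in practice, assume that an upper bound on the failure probability conditioned on each error weight (e.g., from Theorem 5 below) has been precomputed and stored in a list pf; the function and parameter names below are ours.

    from math import comb

    def bsc_failure_probability(n, p, pf):
        """Evaluate (10) over a BSC with crossover probability p.

        pf[t] is an upper bound on the failure probability conditioned
        on exactly t channel errors, for t = 0, ..., n; bounds above 1
        are clipped, since a probability never exceeds 1."""
        return sum(comb(n, t) * p**t * (1 - p)**(n - t) * min(pf[t], 1.0)
                   for t in range(n + 1))

In practice the sum can be truncated at a weight beyond which the binomial term becomes negligible.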

For , we define as the binary variable obtained through the following rule

(11)

In other words, when , the decoder takes a correct decision on the -th bit. Conversely, when , the decoder takes a wrong decision on the -th bit; a wrong decision can either be the flip of an error-free bit or the missed flip of a bit affected by an error. The error patterns that cause a decoding error in the -th position are defined by the so-called error sets, which we introduce next.

Definition 4

Let be the parity-check matrix of a code with blocklength . We consider the first and only iteration of a BF decoder, with decoding thresholds . Let be the binary variable defined as in (11), for . Then, for , we define the error set for the -th bit as follows

As we show in the following, a fundamental quantity in establishing the error correction capability of the first iteration of a BF decoder is the cardinality of the error sets; in order to ease the notation, we introduce the following definition.

Definition 5

Let be a set of distinct integers , with and , . We define as the ensemble containing all such distinct non-ordered sets; clearly, . Let and let be a length- vector of non-negative integers; then, we define

Additionally, we define as the bijective function that maps each vector in into its support. For , we have and provides an isomorphism between and . As we show in the next sections, the cardinality of such sets is fundamental for our analysis; a naive approach would require testing subsets, which clearly is not feasible when the values of and are non-trivial and significantly different from each other. By exploiting the fact that, for the cases we consider, the integer values are all non-negative and rather small, we can devise an approach with reduced complexity, as described in Appendix A.

IV-A Decoding failure probability analysis based on the error sets

Let us introduce a property of the error sets that will then be used to derive the main result reported in Theorem 5.

Lemma 1

Let be a parity-check matrix, and let , for , be the error set for the -th bit. We denote with the vector formed by the entries of the -th row of the adjacency matrix , defined in Section II, except for the -th one. Then, we have

(12)
(13)
Proof:

We focus on the -th bit, and derive the conditions upon which the decoder takes a wrong decision (i.e., ). We first consider the case of : a wrong decision is taken if . Then, a necessary but not sufficient condition for having can be derived from (III-A) as . We have

from which

Similarly, for the case of , we can derive from (III-A) that a necessary but not sufficient condition for is . Then, we have

from which

(14)

Based on these relationships, we can now prove the following main theorem.

Theorem 5

Let be a parity-check matrix. Let , and be the corresponding syndrome. We consider a single BF iteration applied on , with decoding threshold for the -th bit denoted as . Let denote the vector formed by the elements in the -th row of , except for the -th one. The probability that the decoder fails to decode is upper bounded as follows

(15)
Proof:

Let be the set of all error vectors such that ; clearly

By considering that and are disjoint, as a bit may be either correct or incorrect, and by taking into account (12) and (13), we obtain

(16)

Then, the probability that decoding of fails can be upper bounded by means of the following chain of inequalities

(17)

The thesis of the theorem is finally proved by considering that and that trivially (whereas the bound in (17) is not guaranteed to be smaller than or equal to ). ∎

The expression of derived above is consistent with the results given in Section III-A and, in particular, in Theorem 3. Indeed, the following corollary holds.

Corollary 2

Let us suppose that , where is the largest integer such that (2) holds. If the decoding threshold is chosen as follows

(18)

then , and, consequently, .

Proof:

By definition,

However, it follows from the definition of and from (18) that

for any choice of the indexes and, thus, . Similarly, we have

It also follows from (18) that

for any choice of the indexes , and thus . Finally, the fact that is a straightforward consequence of (15). ∎

In the particular case of regular codes, for which all decoding thresholds take the same value, denoted as , and assuming that is odd and , the bound on provided by Theorem 5 can be rewritten as

(19)

The proof is reported in Appendix B.

Equation (19) can be used for any regular code with . For regular codes with , however, (19) can be further elaborated as discussed next.

IV-B Regular codes with girth larger than 4

When , we have

(20)

In particular, for -regular codes, each row and each column of contain exactly non-zero entries. The following lemma holds.

Lemma 2

Let be a vector of weight ; then, we have , with

(21)

The following Theorem 6 specializes Theorem 5 to the case of a regular code with girth larger than 4, and reformulates (19) for such a case.

Theorem 6

Let be the parity-check matrix of a -regular code with girth . Let , and . We consider a single iteration of BF decoding applied to , with a unique decoding threshold . If is odd and , we have

(22)

where

Proof:

The proof closely follows that of Theorem 5, together with its specialization to the case of regular codes (reported in Appendix B), by taking Lemma 2 into account. ∎

We remark that, if the code has girth larger than , (19) is not expected to be tight, as additional constraints, which we do not investigate in this paper, should be taken into account for its validity.

In the next section we apply these results to the case of code-based cryptosystems using LDPC and MDPC codes that have or .

V Application to codes used for cryptography

There is a recent trend in post-quantum cryptography to make use of some special classes of QC-LDPC and QC-MDPC codes [24, 5, 4], since they enable the design of McEliece cryptosystem variants with very small public keys. By considering the QC nature of these codes, which are described by parity-check matrices made of circulant blocks, the bounds introduced in the previous section can be further specialized. It can be easily verified that, for these codes, the matrix is QC as well; this property can be exploited to further speed up the computation of the error sets required to calculate the bounds.

Let us consider the case of a QC-LDPC code defined by the following parity-check matrix:

(23)

where each , , is a circulant matrix of size and row/column weight . The following well-known result holds.

Lemma 3

Any circulant matrix with weight larger than has girth .

Proof:

The proof is omitted for brevity. See [25, Lemma 4.2]. ∎

It follows from Lemma 3 that a parity-check matrix as in (23) cannot have girth larger than .
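As an illustration of the structure in (23), the sketch below generates a random parity-check matrix formed by a row of circulant blocks; the parameter names (p for the circulant size, n0 for the number of blocks, v for the block weight) are our placeholders for the symbols elided above.

    import numpy as np

    def random_circulant(p, v, rng):
        """p x p circulant matrix whose rows are cyclic shifts of a
        random first row of weight v."""
        first_row = np.zeros(p, dtype=int)
        first_row[rng.choice(p, size=v, replace=False)] = 1
        return np.array([np.roll(first_row, i) for i in range(p)])

    def random_qc_parity_check(p, n0, v, seed=None):
        """Parity-check matrix as in (23): a row of n0 circulant blocks."""
        rng = np.random.default_rng(seed)
        return np.hstack([random_circulant(p, v, rng) for _ in range(n0)])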

In this case, the matrix can be written as

(24)

where each is a matrix; in particular, is symmetric, and this means that and are symmetric as well, while . Moreover, each block is circulant. Indeed, let be the -th row of ; then, all rows such that are identical up to a quasi-cyclic shift; this means that

(25)

with . Then, from Theorem 5 we obtain

(26)

with

We remark that, in code-based cryptography, a decoding failure yields a decryption failure; thus, the FER coincides with the so-called decryption failure rate (DFR).

In order to assess the accuracy of our bound, let us consider some codes defined by parity-check matrices as in (23). The first code we consider has , and ; the second code has , and girth . We assess the decoding failure probability achieved by a single-iteration BF decoder with different threshold values through Monte Carlo simulations; for each value of , the failure probability has been estimated through the observation of wrong decoding instances. The comparison of the simulation results with our bounds is shown in Figs. 1 and 2, respectively. The figures show that the bound becomes tighter and tighter for decreasing values of .

Fig. 1: Comparison of the decoding failure probability estimated through Monte Carlo simulation with our bound for a code with , , , and different threshold values.
Fig. 2: Comparison of the decoding failure probability estimated through Monte Carlo simulation with our bound, for a code with , , , and different threshold values.
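The Monte Carlo procedure used for these curves can be reproduced, in outline, as follows; the stopping rule (a fixed number of observed failures per point) matches the description above, while the specific count n_failures is our placeholder.

    import numpy as np

    def estimate_dfr(H, t, threshold, n_failures=100, seed=None):
        """Monte Carlo estimate of the one-iteration decoding failure
        probability for weight-t errors: draw random error vectors
        until n_failures failures are observed."""
        rng = np.random.default_rng(seed)
        n = H.shape[1]
        failures = trials = 0
        while failures < n_failures:
            e = np.zeros(n, dtype=int)
            e[rng.choice(n, size=t, replace=False)] = 1
            s = (H @ e) % 2
            e_hat = ((s @ H) >= threshold).astype(int)
            failures += int(not np.array_equal(e_hat, e))
            trials += 1
        return failures / trials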

Actually, the values of that are required for the mentioned cryptographic systems to achieve reasonably large security levels are much smaller than those plotted in Figs. 1 and 2, and impossible to assess through Monte Carlo simulations. This makes the derived bounds particularly useful in this case. In fact, we can use (26) to design code parameters able to achieve the desired small values of without needing any simulation. To show an example, let us consider the case of a security level of binary operations, for which QC-MDPC codes with and are needed [5]. The matrices proposed in [5] have , which, however, leads to a decoding failure probability too large to resist reaction attacks like that proposed in [26] and to achieve the desirable security notion known as IND-CCA [27]. A decoding failure probability lower than is instead required for such a purpose.

The bound provided by (26) allows meeting such a requirement through a classic rejection sampling approach: for each randomly generated parity-check matrix in the form (23), the bound (26) is computed, and the matrix is discarded if such a value is above the target. The procedure is repeated until a matrix with the desired property is obtained. In order to verify the feasibility of such an approach, let us consider different parameter sets and, for each set, generate parity-check matrices at random and compute the bound through (26). The decoding threshold is optimized by choosing the value for which the bound takes its smallest value.
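A sketch of this rejection sampling loop follows, reusing random_qc_parity_check from the previous sketch; failure_bound is a hypothetical routine standing in for an implementation of (26), and the list of candidate thresholds is an input.

    import numpy as np

    def sample_key(p, n0, v, t, thresholds, target, seed=None):
        """Draw random QC parity-check matrices of the form (23) and
        keep the first one whose bound (26), minimized over the
        candidate thresholds, does not exceed the target failure
        probability. failure_bound(H, t, b) is assumed available;
        it is not defined here."""
        rng = np.random.default_rng(seed)
        while True:
            H = random_qc_parity_check(p, n0, v,
                                       seed=int(rng.integers(2**32)))
            if min(failure_bound(H, t, b) for b in thresholds) <= target:
                return H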

The results of this experiment are reported in Table I. We notice that, for all tested parameter sets, a significant percentage of matrices satisfies the constraint . This fact guarantees that the time required to generate a valid matrix is limited. In other words, it is not difficult to find a matrix for which we can be sure that the desired security level is reached.

We point out that, although the codes obtained through the above approach are significantly larger than those originally proposed, they still lead to public key sizes that are smaller than those of other competing cryptosystems, while achieving IND-CCA. For instance, considering binary Goppa codes as in the original McEliece cryptosystem, the public key size equals bits [28] for bits of security, while the parameters we found lead to a reduction in the public key size by a factor ranging between and . Additionally, the parameter sets we propose represent a concrete worst-case estimate of the key size increase which is needed in order to ensure IND-CCA. Indeed, we obviously expect that, if more than one decoding iteration is performed, the minimum value of which is necessary to fulfill decreases, thus further reducing the key size and allowing more significant improvements with respect to other cryptosystems. However, extending the bound to the case of multiple iterations goes beyond the scope of this paper and is left for future work.

Parameter set          Keys achieving the target
279,991     45         158 out of 1,000
194,989     65         990 out of 1,000
160,499     75         792 out of 1,000
149,993     85         971 out of 1,000
138,389     95         847 out of 1,000
130,043    105         226 out of 1,000
TABLE I: Rejection sampling ratios for different parameter sets

VI Conclusion

We have studied the error correction capability of LDPC and MDPC codes under iterative decoding with the aim of finding theoretical models for its characterization without resorting to computation-intensive simulations.

Under the simplifying assumption of a single-iteration BF decoder, we have shown that a per-code upper bound on the error rate can indeed be found. Such a bound provides an important theoretical tool in those contexts where very small error rates have to be guaranteed for each specific code.

One of these contexts is that of code-based cryptography, and we have shown how our bound can be successfully applied to such a context, allowing the design of cryptosystems based on QC-LDPC and QC-MDPC codes able to achieve strong security notions while keeping the size of the public keys smaller than that of classic systems employing algebraic codes and bounded-distance decoders.

Appendix A

In this Appendix we describe an efficient way to compute the cardinalities of the sets introduced in Definition 5. To this end, we first formalize the problem and then describe a method that, for the cases we are interested in, significantly improves upon the naive exhaustive search approach.

Problem 1

Let be a length- vector of non-negative integers, and let be a set of size . Given , , compute

It is clear that an exhaustive search would require generating all subsets of size ; thus, the corresponding complexity is equal to . As we show with combinatorial arguments, a simple algorithm can be devised, with a complexity that may be significantly lower.
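As a baseline, the brute-force count can be written in a few lines; the inequality direction and the names a, w and sigma are our assumptions, since the corresponding symbols are elided above.

    from itertools import combinations

    def count_exhaustive(a, w, sigma):
        """Count the size-w subsets of positions of a whose entries sum
        to at least sigma (assumed inequality direction); the cost is
        binomial(len(a), w) subset evaluations."""
        return sum(1 for T in combinations(range(len(a)), w)
                   if sum(a[j] for j in T) >= sigma)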

In particular, we obtain the number of the sets that are complementary to those defined in Problem 1, that is

from which the value of can be straightforwardly obtained as

(27)

For a set , we denote with the vector formed by the entries of that are indexed by ; we define as the number of subsets for which the corresponding sub-vector contains elements, of which are distinct, whose sum is smaller than or equal to . We have

(28)

The values of can be easily obtained, as we show next.

First of all, let be the number of distinct values in , with being the set of such values in ascending order. In the same way, we define . As we show below, the computation of depends only on these quantities.

Let be the set of distinct values that are contained in . When , we easily have

(29)

where, as usual, if . When , some further considerations must be taken into account. For a set , let be the distinct values assumed by the entries of , and denote the corresponding multiplicities as . If