A note on reducing the computation time for minimum distance and equivalence check of binary linear codes

In this paper we show the usability of a Gray code with constant-weight words for computing linear combinations of codewords. This can lead to a substantial reduction of the computation time for finding the minimum distance of a code. We also consider the usefulness of combinatorial 2-(v,k,1) designs when memory limitations restrict the number of objects (linear codes in particular) that can be tested for equivalence.


1 Introduction

Binary linear codes, and self-dual codes in particular, are extensively studied for their many connections to communications, cryptography, and combinatorial designs, among other areas. When classifying self-dual codes one should be aware that the number of codes grows exponentially with the code length.

The classification of binary self-dual codes began in 1972 with [11], wherein all codes of length up to 20 are classified. Later Conway, Pless and Sloane classified all codes of length up to 32 [7]. Length 32 was enumerated again by Bilous and Van Rees [2], length 34 by Bilous [1], and length 36 by Harada and Munemasa [9]. The latest developments in this area are for length 38 in [5] and for length 40, due to Bouyukliev, Dzhumalieva-Stoeva and Monev, in [4].

As the length of the code gets bigger, the number of codewords rises exponentially, so one needs efficient algorithms for computing the minimum distance of a linear code, as well as efficient ways to check codes for equivalence when memory is limited.

This paper is organized as follows: In Section 2 we give an introduction to linear codes, self-dual codes, combinatorial designs and Gray codes. Next, in Section 3, we discuss how a reduction in the computation time for the minimum distance of a linear code can be achieved with a constant-weight Gray code. In Section 4 we explain a method for reducing the computation time for code equivalence by the use of combinatorial 2-designs. We conclude in Section 5 with a few final notes.

2 Definitions and preliminaries

Let F_q be the finite field with q elements, where q is a prime power. A linear code C of length n and dimension k is a k-dimensional subspace of F_q^n. The elements of C are called codewords, and the (Hamming) weight of a codeword v is the number of nonzero coordinates of v. We use wt(v) to denote the weight of the codeword v. The minimum weight d of C is the minimum nonzero weight among all codewords in C, and the code is then called an [n, k, d] code. A k x n matrix whose rows form a basis of C is called a generator matrix of this code.

Let ⟨u, v⟩ = u_1 v_1 + ... + u_n v_n be an inner product in F_q^n. The dual code of an [n, k] code C is C^⊥ = { u ∈ F_q^n : ⟨u, v⟩ = 0 for all v ∈ C }, and C^⊥ is a linear [n, n-k] code. In the binary case the inner product is the standard one, computed modulo 2. If C ⊆ C^⊥, C is termed self-orthogonal, and if C = C^⊥, C is self-dual. We say that two binary linear codes C and C' are equivalent if there is a permutation of coordinates which sends C to C'. Code equivalence so defined is an equivalence relation, i.e. a binary relation that is reflexive, symmetric and transitive. Denote by EQ(A, B) a procedure that checks for equivalence all pairs of elements of two sets A and B of linear codes. For more information on codes we refer the reader to [10].

When working with linear codes, an algorithm often needs to pass through all (or part) of the binary vectors of a given length. One way to make the generation efficient is to ensure that successive elements differ in a small, pre-specified way. One of the earliest examples of such a process is Gray code generation. Introduced in a pulse code communication system in 1953

[8], Gray codes now have applications in diverse areas: analogue-to-digital conversion, coding theory, switching networks, and more. For the past 70 years Gray codes have been extensively studied and currently there are many different types of Gray code.

A binary Gray code of order n is a list of all 2^n binary vectors of length n such that exactly one bit changes from one vector to the next.

A 2-(v, k, λ) design D is a set V of v points together with a collection of k-subsets of V (named blocks) such that every 2-subset of V is contained in exactly λ blocks. The block intersection numbers of D are the cardinalities of the intersections of any two distinct blocks.

3 Reducing the computation time for the minimum distance of a linear code with a constant-weight Gray code

Assume we have a linear binary code C and we need to find its minimum distance d. Denote by G the generator matrix of the code, with rows g_1, ..., g_k. The obvious and direct approach is to compute all codewords of C and find their weights. This means that, for every t = 1, ..., k, all linear combinations of t of the rows of G must be computed using Algorithm 1.

for (i1 = 1; i1 <= k-t+1; i1++) {
  for (i2 = i1+1; i2 <= k-t+2; i2++) {
    for (i3 = i2+1; i3 <= k-t+3; i3++) {
      ...
        for (it = itm1+1; it <= k; it++) { body }
      ...
    }
  }
}
Algorithm 1 The direct approach

Then for each t we need to run t nested cycles and compute C(k, t) combinations, i.e. essentially 2^k - 1 linear combinations over all t. Furthermore, in the body of this algorithm we need to find the codeword which is a linear combination of those t rows of the generator matrix that are chosen for the current combination, i.e. g_{i1} ⊕ g_{i2} ⊕ ... ⊕ g_{it}, which takes t - 1 "exclusive or" (xor) operations.

Our approach is to use a Gray code for generating the combinations in such a way that each successive combination is obtained from the previous one with only two xor operations. Two xor operations are the absolute minimum since, if we have to switch from one combination of t elements to another, a single xor would add or remove a position, producing a (t+1)- or a (t-1)-combination. In [12] it was proved that the set of n-vectors of weight t, when chained according to the ordering of the Gray code, has a Hamming distance of exactly two between every pair of adjacent code vectors. Also in [12] an algorithm for generating the constant-weight code vectors of a Gray code was given. Later, in [3], a more efficient recursive algorithm was introduced (Algorithm 2).

Algorithm 2 (Constant-weight Gray code [3]) generates all weight-t vectors of length n recursively, so that consecutive vectors differ in exactly two positions.

What we want to do is to find in the Gray code those n-tuples that have the same weight t. For example, for n = 4 the order-4 Gray code is 0000, 0001, 0011, 0010, 0110, 0111, 0101, 0100, 1100, 1101, 1111, 1110, 1010, 1011, 1001, 1000; for t = 2 it contains the weight-2 words 0011, 0110, 0101, 1100, 1010, 1001 and, similarly, for t = 3 it contains the weight-3 words 0111, 1101, 1110, 1011. Note that for t = 2 Algorithm 2 starts with the word 0011 and finishes with 1001.

Example 1: If we need to find all triples in a 6-element set, there are C(6,3) = 20 of them. We start with 000111, and the Gray code gives, step by step, the pair of positions to change, so each subsequent triple is obtained from the previous one with just two xor operations.

Usually, when we need to compute the minimum weight of a binary code, we start by initializing the first t-tuple; then we need the pair of positions that should be changed to obtain the next t-tuple, and so on. Since for a given j it is easy to find the j-th weight-t vector and begin with the linear combination generated by it, the algorithm can be parallelized to accommodate its use on multiple CPU cores.

4 Reducing computation time for code equivalence with combinatorial 2-designs

What can be done when there are more linear codes than the equivalence algorithm can accommodate in the available memory? We consider the case when all codes have the same weight enumerator and also the same order of their automorphism group. This means that all other options for reducing the number of codes under consideration are exhausted.

The question then is: How can we efficiently ensure that the algorithm checks every pair of codes? If we have more codes than the algorithm can check at once, we can split them into sets of codes and then check all pairs of sets for equivalence. This is not very efficient, since the number of pairs of sets grows quadratically. A more efficient way is to use a combinatorial 2-(v, k, 1) design, which ensures that every pair of points (sets of codes in our case) appears in exactly one block and hence is checked for equivalence only once. Such designs exist, for example, when v = q^2 + q + 1 and k = q + 1 and we have a projective plane: V is the point set of the plane and the blocks are its lines [13].

For example, consider the case of 7 sets of binary self-dual codes. If we use the standard approach, we should do C(7,2) = 21 tests for all pairs of sets. Now, consider using the combinatorial design approach, viz. the Fano plane (see [6]) illustrated in Fig. 1. It is well known that the Fano plane is a combinatorial 2-(7,3,1) design [6]. This means that every pair of sets appears in exactly one of the 7 blocks (the blocks of the Fano plane are the 6 lines and the circle), so if a code is present in different sets it is reduced to only one copy.

Figure 1: Fano plane

Using an ordering of the seven sets, we can process the blocks of the Fano plane one after another for equivalence testing: within each block, a set is purged of the codes that are equivalent to codes from the preceding sets, then the next block is processed in the same way, and so on. As a result, the reduced set of pairwise inequivalent codes is the union of the purged sets.

5 Conclusions

In the present research we have considered the usability of a Gray code with constant-weight words for computing linear combinations of codewords. We have shown that, in this way, a substantial reduction of the computation time for finding the minimum distance of a code can be achieved.

We have also considered the usefulness of combinatorial 2-(v, k, 1) designs when memory limitations restrict the number of objects (linear codes in particular) that can be tested for equivalence. In our example we have shown that, using the Fano plane, one can achieve a complete classification with as little as half of the computation time needed otherwise. It remains to find efficient designs for other numbers of sets to be checked for equivalence.

Acknowledgement

The authors express their gratitude to prof. Borislav Panayotov for the invitation to publish in this journal. This work was supported by European Regional Development Fund and the Operational Program “Science and Education for Smart Growth” under contract UNITe No BG05M2OP 001-1.001-0004-C01 (2018-2023).

References

  • [1] R.T. Bilous (2006) Enumeration of the binary self-dual codes of length 34, Journal of Combinatorial Mathematics and Combinatorial Computing, 59, 173–211.
  • [2] R.T. Bilous, G.H.J. Van Rees (2002) An enumeration of binary self-dual codes of length 32, Designs, Codes and Cryptography, 26, 61–86.
  • [3] J.R. Bitner, G. Ehrlich, E.M. Reingold (1976) Efficient generation of the binary reflected Gray code and its applications, Communications of the ACM, 19(9), 517–521.
  • [4] I. Bouyukliev, M. Dzhumalieva-Stoeva, V. Monev (2015) Classification of binary self-dual codes of length 40, IEEE Transactions on Information Theory, 61(8), 4253–4258.
  • [5] S. Bouyuklieva, I. Bouyukliev (2012) An algorithm for classification of binary self-dual codes, IEEE Transactions on Information Theory, 58(6), 3933–3940.
  • [6] C.J. Colbourn, J.H. Dinitz (eds.), Handbook of Combinatorial Designs, 2nd ed., CRC Press, ISBN 978-1-58488-506-1.
  • [7] J.H. Conway, V. Pless, N.J.A. Sloane (1992) The binary self-dual codes of length up to 32: A revised enumeration, Journal of Combinatorial Theory, Series A, 60(2), 183–195.
  • [8] F. Gray (1953) Pulse code communication, U.S. Patent 2,632,058, March 17, 1953.
  • [9] M. Harada, A. Munemasa (2012) Classification of self-dual codes of length 36, Advances in Mathematics of Communications, 6(2), 229–235.
  • [10] W.C. Huffman, V.S. Pless (2003) Fundamentals of Error-Correcting Codes, Cambridge University Press, ISBN 978-0-521-13170-4.
  • [11] V. Pless (1972) A classification of self-orthogonal codes over GF(2), Discrete Mathematics, 3(1-3), 209–246.
  • [12] D.T. Tang, C.N. Liu (1973) Distance-2 cyclic chaining of constant-weight codes, IEEE Transactions on Computers, C-22(2), 176–180.
  • [13] V. Tonchev (2017) On resolvable Steiner 2-designs and maximal arcs in projective planes, Designs, Codes and Cryptography, 84(1–2), 165–172.