Fast Commutative Matrix Algorithm

04/16/2019
by Andreas Rosowski

We show that the product of an n×3 matrix and a 3×3 matrix over a commutative ring can be computed using 6n+3 multiplications. For two 3×3 matrices this gives an algorithm using 21 multiplications, an improvement over Makarov's algorithm, which uses 22 multiplications [10]. We generalize our result for n×3 and 3×3 matrices and present an algorithm for computing the product of an l×n matrix and an n×m matrix over a commutative ring, for odd n, using n(lm+l+m-1)/2 multiplications if m is odd and (n(lm+l+m-1)+l-1)/2 multiplications if m is even. Waksman's algorithm for odd n needs (n-1)(lm+l+m-1)/2+lm multiplications [16]; thus in both cases our algorithm requires fewer multiplications.



1 Introduction

In 1969 Strassen showed that the product of two n×n matrices can be computed using O(n^2.81) arithmetic operations [15]. This work opened a new field of research, and over the years better upper bounds for the exponent of matrix multiplication were published. In 1990 Coppersmith and Winograd obtained an upper bound of 2.376 for the exponent [2]. For a long time this was the best result. Since 2010 further improvements were obtained in a series of papers [7, 8, 14, 17, 18]. The best result so far was published in 2014 by Le Gall, who obtained an upper bound of 2.3728639 for the exponent [8].
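For concreteness, the base case behind Strassen's bound is the standard seven-multiplication scheme for a 2×2 product; the sketch below shows it on plain numbers, while the recursive algorithm applies the same formulas to block matrices (the scheme uses no commutativity, so blocks are fine):

```python
# Strassen's seven-multiplication scheme for a 2x2 matrix product.
# The entries may be scalars or (recursively) submatrices.
def strassen_2x2(a, b):
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    # Seven multiplications instead of eight; additions are free in the
    # arithmetic-operation count up to lower-order terms.
    return ((m1 + m4 - m5 + m7, m3 + m5),
            (m2 + m4, m1 - m2 + m3 + m6))
```

Applying this recursively to n×n matrices gives the O(n^(log2 7)) = O(n^2.81) bound.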

In this paper we first study the product of an n×3 matrix and a 3×3 matrix over a commutative ring and show that we can compute the product using 6n+3 multiplications. The basic idea is to improve the computation of the product of a 1×3 vector and a 3×3 matrix over a commutative ring in the sense that we try to obtain as many multiplications as possible that contain only entries of the matrix, without using more multiplications overall. The multiplications which contain only entries of the matrix need to be calculated only once and can therefore be reused in the matrix multiplication. In the special case n = 3 we obtain an algorithm using 21 multiplications, which improves the best result so far, from Makarov, using 22 multiplications [10]. Our next step is to generalize this result to the computation of the product of an l×n matrix and an n×m matrix over a commutative ring for odd n. We show that the product can be computed using n(lm+l+m-1)/2 multiplications if m is odd and (n(lm+l+m-1)+l-1)/2 multiplications if m is even. This improves Waksman's algorithm, which requires (n-1)(lm+l+m-1)/2+lm multiplications for odd n [16].

None of the algorithms we present in this paper makes use of additional multiplications with constants.

1.1 Related Work

In this section we present some related work. We start with results about the multiplication of two square matrices. Note that a matrix multiplication algorithm can only be applied recursively if commutativity is not used. Since Strassen showed in 1969 that the product of two 2×2 matrices can be computed using 7 multiplications [15], and since it is known that 7 is the optimal number of multiplications for 2×2 matrices [5, 19], it is interesting to study n×n matrices for n ≥ 3 in order to obtain an even faster algorithm for matrix multiplication. For 3×3 matrices at most 21 multiplications would be needed to obtain an even faster algorithm than Strassen's, since log_3(21) < log_2(7). In 1976 Laderman obtained a non-commutative algorithm using only 23 multiplications [9]. It is not known if there exists a non-commutative algorithm that uses 22 or fewer multiplications. For 5×5 matrices the best non-commutative result so far is 99 multiplications by Sedoglavic [12], which is an improvement on Makarov's algorithm for 5×5 matrices using 100 multiplications [11].

Hopcroft and Musinski showed in [6] that the number of multiplications required to compute the product of an l×n matrix and an n×m matrix equals the number required to compute the product of an n×m matrix and an m×l matrix, and of an m×l matrix and an l×n matrix, etc. This means that if one has an algorithm for the product of an l×n matrix and an n×m matrix using k multiplications, there exists a matrix product algorithm for (l·n·m)×(l·n·m) matrices using k^3 multiplications overall. This algorithm for square matrices will then have an exponent of 3·log(k)/log(l·n·m).

We present some examples of non-square matrix multiplication algorithms. In [4] Hopcroft and Kerr showed that the product of a p×2 matrix and a 2×n matrix can be computed using ⌈(3pn + max(p, n))/2⌉ multiplications without using commutativity. In the case p = n = 4 this gives an algorithm using 26 multiplications. Combined with the results of [6] this gives an algorithm for 32×32 matrices using 26^3 = 17576 multiplications and an exponent of 3·log(26)/log(32) ≈ 2.82. Smirnov obtained an algorithm for the product of a 3×3 matrix and a 3×6 matrix using 40 multiplications [13]. By [6] this gives an algorithm for 54×54 matrices using 40^3 = 64000 multiplications and an exponent of 3·log(40)/log(54) ≈ 2.77.
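The exponents just mentioned follow from the duality result of [6] by a one-line computation; the following sketch (the function name `exponent` is ours) reproduces them:

```python
import math

def exponent(k, l, n, m):
    # An l x n by n x m algorithm with k multiplications yields, via the
    # duality of [6], a square (l*n*m) x (l*n*m) algorithm with k**3
    # multiplications, hence an exponent of 3*log(k)/log(l*n*m).
    return 3 * math.log(k) / math.log(l * n * m)

print(round(exponent(7, 2, 2, 2), 4))    # Strassen: equals log2(7)
print(round(exponent(26, 4, 2, 4), 4))   # Hopcroft-Kerr example
print(round(exponent(40, 3, 3, 6), 4))   # Smirnov example
```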

Cariow et al. developed a high-speed parallel matrix multiplier structure based on the commutative 3×3 matrix algorithm using 22 multiplications obtained by Makarov [1, 10]. We suppose that the structure could be improved by using our commutative 3×3 matrix algorithm, which uses 21 multiplications.

In [3] Drevet et al. optimized the number of multiplications required for products of small matrices. They considered non-commutative and commutative algorithms. Combined with our results for commutative rings, we suppose that some of their results could be improved.

2 Matrix Product over a Commutative Ring

Let R denote a commutative ring throughout this section.

2.1 Product of n×3 and 3×3 Matrices

Consider the vector-matrix product of a 1×3 vector a and a 3×3 matrix B over a commutative ring, where

a = (a_1  a_2  a_3),   B = (b_ij) with 1 ≤ i, j ≤ 3.   (1)

In the usual way the vector-matrix product of a and B would be computed as

c_j = a_1 b_1j + a_2 b_2j + a_3 b_3j,   j = 1, 2, 3,

using 9 multiplications. But it can also be computed by first computing products that contain only entries of the matrix B:

Algorithm 1.

Input: vector a and matrix B as in (1).

Output: the product aB = (c_1, c_2, c_3).

Theorem 1.

Let R be a commutative ring, let n ∈ ℕ, let A be an n×3 matrix over R and let B be a 3×3 matrix over R. Then the product AB can be computed using 6n+3 multiplications.

Proof.

Consider Algorithm 1. Three of its products contain only entries of the matrix B. One can observe that for all n rows of A these three multiplications can be reused for the product, and therefore 3(n-1) multiplications are saved compared to the 9n multiplications of the standard computation. ∎
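The reuse of matrix-only products can be illustrated on the simpler commutative identity a_i1·b_1j + a_i2·b_2j = (a_i1 + b_2j)(a_i2 + b_1j) − a_i1·a_i2 − b_1j·b_2j: the corrections b_1j·b_2j contain only entries of B and are computed once for all rows. A minimal sketch for an l×2 times 2×m product (this illustrative scheme uses lm+l+m multiplications; it is not the 6n+3 algorithm above, and the function name is ours):

```python
# Commutative trick: a1*b1 + a2*b2 = (a1 + b2)*(a2 + b1) - a1*a2 - b1*b2.
# The corrections b1*b2 involve only entries of B and are reused by every
# row, giving l*m + l + m multiplications instead of the naive 2*l*m.
def mul_lx2_2xm(A, B):
    l, m = len(A), len(B[0])
    col = [B[0][j] * B[1][j] for j in range(m)]   # matrix-only, computed once
    C = []
    for i in range(l):
        row_corr = A[i][0] * A[i][1]              # one multiplication per row
        C.append([(A[i][0] + B[1][j]) * (A[i][1] + B[0][j]) - row_corr - col[j]
                  for j in range(m)])
    return C
```

For l = m = 3 this already gives 15 instead of 18 multiplications; Algorithm 1 sharpens this kind of reuse for the n×3 by 3×3 case.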

We give an example. In the case n = 3 we obtain an algorithm with 21 multiplications for the 3×3 matrix-matrix product. This algorithm needs one multiplication less than Makarov's [10].

Corollary 1.

Let R be a commutative ring and let A and B be 3×3 matrices over R. Then the product AB can be computed using 21 multiplications.

2.2 General Matrix Product

Algorithm 1 from Section 2.1 is the basic idea of a general algorithm for the matrix-matrix product of l×n and n×m matrices over a commutative ring for odd n. This general algorithm makes use of Waksman's algorithm [16] for the part of the product with even inner dimension. The algorithm we present below is split into two cases: in Case 1 m is odd, and in Case 2 m is even. This leads us to the following:
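Waksman's algorithm refines the following commutative scheme of Winograd for even inner dimension, in which one multiplication serves a pair of inner indices and the correction terms u_i, v_j are each computed once; a minimal sketch (Waksman's refinement, which saves further corrections, is not shown, and the function name is ours):

```python
# Winograd's commutative inner-product scheme for even inner dimension n:
# each pair of inner indices shares one multiplication, and the row/column
# corrections u_i, v_j are computed once each.
def winograd_mul(A, B):
    l, n, m = len(A), len(B), len(B[0])
    assert n % 2 == 0
    u = [sum(A[i][2*k] * A[i][2*k+1] for k in range(n // 2)) for i in range(l)]
    v = [sum(B[2*k][j] * B[2*k+1][j] for k in range(n // 2)) for j in range(m)]
    return [[sum((A[i][2*k] + B[2*k+1][j]) * (A[i][2*k+1] + B[2*k][j])
                 for k in range(n // 2)) - u[i] - v[j]
             for j in range(m)] for i in range(l)]
```

This uses n·lm/2 multiplications involving entries of both matrices, plus n·l/2 and n·m/2 correction multiplications.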

Theorem 2.

Let R be a commutative ring, let n ∈ ℕ be odd, let l, m ∈ ℕ, and let A be an l×n matrix and B an n×m matrix over R. Then the following holds:

  • If m is odd the product AB can be computed using n(lm+l+m-1)/2 multiplications.

  • If m is even the product AB can be computed using (n(lm+l+m-1)+l-1)/2 multiplications.

Proof.

Let A and B be matrices as in the theorem. Now split A and B into submatrices in the following way:

A = (A′ a), with A′ an l×(n-1) matrix and a an l×1 vector,

B = (B′ ; b), with B′ an (n-1)×m matrix and b a 1×m vector.

Then AB = A′B′ + ab, where ab is the rank-one part of the product. With Waksman's algorithm [16] mentioned before, A′B′ can be computed using (n-1)(lm+l+m-1)/2 multiplications, since n-1 is even. Let a_i denote the entries of a, let b_j denote the entries of b and let c_ij denote the entries of ab. The matrix ab can be computed as follows.

Case 1: m is odd.

It can easily be seen that (lm+l+m-1)/2 multiplications are required to compute the remaining rank-one part of the product.

Thus, AB can be computed using (n-1)(lm+l+m-1)/2 + (lm+l+m-1)/2 = n(lm+l+m-1)/2 multiplications.

Case 2: m is even.

One can easily verify that in this case (lm+2l+m-2)/2 multiplications are required to compute the remaining rank-one part of the product.

Thus, AB can be computed using (n-1)(lm+l+m-1)/2 + (lm+2l+m-2)/2 = (n(lm+l+m-1)+l-1)/2 multiplications. ∎

In both cases fewer multiplications are required to compute AB than the (n-1)(lm+l+m-1)/2 + lm multiplications that Waksman's algorithm [16] requires for odd n.
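As a sanity check on these counts, the formulas of Theorem 2 and of Waksman's algorithm can be compared directly; a small sketch (function names are ours):

```python
def ours(l, n, m):
    # Multiplication count of Theorem 2 (n odd); both branches are exact
    # integer divisions by the parity argument in the text.
    assert n % 2 == 1
    if m % 2 == 1:
        return n * (l*m + l + m - 1) // 2
    return (n * (l*m + l + m - 1) + l - 1) // 2

def waksman(l, n, m):
    # Waksman's count for odd inner dimension n [16].
    assert n % 2 == 1
    return (n - 1) * (l*m + l + m - 1) // 2 + l*m

print(ours(3, 3, 3), waksman(3, 3, 3))   # 3x3 by 3x3: 21 vs 23 (Makarov: 22)
print(ours(3, 3, 4), waksman(3, 3, 4))   # 3x3 by 3x4: 28 vs 30
```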

3 Acknowledgment

I am grateful to Michael Figelius and Markus Lohrey for helpful comments.

References

  • [1] A. Cariow, W. Sysło, G. Cariowa, M. Gliszczyński. A rationalized structure of processing unit to multiply matrices. Journal Pomiary Automatyka Kontrola, Volume R. 58, Number 7, (2012), 677–680
  • [2] D. Coppersmith, S. Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation 9, 3 (1990), 251–280
  • [3] C.-É. Drevet, M. N. Islam, É. Schost. Optimization techniques for small matrix multiplication. Theoretical Computer Science, Volume 412, Issue 22, (2011), 2219–2236
  • [4] J. E. Hopcroft, L. R. Kerr. On minimizing the number of multiplications necessary for matrix multiplication. SIAM Journal on Applied Mathematics, Volume 20, Number 1, (1971), 30–35
  • [5] J. E. Hopcroft, L. R. Kerr. Some techniques for proving certain simple programs optimal. Proc. Tenth Ann. Symposium on Switching and Automata Theory, 1969, 36–45
  • [6] J. E. Hopcroft, J. Musinski. Duality applied to the complexity of matrix multiplications and other bilinear forms. STOC ’73: Proceedings of the fifth annual ACM symposium on Theory of computing, (1973), 73–87, New York, NY, USA, ACM Press
  • [7] A. M. Davie, A. J. Stothers. Improved bound for complexity of matrix multiplication. Proceedings of the Royal Society of Edinburgh 143A, 2013, 351–370
  • [8] F. Le Gall. Powers of tensors and fast matrix multiplication. Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), (2014), 296–303
  • [9] J. D. Laderman. A non-commutative algorithm for multiplying 3×3 matrices using 23 multiplications. Bulletin of the American Mathematical Society, Volume 82, Number 1, (1976), 126–128
  • [10] O. M. Makarov. An algorithm for multiplication of 3×3 matrices. Zh. Vychisl. Mat. Mat. Fiz., 26:2 (1986), 293–294
  • [11] O. M. Makarov. A non-commutative algorithm for multiplying 5×5 matrices using one hundred multiplications. USSR Computational Mathematics and Mathematical Physics, Volume 27, Issue 1, (1987), 205–207
  • [12] A. Sedoglavic. A non-commutative algorithm for multiplying 5×5 matrices using 99 multiplications. https://www.researchgate.net/publication/318652755
  • [13] A. V. Smirnov. The bilinear complexity and practical algorithms for matrix multiplication. Zh. Vychisl. Mat. Mat. Fiz., Volume 53, Number 12, (2013), 1970–1984
  • [14] A. J. Stothers. On the Complexity of Matrix Multiplication. PhD thesis, University of Edinburgh, 2010
  • [15] V. Strassen. Gaussian elimination is not optimal. Numerische Mathematik 13 (1969), 354–356
  • [16] A. Waksman. On Winograd’s algorithm for inner products. In IEEE Transactions on Computers, C-19(1970), 360–361.
  • [17] V. V. Williams. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the 44th ACM Symposium on Theory of Computing, 887–898, 2012
  • [18] V. V. Williams. Multiplying matrices faster than Coppersmith-Winograd. Version available at http://theory.stanford.edu/~virgi/matrixmult-f.pdf, retrieved on August 03, 2018
  • [19] S. Winograd. On multiplication of 2×2 matrices. Linear Algebra and its Applications, Volume 4, Issue 4, (1971), 381–388