1 Introduction
In 1969 Strassen showed that the product of two n×n matrices can be computed using O(n^2.81) arithmetic operations [15]. This work opened a new field of research, and over the years better upper bounds for the exponent of matrix multiplication were published. In 1990 Coppersmith and Winograd obtained an upper bound of 2.376 for the exponent [2]. For a long time this was the best result. Since 2010 further improvements have been obtained in a series of papers [7, 8, 14, 17, 18]. The best result so far was published in 2014 by Le Gall, who obtained an upper bound of 2.3728639 for the exponent [8].
In this paper we first study the product of an n×3 matrix and a 3×3 matrix over a commutative ring and show that we can compute the product using 6n + 3 multiplications. The basic idea is to improve the computation of the product of a vector and a matrix over a commutative ring in the sense that we try to obtain as many multiplications as possible that contain only entries of the matrix, without using more multiplications overall than the trivial computation. The multiplications that contain only entries of the matrix need to be calculated only once and can therefore be reused across the rows in the matrix multiplication. In the special case n = 3 we obtain an algorithm using 21 multiplications, which improves the best result so far, an algorithm of Makarov using 22 multiplications [10]. Our next step is to generalize this result to the computation of the product of an n×k matrix and a k×m matrix over a commutative ring for odd k. We show that in both of the parity cases we distinguish, the product can be computed using fewer multiplications than Waksman's algorithm [16] requires for odd k.
None of the algorithms presented in this paper makes use of additional multiplications by constants.
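The reuse idea described above goes back to Winograd's inner-product identity, which also underlies Waksman's algorithm [16]. As an illustration of the principle only (this is not the paper's Algorithm 1), the following sketch evaluates an inner product of even length so that one correction sum depends only on the first vector and the other only on the second; in a matrix product these sums are computed once per row of A and once per column of B and then reused:

```python
# Winograd's identity for inner products of even length (0-indexed):
#   x.y = sum_l (x_{2l} + y_{2l+1})(x_{2l+1} + y_{2l})
#         - sum_l x_{2l} x_{2l+1} - sum_l y_{2l} y_{2l+1}
# The second sum depends only on x and the third only on y, so these
# correction products can be shared between many inner products.

def winograd_dot(x, y, counter):
    """Inner product of two even-length vectors; counter[0] counts multiplications."""
    assert len(x) == len(y) and len(x) % 2 == 0
    cross = x_corr = y_corr = 0
    for l in range(0, len(x), 2):
        cross += (x[l] + y[l + 1]) * (x[l + 1] + y[l]); counter[0] += 1
        x_corr += x[l] * x[l + 1]; counter[0] += 1
        y_corr += y[l] * y[l + 1]; counter[0] += 1
    return cross - x_corr - y_corr

counter = [0]
print(winograd_dot([1, 2, 3, 4], [5, 6, 7, 8], counter))  # 70, using 6 multiplications
```

Note that the identity uses commutativity of the ring (the cross products mix entries of x and y), which is exactly why such schemes cannot be applied recursively to matrix blocks.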
1.1 Related Work
In this section we present some related work. We start with results about the multiplication of two square matrices. Note that a matrix multiplication algorithm can only be applied recursively if commutativity is not used. Since Strassen showed in 1969 that the product of two 2×2 matrices can be computed using 7 multiplications [15], and since it has been shown that 7 is the optimal number of multiplications for 2×2 matrices [5, 19], it is interesting to study k×k matrices for k ≥ 3 in order to obtain an even faster algorithm for matrix multiplication. For 3×3 matrices 21 multiplications would be needed to obtain an even faster algorithm than Strassen's, since log_3(21) < log_2(7). In 1976 Laderman obtained a noncommutative algorithm using only 23 multiplications [9]. It is not known if there exists a noncommutative algorithm that uses 22 or fewer multiplications. For 5×5 matrices the best noncommutative result so far is 99 multiplications by Sedoglavic [12], which is an improvement on Makarov's algorithm for 5×5 matrices using 100 multiplications [11].
Hopcroft and Musinski showed in [6] that the number of multiplications required to compute the product of an m×n matrix and an n×p matrix is the same as the number required to compute the product of an n×p matrix and a p×m matrix, of a p×m matrix and an m×n matrix, etc. This means that if one finds an algorithm for the product of an m×n matrix and an n×p matrix using t multiplications, there exists a matrix product algorithm for mnp×mnp matrices using t^3 multiplications overall. This algorithm for square matrices then has an exponent of 3 log_{mnp}(t).
We present some examples of nonsquare matrix multiplication algorithms. In [4] Hopcroft and Kerr showed that the product of an n×2 matrix and a 2×p matrix can be computed using ⌈(3np + max(n, p))/2⌉ multiplications without using commutativity. In the case n = p = 3 this gives an algorithm using 15 multiplications. Combined with the results of [6] this gives an algorithm for 18×18 matrices using 3375 multiplications and an exponent of approximately 2.811. Smirnov obtained an algorithm for the product of a 3×3 matrix and a 3×6 matrix using 40 multiplications [13]. By [6] this gives an algorithm for 54×54 matrices using 64000 multiplications and an exponent of approximately 2.774.
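Exponent computations of this kind can be checked mechanically. The sketch below assumes only the duality of Hopcroft and Musinski [6] together with the well-known multiplication counts of Smirnov [13] (3×3 by 3×6 in 40 multiplications) and Strassen [15] (2×2 in 7 multiplications):

```python
import math

def exponent(m, n, p, t):
    """Exponent obtained from an m x n by n x p algorithm using t multiplications,
    via the duality of Hopcroft and Musinski [6]: square mnp x mnp matrices can be
    multiplied using t**3 multiplications, giving an exponent of 3 * log_{mnp}(t)."""
    return 3 * math.log(t) / math.log(m * n * p)

print(round(exponent(3, 3, 6, 40), 3))  # Smirnov [13]: approximately 2.774
print(round(exponent(2, 2, 2, 7), 3))   # Strassen [15]: log_2(7), approximately 2.807
```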
Cariow et al. developed a high-speed parallel matrix multiplier structure based on the commutative 3×3 matrix algorithm using 22 multiplications obtained by Makarov [1, 10]. We suppose that the structure could be improved by using our commutative 3×3 matrix algorithm using 21 multiplications.
In [3] Drevet et al. optimized the number of required multiplications for products of small matrices. They considered noncommutative and commutative algorithms. Combined with our results for commutative rings, we suppose that some of their results could be improved.
2 Matrix Product over a Commutative Ring
Let R denote a commutative ring throughout this section.
2.1 Product of n×3 and 3×3 Matrices
Consider the vector-matrix product c = aB of a 1×3 vector a = (a_1, a_2, a_3) and a 3×3 matrix B = (b_{ij}) over a commutative ring:

(1) c = (c_1, c_2, c_3) = (a_1, a_2, a_3) B.
In the usual way the vector-matrix product of a and B would be computed as c_j = a_1 b_{1j} + a_2 b_{2j} + a_3 b_{3j} for j = 1, 2, 3, using nine multiplications.
But it can also be computed by first computing the products of Algorithm 1 below, three of which, m_1, m_2 and m_3, contain only entries of the matrix B.
Algorithm 1.
Theorem 1.
Let R be a commutative ring, let n be a natural number, let A be an n×3 matrix over R and let B be a 3×3 matrix over R. Then the product AB can be computed using 6n + 3 multiplications.
Proof.
Consider Algorithm 1. The products m_1, m_2 and m_3 contain only entries of the matrix B. One can observe that for every further row i ∈ {2, …, n} of A the multiplications m_1, m_2 and m_3 can be reused for the product AB, and therefore 3(n − 1) multiplications are saved. ∎
We give an example. In the case n = 3 we obtain an algorithm with 21 multiplications for the 3×3 matrix-matrix product. This algorithm needs one multiplication less than Makarov's, which uses 22 [10].
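Since the explicit product list of Algorithm 1 is not reproduced here, the following sketch shows a slightly weaker classical variant of the same reuse idea, using 7n + 3 multiplications for an n×3 by 3×3 product: three of the products contain only entries of B and are computed once, and all remaining products are per row. (Reaching the 6n + 3 count of Theorem 1 requires the paper's actual product list.)

```python
def mul_n3_33(A, B, counter):
    """Multiply an n x 3 matrix A by a 3 x 3 matrix B over a commutative ring
    using 7n + 3 multiplications: the products B[0][j] * B[1][j] contain only
    entries of B and are reused for every row of A."""
    bb = [B[0][j] * B[1][j] for j in range(3)]; counter[0] += 3  # B-only products
    C = []
    for a1, a2, a3 in A:
        aa = a1 * a2; counter[0] += 1  # one row-only correction product
        row = []
        for j in range(3):
            cross = (a1 + B[1][j]) * (a2 + B[0][j]); counter[0] += 1
            last = a3 * B[2][j]; counter[0] += 1
            # cross - aa - bb[j] = a1*B[0][j] + a2*B[1][j], by commutativity
            row.append(cross - aa - bb[j] + last)
        C.append(row)
    return C

counter = [0]
C = mul_n3_33([[1, 2, 3], [4, 5, 6]], [[1, 0, 2], [3, 1, 0], [0, 4, 1]], counter)
print(C)           # [[7, 14, 5], [19, 29, 14]]
print(counter[0])  # 7*2 + 3 = 17
```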
Corollary 1.
Let R be a commutative ring and let A and B be 3×3 matrices over R. Then the product AB can be computed using 21 multiplications.
2.2 General Matrix Product
Algorithm 1 from Section 2.1 is the basic idea of a general algorithm for the matrix-matrix product of n×k and k×m matrices over a commutative ring for odd k. This general algorithm makes use of Waksman's algorithm [16] on the even dimension k − 1. The algorithm we present below is split into two cases according to parity. This leads us to the following:
Theorem 2.
Let R be a commutative ring, let k be odd, let n and m be natural numbers, and let A be an n×k matrix and B a k×m matrix over R. Then in each of the two parity cases treated below the product AB can be computed using strictly fewer multiplications than Waksman's algorithm [16] requires for odd k.
Proof.
Let A and B be matrices as in the theorem. Now split A and B into submatrices in the following way:

A = (A' a), with A' the n×(k − 1) matrix consisting of the first k − 1 columns of A and a the last column of A,

B split likewise into B' and b, with B' the (k − 1)×m matrix consisting of the first k − 1 rows of B and b the last row of B.

Then AB = A'B' + ab. Since k − 1 is even, the product A'B' can be computed with the Waksman algorithm [16] mentioned before using (k − 1)(nm + n + m)/2 multiplications. Let a_i denote the entries of a, let b_j denote the entries of b and let c_{ij} denote the entries of C = ab. The matrix C can be computed as follows.
The two cases differ only in how the entries c_{ij} of C are grouped into products of sums of entries of a, b and the already computed quantities. In either case a direct count of the required products shows that computing C, and with it AB = A'B' + C, takes strictly fewer multiplications overall than applying Waksman's algorithm [16] to the odd dimension k directly. ∎
In both cases fewer multiplications are required to compute AB than the Waksman algorithm [16] requires for odd k.
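The decomposition used in the proof can be sketched in code. Since the explicit products of the two cases are not reproduced here, this baseline handles the rank-one part ab naively with nm extra multiplications, for (k − 1)(nm + n + m)/2 + nm multiplications in total; the theorem's two cases improve on exactly this last part.

```python
def mul_odd_k(A, B, counter):
    """Multiply an n x k matrix A by a k x m matrix B (k odd) over a commutative
    ring via the split AB = A'B' + ab, where A' holds the first k-1 columns of A,
    a is the last column of A, B' holds the first k-1 rows of B and b is the last
    row of B. A'B' uses the Winograd/Waksman pairing on the even dimension k-1;
    the rank-one update ab is done naively here (nm extra multiplications)."""
    n, k, m = len(A), len(A[0]), len(B[0])
    assert k % 2 == 1
    # correction sums that involve only rows of A' ...
    r = [0] * n
    for i in range(n):
        for l in range(0, k - 1, 2):
            r[i] += A[i][l] * A[i][l + 1]; counter[0] += 1
    # ... and only columns of B'
    s = [0] * m
    for j in range(m):
        for l in range(0, k - 1, 2):
            s[j] += B[l][j] * B[l + 1][j]; counter[0] += 1
    C = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0
            for l in range(0, k - 1, 2):
                acc += (A[i][l] + B[l + 1][j]) * (A[i][l + 1] + B[l][j]); counter[0] += 1
            acc += A[i][k - 1] * B[k - 1][j]; counter[0] += 1  # naive rank-one part
            C[i][j] = acc - r[i] - s[j]
    return C

counter = [0]
print(mul_odd_k([[1, 2, 3], [4, 5, 6]], [[7, 8], [9, 10], [11, 12]], counter))
# [[58, 64], [139, 154]]
print(counter[0])  # (k-1)(nm+n+m)/2 + nm = 2*(4+2+2)/2 + 4 = 12
```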
3 Acknowledgment
I am grateful to Michael Figelius and Markus Lohrey for helpful comments.
References
 [1] A. Cariow, W. Sysło, G. Cariowa, M. Gliszczyński. A rationalized structure of processing unit to multiply 3×3 matrices. Journal Pomiary Automatyka Kontrola, Volume R. 58, Number 7, (2012), 677–680
 [2] D. Coppersmith, S. Winograd. Matrix multiplication via arithmetic progressions. Journal of Symbolic Computation 9, 3 (1990), 251–280
 [3] C.-É. Drevet, M. N. Islam, É. Schost. Optimization techniques for small matrix multiplication. Theoretical Computer Science, Volume 412, Issue 22, (2011), 2219–2236
 [4] J. E. Hopcroft, L. R. Kerr. On minimizing the number of multiplications necessary for matrix multiplication. SIAM Journal on Applied Mathematics, Volume 20, Number 1, (1971), 30–35
 [5] J. E. Hopcroft, L. R. Kerr. Some techniques for proving certain simple programs optimal. Proc. Tenth Ann. Symposium on Switching and Automata Theory, 1969, 36–45
 [6] J. E. Hopcroft, J. Musinski. Duality applied to the complexity of matrix multiplications and other bilinear forms. STOC '73: Proceedings of the Fifth Annual ACM Symposium on Theory of Computing, (1973), 73–87, New York, NY, USA, ACM Press
 [7] A. M. Davie, A. J. Stothers. Improved bound for complexity of matrix multiplication. Proceedings of the Royal Society of Edinburgh 143A, 2013, 351–370
 [8] F. Le Gall. Powers of tensors and fast matrix multiplication. Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation (ISSAC 2014), (2014), 296–303
 [9] J. D. Laderman. A noncommutative algorithm for multiplying 3×3 matrices using 23 multiplications. Bulletin of the American Mathematical Society, Volume 82, Number 1, (1976), 126–128
 [10] O. M. Makarov. An algorithm for multiplication of 3×3 matrices. Zh. Vychisl. Mat. Mat. Fiz., 26:2 (1986), 293–294
 [11] O. M. Makarov. A noncommutative algorithm for multiplying 5×5 matrices using one hundred multiplications. USSR Computational Mathematics and Mathematical Physics, Volume 27, Issue 1, (1987), 205–207
 [12] A. Sedoglavic. A non-commutative algorithm for multiplying 5×5 matrices using 99 multiplications. https://www.researchgate.net/publication/318652755
 [13] A. V. Smirnov. The bilinear complexity and practical algorithms for matrix multiplication. Zh. Vychisl. Mat. Mat. Fiz., Volume 53, Number 12, (2013), 1970–1984
 [14] A. J. Stothers. On the Complexity of Matrix Multiplication. PhD thesis, University of Edinburgh, 2010
 [15] V. Strassen. Gaussian elimination is not optimal. Numerische Mathematik 13 (1969), 354–356
 [16] A. Waksman. On Winograd's algorithm for inner products. IEEE Transactions on Computers, C-19 (1970), 360–361
 [17] V. V. Williams. Multiplying matrices faster than Coppersmith-Winograd. In Proceedings of the 44th ACM Symposium on Theory of Computing, 887–898, 2012
 [18] V. V. Williams. Multiplying matrices faster than Coppersmith-Winograd. Version available at http://theory.stanford.edu/~virgi/matrixmultf.pdf, retrieved on August 03, 2018
 [19] S. Winograd. On multiplication of 2×2 matrices. Linear Algebra and its Applications, Volume 4, Issue 4, (1971), 381–388