 # An Explicit Construction of Gauss-Jordan Elimination Matrix

A constructive approach to obtaining the reduced row echelon form of a given matrix $A$ is presented. It is shown that after the $k$th step of the Gauss-Jordan procedure, each entry $a^k_{i,j}$ ($i \neq j$; $j > k$) in the new matrix $A^k$ can always be expressed as a ratio of two determinants whose entries are taken from the original matrix $A$. The new method also yields a generalization of Cramer's rule that is broader than the existing ones.


## 1 Introduction

Gauss-Jordan elimination is a variation of standard Gaussian elimination in which a matrix is brought to reduced row echelon form rather than merely to triangular form. In contrast to standard Gaussian elimination, entries both above and below the diagonal are annihilated in the process of Gauss-Jordan elimination. It is well known that Gauss-Jordan elimination is considerably less efficient than Gaussian elimination with back substitution when solving a system of linear equations. Despite its higher cost, Gauss-Jordan elimination can be preferred in some situations. For instance, it may be implemented on parallel computers when solving systems of linear equations [heath]. In addition, it is well suited for computing the matrix inverse.
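The elimination scheme just described can be sketched in a few lines. The routine below is a generic textbook sketch (not the paper's construction); it uses exact rational arithmetic via `fractions.Fraction` and, in line with the standing assumption made later in the paper, performs no pivoting, so every pivot it meets must be nonzero.

```python
from fractions import Fraction

def gauss_jordan(A):
    """Bring a matrix to reduced row echelon form (RREF).

    A plain textbook sketch in exact rational arithmetic: no pivoting,
    so it assumes every pivot encountered is nonzero (the paper's
    standing assumption of nonzero leading principal minors).
    """
    A = [[Fraction(x) for x in row] for row in A]
    n = len(A)
    for k in range(n):
        piv = A[k][k]
        A[k] = [x / piv for x in A[k]]      # normalize the pivot row
        for i in range(n):
            if i != k:                      # annihilate entries above
                f = A[i][k]                 # AND below the pivot
                A[i] = [u - f * v for u, v in zip(A[i], A[k])]
    return A
```

Applied to the augmented matrix $[A \mid I]$, it returns $[I \mid A^{-1}]$, which is the matrix-inverse application mentioned above.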

Applying Gauss-Jordan elimination to a given matrix $A$, we denote by $A^k$ the new matrix obtained after the $k$th step of Gauss-Jordan elimination. In the present paper, we show that each entry $a^k_{i,j}$ in the matrix $A^k$ can always be expressed as a ratio of two determinants whose entries are from the original matrix $A$. In 2002, Gong et al. [gae02] first established a generalized Cramer's rule, which can be applied to a problem in decentralized control systems. However, their method is restricted to a class of particular systems of linear equations. In [lh07], Hugo Leiva presented another generalization of Cramer's rule, but the given formula is somewhat complicated. Unlike the two methods mentioned above, our approach can also be used to directly construct one solution of a system of linear equations. From this point of view, our method gives a generalized Cramer's rule whose form is completely different from the existing results. We also hope that it is useful not only as a theoretical tool, but also as a practical calculation method in the linear algebra community.

## 2 Main results

###### Lemma 2.1.

[howard80] If $M$ is a square matrix, $U$ and $V$ are column vectors, $R$ and $S$ are row vectors, and $a$, $b$, $c$, $d$ are scalars, then

$$|M|\cdot\begin{vmatrix}M&U&V\\R&a&b\\S&c&d\end{vmatrix}=\begin{vmatrix}\begin{vmatrix}M&U\\R&a\end{vmatrix}&\begin{vmatrix}M&V\\R&b\end{vmatrix}\\[8pt]\begin{vmatrix}M&U\\S&c\end{vmatrix}&\begin{vmatrix}M&V\\S&d\end{vmatrix}\end{vmatrix}.$$
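Since the lemma is stated abstractly, a quick numerical sanity check may help. The script below is our own illustration, with arbitrarily chosen integer entries and a $2\times 2$ block $M$:

```python
from itertools import permutations

def det(M):
    """Exact determinant by the Leibniz formula (fine for tiny matrices)."""
    n, total = len(M), 0
    for p in permutations(range(n)):
        sign, prod = 1, 1
        for x in range(n):
            for y in range(x + 1, n):
                if p[x] > p[y]:
                    sign = -sign          # count inversions for the sign
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

def bordered(M, cols, rows):
    """Append border columns `cols` to M, then border rows `rows` below."""
    n = len(M)
    top = [M[r] + [col[r] for col in cols] for r in range(n)]
    return top + rows

# arbitrary data: block M, border columns U, V, border rows R, S, scalars
M = [[2, 1], [1, 3]]
U, V = [1, 4], [0, 2]
R, S = [5, 1], [2, 0]
a, b, c, d = 1, 2, 3, 4

big  = bordered(M, [U, V], [R + [a, b], S + [c, d]])
MURa = bordered(M, [U], [R + [a]])
MVRb = bordered(M, [V], [R + [b]])
MUSc = bordered(M, [U], [S + [c]])
MVSd = bordered(M, [V], [S + [d]])

lhs = det(M) * det(big)
rhs = det([[det(MURa), det(MVRb)],
           [det(MUSc), det(MVSd)]])
assert lhs == rhs                          # the identity of Lemma 2.1
```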

Before presenting the main result, we first give a recursive description of Bareiss's fraction-free Gaussian elimination [Lee95].

$$a^{(k)}_{i,j}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k}&a^0_{1,j}\\\vdots&&\vdots&\vdots\\a^0_{k,1}&\cdots&a^0_{k,k}&a^0_{k,j}\\a^0_{i,1}&\cdots&a^0_{i,k}&a^0_{i,j}\end{vmatrix},\qquad i>k,\ j>k,\tag{1}$$

with the initial values and recursion

$$a^{(-1)}_{0,0}=1,\qquad a^{(0)}_{i,j}=a_{i,j},$$

$$a^{(k)}_{i,j}=\frac{a^{(k-1)}_{k,k}\,a^{(k-1)}_{i,j}-a^{(k-1)}_{i,k}\,a^{(k-1)}_{k,j}}{a^{(k-2)}_{k-1,k-1}}.$$
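The recursion translates directly into code. The sketch below (our own illustration) performs the fraction-free elimination in place on an integer matrix; because each intermediate value is the determinant in (1), every division is exact and the computation stays in the integers.

```python
def bareiss(A):
    """Fraction-free Gaussian elimination (Bareiss).

    Works on a copy of A.  With integer entries every division below is
    exact, so the result stays integral.  After completion, M[k][k]
    equals the leading principal minor of order k+1, and M[n-1][n-1]
    is det(A).
    """
    M = [row[:] for row in A]
    n = len(M)
    prev = 1                  # plays the role of a^{(k-2)}_{k-1,k-1}
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
    return M
```

For example, `bareiss([[2, 1, 3], [4, 5, 6], [7, 8, 10]])` leaves the leading principal minors $2, 6, -3$ on the diagonal, the last of which is the determinant.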

In what follows, in order to simplify the discussion, we also assume that the leading principal minors of the matrix $A$ are all nonzero.

###### Theorem 2.2.

Let $A$ be a matrix with entries from an arbitrary commutative ring, and let $a^{(k)}_{i,j}$ be defined as above. Bring $A$ to reduced row echelon form by Gauss-Jordan elimination. Then after the $k$th elimination step, each entry $a^k_{i,j}$ in $A^k$ can be expressed as a ratio of two determinants whose entries are from the original matrix $A$.

###### Proof.

Consider the following three cases:

Case 1: $i>k$, $j>k$. We shall show that

$$a^k_{i,j}=\frac{a^{(k)}_{i,j}}{a^{(k-1)}_{k,k}},\qquad i>k,\ j>k.\tag{2}$$

Using (1), it is easy to see that the conclusion is true. To see this, let us use induction on the elimination step $k$ as follows.

(i) When $k=1$, it is clear that the equality holds, since $a^1_{i,j}=a_{i,j}-\dfrac{a_{i,1}}{a_{1,1}}a_{1,j}=\dfrac{a^{(1)}_{i,j}}{a^{(0)}_{1,1}}$.

(ii) Now assume that the equality is true for $k-1$. Then, when the elimination step is $k$, we have

$$a^k_{i,j}=a^{k-1}_{i,j}-\frac{a^{k-1}_{i,k}}{a^{k-1}_{k,k}}\,a^{k-1}_{k,j}=\frac{a^{(k-1)}_{k,k}a^{(k-1)}_{i,j}-a^{(k-1)}_{i,k}a^{(k-1)}_{k,j}}{a^{(k-2)}_{k-1,k-1}\,a^{(k-1)}_{k,k}}=\frac{a^{(k)}_{i,j}}{a^{(k-1)}_{k,k}},$$

where the last step uses the recursion following (1). This proves the equality (2).
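Formula (2) can also be checked numerically. In the sketch below (our own illustration; the test matrix is arbitrary, with nonzero leading principal minors $2, 6, -3$), `gj_step` performs one exact Gauss-Jordan step and `a_det` evaluates the determinant in (1):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by the Leibniz formula (fine for tiny matrices)."""
    n, total = len(M), 0
    for p in permutations(range(n)):
        sign, prod = 1, 1
        for x in range(n):
            for y in range(x + 1, n):
                if p[x] > p[y]:
                    sign = -sign
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

def a_det(A, k, i, j):
    """a^{(k)}_{i,j} as in (1): rows 1..k,i and columns 1..k,j (1-based)."""
    rows = list(range(k)) + [i - 1]
    cols = list(range(k)) + [j - 1]
    return det([[A[r][c] for c in cols] for r in rows])

def gj_step(M, k):
    """One Gauss-Jordan step with pivot (k, k) (1-based), exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    p = M[k - 1][k - 1]
    M[k - 1] = [x / p for x in M[k - 1]]
    for i in range(len(M)):
        if i != k - 1:
            f = M[i][k - 1]
            M[i] = [u - f * v for u, v in zip(M[i], M[k - 1])]
    return M

A = [[2, 1, 3, 1], [4, 5, 6, 2], [7, 8, 10, 3], [1, 2, 1, 5]]
M = A
for step in (1, 2):
    M = gj_step(M, step)

# formula (2): after step k = 2, for all i, j > 2
for i in (3, 4):
    for j in (3, 4):
        assert M[i - 1][j - 1] == Fraction(a_det(A, 2, i, j), a_det(A, 1, 2, 2))
```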

Case 2: $i=k$, $j>k$. We claim that the formula below is true:

$$a^k_{i,j}=\frac{a^{(k-1)}_{i,j}}{a^{(k-1)}_{k,k}}.\tag{3}$$

This is easy to prove: according to Gauss-Jordan elimination we want $a^k_{k,k}=1$, so the $k$th row is divided by its pivot, and therefore $a^k_{k,j}=a^{k-1}_{k,j}/a^{k-1}_{k,k}=a^{(k-1)}_{k,j}/a^{(k-1)}_{k,k}$, where the last step uses (2) at step $k-1$.

Case 3: $i<k$, $j>k$. First, let us construct the following determinant:

$$a^{(k)}_{i,j}=-\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,i-1}&a^0_{1,i+1}&\cdots&a^0_{1,k}&a^0_{1,j}\\a^0_{2,1}&\cdots&a^0_{2,i-1}&a^0_{2,i+1}&\cdots&a^0_{2,k}&a^0_{2,j}\\\vdots&&\vdots&\vdots&&\vdots&\vdots\\a^0_{k,1}&\cdots&a^0_{k,i-1}&a^0_{k,i+1}&\cdots&a^0_{k,k}&a^0_{k,j}\end{vmatrix}_{k\times k},\qquad i<k.\tag{4}$$

Next, we claim that the following two recursion formulae hold.

Case 3-1. When $i\le k-2$, we have

$$a^{(k)}_{i,j}=\frac{-\left(a^{(k-1)}_{k,k}a^{(k-1)}_{i,j}-a^{(k-1)}_{i,k}a^{(k-1)}_{k,j}\right)}{a^{(k-2)}_{k-1,k-1}},\qquad i\le k-2,\ j>k.\tag{5}$$

Case 3-2. When $i=k-1$, it follows that

$$a^{(k)}_{i,j}=\frac{a^{(k-2)}_{k,k}a^{(k-2)}_{i,j}-a^{(k-2)}_{i,k}a^{(k-2)}_{k,j}}{a^{(k-3)}_{k-2,k-2}},\qquad i=k-1,\ j>k.\tag{6}$$
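Both recursion formulae can be sanity-checked numerically before proving them. In the sketch below (our own illustration; the matrix is arbitrary, with nonzero leading principal minors), `a1` evaluates definition (1) and `a4` evaluates definition (4):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by the Leibniz formula (fine for tiny matrices)."""
    n, total = len(M), 0
    for p in permutations(range(n)):
        sign, prod = 1, 1
        for x in range(n):
            for y in range(x + 1, n):
                if p[x] > p[y]:
                    sign = -sign
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

def a1(A, k, i, j):
    """a^{(k)}_{i,j} as in (1), valid for i, j > k (1-based indices)."""
    rows = list(range(k)) + [i - 1]
    cols = list(range(k)) + [j - 1]
    return det([[A[r][c] for c in cols] for r in rows])

def a4(A, k, i, j):
    """a^{(k)}_{i,j} as in (4), valid for i < k, j > k (1-based indices)."""
    cols = [c for c in range(k) if c != i - 1] + [j - 1]
    return -det([[A[r][c] for c in cols] for r in range(k)])

A = [[1, 2, 0, 3], [2, 1, 1, 0], [0, 1, 2, 1], [3, 0, 1, 2]]

# (6) with k = 3, i = k-1 = 2, j = 4; here a^{(0)}_{1,1} = a_{1,1}
lhs6 = a4(A, 3, 2, 4)
rhs6 = Fraction(a1(A, 1, 3, 3) * a1(A, 1, 2, 4) - a1(A, 1, 2, 3) * a1(A, 1, 3, 4), A[0][0])
assert lhs6 == rhs6

# (5) with k = 3, i = 1 <= k-2, j = 4
lhs5 = a4(A, 3, 1, 4)
rhs5 = Fraction(-(a1(A, 2, 3, 3) * a4(A, 2, 1, 4) - a4(A, 2, 1, 3) * a1(A, 2, 3, 4)), a1(A, 1, 2, 2))
assert lhs5 == rhs5
```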

We first prove the equality (6). Since both indices of every element on the right-hand side of (6) are greater than $k-2$, formula (1) is still applicable. By (1) we get

$$a^{(k-2)}_{k,k}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k-2}&a^0_{1,k}\\\vdots&&\vdots&\vdots\\a^0_{k-2,1}&\cdots&a^0_{k-2,k-2}&a^0_{k-2,k}\\a^0_{k,1}&\cdots&a^0_{k,k-2}&a^0_{k,k}\end{vmatrix},\qquad a^{(k-2)}_{k-1,j}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k-2}&a^0_{1,j}\\\vdots&&\vdots&\vdots\\a^0_{k-2,1}&\cdots&a^0_{k-2,k-2}&a^0_{k-2,j}\\a^0_{k-1,1}&\cdots&a^0_{k-1,k-2}&a^0_{k-1,j}\end{vmatrix},$$

$$a^{(k-2)}_{k-1,k}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k-2}&a^0_{1,k}\\\vdots&&\vdots&\vdots\\a^0_{k-2,1}&\cdots&a^0_{k-2,k-2}&a^0_{k-2,k}\\a^0_{k-1,1}&\cdots&a^0_{k-1,k-2}&a^0_{k-1,k}\end{vmatrix},\qquad a^{(k-2)}_{k,j}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k-2}&a^0_{1,j}\\\vdots&&\vdots&\vdots\\a^0_{k-2,1}&\cdots&a^0_{k-2,k-2}&a^0_{k-2,j}\\a^0_{k,1}&\cdots&a^0_{k,k-2}&a^0_{k,j}\end{vmatrix},$$

each of size $(k-1)\times(k-1)$.

Partition the above determinants into blocks as follows:

$$M=\begin{pmatrix}a^0_{1,1}&\cdots&a^0_{1,k-2}\\\vdots&&\vdots\\a^0_{k-2,1}&\cdots&a^0_{k-2,k-2}\end{pmatrix},\qquad a=a^0_{k,k},\quad b=a^0_{k,j},\quad c=a^0_{k-1,k},\quad d=a^0_{k-1,j},$$

$$U=(a^0_{1,k},\cdots,a^0_{k-2,k})^T,\quad V=(a^0_{1,j},\cdots,a^0_{k-2,j})^T,\quad R=(a^0_{k,1},\cdots,a^0_{k,k-2}),\quad S=(a^0_{k-1,1},\cdots,a^0_{k-1,k-2}).$$

In terms of Lemma 2.1, since $|M|=a^{(k-3)}_{k-2,k-2}$, we have

$$a^{(k-3)}_{k-2,k-2}\begin{vmatrix}M&U&V\\R&a&b\\S&c&d\end{vmatrix}=\begin{vmatrix}\begin{vmatrix}M&U\\R&a\end{vmatrix}&\begin{vmatrix}M&V\\R&b\end{vmatrix}\\[8pt]\begin{vmatrix}M&U\\S&c\end{vmatrix}&\begin{vmatrix}M&V\\S&d\end{vmatrix}\end{vmatrix}=a^{(k-2)}_{k,k}a^{(k-2)}_{k-1,j}-a^{(k-2)}_{k-1,k}a^{(k-2)}_{k,j}.$$

Interchanging the last two rows of the bordered determinant on the left-hand side shows that it equals $a^{(k)}_{k-1,j}$; this last equality is guaranteed by (4). Dividing by $a^{(k-3)}_{k-2,k-2}$ proves (6).

A similar but somewhat more complicated argument establishes the proof of (5). According to (1) and (4), we have

$$a^{(k-2)}_{k-1,k-1}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k-1}\\\vdots&&\vdots\\a^0_{k-1,1}&\cdots&a^0_{k-1,k-1}\end{vmatrix},\qquad a^{(k-1)}_{k,k}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k-1}&a^0_{1,k}\\\vdots&&\vdots&\vdots\\a^0_{k-1,1}&\cdots&a^0_{k-1,k-1}&a^0_{k-1,k}\\a^0_{k,1}&\cdots&a^0_{k,k-1}&a^0_{k,k}\end{vmatrix},$$

$$a^{(k-1)}_{i,j}=-\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,i-1}&a^0_{1,i+1}&\cdots&a^0_{1,k-1}&a^0_{1,j}\\\vdots&&\vdots&\vdots&&\vdots&\vdots\\a^0_{k-1,1}&\cdots&a^0_{k-1,i-1}&a^0_{k-1,i+1}&\cdots&a^0_{k-1,k-1}&a^0_{k-1,j}\end{vmatrix}_{(k-1)\times(k-1)},$$

$$a^{(k-1)}_{i,k}=-\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,i-1}&a^0_{1,i+1}&\cdots&a^0_{1,k-1}&a^0_{1,k}\\\vdots&&\vdots&\vdots&&\vdots&\vdots\\a^0_{k-1,1}&\cdots&a^0_{k-1,i-1}&a^0_{k-1,i+1}&\cdots&a^0_{k-1,k-1}&a^0_{k-1,k}\end{vmatrix}_{(k-1)\times(k-1)},$$

$$a^{(k-1)}_{k,j}=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,k-1}&a^0_{1,j}\\\vdots&&\vdots&\vdots\\a^0_{k-1,1}&\cdots&a^0_{k-1,k-1}&a^0_{k-1,j}\\a^0_{k,1}&\cdots&a^0_{k,k-1}&a^0_{k,j}\end{vmatrix}_{k\times k}.$$

Afterwards, expanding $a^{(k-2)}_{k-1,k-1}$ along its $i$th column, it follows that

$$a^{(k-2)}_{k-1,k-1}=(-1)^{i+1}a^0_{1,i}M_1+\cdots+(-1)^{k-2+i}a^0_{k-2,i}M_{k-2}+(-1)^{k-1+i}a^0_{k-1,i}M_{k-1}=\sum_{s=1}^{k-1}(-1)^{s+i}a^0_{s,i}M_s.$$

Here, $M_s$ is the minor of $a^{(k-2)}_{k-1,k-1}$ obtained by deleting its $s$th row and $i$th column. Hence

$$a^{(k)}_{i,j}\,a^{(k-2)}_{k-1,k-1}=\sum_{s=1}^{k-1}(-1)^{s+i}a^0_{s,i}\,M_s\,a^{(k)}_{i,j}.\tag{7}$$

Let $N$ denote the $k\times k$ matrix appearing in (4), so that $|N|=-a^{(k)}_{i,j}$. Since the minor $M_s$ obtained by expanding along the $i$th column is exactly a minor of $N$, one can always apply elementary row operations to $N$ such that the top left corner of $N$ is exactly $\overline{M}_s$. Here, $\overline{M}_s$ consists of rows $1,\dots,s-1,s+1,\dots,k-1$ and columns $1,\dots,i-1,i+1,\dots,k-1$ of $A$, so that $|\overline{M}_s|=M_s$.

According to Lemma 2.1, it follows that

$$(7)=(-1)^{k+i}\sum_{s=1}^{k-1}\left(a^0_{s,i}\,(-1)^{k-s}\begin{vmatrix}a^{(k-1)}_{i,k}&a^{(k-1)}_{i,j}\\[6pt]\begin{vmatrix}\overline{M}_s&U_s\\S_s&c_s\end{vmatrix}&\begin{vmatrix}\overline{M}_s&V_s\\S_s&d_s\end{vmatrix}\end{vmatrix}\right).\tag{8}$$

Here, notice that

$$|\overline{M}_s|=\begin{vmatrix}a^0_{1,1}&\cdots&a^0_{1,i-1}&a^0_{1,i+1}&\cdots&a^0_{1,k-1}\\\vdots&&\vdots&\vdots&&\vdots\\a^0_{s-1,1}&\cdots&a^0_{s-1,i-1}&a^0_{s-1,i+1}&\cdots&a^0_{s-1,k-1}\\a^0_{s+1,1}&\cdots&a^0_{s+1,i-1}&a^0_{s+1,i+1}&\cdots&a^0_{s+1,k-1}\\\vdots&&\vdots&\vdots&&\vdots\\a^0_{k-1,1}&\cdots&a^0_{k-1,i-1}&a^0_{k-1,i+1}&\cdots&a^0_{k-1,k-1}\end{vmatrix}_{(k-2)\times(k-2)},$$

$$U_s=(a^0_{1,k},\cdots,a^0_{s-1,k},a^0_{s+1,k},\cdots,a^0_{k-1,k})^T,\qquad V_s=(a^0_{1,j},\cdots,a^0_{s-1,j},a^0_{s+1,j},\cdots,a^0_{k-1,j})^T,$$

$$R_s=(a^0_{s,1},\cdots,a^0_{s,i-1},a^0_{s,i+1},\cdots,a^0_{s,k-1}),\qquad S_s=(a^0_{k,1},\cdots,a^0_{k,i-1},a^0_{k,i+1},\cdots,a^0_{k,k-1}),$$

$$a_s=a^0_{s,k},\qquad b_s=a^0_{s,j},\qquad c_s=a^0_{k,k},\qquad d_s=a^0_{k,j}.$$

Clearly,

$$\begin{vmatrix}\overline{M}_s&U_s\\R_s&a_s\end{vmatrix}=(-1)^{k-(s+1)}\left(-a^{(k-1)}_{i,k}\right)=(-1)^{k-s}a^{(k-1)}_{i,k},$$

$$\begin{vmatrix}\overline{M}_s&V_s\\R_s&b_s\end{vmatrix}=(-1)^{k-(s+1)}\left(-a^{(k-1)}_{i,j}\right)=(-1)^{k-s}a^{(k-1)}_{i,j}.$$

Let

$$\begin{vmatrix}\overline{M}_s&U_s\\S_s&c_s\end{vmatrix}=Q_s,\qquad\begin{vmatrix}\overline{M}_s&V_s\\S_s&d_s\end{vmatrix}=T_s.$$

Hence,

$$(7)=\sum_{s=1}^{k-1}\left[(-1)^{i-s+1}a^0_{s,i}\left(a^{(k-1)}_{i,j}Q_s-a^{(k-1)}_{i,k}T_s\right)\right].$$

On the other hand, expanding the determinants $a^{(k-1)}_{k,k}$ and $a^{(k-1)}_{k,j}$ along their $i$th columns, we have

$$-\left(a^{(k-1)}_{k,k}a^{(k-1)}_{i,j}-a^{(k-1)}_{i,k}a^{(k-1)}_{k,j}\right)=-\sum_{s=1}^{k}\left(a^0_{s,i}(-1)^{i+s}\begin{vmatrix}a^{(k-1)}_{i,j}&a^{(k-1)}_{i,k}\\B_s&A_s\end{vmatrix}\right),\tag{9}$$

where $A_s$ and $B_s$ are the minors obtained by deleting the $s$th row and the $i$th column from the determinants $a^{(k-1)}_{k,k}$ and $a^{(k-1)}_{k,j}$, respectively.

It is important to notice that, by (4), when $s=k$ we have $A_k=-a^{(k-1)}_{i,k}$ and $B_k=-a^{(k-1)}_{i,j}$. Therefore, when $s=k$, we have

$$\begin{vmatrix}a^{(k-1)}_{i,j}&a^{(k-1)}_{i,k}\\B_k&A_k\end{vmatrix}\equiv 0,$$

so the sum in (9) in fact runs over $s=1,\dots,k-1$. Moreover, $Q_s=A_s$ and $T_s=B_s$ for $s\le k-1$, so comparing (7) with (9) term by term gives $a^{(k)}_{i,j}\,a^{(k-2)}_{k-1,k-1}=-\left(a^{(k-1)}_{k,k}a^{(k-1)}_{i,j}-a^{(k-1)}_{i,k}a^{(k-1)}_{k,j}\right)$, and the equality (5) holds clearly.

Now, we consider the third case: when $i<k$, $j>k$, the equality below holds:

$$a^k_{i,j}=\begin{cases}\dfrac{a^{(k)}_{i,j}}{a^{(k-1)}_{k,k}},&i=k-1,\ j>k,\\[10pt](-1)^{k-i+1}\,\dfrac{a^{(k)}_{i,j}}{a^{(k-1)}_{k,k}},&i\le k-2,\ j>k.\end{cases}\tag{10}$$

Let us use induction on $k$ as follows.

(i) When $k=2,3,4$, it is easy to verify that all the following equalities hold:

$$a^2_{1,j}=\frac{a^{(2)}_{1,j}}{a^{(1)}_{2,2}},\qquad a^3_{1,j}=-\frac{a^{(3)}_{1,j}}{a^{(2)}_{3,3}},\qquad a^3_{2,j}=\frac{a^{(3)}_{2,j}}{a^{(2)}_{3,3}},$$

$$a^4_{1,j}=\frac{a^{(4)}_{1,j}}{a^{(3)}_{4,4}},\qquad a^4_{2,j}=-\frac{a^{(4)}_{2,j}}{a^{(3)}_{4,4}},\qquad a^4_{3,j}=\frac{a^{(4)}_{3,j}}{a^{(3)}_{4,4}}.$$

(ii) Suppose that (10) holds when the elimination step is $k$. Then, when the elimination step is $k+1$, since $i<k+1$, there are three cases: $i\le k-2$, $i=k-1$, and $i=k$.

(ii-1) When $i\le k-2$, we have

$$a^{k+1}_{i,j}=\frac{a^k_{k+1,k+1}a^k_{i,j}-a^k_{i,k+1}a^k_{k+1,j}}{a^k_{k+1,k+1}}=(-1)^{k-i+1}\,\frac{a^{(k)}_{k+1,k+1}a^{(k)}_{i,j}-a^{(k)}_{i,k+1}a^{(k)}_{k+1,j}}{a^{(k-1)}_{k,k}\,a^{(k)}_{k+1,k+1}}=(-1)^{k-i}\,\frac{a^{(k+1)}_{i,j}}{a^{(k)}_{k+1,k+1}}.$$

The last equality is guaranteed by (5).

(ii-2) When $i=k-1$, we get

$$a^{k+1}_{k-1,j}=\frac{a^k_{k+1,k+1}a^k_{k-1,j}-a^k_{k-1,k+1}a^k_{k+1,j}}{a^k_{k+1,k+1}}=\frac{a^{(k)}_{k+1,k+1}a^{(k)}_{k-1,j}-a^{(k)}_{k-1,k+1}a^{(k)}_{k+1,j}}{a^{(k-1)}_{k,k}\,a^{(k)}_{k+1,k+1}}=-\frac{a^{(k+1)}_{k-1,j}}{a^{(k)}_{k+1,k+1}},$$

where the last equality again follows from (5).

(ii-3) When $i=k$, it follows that

$$a^{k+1}_{k,j}=\frac{a^k_{k+1,k+1}a^k_{k,j}-a^k_{k,k+1}a^k_{k+1,j}}{a^k_{k+1,k+1}}=\frac{a^{(k)}_{k+1,k+1}a^{(k-1)}_{k,j}-a^{(k-1)}_{k,k+1}a^{(k)}_{k+1,j}}{a^{(k-1)}_{k,k}\,a^{(k)}_{k+1,k+1}}=\frac{a^{(k-1)}_{k+1,k+1}a^{(k-1)}_{k,j}-a^{(k-1)}_{k,k+1}a^{(k-1)}_{k+1,j}}{a^{(k-2)}_{k-1,k-1}\,a^{(k)}_{k+1,k+1}}=\frac{a^{(k+1)}_{k,j}}{a^{(k)}_{k+1,k+1}},$$

where the third equality uses the recursion $a^{(k)}_{k+1,k+1}=\big(a^{(k-1)}_{k,k}a^{(k-1)}_{k+1,k+1}-a^{(k-1)}_{k+1,k}a^{(k-1)}_{k,k+1}\big)/a^{(k-2)}_{k-1,k-1}$ (and similarly for $a^{(k)}_{k+1,j}$), after which the cross terms cancel, and the last equality follows from (6).
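The sign pattern in (10) can also be confirmed numerically. The sketch below (our own illustration) runs three exact Gauss-Jordan steps on an arbitrary integer matrix with nonzero leading principal minors and compares the entries above the third pivot with (10):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    """Exact determinant by the Leibniz formula (fine for tiny matrices)."""
    n, total = len(M), 0
    for p in permutations(range(n)):
        sign, prod = 1, 1
        for x in range(n):
            for y in range(x + 1, n):
                if p[x] > p[y]:
                    sign = -sign
        for r in range(n):
            prod *= M[r][p[r]]
        total += sign * prod
    return total

def a1(A, k, i, j):
    """a^{(k)}_{i,j} as in (1), valid for i, j > k (1-based indices)."""
    rows = list(range(k)) + [i - 1]
    cols = list(range(k)) + [j - 1]
    return det([[A[r][c] for c in cols] for r in rows])

def a4(A, k, i, j):
    """a^{(k)}_{i,j} as in (4), valid for i < k, j > k (1-based indices)."""
    cols = [c for c in range(k) if c != i - 1] + [j - 1]
    return -det([[A[r][c] for c in cols] for r in range(k)])

def gj_step(M, k):
    """One Gauss-Jordan step with pivot (k, k) (1-based), exact arithmetic."""
    M = [[Fraction(x) for x in row] for row in M]
    p = M[k - 1][k - 1]
    M[k - 1] = [x / p for x in M[k - 1]]
    for i in range(len(M)):
        if i != k - 1:
            f = M[i][k - 1]
            M[i] = [u - f * v for u, v in zip(M[i], M[k - 1])]
    return M

A = [[1, 2, 0, 3], [2, 1, 1, 0], [0, 1, 2, 1], [3, 0, 1, 2]]
M = A
for step in (1, 2, 3):
    M = gj_step(M, step)

k, j = 3, 4
# i = k-1 = 2: plus sign in (10)
assert M[1][j - 1] == Fraction(a4(A, k, 2, j), a1(A, k - 1, k, k))
# i = 1 <= k-2: sign (-1)^(k-i+1) = (-1)^3 = -1
assert M[0][j - 1] == -Fraction(a4(A, k, 1, j), a1(A, k - 1, k, k))
```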