Gauss-Jordan elimination is a variation of standard Gaussian elimination in which a matrix is brought to reduced row echelon form rather than merely to triangular form. In contrast to standard Gaussian elimination, entries both above and below the diagonal must be annihilated during Gauss-Jordan elimination. It has been shown that Gauss-Jordan elimination is considerably less efficient than Gaussian elimination with back substitution when solving a system of linear equations. Despite its higher cost, Gauss-Jordan elimination is preferable in some situations. For instance, it can be implemented on parallel computers when solving systems of linear equations [heath]. In addition, it is well suited to computing the matrix inverse.
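The elimination scheme described above can be sketched in a few lines of exact rational arithmetic. This is a minimal illustration, not the paper's algorithm: it assumes every pivot encountered is nonzero (consistent with the nonzero leading principal minors assumed later) and performs no row swaps.

```python
from fractions import Fraction

def gauss_jordan(A):
    """Bring A to reduced row echelon form (RREF): each pivot is scaled
    to 1 and entries both above and below it are annihilated.
    Assumes the pivots encountered are nonzero (no row swaps)."""
    M = [[Fraction(x) for x in row] for row in A]
    rows = len(M)
    for k in range(min(rows, len(M[0]))):
        piv = M[k][k]
        M[k] = [x / piv for x in M[k]]          # normalize pivot row
        for i in range(rows):
            if i != k and M[i][k] != 0:
                f = M[i][k]
                # annihilate the entry above or below the pivot
                M[i] = [a - f * b for a, b in zip(M[i], M[k])]
    return M
```

When applied to an augmented matrix [A | b], the last column of the result is the solution vector, which is why no separate back substitution is needed.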
Applying Gauss-Jordan elimination to a given matrix A, we denote by A^{(k)} the new matrix obtained after the k-th step of Gauss-Jordan elimination. In the present paper, we show that each entry of A^{(k)} can always be expressed as a ratio of two determinants whose entries come from the original matrix. In 2002, Gong et al. [gae02] first established a generalized Cramer's rule, which can be applied to a problem in decentralized control systems. However, their method is restricted to a particular class of systems of linear equations. In [lh07], Hugo Leiva presented another generalization of Cramer's rule, but the resulting formula is somewhat complicated. Unlike the two methods mentioned above, our approach can also be used to directly construct a solution of the linear system under consideration. From this point of view, our method gives a generalized Cramer's rule whose form is completely different from the existing results. We also hope that it is useful not only as a theoretical tool, but also as a practical calculation method in the linear algebra community.
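For reference, the classical Cramer's rule being generalized expresses each component of the solution of Ax = b as x_j = det(A_j)/det(A), where A_j is A with column j replaced by b. A minimal exact-arithmetic sketch (the Laplace-expansion determinant is fine only for small matrices, and det(A) is assumed nonzero; the 2x2 system in the test is an illustrative example, not from the paper):

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """Classical Cramer's rule: x_j = det(A_j) / det(A), where A_j is A
    with column j replaced by b. Assumes det(A) != 0."""
    d = det(A)
    n = len(A)
    return [Fraction(det([row[:j] + [b[i]] + row[j + 1:]
                          for i, row in enumerate(A)]), d)
            for j in range(n)]
```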
2 Main results
Lemma 2.1 ([howard80]) If A is a square matrix and the coefficients involved are scalars, then
Before presenting the main result, we first give a recursive description of Bareiss's standard fraction-free Gaussian elimination [Lee95].
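As a point of reference, Bareiss's one-step fraction-free recurrence replaces each entry below and to the right of the pivot by (pivot x entry - multiplier x pivot-row entry) divided by the previous pivot, where the division is exact over the integers. A minimal sketch (this is the standard one-step Bareiss recurrence, not the paper's notation; the sample matrix in the test is illustrative):

```python
def bareiss(A):
    """One-step Bareiss fraction-free elimination on an integer matrix.
    After k elimination steps, each remaining entry equals a
    (k+1) x (k+1) minor of the original matrix; in particular the
    final bottom-right entry equals det(A)."""
    n = len(A)
    M = [row[:] for row in A]
    prev = 1  # previous pivot; division by it is exact (Bareiss's theorem)
    for k in range(n - 1):
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # // is exact integer division here, so no fractions appear
                M[i][j] = (M[k][k] * M[i][j] - M[i][k] * M[k][j]) // prev
        prev = M[k][k]
        for i in range(k + 1, n):
            M[i][k] = 0
    return M
```

Because every intermediate entry is itself a minor of the original matrix, all divisions stay exact and intermediate swell is controlled, which is the point of the fraction-free formulation.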
In what follows, in order to simplify the discussion, we also assume that all leading principal minors of the matrix in question are nonzero.
Let A be a matrix with entries from an arbitrary commutative ring, and let A^{(k)} be defined as above. Bring A to reduced row echelon form by Gauss-Jordan elimination. Then, after the k-th elimination step, each entry of A^{(k)} can be expressed as a ratio of two determinants whose entries come from the original matrix A.
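This determinant-ratio property can be spot-checked numerically in one classical special case: for an entry with row and column index both greater than k, the entry after k elimination steps equals a bordered (k+1) x (k+1) minor of A divided by the leading principal k x k minor (the Schur-type quotient identity; rows below the pivot are unaffected by the extra eliminations above it, so Gauss-Jordan and Gaussian elimination agree there). A sketch of that check, where the sample matrix and the indices k, i, j are illustrative assumptions:

```python
from fractions import Fraction

def det(M):
    # Laplace expansion along the first row (fine for small matrices)
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j + 1:] for r in M[1:]])
               for j in range(len(M)))

def gj_partial(A, k):
    """Run the first k Gauss-Jordan steps exactly (nonzero pivots assumed)."""
    M = [[Fraction(x) for x in row] for row in A]
    for p in range(k):
        piv = M[p][p]
        M[p] = [x / piv for x in M[p]]
        for i in range(len(M)):
            if i != p and M[i][p] != 0:
                f = M[i][p]
                M[i] = [a - f * b for a, b in zip(M[i], M[p])]
    return M

# Illustrative 3x3 example: k = 1 elimination step, entry (i, j) = (2, 2).
A = [[2, 1, 1], [4, 3, 3], [8, 7, 9]]
k, i, j = 1, 2, 2
M = gj_partial(A, k)
# Bordered minor on rows {0..k-1, i}, columns {0..k-1, j},
# over the leading principal k x k minor:
num = det([[A[r][c] for c in list(range(k)) + [j]] for r in list(range(k)) + [i]])
den = det([[A[r][c] for c in range(k)] for r in range(k)])
assert M[i][j] == Fraction(num, den)
```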
Consider the following three cases:
Case 1: We shall show that
It is easy to see that the conclusion is true. To see this, let us use induction on the elimination step as follows.
(i) In the base case, it is clear that the equality holds.
(ii) Now assume that the equality is true up to the previous step. Then, at the current elimination step, we have
This proves the equality (2).
Case 2: We claim that the following formula holds.
This is easy to prove, since it is exactly what Gauss-Jordan elimination produces.
Case 3: First, let us construct the following determinant:
Next, we claim that the following two recursion formulae hold.
Case 3-1. When we have
Case 3-2. When it follows that
The proof of the equality proceeds as follows. Since the row index of each element on the right-hand side is greater than its column index, the formula still applies. We then get
Partition each of the above determinants into four submatrices, as follows:
By Lemma 2.1, the right-hand side of
The last equality is guaranteed by
A similar but somewhat more complicated argument establishes the proof. According to the preceding identities, we have
Next, expanding along the corresponding column, it follows that
Here, is a minor of
Consider the square matrix whose determinant appears above. Since the minor obtained by expanding along that column is exactly a minor of this matrix, one can always apply elementary row operations so that its top left corner is exactly that minor. Here,
According to Lemma 2.1, it follows that
Here, notice that
Then, expanding the above determinants along the corresponding column, we have
Here, the two minors are obtained by deleting the corresponding row and column from the determinants above, respectively.
It is important to notice that in this case the two preceding identities hold. Therefore, we have
Thus, the equality clearly holds.
Now we consider the third case, in which the following equality holds.
Let us use induction as follows.
(i) In the base case, it is easy to verify that all of the following equalities hold.
(ii) Suppose that the claim still holds at the previous elimination step. Then, at the current elimination step, there are the following cases:
(ii-1) When we have
The last equality is guaranteed by
(ii-2) When we get
(ii-3) When it follows that