# Inverse of a Special Matrix and Application

Matrix inversion is an interesting topic in linear algebra. However, determining the inverse of a given matrix requires considerable computation and time when the matrix is large. In this paper, we present a closed form for the inverse of a class of matrices that has many applications in communication systems. Based on this closed-form inverse, the channel capacity of a communication system can be determined in closed form via the error rate parameter α.


## I Matrix Construction

In a wireless communication system or a free-space optical communication system, due to shadowing effects or environmental turbulence, a channel condition can flip from a "good" state to a "bad" state, or vice versa, after the transmission time, following a Markov model [mcdougall2003sensitivity] [wang1995finite]. For simple intuition, over a "bad" channel a signal is transmitted incorrectly, and over a "good" channel the signal is received perfectly. Suppose a system has n channels in total, where a "good" channel is denoted "1" and a "bad" channel "0". Let T denote the transmission time between transmitter and receiver, and let α be the probability that a channel flips its state after the transmission time T. We note that if the system uses a binary code, such as On-Off Keying in free-space optical communication, then the flip probability α is equivalent to the error rate.

Consider a simple case with n = 2. Suppose that at the beginning both channels are "good"; then, for example, the probability that both channels are still "good" after the transmission time T is (1−α)^2. Let A^n_{ij} denote the probability that the system moves from the state with i−1 "good" channels and n−i+1 "bad" channels to the state with j−1 "good" channels and n−j+1 "bad" channels. Obviously, 0 ≤ A^n_{ij} ≤ 1 and \sum_j A^n_{ij} = 1. For example, the transition matrices A^1 and A^2 for n = 1 and n = 2 are constructed, respectively, as follows:

 A^1 = \begin{pmatrix} 1-\alpha & \alpha \\ \alpha & 1-\alpha \end{pmatrix}, \qquad A^2 = \begin{pmatrix} (1-\alpha)^2 & 2\alpha(1-\alpha) & \alpha^2 \\ \alpha(1-\alpha) & \alpha^2+(1-\alpha)^2 & \alpha(1-\alpha) \\ \alpha^2 & 2\alpha(1-\alpha) & (1-\alpha)^2 \end{pmatrix}

These transition matrices have size (n+1)×(n+1), since the number of "good" channels can take the n+1 discrete values 0, 1, …, n. Moreover, this class of matrices has several interesting properties: (1) all entries of A^n can be determined by Proposition 1; (2) the inverse of A^n is given by Proposition 2. In addition, these matrices are obviously centrosymmetric.

###### Proposition 1.

For an n-channel system, the transition matrix A^n has size (n+1)×(n+1), and the entry in row i, column j is given by

 A^n_{ij} = \sum_{s=\max(i-j,\,0)}^{\min(n+1-j,\,i-1)} \binom{n+1-i}{j-i+s} \binom{i-1}{s} \, \alpha^{j-i+2s} (1-\alpha)^{n-(j-i+2s)}
###### Proof.

From the definition, A^n_{ij} is the probability of moving from the state with i−1 "good" channels (bits "1") to the state with j−1 "good" channels (bits "1"). Suppose s of the i−1 "good" channels flip to "bad" after the transmission time T, so 0 ≤ s ≤ i−1. Then, to obtain j−1 "good" channels after time T, the number of channels among the n−i+1 "bad" channels that must flip to "good" is:

 (j-1) - \big((i-1) - s\big) = j - i + s

Therefore, the total number of channels that flip their state after the transmission time is:

 s + (j - i + s) = j - i + 2s

and the total number of channels that preserve their state after the transmission time is n − (j − i + 2s). However, 0 ≤ s ≤ i−1. Similarly, the number of channels among the n+1−i "bad" channels that flip to "good", namely j − i + s, must lie between 0 and n+1−i. Hence:

 \begin{cases} s_{\max} = \min(n+1-j,\; i-1) \\ s_{\min} = \max(0,\; i-j) \end{cases}

Therefore, A^n_{ij} can be determined by the following form:

 A^n_{ij} = \sum_{s=\max(i-j,\,0)}^{\min(n+1-j,\,i-1)} \binom{n+1-i}{j-i+s} \binom{i-1}{s} \, \alpha^{j-i+2s} (1-\alpha)^{n-(j-i+2s)}

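As a quick sanity check on Proposition 1, the closed form can be implemented directly. The sketch below (an illustration, not part of the paper) builds A^n entry by entry and confirms that each row is a probability distribution and that the matrix is centrosymmetric.

```python
import numpy as np
from math import comb

def transition_matrix(n: int, alpha: float) -> np.ndarray:
    """Transition matrix A^n from Proposition 1 (1-indexed i, j)."""
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 2):
        for j in range(1, n + 2):
            A[i - 1, j - 1] = sum(
                comb(n + 1 - i, j - i + s) * comb(i - 1, s)
                * alpha ** (j - i + 2 * s)
                * (1 - alpha) ** (n - (j - i + 2 * s))
                for s in range(max(i - j, 0), min(n + 1 - j, i - 1) + 1))
    return A

A2 = transition_matrix(2, 0.1)
# each row is a probability distribution, so rows sum to 1
print(np.allclose(A2.sum(axis=1), 1.0))           # True
# centrosymmetric: reversing both axes leaves the matrix unchanged
print(np.allclose(A2, A2[::-1, ::-1]))            # True
```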
###### Proposition 2.

All entries of the inverse of the matrix A^n given in Proposition 1 can be determined from the original transition matrix, for α ≠ 1/2:

 (A^n)^{-1}_{ij} = \frac{(-1)^{i+j}}{(1-2\alpha)^n} \, A^n_{ij}

Due to the page limit, we give the detailed proof at the end of this paper. To illustrate the result, the inverse of the matrix A^1 is:

 (A^1)^{-1} = \frac{1}{1-2\alpha} \begin{pmatrix} 1-\alpha & -\alpha \\ -\alpha & 1-\alpha \end{pmatrix}
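Proposition 2 can also be checked numerically. The sketch below (illustrative only, with the matrix construction repeated from Proposition 1) applies the closed form and verifies that its product with A^n is the identity.

```python
import numpy as np
from math import comb

def transition_matrix(n, alpha):
    """Transition matrix A^n from Proposition 1 (1-indexed i, j)."""
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 2):
        for j in range(1, n + 2):
            A[i - 1, j - 1] = sum(
                comb(n + 1 - i, j - i + s) * comb(i - 1, s)
                * alpha ** (j - i + 2 * s)
                * (1 - alpha) ** (n - (j - i + 2 * s))
                for s in range(max(i - j, 0), min(n + 1 - j, i - 1) + 1))
    return A

def inverse_closed_form(n, alpha):
    """Inverse of A^n via Proposition 2, valid for alpha != 1/2."""
    A = transition_matrix(n, alpha)
    # sign pattern (-1)^(i+j); parity is the same for 0- and 1-indexing
    signs = (-1.0) ** np.add.outer(np.arange(n + 1), np.arange(n + 1))
    return signs * A / (1 - 2 * alpha) ** n

n, alpha = 3, 0.1
A = transition_matrix(n, alpha)
print(np.allclose(A @ inverse_closed_form(n, alpha), np.eye(n + 1)))  # True
```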

Next, based on the existence of this closed-form inverse, we show that a closed form for the capacity of a discrete memoryless channel can be established. We note that, as stated in [cover2012elements], no closed form is known for the channel capacity problem in general. With our approach, however, a closed form can be established for a wide range of channels whose error rate α is small.

## II Optimize System Capacity

A discrete memoryless channel is characterized by a channel matrix A of size m × n, with m and n representing the numbers of distinct input (transmitted) symbols x_i, i = 1, …, m, and output (received) symbols y_j, j = 1, …, n, respectively. The matrix entry A_{ij} represents the conditional probability that, given the symbol x_i is transmitted, the symbol y_j is received. Let p = (p_1, …, p_m)^T

be the input probability mass vector, where p_i

denotes the probability of transmitting symbol x_i. Then the probability mass vector of the output symbols is q = A^T p, where q_j denotes the probability of receiving symbol y_j. For simplicity, we only consider the case m = n, so that the number of transmitted input patterns equals the number of received output patterns. The mutual information between the input and output symbols is:

 I(X;Y)=H(Y)−H(Y|X),

where

 H(Y) = -\sum_{j=1}^{n} q_j \log q_j, \qquad H(Y|X) = -\sum_{i=1}^{m} \sum_{j=1}^{n} p_i A_{ij} \log A_{ij}.

Thus, the mutual information function can be written as:

 I(X;Y) = -\sum_{j=1}^{n} (A^T p)_j \log (A^T p)_j + \sum_{i=1}^{m} \sum_{j=1}^{n} p_i A_{ij} \log A_{ij},

where (A^T p)_j denotes the j-th component of the vector A^T p. The capacity of a discrete memoryless channel associated with a channel matrix A is the theoretical maximum rate at which information can be transmitted over the channel [cover2012elements]. It is defined as:

 C = \max_{p} I(X;Y). \qquad (1)
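The mutual information defined above can be evaluated directly for any input distribution. The sketch below is illustrative only; the binary symmetric channel used as the example is a standard textbook channel, not one from the paper.

```python
import numpy as np

def mutual_information(p, A):
    """I(X;Y) = H(Y) - H(Y|X) in bits, for input pmf p and channel matrix A
    (A[i, j] = probability of receiving y_j given that x_i was sent)."""
    q = A.T @ p                                  # output pmf, q = A^T p
    HY = -np.sum(q * np.log2(q))                 # H(Y)
    HYgX = -np.sum(p[:, None] * A * np.log2(A))  # H(Y|X)
    return HY - HYgX

# example: binary symmetric channel with crossover probability 0.1,
# uniform input (which is capacity-achieving for this channel)
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
p = np.array([0.5, 0.5])
print(round(mutual_information(p, A), 3))  # 0.531
```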

Therefore, finding the channel capacity amounts to finding an optimal input probability mass vector p such that the mutual information between the input and output symbols is maximized. For a given channel matrix A, I(X;Y) is a concave function of p [cover2012elements]. Therefore, maximizing I(X;Y) is equivalent to minimizing −I(X;Y), and the capacity problem can be cast as the following convex problem:

Minimize:

 \sum_{j=1}^{n} (A^T p)_j \log (A^T p)_j - \sum_{i=1}^{m} \sum_{j=1}^{n} p_i A_{ij} \log A_{ij}

Subject to:

 \begin{cases} p_i \ge 0, \quad i = 1, \dots, m \\ \mathbf{1}^T p = 1 \end{cases}

Optimal numerical values of p can be found efficiently using various algorithms such as gradient methods [grant2008cvx] [boyd2004convex]. However, in this paper, we derive a closed form for the optimal distribution via the KKT conditions. The KKT conditions state that for the following canonical optimization problem:
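As a numerical baseline for such algorithms, capacity can be computed iteratively. The sketch below uses the Blahut-Arimoto algorithm (a standard method, not the paper's closed-form approach): it alternates between the induced output distribution and an exponentiated-divergence update of the input distribution.

```python
import numpy as np

def blahut_arimoto(A, iters=200):
    """Numerically maximize I(X;Y) over input pmfs p for channel matrix A
    (strictly positive entries assumed). Returns (p, capacity in bits)."""
    m, _ = A.shape
    p = np.full(m, 1.0 / m)                      # start from the uniform input
    for _ in range(iters):
        q = p @ A                                # current output pmf, q = A^T p
        D = np.sum(A * np.log2(A / q), axis=1)   # D(A_i || q) for each input i
        p *= 2.0 ** D                            # exponentiated-divergence update
        p /= p.sum()
    q = p @ A
    C = float(np.sum(p * np.sum(A * np.log2(A / q), axis=1)))
    return p, C

# example: binary symmetric channel with crossover 0.1; capacity = 1 - H(0.1)
p, C = blahut_arimoto(np.array([[0.9, 0.1], [0.1, 0.9]]))
print(round(C, 3))  # 0.531
```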

Minimize: f(x)
Subject to:

 g_i(x) \le 0, \quad i = 1, 2, \dots, n,
 h_j(x) = 0, \quad j = 1, 2, \dots, m,

construct the Lagrangian function:

 L(x, \lambda, \nu) = f(x) + \sum_{i=1}^{n} \lambda_i g_i(x) + \sum_{j=1}^{m} \nu_j h_j(x), \qquad (2)

then, for i = 1, …, n and j = 1, …, m, the optimal point x^* must satisfy:

 \begin{cases} g_i(x^*) \le 0, \\ h_j(x^*) = 0, \\ \left.\dfrac{dL(x,\lambda,\nu)}{dx}\right|_{x=x^*,\,\lambda=\lambda^*,\,\nu=\nu^*} = 0, \\ \lambda_i^* g_i(x^*) = 0, \\ \lambda_i^* \ge 0. \end{cases} \qquad (3)

The transition matrix A^n established in the previous section can serve as a channel matrix. In optical transmission, for example, bits are denoted by different energy levels; in On-Off Keying, bits "1" and "0" are represented by high and low power levels, respectively. This energy is received by a photodiode and converted directly to a voltage. However, photodiodes aggregate all incident energy: if two channels each transmit a bit "1", the photodiode receives the same energy "2" regardless of which pair of channels the energy came from. Therefore, the received signal depends only on the number of bits "1" on the transmission side. Hence, at the receiver, the photodiode distinguishes the n+1 states 0, 1, …, n. From this property, the transition matrix A^n of the previous section is exactly the system's channel matrix, and the channel capacity of the system is therefore determined by the optimization problem in (1).

Next, we show that the above optimization problem can be solved efficiently via the KKT conditions. We note that our method establishes the closed form for a general invertible channel matrix A, and the results are then applied to the special matrix A^n. If we optimize directly over the input distribution p, the KKT conditions are too complicated to yield the first derivative in closed form. On the other hand, thanks to the existence of the inverse channel matrix, the output variable q is more convenient to work with. Since p^T = q^T A^{-1}, the Lagrangian from (2) in the output variable q is:

 L(q, \lambda, \nu) = -I(X;Y) - \sum_{j=1}^{n} \lambda_j q_j + \nu \left( \sum_{j=1}^{n} q_j - 1 \right)

Using the KKT conditions, at the optimal point (q^*, \lambda^*, \nu^*):

 \begin{cases} q_j^* \ge 0, \\ \sum_{j=1}^{n} q_j^* = 1, \\ \nu^* - \lambda_j^* - \left.\dfrac{dI(X;Y)}{dq_j}\right|_{q_j = q_j^*} = 0, \\ \lambda_j^* \ge 0, \\ \lambda_j^* q_j^* = 0. \end{cases}

Because the mutual information contains the entropy term −q_j log q_j, the derivative dI(X;Y)/dq_j tends to +∞ as q_j → 0, so the third condition cannot hold on the boundary and therefore q_j^* > 0 for all j. Then, by the fifth condition λ_j^* q_j^* = 0, we obtain λ_j^* = 0 for all j. The KKT conditions thus simplify to:

 \begin{cases} \sum_{j=1}^{n} q_j^* = 1, \\ \nu^* - \left.\dfrac{dI(X;Y)}{dq_j}\right|_{q_j = q_j^*} = 0. \end{cases}

The derivative is determined by:

 \frac{dI(X;Y)}{dq_j} = \sum_{i=1}^{n} A^{-1}_{ji} \sum_{k=1}^{n} A_{ik} \log A_{ik} - (1 + \log q_j)

Define:

 K_j = \sum_{i=1}^{n} A^{-1}_{ji} \sum_{k=1}^{n} A_{ik} \log A_{ik}

Next, using the derivative of I(X;Y) at q_j^* and the second simplified condition:

 \nu^* = K_j - (1 + \log q_j^*)

Hence:

 q_j^* = 2^{K_j - \nu^* - 1}

Next, using the first simplified condition, the sum over all output states equals 1:

 \sum_{j=1}^{n} 2^{K_j - \nu^* - 1} = 1
 \quad \Longrightarrow \quad 2^{\nu^*} = \sum_{j=1}^{n} 2^{K_j - 1}

Therefore, \nu^* is given by:

 \nu^* = \log \sum_{j=1}^{n} 2^{K_j - 1}

From the second simplified condition, we can compute q_j^*:

 q_j^* = 2^{K_j - \nu^* - 1}

And finally:

 p^{*T} = q^{*T} A^{-1}

Since the channel matrix A^n is a closed form in α, the optimal input vector p^* and output vector q^* are also functions of α. However, we note that because the KKT conditions act directly on the output variable q, the resulting optimal input can be invalid, i.e., p_i^* < 0 or p_i^* > 1 for some i. Our simulations show that for small error probabilities α, both the output and input vectors are valid. That is, our approach works for a good system whose error probability is small. When the optimal input vector is invalid, the result still yields an upper bound on the channel capacity.
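The closed-form recipe above — compute K_j, then q^*, then p^{*T} = q^{*T} A^{-1} — can be sketched as follows for the illustrative values n = 2 and α = 0.1 (not values taken from the paper). Note that the additive constants in the derivative are absorbed into ν^*, so q^* reduces to a base-2 softmax of K.

```python
import numpy as np
from math import comb

def transition_matrix(n, alpha):
    """Transition matrix A^n from Proposition 1 (1-indexed i, j)."""
    A = np.zeros((n + 1, n + 1))
    for i in range(1, n + 2):
        for j in range(1, n + 2):
            A[i - 1, j - 1] = sum(
                comb(n + 1 - i, j - i + s) * comb(i - 1, s)
                * alpha ** (j - i + 2 * s)
                * (1 - alpha) ** (n - (j - i + 2 * s))
                for s in range(max(i - j, 0), min(n + 1 - j, i - 1) + 1))
    return A

n, alpha = 2, 0.1                    # illustrative values
A = transition_matrix(n, alpha)
Ainv = np.linalg.inv(A)              # or the closed form of Proposition 2

# K_j = sum_i A^{-1}_{ji} sum_k A_{ik} log2 A_{ik}
K = Ainv @ np.sum(A * np.log2(A), axis=1)
q = 2.0 ** K / np.sum(2.0 ** K)      # optimal output pmf (nu* absorbed)
p = Ainv.T @ q                       # optimal input pmf, p^T = q^T A^{-1}

# for this small alpha the input pmf is valid (nonnegative, sums to 1)
print(bool(np.all(p >= 0)), round(float(p.sum()), 6))  # True 1.0
```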

## III Conclusion

In this paper, our contributions are twofold: (1) we establish a closed-form inverse for a class of channel matrices parameterized by the error probability α; (2) we derive a closed form for the channel capacity for small error rates and an upper bound on the system capacity for high-error-rate channels.