# On Skew-Symmetric Games

By resorting to the vector space structure of finite games, skew-symmetric games (SSGs) are proposed and investigated as a natural subspace of finite games. First, for two-player games, it is shown that the skew-symmetric games form the orthogonal complement of the symmetric games. Then, for a general SSG, its linear representation is given, which can be used to verify whether a finite game is skew-symmetric. Furthermore, some properties of SSGs are obtained in light of the vector subspace structure. Finally, a symmetry-based decomposition of finite games is proposed, which consists of three mutually orthogonal subspaces: the symmetric subspace, the skew-symmetric subspace, and the asymmetric subspace. An illustrative example is presented to demonstrate this decomposition.


## 1 Introduction

The vector space structure of finite games was first proposed in can11 . It was then identified, via an isomorphism, with a finite-dimensional Euclidean space che16 . As a result, the decomposition of the vector space of finite games becomes a natural and interesting topic. Since potential games are theoretically important and practically useful, a decomposition based on potential games and harmonic games has been investigated can11 ; che16 . The symmetric game is another interesting kind of game alo13 , which may provide useful properties for applications. Hence a symmetry-based decomposition is another interesting topic. Decomposition may help to classify games and to reveal the properties of each kind of finite game.

To provide a clear picture of the decompositions, we first give a brief survey of the vector space structure of finite games.

###### Definition 1

A (normal form non-cooperative) finite game $G$ consists of three ingredients:

1. Players, $N=\{1,2,\cdots,n\}$: there are $n$ players;

2. Profiles, $S=\prod_{i=1}^{n}S_i$: where $S_i=\{s_1^i,\cdots,s_{k_i}^i\}$ is the set of strategies (actions) of player $i$;

3. Payoffs, $C=\{c_1,\cdots,c_n\}$: where $c_i:S\to\mathbb{R}$ is the payoff function of player $i$.

Assume $|S_i|=k_i$, i.e., $s_j^i$ is the $j$-th strategy of player $i$. Instead of $s_j^i$, we denote this strategy by $x_i=\delta_{k_i}^j$, the $j$-th column of the identity matrix $I_{k_i}$. This expression is called the vector form of strategies. Since each payoff function is a pseudo-logical function, there is a unique row vector $V_{c_i}\in\mathbb{R}^{k}$, where $k=\prod_{j=1}^{n}k_j$, such that (when the vector form is adopted) the payoffs can be expressed as

$$c_i(x_1,\cdots,x_n)=V_{c_i}\ltimes_{j=1}^{n}x_j,\quad i=1,\cdots,n, \tag{1}$$

where $\ltimes$ is the semi-tensor product of matrices, which is defined in the next section.

The set of finite games with $|N|=n$ and $|S_i|=k_i$, $i=1,\cdots,n$, is denoted by $\mathcal{G}_{[n;k_1,\cdots,k_n]}$. Now it is clear that a game $G\in\mathcal{G}_{[n;k_1,\cdots,k_n]}$ is uniquely determined by

$$V_G:=[V_{c_1},V_{c_2},\cdots,V_{c_n}], \tag{2}$$

which is called the structure vector of $G$. Hence $\mathcal{G}_{[n;k_1,\cdots,k_n]}$ has a natural vector space structure as $\mathbb{R}^{nk}$.

The potential-based decomposition of finite games was first proposed by Candogan et al., using knowledge of algebraic topology and the Helmholtz decomposition from graph theory can11 . The decomposition is shown in (3), where $\mathcal{P}$, $\mathcal{N}$, and $\mathcal{H}$ are the pure potential games, non-strategic games, and pure harmonic games respectively. Unfortunately, the inner product used there is not the standard one in $\mathbb{R}^{nk}$.

$$\mathcal{G}_{[n;k_1,\cdots,k_n]}=\underbrace{\mathcal{P}\oplus\mathcal{N}}_{\text{Potential games}}\oplus\,\mathcal{H}=\mathcal{P}\oplus\underbrace{\mathcal{N}\oplus\mathcal{H}}_{\text{Harmonic games}}. \tag{3}$$

The vector space structure of potential games was clearly revealed in che14 , which provides a basis of the potential subspace. Using this result, che16 re-obtained the decomposition (3) with the standard inner product through a straightforward linear-algebraic computation.

The concept of the symmetric game was first proposed by Nash nas51 and has become an important topic since then alo13 ; bra09 ; cao16 . We also refer to chepr for a vector space approach to symmetric games. Symmetry-based decompositions have been discussed recently, for four-strategy matrix games sza15 as well as for general two-player games sza16 .

In this paper, the skew-symmetric game is proposed. First, we show that two-player games admit an orthogonal decomposition as in (4). That is, the vector subspace of skew-symmetric games is the orthogonal complement of the subspace of symmetric games:

$$\mathcal{G}_{[2;\kappa]}=\mathcal{S}_{[2;\kappa]}\oplus\mathcal{K}_{[2;\kappa]}, \tag{4}$$

where $\kappa:=k_1=k_2$, and $\mathcal{S}_{[2;\kappa]}$ and $\mathcal{K}_{[2;\kappa]}$ are the symmetric and skew-symmetric subspaces of $\mathcal{G}_{[2;\kappa]}$ respectively.

Furthermore, certain properties of skew-symmetric games are also revealed. Bases of the symmetric and skew-symmetric subspaces are constructed. Due to their orthogonality, the following conclusions about the decomposition of finite games are obtained:

• if the skew-symmetric subspace is trivial, then

$$\mathcal{G}_{[n;\kappa]}=\mathcal{S}_{[n;\kappa]}\oplus\mathcal{E}_{[n;\kappa]}; \tag{5}$$

• otherwise,

$$\mathcal{G}_{[n;\kappa]}=\mathcal{S}_{[n;\kappa]}\oplus\mathcal{K}_{[n;\kappa]}\oplus\mathcal{E}_{[n;\kappa]}, \tag{6}$$

where $\mathcal{E}_{[n;\kappa]}$ is the set of asymmetric games.

Finally, for ease of statement, we give some notations:

1. $\mathcal{M}_{m\times n}$: the set of $m\times n$ real matrices.

2. $\mathcal{B}_{m\times n}$: the set of $m\times n$ Boolean matrices ($\mathcal{B}_n$: the set of $n$-dimensional Boolean vectors).

3. $\mathrm{Col}(M)$ ($\mathrm{Row}(M)$): the set of columns (rows) of $M$. $\mathrm{Col}_i(M)$ ($\mathrm{Row}_i(M)$): the $i$-th column (row) of $M$.

4. $\mathcal{D}:=\{0,1\}$.

5. $\delta_n^i$: the $i$-th column of the identity matrix $I_n$.

6. $\Delta_n:=\{\delta_n^i\mid i=1,\cdots,n\}$.

7. $\mathbf{1}_\ell:=(1,1,\cdots,1)^T$ ($\ell$ entries).

8. $\mathbf{0}_{p\times q}$: a $p\times q$ matrix with zero entries.

9. A matrix $L\in\mathcal{M}_{n\times r}$ is called a logical matrix if the columns of $L$ are of the form $\delta_n^i$. That is, $\mathrm{Col}(L)\subset\Delta_n$. Denote by $\mathcal{L}_{n\times r}$ the set of $n\times r$ logical matrices.

10. If $L\in\mathcal{L}_{n\times r}$, by definition it can be expressed as $L=[\delta_n^{i_1},\delta_n^{i_2},\cdots,\delta_n^{i_r}]$. For the sake of compactness, it is briefly denoted as $L=\delta_n[i_1,i_2,\cdots,i_r]$.

11. $\mathbf{S}_n$: the $n$-th order symmetric group.

12. $\langle\cdot,\cdot\rangle$: the standard inner product in $\mathbb{R}^n$.

13. $\mathcal{O}_n$: the $n$-th order Boolean orthogonal group.

14. $GL(n,\mathbb{R})$ (or $GL(V)$): the general linear group.

15. $\mathcal{G}_{[n;k_1,\cdots,k_n]}$: the set of finite games with $|N|=n$, $|S_i|=k_i$, $i=1,\cdots,n$.

16. $\mathcal{G}_{[n;\kappa]}$: $\mathcal{G}_{[n;k_1,\cdots,k_n]}$ with $k_1=\cdots=k_n=\kappa$.

17. $\mathcal{S}_{[n;\kappa]}$: the set of (ordinary) symmetric games. Denote by $G_S$ a symmetric game.

18. $\mathcal{K}_{[n;\kappa]}$: the set of skew-symmetric games. Denote by $G_K$ a skew-symmetric game.

19. $\mathcal{E}_{[n;\kappa]}$: the set of asymmetric games. Denote by $G_E$ an asymmetric game.

The rest of this paper is organized as follows: In Section 2, a brief review of the semi-tensor product of matrices is given. After introducing a symmetry-based classification of finite games, Section 3 presents two main results: (1) the orthogonal decomposition of two-player games; (2) the linear representation of skew-symmetric games. Some properties of skew-symmetric games are discussed in Section 4, where a basis of $\mathcal{K}_{[n;\kappa]}$ is also constructed. Section 5 is devoted to verifying the orthogonality of symmetric and skew-symmetric games. Section 6 provides a symmetry-based orthogonal decomposition of finite games. In Section 7, an illustrative example is given to demonstrate this decomposition. Section 8 is a brief conclusion.

## 2 Preliminaries

### 2.1 Semi-tensor Product of Matrices

In this section, we give a brief survey of the semi-tensor product (STP) of matrices, which is the main tool of our approach. We refer to che11 ; che12 for details. The STP of matrices is defined as follows:

###### Definition 2

Let $M\in\mathcal{M}_{m\times n}$ and $N\in\mathcal{M}_{p\times q}$. The STP of $M$ and $N$ is defined as

$$M\ltimes N:=(M\otimes I_{t/n})(N\otimes I_{t/p})\in\mathcal{M}_{mt/n\times qt/p}, \tag{7}$$

where $t$ is the least common multiple of $n$ and $p$, and $\otimes$ is the Kronecker product.

STP is a generalization of the conventional matrix product, and all computational properties of the conventional matrix product remain available. It has been successfully used for studying logical (control) systems lht16 ; zgd17 . Throughout this paper, the default matrix product is the STP; hence the product of two arbitrary matrices is well defined, and the symbol $\ltimes$ is mostly omitted.
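The definition (7) can be realized in a few lines. The sketch below (in Python with NumPy; the helper name `stp` is ours, not from the paper) also checks that the STP reduces to the ordinary matrix product when the inner dimensions match:

```python
import numpy as np

def stp(M, N):
    """Semi-tensor product M ⋉ N = (M ⊗ I_{t/n})(N ⊗ I_{t/p}), t = lcm(n, p),
    where M is m x n and N is p x q (Definition 2 of the survey above)."""
    M, N = np.atleast_2d(M), np.atleast_2d(N)
    n, p = M.shape[1], N.shape[0]
    t = np.lcm(n, p)
    return np.kron(M, np.eye(t // n)) @ np.kron(N, np.eye(t // p))

# When n = p, the STP coincides with the ordinary product.
A = np.array([[1., 2.], [3., 4.]])
B = np.array([[0., 1.], [1., 0.]])
assert np.allclose(stp(A, B), A @ B)
```

With this helper, the associativity of Proposition 3 can be checked numerically even for dimension-mismatched factors.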

First, we give some basic properties of STP, which will be used in the sequel.

###### Proposition 3
1. (Associative Law:)

$$A\ltimes(B\ltimes C)=(A\ltimes B)\ltimes C. \tag{8}$$

2. (Distributive Law:)

$$(A+B)\ltimes C=A\ltimes C+B\ltimes C; \tag{9}$$

$$C\ltimes(A+B)=C\ltimes A+C\ltimes B. \tag{10}$$
###### Proposition 4

Let $X\in\mathbb{R}^t$ be a $t$-dimensional column vector and $M$ a matrix. Then

$$X\ltimes M=(I_t\otimes M)\ltimes X. \tag{11}$$
###### Definition 5

A swap matrix $W_{[m,n]}\in\mathcal{M}_{mn\times mn}$ is defined as

$$W_{[m,n]}:=[\delta_n^1\delta_m^1,\cdots,\delta_n^n\delta_m^1;\;\delta_n^1\delta_m^2,\cdots,\delta_n^n\delta_m^2;\;\cdots;\;\delta_n^1\delta_m^m,\cdots,\delta_n^n\delta_m^m]. \tag{12}$$

The basic function of a swap matrix is to swap two vectors.

###### Proposition 6

Let $X\in\mathbb{R}^m$ and $Y\in\mathbb{R}^n$ be two column vectors. Then

$$W_{[m,n]}XY=YX. \tag{13}$$

The swap matrix is an orthogonal matrix:

###### Proposition 7

$W_{[m,n]}$ is an orthogonal matrix. Precisely,

$$W_{[m,n]}^{-1}=W_{[m,n]}^{T}=W_{[n,m]}. \tag{14}$$
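Definition 5 and Propositions 6 and 7 are easy to verify numerically. The sketch below builds $W_{[m,n]}$ column by column; `swap_matrix` is our own helper name, and we use the fact that for column vectors the STP coincides with the Kronecker product:

```python
import numpy as np

def swap_matrix(m, n):
    """W[m,n] as in Definition 5: W[m,n] (x ⊗ y) = y ⊗ x for x in R^m, y in R^n."""
    W = np.zeros((m * n, m * n))
    for i in range(m):
        for j in range(n):
            # the basis vector e_i ⊗ e_j (position i*n + j) is sent to e_j ⊗ e_i
            W[j * m + i, i * n + j] = 1.0
    return W

x = np.array([1., 2.])        # x in R^2
y = np.array([3., 4., 5.])    # y in R^3
W = swap_matrix(2, 3)
assert np.allclose(W @ np.kron(x, y), np.kron(y, x))   # Proposition 6
assert np.allclose(W.T, swap_matrix(3, 2))             # Proposition 7
assert np.allclose(W.T @ W, np.eye(6))                 # orthogonality
```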

Given a matrix $A=(a_{ij})\in\mathcal{M}_{m\times n}$, its row stacking form is

$$V_R(A):=(a_{11},a_{12},\cdots,a_{1n},\cdots,a_{m1},a_{m2},\cdots,a_{mn})^T;$$

its column stacking form is

$$V_C(A):=(a_{11},a_{21},\cdots,a_{m1},\cdots,a_{1n},a_{2n},\cdots,a_{mn})^T.$$

Using Propositions 6 and 7 yields

###### Proposition 8

Let $A\in\mathcal{M}_{m\times n}$. Then

$$V_R(A)=W_{[m,n]}V_C(A);\quad V_C(A)=W_{[n,m]}V_R(A); \tag{15}$$

and

$$V_R(A^T)=V_C(A);\quad V_C(A^T)=V_R(A). \tag{16}$$

Next, we consider the matrix expression of logical relations. Identifying

$$1\sim\delta_2^1,\quad 0\sim\delta_2^2,$$

a logical variable $x\in\mathcal{D}$ can be expressed in vector form as

$$x\sim\begin{bmatrix}x\\ 1-x\end{bmatrix},$$

which is called the vector form expression of $x$.

A mapping $f:\mathcal{D}^n\to\mathbb{R}$ is called a pseudo-Boolean function.

###### Proposition 9

Given a pseudo-Boolean function $f:\mathcal{D}^n\to\mathbb{R}$, there exists a unique row vector $V_f\in\mathbb{R}^{2^n}$, called the structure vector of $f$, such that (in vector form)

$$f(x_1,\cdots,x_n)=V_f\ltimes_{i=1}^{n}x_i. \tag{17}$$
###### Remark 10

In the previous proposition, if $\mathcal{D}$ is replaced by $\mathcal{D}_k:=\{1,2,\cdots,k\}$ with $k>2$, then the function is called a pseudo-logical function, and the expression (17) remains available with the obvious modification that $x_i\in\Delta_k$ and $V_f\in\mathbb{R}^{k^n}$.
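Proposition 9 is constructive: listing the values of $f$ over all arguments, in the lexicographic order induced by the vector forms (value 1 first), yields $V_f$. A minimal sketch under this reading (helper names are ours):

```python
import itertools
import numpy as np

def structure_vector(f, n):
    """Structure vector V_f of a pseudo-Boolean function f: D^n -> R, so that
    f(x_1,...,x_n) = V_f ⋉ x_1 ⋉ ... ⋉ x_n with 1 ~ δ_2^1 and 0 ~ δ_2^2."""
    # the STP of the vector forms enumerates arguments lexicographically, value 1 first
    return np.array([f(*bits) for bits in itertools.product([1, 0], repeat=n)])

delta2 = {1: np.array([1., 0.]), 0: np.array([0., 1.])}  # vector forms of 1 and 0

f = lambda x, y: x + 2 * y       # an arbitrary pseudo-Boolean function
Vf = structure_vector(f, 2)
for x, y in itertools.product([0, 1], repeat=2):
    # for column vectors the STP coincides with the Kronecker product
    assert np.isclose(Vf @ np.kron(delta2[x], delta2[y]), f(x, y))
```

The same enumeration, over $\Delta_k$ instead of $\Delta_2$, gives the pseudo-logical case of Remark 10.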

###### Definition 11

Let $A\in\mathcal{M}_{p\times n}$ and $B\in\mathcal{M}_{q\times n}$. Then the Khatri-Rao product of $A$ and $B$, denoted by $A*B$, is defined as follows:

$$A*B=[\mathrm{Col}_1(A)\ltimes\mathrm{Col}_1(B),\;\mathrm{Col}_2(A)\ltimes\mathrm{Col}_2(B),\;\cdots,\;\mathrm{Col}_n(A)\ltimes\mathrm{Col}_n(B)]\in\mathcal{M}_{pq\times n}.$$
###### Proposition 12

Assume

$$u=M\ltimes_{i=1}^{n}x_i\in\Delta_p,\quad v=N\ltimes_{i=1}^{n}x_i\in\Delta_q,$$

where $M\in\mathcal{L}_{p\times k^n}$, $N\in\mathcal{L}_{q\times k^n}$, and $x_i\in\Delta_k$. Then

$$uv=(M*N)\ltimes_{i=1}^{n}x_i\in\Delta_{pq}.$$
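Definition 11 and Proposition 12 can be sketched as follows (the helper name `khatri_rao` is ours; for single columns the STP is again the Kronecker product):

```python
import numpy as np

def khatri_rao(A, B):
    """Khatri-Rao product A * B: column-wise Kronecker product (Definition 11)."""
    assert A.shape[1] == B.shape[1], "equal column counts required"
    return np.stack([np.kron(A[:, j], B[:, j]) for j in range(A.shape[1])], axis=1)

# Proposition 12 for logical matrices: u = Mx, v = Nx  =>  uv = (M*N)x
M = np.array([[1., 0.], [0., 1.]])   # logical matrix δ_2[1, 2]
N = np.array([[0., 1.], [1., 0.]])   # logical matrix δ_2[2, 1]
for s in range(2):
    x = np.eye(2)[:, s]              # x = δ_2^{s+1}
    u, v = M @ x, N @ x
    assert np.allclose(np.kron(u, v), khatri_rao(M, N) @ x)
```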

## 3 Symmetric and Skew-symmetric Games

### 3.1 Classification of Finite Games

This subsection considers the symmetry-based classification of finite games. First, we give a rigorous definition for symmetric and skew-symmetric games.

###### Definition 13

Let $G\in\mathcal{G}_{[n;\kappa]}$.

1. If for any $\sigma\in\mathbf{S}_n$ we have

$$c_i(x_1,\cdots,x_n)=c_{\sigma(i)}(x_{\sigma^{-1}(1)},x_{\sigma^{-1}(2)},\cdots,x_{\sigma^{-1}(n)}), \tag{18}$$

where $i=1,\cdots,n$, then $G$ is called a symmetric game. Denote by $\mathcal{S}_{[n;\kappa]}$ the set of symmetric games in $\mathcal{G}_{[n;\kappa]}$.

2. If for any $\sigma\in\mathbf{S}_n$ we have

$$c_i(x_1,\cdots,x_n)=\mathrm{sign}(\sigma)\,c_{\sigma(i)}(x_{\sigma^{-1}(1)},x_{\sigma^{-1}(2)},\cdots,x_{\sigma^{-1}(n)}), \tag{19}$$

where $i=1,\cdots,n$, then $G$ is called a skew-symmetric game. Denote by $\mathcal{K}_{[n;\kappa]}$ the set of skew-symmetric games in $\mathcal{G}_{[n;\kappa]}$.

It is well known that $\mathcal{G}_{[n;\kappa]}$ is a vector space can11 ; che16 . It is easy to verify that both $\mathcal{S}_{[n;\kappa]}$ and $\mathcal{K}_{[n;\kappa]}$ are closed under addition and scalar multiplication. Hence, they are two subspaces of $\mathcal{G}_{[n;\kappa]}$.

Then, we can define the following asymmetric subspace.

###### Definition 14

$G\in\mathcal{G}_{[n;\kappa]}$ is called an asymmetric game if its structure vector satisfies

$$V_G\in\left[\mathcal{S}_{[n;\kappa]}\cup\mathcal{K}_{[n;\kappa]}\right]^{\perp}. \tag{20}$$

The set of asymmetric games is denoted by $\mathcal{E}_{[n;\kappa]}$, which is also a subspace of $\mathcal{G}_{[n;\kappa]}$.

###### Example 15

Consider $\mathcal{G}_{[3;2]}$. A straightforward computation shows the following results:

• If $G\in\mathcal{S}_{[3;2]}$, then its payoff functions are as in Table 1.

• If $G\in\mathcal{K}_{[3;2]}$, then its payoff functions are as in Table 2.

• If $G\in\mathcal{E}_{[3;2]}$, then its payoff functions are as in Table 3.

From these parameterizations the dimensions of $\mathcal{S}_{[3;2]}$, $\mathcal{K}_{[3;2]}$, and $\mathcal{E}_{[3;2]}$ follow. Moreover, it is easy to verify the orthogonality:

$$V_{\mathcal{S}_{[3;2]}}V_{\mathcal{K}_{[3;2]}}^{T}=0,\quad V_{\mathcal{E}_{[3;2]}}V_{\mathcal{S}_{[3;2]}}^{T}=0,\quad V_{\mathcal{E}_{[3;2]}}V_{\mathcal{K}_{[3;2]}}^{T}=0.$$

We conclude that

$$\mathcal{G}_{[3;2]}=\mathcal{S}_{[3;2]}\oplus\mathcal{K}_{[3;2]}\oplus\mathcal{E}_{[3;2]},$$

which verifies (6).

### 3.2 Two Player Games

In this subsection we consider $\mathcal{G}_{[2;\kappa]}$. Let $A$ and $B$ be the payoff matrices of player 1 and player 2 respectively, i.e., $c_1(x_1,x_2)=x_1^TAx_2$ and $c_2(x_1,x_2)=x_1^TBx_2$. According to Definition 13, it is easy to verify the following fact:

###### Lemma 16
1. $G$ is a symmetric game if and only if

$$A=B^T.$$

2. $G$ is a skew-symmetric game if and only if

$$A=-B^T.$$

Note that for $G\in\mathcal{G}_{[2;\kappa]}$ the structure vector is

$$V_G=[V_R^T(A),\;V_R^T(B)],$$

where $V_R(\cdot)$ is the row stacking form of a matrix.

According to Propositions 7 and 8, we have the following result:

###### Lemma 17
1. $G$ is a symmetric game if and only if

$$V_G=[V_{c_1},\;V_{c_1}W_{[\kappa,\kappa]}]. \tag{21}$$

2. $G$ is a skew-symmetric game if and only if

$$V_G=[V_{c_1},\;-V_{c_1}W_{[\kappa,\kappa]}]. \tag{22}$$

According to Lemma 17, the following result can be obtained via a straightforward computation.

###### Theorem 18

Let $G\in\mathcal{G}_{[2;\kappa]}$. Then $G$ can be orthogonally decomposed as

$$G=G_S\oplus G_K, \tag{23}$$

where $G_S\in\mathcal{S}_{[2;\kappa]}$ and $G_K\in\mathcal{K}_{[2;\kappa]}$.

Proof. Denote the structure vector of $G$ by $V_G=[V_{c_1},V_{c_2}]$. We construct a symmetric game $G_S$ by setting

$$V_{G_S}=[S,\;SW_{[\kappa,\kappa]}],$$

and a skew-symmetric game $G_K$ by

$$V_{G_K}=[K,\;-KW_{[\kappa,\kappa]}],$$

where

$$S=\frac{V_{c_1}+V_{c_2}W_{[\kappa,\kappa]}}{2},\quad K=\frac{V_{c_1}-V_{c_2}W_{[\kappa,\kappa]}}{2}.$$

Then, it is easy to verify that

1. $V_G=V_{G_S}+V_{G_K}$;

2. $\langle V_{G_S},V_{G_K}\rangle=0$.

The conclusion follows.

Note that Theorem 18 implies the decomposition (4).
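At the payoff-matrix level (Lemma 16), the construction in the proof of Theorem 18 amounts to splitting $(A,B)$ into a symmetric part $A_S=(A+B^T)/2$ with counterpart $A_S^T$, and a skew-symmetric part $A_K=(A-B^T)/2$ with counterpart $-A_K^T$. A sketch of this matrix-level reading (the function name is ours, not the paper's):

```python
import numpy as np

def decompose_two_player(A, B):
    """Split a two-player game (A, B) into a symmetric game (A_S, A_S^T) and a
    skew-symmetric game (A_K, -A_K^T), per Lemma 16 and Theorem 18."""
    A_S = (A + B.T) / 2      # symmetric component: A_S = (B_S)^T
    A_K = (A - B.T) / 2      # skew-symmetric component: A_K = -(B_K)^T
    return (A_S, A_S.T), (A_K, -A_K.T)

rng = np.random.default_rng(0)
A, B = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
(AS, BS), (AK, BK) = decompose_two_player(A, B)
assert np.allclose(AS + AK, A) and np.allclose(BS + BK, B)   # G = G_S + G_K
# orthogonality of the structure vectors V_G = [V_R(A), V_R(B)]
VGS = np.concatenate([AS.ravel(), BS.ravel()])
VGK = np.concatenate([AK.ravel(), BK.ravel()])
assert np.isclose(VGS @ VGK, 0.0)
```

The orthogonality holds identically: the cross terms $\mathrm{tr}(A_S^T A_K)$ and $\mathrm{tr}(A_S(-A_K)^T)$ cancel exactly.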

###### Example 19

Consider .

1. is symmetric, if and only if, its payoff functions are as in Table 4.

2. is skew-symmetric, if and only if, its payoff functions are as in Table 5.

3. Let with its payoff bi-matrix as in Table 6.

Then it has an orthogonal decomposition into and with their payoff bi-matrices as in Table 4 and Table 5 respectively with

 a=α+β2,b=γ+η2,c=ξ+δ2λ+μ2;a′=α−β2,b′=γ−η2,c′=ξ−δ2λ−μ2.

### 3.3 Skew-Symmetric Game and Its Linear Representation

First, we present a necessary condition for verifying skew-symmetric games.

###### Proposition 20

Consider . If , then

 Vci=−Vc1W[κi−2,κ]W[κ,κi−1],i=2,⋯,n. (24)

Proof. Consider the transposition $\sigma=(1,i)$. According to Definition 13, we have

$$c_{\sigma(1)}(x_{\sigma^{-1}(1)},\cdots,x_{\sigma^{-1}(n)})=-c_1(x_1,\cdots,x_n).$$

That is,

$$V_{c_i}x_ix_2\cdots x_{i-1}x_1x_{i+1}\cdots x_n=V_{c_i}W_{[\kappa^{i-1},\kappa]}x_2\cdots x_{i-1}x_1x_ix_{i+1}\cdots x_n=V_{c_i}W_{[\kappa^{i-1},\kappa]}W_{[\kappa,\kappa^{i-2}]}x_1x_2\cdots x_{i-1}x_ix_{i+1}\cdots x_n=-V_{c_1}x_1x_2\cdots x_n.$$

Hence, we have

$$V_{c_i}W_{[\kappa^{i-1},\kappa]}W_{[\kappa,\kappa^{i-2}]}=\mathrm{sign}(\sigma)V_{c_1}=-V_{c_1}.$$

Then, (24) follows from Proposition 7.

Next, we consider another necessary condition: if $G\in\mathcal{K}_{[n;\kappa]}$, what condition should $V_{c_1}$ satisfy? An argument similar to the one used in Proposition 20 shows the following result.

###### Proposition 21

Consider $G\in\mathcal{G}_{[n;\kappa]}$ with $n\ge 3$. If $G\in\mathcal{K}_{[n;\kappa]}$, then

$$V_{c_1}\delta_\kappa^s\left[I_{\kappa^{n-1}}+W_{[\kappa^{i-2},\kappa]}W_{[\kappa,\kappa^{i-3}]}\otimes I_{\kappa^{n-i}}\right]=0, \tag{25}$$

where $s=1,\cdots,\kappa$ and $i=3,\cdots,n$.

Proof. Assume $G\in\mathcal{K}_{[n;\kappa]}$. Let $\sigma=(2,i)$, $3\le i\le n$. Then, we have

$$V_{c_1}x_1x_2\cdots x_n=-V_{c_1}x_1x_ix_3\cdots x_{i-1}x_2x_{i+1}\cdots x_n=-V_{c_1}x_1W_{[\kappa^{i-2},\kappa]}W_{[\kappa,\kappa^{i-3}]}x_2\cdots x_n,\quad i=3,4,\cdots,n.$$

Since $x_2,\cdots,x_n$ are arbitrary, we have

$$V_{c_1}x_1=-V_{c_1}x_1W_{[\kappa^{i-2},\kappa]}W_{[\kappa,\kappa^{i-3}]}.$$

Setting $x_1=\delta_\kappa^s$, $s=1,\cdots,\kappa$, we have (25).

Note that the symmetric group $\mathbf{S}_n$ is generated by transpositions chepr . That is,

$$\mathbf{S}_n=\langle(1,i)\mid 1<i\le n\rangle.$$

This fact motivates the following result.

###### Theorem 22

Consider $G\in\mathcal{G}_{[n;\kappa]}$.

1. If $n=2$, then (24) is a necessary and sufficient condition for $G\in\mathcal{K}_{[n;\kappa]}$.

2. If $n\ge 3$, then (24) and (25) together are necessary and sufficient conditions for $G\in\mathcal{K}_{[n;\kappa]}$.

Proof. We need only to prove the sufficiency.

• If $n=2$, (22) implies the sufficiency.

• If $n\ge 3$, we divide our proof into two steps.

First, we prove the condition for the single payoff function $c_1$. For any $\sigma\in\mathbf{S}_n$ with $\sigma(1)=1$, without loss of generality we assume

$$\sigma:=(2,i_1)(2,i_2)\cdots(2,i_t):=\sigma_1\circ\sigma_2\circ\cdots\circ\sigma_t,$$

where $\sigma_j=(2,i_j)$, $3\le i_j\le n$.

From (25), it can be calculated that

$$V_{c_1}x_1x_2\cdots x_n=-V_{c_1}x_1x_{\sigma_t^{-1}(2)}\cdots x_{\sigma_t^{-1}(n)}=(-1)^2V_{c_1}x_1x_{\sigma_t^{-1}(\sigma_{t-1}^{-1}(2))}\cdots x_{\sigma_t^{-1}(\sigma_{t-1}^{-1}(n))}=\cdots=(-1)^tV_{c_1}x_1x_{\sigma_t^{-1}(\cdots(\sigma_1^{-1}(2)))}\cdots x_{\sigma_t^{-1}(\cdots(\sigma_1^{-1}(n)))}=\mathrm{sgn}(\sigma)V_{c_1}x_1x_{\sigma^{-1}(2)}\cdots x_{\sigma^{-1}(n)}. \tag{26}$$

Applying (24) to (26), we have

$$V_{c_i}x_ix_2\cdots x_{i-1}x_1x_{i+1}\cdots x_n=\mathrm{sgn}(\sigma)V_{c_i}x_{\sigma^{-1}(i)}x_{\sigma^{-1}(2)}\cdots x_{\sigma^{-1}(i-1)}x_1x_{\sigma^{-1}(i+1)}\cdots x_{\sigma^{-1}(n)}. \tag{27}$$

(27) implies that for any $i$ and any $\sigma\in\mathbf{S}_n$ with $\sigma(i)=i$ we have

$$c_i(x_1,\cdots,x_n)=\mathrm{sgn}(\sigma)\,c_i(x_{\sigma^{-1}(1)},\cdots,x_{\sigma^{-1}(n)}). \tag{28}$$

Obviously, according to (19), (28) is the necessary and sufficient condition for a single payoff function to obey in a skew-symmetric game.

Next, we consider the condition for cross payoffs.

For any $\sigma\in\mathbf{S}_n$, without loss of generality we assume

$$\sigma:=(1,i_1)(1,i_2)\cdots(1,i_t):=\sigma_1\circ\cdots\circ\sigma_t,$$

where $\sigma_j=(1,i_j)$, $2\le i_j\le n$.

Combining (24) with (25) yields

$$V_{c_i}x_1\cdots x_{i-1}x_ix_{i+1}\cdots x_n=-V_{c_{\sigma_t(i)}}x_{\sigma_t^{-1}(1)}x_{\sigma_t^{-1}(2)}\cdots x_{\sigma_t^{-1}(n)}=(-1)^2V_{c_{\sigma_{t-1}(\sigma_t(i))}}x_{\sigma_t^{-1}(\sigma_{t-1}^{-1}(1))}\cdots x_{\sigma_t^{-1}(\sigma_{t-1}^{-1}(n))}=\cdots=(-1)^tV_{c_{\sigma_1(\cdots(\sigma_t(i)))}}x_{\sigma_t^{-1}(\cdots(\sigma_1^{-1}(1)))}\cdots x_{\sigma_t^{-1}(\cdots(\sigma_1^{-1}(n)))}=\mathrm{sgn}(\sigma)V_{c_{\sigma(i)}}x_{\sigma^{-1}(1)}x_{\sigma^{-1}(2)}\cdots x_{\sigma^{-1}(n)}. \tag{29}$$

Clearly, (19) follows from (28) and (29).

###### Remark 23

When $n=2$, (24) degenerates to

$$V_{c_2}=-V_{c_1}W_{[\kappa,\kappa]}, \tag{30}$$

which coincides with (22).

###### Example 24

Consider $G\in\mathcal{G}_{[3;3]}$. From Proposition 21 we have

$$V_{c_1}\delta_3^s\left[I_{3^2}+W_{[3,3]}\right]=0,\quad s=1,2,3.$$

One can easily figure out that

$$V_{c_1}=[0,a_1,a_2,-a_1,0,a_3,-a_2,-a_3,0,\;0,b_1,b_2,-b_1,0,b_3,-b_2,-b_3,0,\;0,c_1,c_2,-c_1,0,c_3,-c_2,-c_3,0],$$

where $a_i,b_i,c_i$, $i=1,2,3$, are arbitrary real numbers. According to Proposition 20, we can calculate that

$$V_{c_2}=[0,-a_1,-a_2,\;0,-b_1,-b_2,\;0,-c_1,-c_2,\;a_1,0,-a_3,\;b_1,0,-b_3,\;c_1,0,-c_3,\;a_2,a_3,0,\;b_2,b_3,0,\;c_2,c_3,0],$$

$$V_{c_3}=[0,0,0,\;a_1,b_1,c_1,\;a_2,b_2,c_2,\;-a_1,-b_1,-c_1,\;0,0,0,\;a_3,b_3,c_3,\;-a_2,-b_2,-c_2,\;-a_3,-b_3,-c_3,\;0,0,0].$$

According to Definition 13, a straightforward verification shows that $G\in\mathcal{K}_{[3;3]}$. As a byproduct, we have $\dim(\mathcal{K}_{[3;3]})=9$.
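The claims of this example are easy to check numerically. The sketch below (helper names are ours) encodes the payoffs as tensors $T_p[i,j,k]=c_p(\delta_3^i,\delta_3^j,\delta_3^k)$, derives $c_2,c_3$ from $c_1$ via the transpositions $(1,2)$ and $(1,3)$, and verifies condition (19) for every $\sigma\in\mathbf{S}_3$ and every profile:

```python
import itertools
import numpy as np

def skew_block(p1, p2, p3):
    # one 3x3 skew-symmetric block of V_{c_1} in Example 24
    return np.array([[0., p1, p2], [-p1, 0., p3], [-p2, -p3, 0.]])

# payoff tensors T_p[i,j,k] = c_p(δ_3^i, δ_3^j, δ_3^k), with arbitrary parameters
T1 = np.stack([skew_block(1, 2, 3), skew_block(4, 5, 6), skew_block(7, 8, 9)])
T2 = -T1.transpose(1, 0, 2)   # c_2(x1,x2,x3) = -c_1(x2,x1,x3), i.e. σ = (1,2)
T3 = -T1.transpose(2, 1, 0)   # c_3(x1,x2,x3) = -c_1(x3,x2,x1), i.e. σ = (1,3)
T = {1: T1, 2: T2, 3: T3}

def sign(perm):
    # sign of a permutation given as a tuple, by counting inversions
    s, p = 1, list(perm)
    for a in range(len(p)):
        for b in range(a + 1, len(p)):
            if p[a] > p[b]:
                s = -s
    return s

# Definition 13: c_i(x) = sign(σ) c_{σ(i)}(x_{σ^{-1}(1)}, ..., x_{σ^{-1}(n)})
for perm in itertools.permutations((1, 2, 3)):
    sigma = {j + 1: perm[j] for j in range(3)}       # σ as a dict
    inv = {v: k for k, v in sigma.items()}           # σ^{-1}
    for p in (1, 2, 3):
        for x in itertools.product(range(3), repeat=3):
            permuted = tuple(x[inv[j] - 1] for j in (1, 2, 3))
            assert T[p][x] == sign(perm) * T[sigma[p]][permuted]
```

Since each payoff tensor carries 3 free parameters per block and none are further constrained, the 9 parameters confirm $\dim(\mathcal{K}_{[3;3]})=9$.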

In the following, we consider the linear representation of $\mathbf{S}_n$ in $\mathcal{G}_{[n;\kappa]}$.

###### Definition 25

ser77 Let $G$ be a group and $V$ a finite-dimensional vector space. A linear representation of $G$ in $V$ is a group homomorphism $\phi:G\to GL(V)$.

Consider a strategy profile $s=(s_1,\cdots,s_n)$ of a game in $\mathcal{G}_{[n;\kappa]}$, where $s_j=\delta_\kappa^{i_j}$. We define two expressions of $s$ as follows:

• STP Form: The STP form of $s$ is expressed as

$$s=\ltimes_{j=1}^{n}\delta_\kappa^{i_j}.$$

• Stacking Form: The strategy stacking form of $s$ is expressed as

$$\vec{s}=\begin{bmatrix}\delta_\kappa^{i_1}\\ \delta_\kappa^{i_2}\\ \vdots\\ \delta_\kappa^{i_n}\end{bmatrix}.$$

Denote

 Φ:=⎡⎢ ⎢ ⎢ ⎢⎣Φ