Adjacency Matrix and Co-occurrence Tensor of General Hypergraphs: Two Well Separated Notions

12/21/2017 ∙ by Xavier Ouvrard, et al. ∙ CERN

Adjacency and co-occurrence are two well separated notions: even if they coincide for graphs, they become two different notions for uniform hypergraphs. After clarifying the difference between the two notions, this paper contributes to the definition of a co-occurrence tensor reflecting the general hypergraph structure. It is a challenging issue that can have many applications if properly achieved, as it will allow the study of the spectra of such general hypergraphs. In most applications only a hypermatrix associated to the tensor is needed. In this article, a novel way of building a symmetric co-occurrence hypermatrix is proposed that also captures the cardinality of the hyperedges and allows full separation of the different layers of the hypergraph.


1 Introduction

Hypergraphs were introduced in 1973 in Berge and Minieka (1973). Hypergraphs have applications in many domains such as VLSI design, biology or collaboration networks. Edges of a graph connect only two vertices, whereas hyperedges of hypergraphs allow multiple vertices to be connected. Recent improvements in tensor spectral theory have made the research on the spectra of hypergraphs more relevant. For studying such spectra a proper definition of the general hypergraph Laplacian tensor is needed; therefore the concept of adjacency has to be clearly defined and, consequently, an e-adjacency tensor - as it will be defined later - must be properly defined.

In Pu (2013) a clear distinction is made between the pairwise relationship, which is a binary relation, and the co-occurrence relationship, which is presented as the extension of the pairwise relationship to a $k$-adic relationship. The notion of co-occurrence is often used for linguistic data as the simultaneous appearance of linguistic units in a reference. The co-occurrence concept can be easily extended to vertices contained in a hyperedge: we designate it in hypergraphs by the term e-adjacency.

Nonetheless it is more than an extension. Graph edges connect vertices in pairs: the graph adjacency concept is clearly a pairwise relationship. At the same time, in an edge only two vertices are linked. Hence, given an edge, only two vertices can be e-adjacent. Thus adjacency and e-adjacency are equivalent in graphs.

Extending the adjacency notion to hypergraphs, two vertices are said to be adjacent if there exists a hyperedge that connects them. Hence the adjacency notion still captures a binary relationship and can be modeled by an adjacency matrix. But e-adjacency is no longer a pairwise relationship: given a hyperedge, more than two vertices can co-occur, since a hyperedge may contain an arbitrary number of vertices. Therefore it is a $k$-adic relationship that has to be captured and modeled by a tensor. Consequently the adjacency matrix of a hypergraph and the e-adjacency tensor are two separated notions. Nonetheless the e-adjacency tensor is often abusively named the adjacency tensor in the literature.

The contributions of this article are: 1. the definition of a proper adjacency concept in general hypergraphs; 2. a process to achieve the transformation of a general hypergraph into a uniform hypergraph, called the uniformization process; 3. the definition of a new (e-)adjacency tensor which not only preserves all the structural information of the hypergraph but also captures separately the information on the hyperedges held in the hypergraph.

After sketching the background and the related works on the adjacency and e-adjacency concepts for hypergraphs in Section 2, a proposal for a new, unnormalized e-adjacency tensor is made in Section 3. Section 4 tackles the particular case of graphs seen as 2-uniform hypergraphs and the link with DNF. Future works and the conclusion are addressed in Section 6. A full example is given in Appendix A.

Notation

Exponents are indicated in parentheses - for instance $A^{(k)}$ - when they refer to the order of the corresponding tensor. Indices are written in parentheses when they refer to a sequence of objects - for instance $a_{(i)(j)}$ is the element at row $i$ and column $j$ of the matrix $A$. The context should make it clear.

For the convenience of readability, $P(z)$ is written for $P(z_1, \ldots, z_n)$. Hence, given a polynomial $P$, $P(z)$ has to be understood as $P(z_1, \ldots, z_n)$.

Given additional variables $y_1, \ldots, y_m$, $P(z, y)$ is written for $P(z_1, \ldots, z_n, y_1, \ldots, y_m)$.

$\mathfrak{S}_k$ is the set of permutations of the set $\{1, \ldots, k\}$.

2 Background and related works

Several definitions of hypergraphs exist and are recalled in Ouvrard and Marchand-Maillet (2018). Hypergraphs allow the preservation of the $k$-adic relationships between vertices, making them the natural model of collaboration networks, co-author networks, chemical reactions, genomes, and all situations where the 2-adic relationships allowed by graphs are not sufficient and where keeping the grouping information is important. Among the existing definitions, the one of Bretto (2013) is recalled:

Definition 1.

An (undirected) hypergraph $\mathcal{H} = (V, E)$ on a finite set of vertices $V = \{v_1, \ldots, v_n\}$ is defined as a family of hyperedges $E = (e_1, \ldots, e_p)$ where each hyperedge $e_i$ is a non-empty subset of $V$.

Let $\mathcal{H} = (V, E)$ be a hypergraph and $w$ a map such that each hyperedge $e$ is mapped to a real number $w(e)$. The hypergraph $\mathcal{H}_w = (V, E, w)$ is said to be a weighted hypergraph.

The 2-section of a hypergraph $\mathcal{H} = (V, E)$ is the graph $[\mathcal{H}]_2 = (V, E')$ such that: $E' = \{\{u, v\} : u \neq v \text{ and } \exists e \in E, \{u, v\} \subseteq e\}$.

Let $k \in \mathbb{N}$. A hypergraph is said to be $k$-uniform if all its hyperedges have the same cardinality $k$.

A directed hypergraph on a finite set of vertices $V$ is defined as a family of hyperedges where each hyperedge contains exactly two non-empty subsets of $V$, one which is called the source - written $e_s$ - and the other one which is the target - written $e_t$.

In this article only undirected hypergraphs will be considered. In a hypergraph a hyperedge links one or more vertices together. Hyperedges play in hypergraphs the role that edges play in graphs.

Definition 2.

Let be a hypergraph.

The degree of a vertex is the number of hyperedges it belongs to. For a vertex $v_i$, it is written $\deg(v_i)$ or $d_i$. It holds: $d_i = |\{e \in E : v_i \in e\}|$.

The incidence matrix of an undirected hypergraph is defined as follows:

Definition 3.

The incidence matrix $H = (h_{ij})$ of a hypergraph $\mathcal{H} = (V, E)$ is the rectangular matrix of size $n \times p$, where $h_{ij} = 1$ if $v_i \in e_j$ and $h_{ij} = 0$ otherwise.

As seen in the introduction, the adjacency in a hypergraph has to be distinguished from the e-adjacency in a hyperedge of a hypergraph.

Definition 4.

Let $\mathcal{H} = (V, E)$ be a hypergraph. Let $u$ and $v$ be two vertices of this hypergraph.

$u$ and $v$ are said to be adjacent if there exists $e \in E$ such that $u \in e$ and $v \in e$.

Definition 5.

Let $\mathcal{H} = (V, E)$ be a hypergraph. Let $k$ be an integer, $k \geqslant 1$. For $i \in \{1, \ldots, k\}$, let $v_i$ be vertices.

Then $v_1$, …, $v_k$ are said to be $k$-adjacent if there exists $e \in E$ such that for all $i \in \{1, \ldots, k\}$, $v_i \in e$.

With $k = 2$, the usual notion of adjacency is retrieved.

If $k$ vertices are $k$-adjacent then each subset of these vertices of size $l \leqslant k$ is $l$-adjacent.
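As an illustration, the following sketch (the tiny hypergraph `E` and the function name `k_adjacent` are made up for the example) checks $k$-adjacency by testing whether some hyperedge contains all the given vertices, and verifies the subset property above:

```python
from itertools import combinations

def k_adjacent(vertices, hyperedges):
    """Vertices are k-adjacent if some hyperedge contains all of them."""
    return any(set(vertices) <= set(e) for e in hyperedges)

E = [{1, 2, 3, 4}, {3, 5}]
# {1, 2, 3, 4} are 4-adjacent, hence every 3-subset is 3-adjacent:
assert k_adjacent({1, 2, 3, 4}, E)
assert all(k_adjacent(sub, E) for sub in combinations({1, 2, 3, 4}, 3))
```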

Definition 6.

Let $\mathcal{H} = (V, E)$ be a hypergraph. Let $e \in E$.

The vertices constituting $e$ are said to be e-adjacent vertices.

If $\mathcal{H}$ is $k$-uniform then $k$-adjacency is equivalent to the e-adjacency of the $k$ vertices in a hyperedge.

For a general hypergraph, vertices that are $k$-adjacent with $k < \max\limits_{e \in E} |e|$ have to co-occur - potentially with other vertices - in one hyperedge. In this case the notions of $k$-adjacency and of e-adjacency are actually distinct.

Adjacency matrix

The adjacency matrix of a hypergraph is related to 2-adjacency. Several approaches have been proposed to define an adjacency matrix for hypergraphs.

In Bretto (2013) the adjacency matrix is defined as:

Definition 7.

The adjacency matrix $A = (a_{ij})$ is the square matrix whose rows and columns are indexed by the vertices of $\mathcal{H}$ and where for all $i \neq j$: $a_{ij} = |\{e \in E : \{v_i, v_j\} \subseteq e\}|$ and $a_{ii} = 0$.

The adjacency matrix is defined in Zhou et al. (2007) as follows:

Definition 8.

Let be a weighted hypergraph.

The adjacency matrix of $\mathcal{H}_w$ is the matrix of size $n \times n$ defined as $A = H W H^{\mathsf{T}} - D_v$,

where $W$ is the diagonal matrix of size $p \times p$ containing the weights of the hyperedges of $\mathcal{H}_w$ and $D_v$ is the diagonal matrix of size $n \times n$ containing the degrees of the vertices of $\mathcal{H}_w$, where for all $i$: $d_i = \sum\limits_{e \in E : v_i \in e} w(e)$.

This last definition is equivalent to the one of Bretto for unweighted hypergraphs - i.e. weighted hypergraphs where the weight of all hyperedges is 1.
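This equivalence can be checked numerically on a toy example (the incidence matrix below is made up for illustration): with all weights equal to 1, Zhou et al.'s matrix $H W H^{\mathsf{T}} - D_v$ coincides with Bretto's entry-wise count of shared hyperedges:

```python
import numpy as np

# Unweighted hypergraph on 4 vertices: e1 = {v1, v2, v3}, e2 = {v2, v3}, e3 = {v3, v4}.
# Incidence matrix H: rows = vertices, columns = hyperedges (Definition 3).
H = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 1, 1],
              [0, 0, 1]])

W = np.eye(3)                  # all hyperedge weights equal to 1
D_v = np.diag(H.sum(axis=1))   # diagonal matrix of vertex degrees

A = H @ W @ H.T - D_v          # adjacency matrix in the sense of Zhou et al.

# Bretto's definition computed directly: a_ij = number of hyperedges
# containing both v_i and v_j, with a zero diagonal.
n, p = H.shape
B = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            B[i, j] = sum(H[i, l] * H[j, l] for l in range(p))

assert np.array_equal(A, B)    # the two definitions coincide when weights are 1
```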

The problem of the matrix approach is that the multi-adic relationship is no longer kept, as an adjacency matrix can link only pairs of vertices. Somehow it doesn't preserve the structure of the hypergraph: the hypergraph is expanded into its 2-section and the information is captured this way.

Following a lemma cited in Dewar et al. (2016), it can be formulated:

Lemma 1.

Let $\mathcal{H}$ be a hypergraph and let $[\mathcal{H}]_2$ be its 2-section. If two vertices $u$ and $v$ are adjacent in $\mathcal{H}$ then they are adjacent in the 2-section $[\mathcal{H}]_2$.

The converse doesn't hold, as it would imply an isomorphism between $\mathcal{H}$ and its 2-section $[\mathcal{H}]_2$.

Moving to the approach by e-adjacency will allow keeping the information on the structure that is held in the hypergraph.

e-adjacency tensor

In Michoel and Nachtergaele (2012) an unnormalized version of the adjacency tensor of a $k$-uniform hypergraph is given. This definition is also adopted in Ghoshdastidar and Dukkipati (2017).

Definition 9.

The unnormalized (e-)adjacency tensor $A = (a_{i_1 \ldots i_k})$ of a $k$-uniform hypergraph on a finite set of vertices $V$ and a family of hyperedges of equal cardinality $k$ is the tensor such that: $a_{i_1 \ldots i_k} = 1$ if $\{v_{i_1}, \ldots, v_{i_k}\} \in E$, and $a_{i_1 \ldots i_k} = 0$ otherwise.

In Cooper and Dutle (2012) a slightly different version exists for the definition of the adjacency tensor, called the degree normalized (e-)adjacency tensor.

Definition 10.

The degree normalized (e-)adjacency tensor $A = (a_{i_1 \ldots i_k})$ of a $k$-uniform hypergraph on a finite set of vertices $V$ and a family of hyperedges of equal cardinality $k$ is the tensor such that: $a_{i_1 \ldots i_k} = \dfrac{1}{(k-1)!}$ if $\{v_{i_1}, \ldots, v_{i_k}\} \in E$, and $a_{i_1 \ldots i_k} = 0$ otherwise.

By introducing the coefficient $\frac{1}{(k-1)!}$, this definition allows retrieving the degree of a vertex $v_i$ by summing the elements of first index $i$ on the first mode of the tensor. It will also be called the degree normalized adjacency tensor.

Proposition 1.

Let $\mathcal{H}$ be a $k$-uniform hypergraph. Let $v_i$ be a vertex. It holds, by considering the degree normalized adjacency tensor $A$: $d_i = \sum\limits_{i_2, \ldots, i_k} a_{i\, i_2 \ldots i_k}$.

Proof.

On the first mode of the degree normalized adjacency tensor, for a given vertex $v_i$ that occurs in a hyperedge $e$, the nonzero elements $a_{i\, i_2 \ldots i_k}$ are those where $\{v_i, v_{i_2}, \ldots, v_{i_k}\} = e$; they exist in quantity $(k-1)!$ in the first mode. Hence their sum for this hyperedge is $(k-1)! \times \frac{1}{(k-1)!} = 1$.

Summing over all hyperedges where $v_i$ is an element then retrieves the degree of $v_i$.
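A minimal numerical sketch of Proposition 1 (the 3-uniform hypergraph below is made up for illustration): building the degree normalized tensor and summing each first-mode slice retrieves the vertex degrees:

```python
import numpy as np
from itertools import permutations
from math import factorial

edges = [(0, 1, 2), (0, 2, 3)]   # a 3-uniform hypergraph on 4 vertices
n, k = 4, 3

# Degree normalized tensor: 1/(k-1)! on every permutation of each hyperedge.
A = np.zeros((n,) * k)
for e in edges:
    for idx in permutations(e):
        A[idx] = 1.0 / factorial(k - 1)

# Summing the slice of first index i yields deg(v_i): each hyperedge
# containing v_i contributes (k-1)! entries of value 1/(k-1)!.
degrees = A.sum(axis=(1, 2))
assert degrees.tolist() == [2.0, 1.0, 2.0, 1.0]
```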

This definition could be interpreted as the definition of the e-adjacency tensor for a uniform hypergraph, since the notions of $k$-adjacency and e-adjacency are equivalent in a $k$-uniform hypergraph.

In Hu (2013) a full study of the spectra of a uniform hypergraph using the Laplacian tensor is given. The definition of the Laplacian tensor is linked to the existence and definition of the normalized (e-)adjacency tensor.

Definition 11.

The (eigenvalues) normalized (e-)adjacency tensor $\mathcal{A} = (a_{i_1 \ldots i_k})$ of a $k$-uniform hypergraph on a finite set of vertices $V$ and a family of hyperedges of equal cardinality $k$ is the tensor such that: $a_{i_1 \ldots i_k} = \dfrac{1}{(k-1)!} \prod\limits_{j=1}^{k} \dfrac{1}{\sqrt[k]{d_{i_j}}}$ if $\{v_{i_1}, \ldots, v_{i_k}\} \in E$, and 0 otherwise.

The normalization is motivated by the aim of bounding the different eigenvalues of the tensor.

The normalized Laplacian tensor is given in the following definition.

Definition 12.

The normalized Laplacian tensor of a $k$-uniform hypergraph on a finite set of vertices $V$ and a family of hyperedges of equal cardinality $k$ is the tensor $\mathcal{L} = \mathcal{D} - \mathcal{A}$, where $\mathcal{A}$ is the normalized adjacency tensor and $\mathcal{D}$ is the $k$-th order $n$-dimensional diagonal tensor whose $i$-th diagonal element is 1 if $d_i > 0$ and 0 otherwise.

In Banerjee et al. (2017) the definition is extended to general hypergraphs.

Definition 13.

Let $\mathcal{H} = (V, E)$ be a hypergraph on a finite set of vertices $V$ and a family of hyperedges $E$. Let $k_{\max}$ be the maximum cardinality of the hyperedges of the family.

The adjacency hypermatrix of $\mathcal{H}$, written $\mathcal{A} = (a_{i_1 \ldots i_{k_{\max}}})$, is such that for a hyperedge $e = \{v_{l_1}, \ldots, v_{l_s}\}$ of cardinality $s$:

$a_{i_1 \ldots i_{k_{\max}}} = \dfrac{s}{\alpha}$ with $\alpha = \sum\limits_{\substack{k_1 + \cdots + k_s = k_{\max} \\ k_j \geqslant 1}} \dbinom{k_{\max}}{k_1, \ldots, k_s}$,

with $i_1$, …, $i_{k_{\max}}$ chosen in all possible ways from $\{l_1, \ldots, l_s\}$ with at least one occurrence of each element of $e$.

The other positions of the hypermatrix are zero.

The first problem in this case is that the notion of $k$-adjacency, as mentioned earlier, is not the most appropriate one for a general hypergraph, where the notion of e-adjacency is much stronger. The approach in Shao (2013) and Pearson and Zhang (2014) consists in retrieving the classical adjacency matrix in the case where the hypergraph is 2-uniform - i.e. is a graph - by keeping the vertex degrees invariant: therefore the degree of each vertex can be retrieved by summing on the first mode of the tensor.

In Hu (2013) the focus is on the spectra of the tensors obtained: the normalization is done to keep the eigenvalues of the tensor bounded. Extending this approach to general hypergraphs, Banerjee et al. (2017) spreads the information of lower cardinality hyperedges inside the tensor. This approach focuses on the spectra of the hypermatrix built. The e-adjacency cubical hypermatrix of order $k_{\max}$ is kept at a dimension equal to the number of vertices, at the price of splitting elements. Practically it could be hard to use, as the number of elements to be described for just one hyperedge can explode. Indeed for each hyperedge of cardinality $s$ the partitions of $k_{\max}$ in $s$ parts have to be computed.

The number $p(m, s)$ of partitions of an integer $m$ in $s$ parts is given by the recurrence formula: $p(m, s) = p(m - 1, s - 1) + p(m - s, s)$.

This formula is obtained by considering the disjunctive cases for splitting $m$ in $s$ parts:

  • either the smallest part is equal to 1, and then $m - 1$ has to be divided in $s - 1$ parts;

  • or (exclusive) the parts are all at least equal to 2, and then, subtracting 1 from each part, $m - s$ has to be divided in $s$ parts.

First values of the number of partitions are given in Table 1.

m\s 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25
1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
3 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
4 1 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
5 1 2 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 1 3 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
7 1 3 4 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
8 1 4 5 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
9 1 4 7 6 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
10 1 5 8 9 7 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11 1 5 10 11 10 7 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0
12 1 6 12 15 13 11 7 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0
13 1 6 14 18 18 14 11 7 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0 0
14 1 7 16 23 23 20 15 11 7 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0 0
15 1 7 19 27 30 26 21 15 11 7 5 3 2 1 1 0 0 0 0 0 0 0 0 0 0
16 1 8 21 34 37 35 28 22 15 11 7 5 3 2 1 1 0 0 0 0 0 0 0 0 0
17 1 8 24 39 47 44 38 29 22 15 11 7 5 3 2 1 1 0 0 0 0 0 0 0 0
18 1 9 27 47 57 58 49 40 30 22 15 11 7 5 3 2 1 1 0 0 0 0 0 0 0
19 1 9 30 54 70 71 65 52 41 30 22 15 11 7 5 3 2 1 1 0 0 0 0 0 0
20 1 10 33 64 84 90 82 70 54 42 30 22 15 11 7 5 3 2 1 1 0 0 0 0 0
21 1 10 37 72 101 110 105 89 73 55 42 30 22 15 11 7 5 3 2 1 1 0 0 0 0
22 1 11 40 84 119 136 131 116 94 75 56 42 30 22 15 11 7 5 3 2 1 1 0 0 0
23 1 11 44 94 141 163 164 146 123 97 76 56 42 30 22 15 11 7 5 3 2 1 1 0 0
24 1 12 48 108 164 199 201 186 157 128 99 77 56 42 30 22 15 11 7 5 3 2 1 1 0
25 1 12 52 120 192 235 248 230 201 164 131 100 77 56 42 30 22 15 11 7 5 3 2 1 1
Table 1: Number of partitions of an integer $m$ in $s$ parts
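The recurrence above can be sketched as follows; the function name `p` and the memoisation are illustrative choices, and the values can be checked against Table 1:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(m, s):
    """Number of partitions of the integer m into exactly s parts."""
    if s <= 0 or m < s:
        return 0
    if m == s:          # only 1 + 1 + ... + 1
        return 1
    # Either the smallest part equals 1 (partition m - 1 into s - 1 parts),
    # or every part is at least 2 (subtract 1 from each part: m - s into s parts).
    return p(m - 1, s - 1) + p(m - s, s)

# A few values from Table 1:
assert p(10, 4) == 9
assert p(25, 3) == 52
```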

This number of partitions gives the number of elements to be specified for a single hyperedge in Banerjee's hypermatrix, as they cannot all be obtained directly by permutation. This number varies depending on the cardinality of the hyperedge to be represented, and the variation is not a monotonic function of the size $s$.

The number of elements to be filled in the hypermatrix for a single hyperedge of size $s$, given a maximal cardinality $m$ of Banerjee's adjacency tensor, is given in Table 2.

s\m 5 10 15 20 25 30 35 40
1 1 1 1 1 1 1 1 1
2 30 896 32766 956196 33554430 996183062 34359738366 1030588363364
3 50 23640 6357626 1553222032 382505554925 94743186241770 22960759799383757 5611412540548420920
4 20 100970 135650970 149440600896 158221556736195 164769169326140215 170721045139376180665 176232934305968169141592
5 1 125475 745907890 2826175201275 9506452442642751 30773997163632534765 98200286674992772689630 311409618017926342757598795
6 x 61404 1522977456 17420742448158 158199194804672560 1322183126915502403463 10690725777258446036242741 85180421514142371562050204468
7 x 14280 1425364941 46096037018576 1024206402004025515 19673349126500416962615 354878263731993584768297882 6217590037131694711658104802268
8 x 1500 702714870 61505129881418 3154352367940801390 129129229794015955874175 4769303064589903155918576810 167503457011878955780131372020240
9 x 90 201328985 46422598935960 5267776889834101885 437004824231068745652585 31134364616525428333788664160 2051990575671846572076732402739560
10 x 1 35145110 21559064120035 5237969253953146975 848748719343315752120887 111787775515270562752918708505 13174986533143342163734795019830855
11 x x 3709706 6508114071602 3332426908789146245 1023444669605845490919630 241305539520076885874877723856 49059583248616094623568196287767720
12 x x 242970 1320978392032 1430090837664465640 814611609439944701336120 334883841129942857103836783480 114204835945488341535343378586826510
13 x x 9100 184253421690 429168957710189920 448888886709990497395170 315061943784480485752922317100 176097407919167018972821102617824800
14 x x 210 17758229920 92361393090110900 177434686702809581360280 209636307340035341769456805590 188390878586504393731248560781565540
15 x x 1 1182354420 14515221518630650 51629112999502425355050 101972261667580282621340734042 145207225656117240323230829098848300
16 x x x 53422908 1686842411440120 11274940758810423952590 37193647457294620660325206920 83124043946911069759380261652009018
17 x x x 1637610 145857986021220 1875745279587180337830 10373941738039097562798529130 36202281770971401316508548887148260
18 x x x 31350 9387370139400 240458041631247630090 2247098355408068243367808830 12227164493902961371079076114591450
19 x x x 380 446563570200 23950282001673975675 382710033315178514982029070 3252386812566620163782349432515670
20 x x x 1 15571428950 1862767268307916425 51758773473472067323039950 690009783002559481444810135863737
21 x x x x 390169010 113301447816411855 5602215923984438576703270 117978632939681392614390018854490
22 x x x x 6932200 5375646410875455 488160287033902614520290 16396955289494938961248184877710
23 x x x x 80500 197788491523350 34380160285907377001220 1865425003253790074111730106860
24 x x x x 600 5587457302050 1960619958296697461400 174704650201012418163972506640
25 x x x x 1 119813107050 90483896754284001150 13528775872638975527061789150
26 x x x x x 1909271637 3368998127887283892 868981935345151947003947262
27 x x x x x 22143240 100617182607307212 46381804383191991754075704
28 x x x x x 172550 2391172870380140 2057782621039570457724152
29 x x x x x 870 44721107569820 75781801182259804328840
30 x x x x x 1 649591878320 2309066362145733662940
31 x x x x x x 7166900664 57915248685968404016
32 x x x x x x 58538480 1187293166698640716
33 x x x x x x 327250 19717915340636370
34 x x x x x x 1190 262203877675610
35 x x x x x x 1 2751867046110
36 x x x x x x x 22273515966
37 x x x x x x x 135074420
38 x x x x x x x 568100
39 x x x x x x x 1560
40 x x x x x x x 1
Table 2: Number of elements to be filled for a hyperedge of size $s$ given maximal cardinality $m$

In this article, the proposed method to elaborate an e-adjacency tensor focuses on the interpretability of the construction: a uniformization process is proposed, in which a general hypergraph is transformed into a uniform hypergraph by adding elements to it. The strong link made with homogeneous polynomials reinforces the choice made and allows retrieving a proper hypermatrix of a uniform hypergraph at the end of the process. The additional vertices help to capture not solely the e-adjacency but also give the ability to hold the $k$-adjacency at whatever level it occurs.

The approach is based on the homogeneisation of sums of polynomials of different degrees and on considering a family of uniform hypergraphs. It is also motivated by the fact that the information on the cardinality of the hyperedges has to be kept in some way and that the elements of the different layers of the hypergraph should not be mixed.

3 Towards an e-adjacency tensor of a general hypergraph

To build an e-adjacency tensor for a general hypergraph, we need a way to store elements which represent the hyperedges. As these hyperedges have different cardinalities, the representation of the e-adjacency of vertices in a unique tensor can be achieved only by filling the hyperedges with additional elements. The problem of finding an e-adjacency tensor of a general hypergraph is then transformed into a uniformization problem.

This uniformisation process should be at least interpretable in terms of uniform hypergraphs. It should capture the structural information of the hypergraph, which includes information on the number of hyperedges, the degrees of vertices and additional information on the profile of the hypergraph.

We propose a framework based on homogeneous polynomials that are iteratively summed, weighted by technical coefficients, and homogeneised. This uniformisation process allows the construction of a weighted uniform hypergraph. The technical coefficients are adjusted so that the handshake lemma holds in the built uniform hypergraph.

3.1 Family of tensors attached to a hypergraph

Let $\mathcal{H} = (V, E)$ be a hypergraph. A hypergraph can be decomposed into a family of uniform hypergraphs. To achieve it, let $\mathcal{R}$ be the equivalence relation on $E$: $e_1 \,\mathcal{R}\, e_2 \Leftrightarrow |e_1| = |e_2|$.

$E / \mathcal{R}$ is the set of classes of hyperedges of same cardinality. The elements of $E / \mathcal{R}$ are the sets: $E_k = \{e \in E : |e| = k\}$.

Let $k_{\max} = \max\limits_{e \in E} |e|$, called the range of the hypergraph.

Considering $k \in \{1, \ldots, k_{\max}\}$, it is set $E_k = \emptyset$ for every $k$ that corresponds to no class of $E / \mathcal{R}$.

Let us consider the hypergraphs $\mathcal{H}_k = (V, E_k)$ for all $k \in \{1, \ldots, k_{\max}\}$, which are all $k$-uniform.

It holds: $E = \bigcup\limits_{k=1}^{k_{\max}} E_k$ and $E_k \cap E_l = \emptyset$ for all $k \neq l$; hence $(E_k)_{1 \leqslant k \leqslant k_{\max}}$ forms a partition of $E$ which is unique by the way it has been defined.

Before going forward the sum of two hypergraphs has to be defined:

Definition 14.

Let $\mathcal{H}_1 = (V_1, E_1)$ and $\mathcal{H}_2 = (V_2, E_2)$ be two hypergraphs. The sum of these two hypergraphs is the hypergraph written $\mathcal{H}_1 + \mathcal{H}_2$ defined as: $\mathcal{H}_1 + \mathcal{H}_2 = (V_1 \cup V_2, E_1 \cup E_2)$.

This sum is said to be direct if $E_1 \cap E_2 = \emptyset$. In this case the sum is written $\mathcal{H}_1 \oplus \mathcal{H}_2$.

Hence: $\mathcal{H} = \bigoplus\limits_{k=1}^{k_{\max}} \mathcal{H}_k$.

The hypergraph $\mathcal{H}$ is said to be decomposed into the family of hypergraphs $(\mathcal{H}_k)_{1 \leqslant k \leqslant k_{\max}}$ where $\mathcal{H}_k$ is $k$-uniform.

An illustration of the decomposition of a hypergraph into a family of uniform hypergraphs is shown in Figure 1. This family of uniform hypergraphs decomposes the original hypergraph into layers. A layer holds a $k$-uniform hypergraph ($1 \leqslant k \leqslant k_{\max}$): therefore the layer is said to be of level $k$.

Figure 1: Illustration of a hypergraph decomposed in three layers of uniform hypergraphs
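The layer decomposition can be sketched as follows (the hypergraph `E` and the helper name `decompose_layers` are made up for illustration): hyperedges are simply grouped by cardinality:

```python
from collections import defaultdict

def decompose_layers(hyperedges):
    """Group the hyperedges of a hypergraph by cardinality k:
    each group is the hyperedge set of one k-uniform layer."""
    layers = defaultdict(list)
    for e in hyperedges:
        layers[len(e)].append(e)
    return dict(layers)

# A hypergraph with two hyperedges of cardinality 2 and two of cardinality 3:
E = [{1, 2}, {2, 3}, {1, 3, 4}, {2, 4, 5}]
layers = decompose_layers(E)   # one entry per layer level
```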

Therefore, to each $k$-uniform hypergraph $\mathcal{H}_k$ can be mapped a ($k$-adjacency) e-adjacency tensor $A_k$ of order $k$, which is hypercubic and symmetric, of dimension $n$. This tensor can be unnormalized or normalized.

Choosing one type of tensor - normalized or unnormalized - for the whole family, the hypergraph $\mathcal{H}$ is then fully described by the family of e-adjacency tensors $(A_k)_{1 \leqslant k \leqslant k_{\max}}$. In the case where all the $A_k$ are chosen normalized, this family is said to be pre-normalized. The final choice will be made further in Sub-Section 3.7 and explained so as to fulfill the expectations listed in the next Sub-Section.

3.2 Expectations for an e-adjacency tensor for a general hypergraph

The definition of the family of tensors attached to a general hypergraph has the advantage of opening the way to the spectral theory of uniform hypergraphs, which is quite advanced.

Nonetheless many problems remain when keeping a family of tensors of different orders: studying the spectra of the whole hypergraph could be hard to achieve by this means. Hence it is necessary to get an e-adjacency tensor which covers the whole hypergraph and which retains the information on the whole structure.

The idea behind is to “fill” the hyperedges with sufficient elements such that the general hypergraph is transformed into a uniform hypergraph through a uniformisation process. A similar approach has been taken in Banerjee et al. (2017), where the filling elements are the vertices belonging to the hyperedge itself. In the next subsections the approach taken will be justified via homogeneous polynomials. Before getting to the construction, the expected properties of such a tensor have to be listed.

Expectation 1.

The tensor should be symmetric and its generation should be simple.

This expectation emphasizes the fact that, between two candidate e-adjacency tensors, the one that can be more easily generated has to be chosen: it includes the fact that the tensor has to be described in a simple way.

Expectation 2.

The tensor should be invariant under vertex permutation, either globally or at least locally.

This expectation is motivated by the fact that in a hyperedge the vertices have no order. The expectation can be only local because the added special vertices will not have the same status as the ones from the original hypergraph. Hence the invariance by permutation is expected at least on the vertices of the original hypergraph.

Expectation 3.

The e-adjacency tensor should allow the retrieval of the hypergraph it is originated from.

This expectation is important in order to rebuild the original hypergraph properly from the e-adjacency tensor: all the information necessary to retrieve the original hyperedges has to be encoded in the tensor.

Expectation 4.

Given the choice between two representations, the sparser e-adjacency tensor should be chosen.

Sparsity allows compressing the information and thus saving space and computation. Sparsity is also a desirable property for statistical reasons, as shown in Nikolova (2000), or expected in Bruckstein et al. (2009) for signal processing and image encoding.

Expectation 5.

The e-adjacency tensor should allow the retrieval of the vertex degrees.

In the adjacency matrix of a graph the information on the degrees of the vertices is encoded directly. It is still the case, as has been seen, with the degree normalised (e-)adjacency tensor as considered in Shao (2013) and Pearson and Zhang (2014).

3.3 Tensors family and homogeneous polynomials family

To construct a homogeneous polynomial representing a general hypergraph, the family of e-adjacency tensors obtained in the previous Subsection is mapped to a family of homogeneous polynomials. This mapping is used in Comon et al. (2015), where the authors link symmetric tensors of order $k$ and homogeneous polynomials of degree $k$ to show that the problem of the CP decomposition of different symmetric tensors of different orders and the decoupled representation of multivariate polynomial maps are related.

3.3.1 Homogeneous polynomials family of a hypergraph

Let $\mathbb{K}$ be a field. Here $\mathbb{K} = \mathbb{R}$.

Let $A$ be a cubical tensor of order $k$ and dimension $n$ with values in $\mathbb{K}$.

Definition 15.

Let us define the Segre outer product $u \otimes v$ of $u \in \mathbb{K}^m$ and $v \in \mathbb{K}^n$ as the tensor of coefficients: $(u \otimes v)_{ij} = u_i v_j$.

More generally, as given in Comon et al. (2008), the outer product of vectors $u_1$, …, $u_k$ is defined as: $(u_1 \otimes \cdots \otimes u_k)_{i_1 \ldots i_k} = u_{1\,i_1} \cdots u_{k\,i_k}$.

Let $(e_i)$, $1 \leqslant i \leqslant n$, be the canonical basis of $\mathbb{K}^n$.

$(e_{i_1} \otimes \cdots \otimes e_{i_k})_{1 \leqslant i_1, \ldots, i_k \leqslant n}$ is a basis of $(\mathbb{K}^n)^{\otimes k}$.

Then $A$ can be written as: $A = \sum\limits_{1 \leqslant i_1, \ldots, i_k \leqslant n} a_{i_1 \ldots i_k}\, e_{i_1} \otimes \cdots \otimes e_{i_k}$.

The notation $[A]$ will be used for the corresponding hypermatrix of coefficients $(a_{i_1 \ldots i_k})$, where $1 \leqslant i_1, \ldots, i_k \leqslant n$.

Let $z = z_i e_i \in \mathbb{K}^n$, using the Einstein summation convention.

In Lim (2013) a multilinear matrix multiplication is defined as follows:

Definition 16.

Let $A$ be a tensor of order $k$ and dimension $n$, and $M_j \in \mathbb{K}^{m_j \times n}$ for $1 \leqslant j \leqslant k$.

$(M_1, \ldots, M_k) \cdot A$ is the multilinear matrix multiplication, defined as the tensor of coefficients:

$b_{j_1 \ldots j_k} = m_{1\,j_1 i_1} \cdots m_{k\,j_k i_k}\, a_{i_1 \ldots i_k}$

for $1 \leqslant j_l \leqslant m_l$ with $1 \leqslant l \leqslant k$, using the Einstein convention on the indices $i_1, \ldots, i_k$.

Afterwards only vectors are needed and $A$ is cubical of order $k$ and dimension $n$. Writing $z = (z_1, \ldots, z_n)^{\mathsf{T}}$,

$(z^{\mathsf{T}}, \ldots, z^{\mathsf{T}}) \cdot A$ contains only one element, written $P(z)$:

$P(z) = a_{i_1 \ldots i_k} z_{i_1} \cdots z_{i_k}$. (1)

Therefore, considering a hypergraph $\mathcal{H}$ with its family of unnormalized tensors $(A_k)_{1 \leqslant k \leqslant k_{\max}}$, a family of homogeneous polynomials $(P_k)_{1 \leqslant k \leqslant k_{\max}}$ with $P_k(z) = (z^{\mathsf{T}}, \ldots, z^{\mathsf{T}}) \cdot A_k$ can also be attached to it.

The formulation of $P_k$ can be reduced by taking into account that $A_k$ is symmetric for a uniform hypergraph:

$P_k(z) = k! \sum\limits_{i_1 < \cdots < i_k} a_{i_1 \ldots i_k} z_{i_1} \cdots z_{i_k}$. (2)

Writing:

$Q_k(z) = \sum\limits_{i_1 < \cdots < i_k} a_{i_1 \ldots i_k} z_{i_1} \cdots z_{i_k}$ (3)

the reduced form of $P_k$, it holds: $P_k(z) = k!\, Q_k(z)$.

Writing, for a hyperedge $e = \{v_{j_1}, \ldots, v_{j_k}\} \in E_k$:

$m_e(z) = z_{j_1} \cdots z_{j_k}$

and recalling that, for the unnormalized tensor, $a_{j_1 \ldots j_k} = 1$ exactly when $\{v_{j_1}, \ldots, v_{j_k}\} \in E_k$, it holds:

$Q_k(z) = \sum\limits_{e \in E_k} m_e(z)$ (4)

and:

$P_k(z) = k! \sum\limits_{e \in E_k} m_e(z)$. (5)
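As a small consistency sketch (the edges, the dimension and the vector $z$ are made up for illustration), the full polynomial evaluated through the multilinear product can be compared with its reduced form summed over hyperedges:

```python
import numpy as np
from itertools import permutations
from math import factorial, prod

edges = [(0, 1, 2), (1, 2, 3)]   # a 3-uniform hypergraph on 4 vertices
n, k = 4, 3

# Unnormalized e-adjacency tensor: 1 on every permutation of each hyperedge.
A = np.zeros((n,) * k)
for e in edges:
    for idx in permutations(e):
        A[idx] = 1.0

z = np.array([1.0, 2.0, 3.0, 0.5])

# Full homogeneous polynomial: P(z) = a_{i1 i2 i3} z_{i1} z_{i2} z_{i3}.
P = np.einsum('ijk,i,j,k->', A, z, z, z)

# Reduced form: k! times the sum of the monomials of the hyperedges.
P_reduced = factorial(k) * sum(prod(z[i] for i in e) for e in edges)

assert np.isclose(P, P_reduced)
```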

3.3.2 Reversibility of the process

Reciprocally, given a homogeneous polynomial of degree $k$, a unique hypercubic tensor of order $k$ can be built: its dimension is the number of different variables in the homogeneous polynomial. If the homogeneous polynomial of degree $k$ is supposed reduced and ordered, then only one hypercubic and symmetric hypermatrix can be built. It reflects uniquely a $k$-uniform hypergraph when adding the constraint that each monomial is composed of the product of $k$ different variables.

Proposition 2.

Let $P$ be a homogeneous polynomial of degree $k$ where:

  • $P(z) = \sum\limits_{i_1 < \cdots < i_k} c_{i_1 \ldots i_k} z_{i_1} \cdots z_{i_k}$;

  • for all $i_1 < \cdots < i_k$: $c_{i_1 \ldots i_k} \geqslant 0$;

  • and such that each monomial is the product of $k$ different variables.

Then $P$ is the homogeneous polynomial attached to a unique $k$-uniform hypergraph - up to the indexing of vertices.

Proof.

Consider the vertices labelled by the indices of the variables of $P$.

For each monomial $c_{i_1 \ldots i_k} z_{i_1} \cdots z_{i_k}$ with $c_{i_1 \ldots i_k} \neq 0$, a unique hyperedge corresponding to the vertices $v_{i_1}, \ldots, v_{i_k}$ is attached, which has weight $c_{i_1 \ldots i_k}$.

3.4 Uniformisation and homogeneisation processes

A single tensor is always easier to use than a family of tensors; the same applies to homogeneous polynomials. Building a single tensor from tensors of different orders requires filling in the “gaps”; summing homogeneous polynomials of varying degrees always gives a new polynomial, but most frequently this polynomial is no longer homogeneous. Homogeneisation techniques for polynomials are well known and require additional variables.

Different homogeneisation processes can be envisaged to get a homogeneous polynomial that represents a single cubical and symmetric tensor, by making different choices on the variables added in the homogeneisation phase of the polynomial. As a link has been made between the variables and the vertices of the hypergraph, we want this link to continue to hold during the homogeneisation of the polynomial, as each term of the reduced polynomial corresponds to a unique hyperedge in the original hypergraph; the homogeneisation process is then interpretable in terms of a uniformisation process of the original hypergraph: hypergraph uniformisation process and polynomial homogeneisation process are the two sides of the same coin.

So far, we have separated the original hypergraph into layers of $k$-uniform hypergraphs of increasing $k$, whose direct sum retrieves the original hypergraph.

Each $k$-uniform hypergraph can be represented by a symmetric and cubical tensor. This symmetric and cubical tensor is mapped to a homogeneous polynomial. The reduced homogeneous polynomial is interpretable, if we omit the coefficients of each term, as a disjunctive normal form. Each term of the homogeneous polynomial is a conjunctive form which corresponds to the simultaneous presence of vertices in a hyperedge: adding all the layers allows retrieving the original hypergraph; adding the different homogeneous polynomials allows retrieving the disjunctive normal form associated with the original hypergraph.

In the hypergraph uniformisation process, iterative steps are done starting from the lower layers up to the upper layers of the hypergraph. In parallel, the polynomial homogeneisation process is the algebraic justification of the hypergraph uniformisation process. It allows retrieving a polynomial attached to the uniform hypergraph built at each step, and hence a tensor.

3.4.1 Hypergraph uniformisation process

We can describe the hypergraph uniformisation process algorithmically: it transforms the original hypergraph into a uniform hypergraph.

Initialisation

The initialisation requires that each layer hypergraph is associated to a weighted hypergraph.

To each uniform hypergraph $\mathcal{H}_k$, we associate a weighted hypergraph $\mathcal{H}_{w,k} = (V, E_k, w_k)$, with: $w_k(e) = c_k$ for all $e \in E_k$.

The coefficients $c_k$ are technical coefficients that will be chosen when considering the homogeneisation process and the fulfillment of the expectations of the e-adjacency tensor. The coefficients can be seen as dilatation coefficients dependent only on the layer of the original hypergraph.

We initialise the iteration with the weighted hypergraph of the lowest layer, and generate pairwise distinct vertices $y_1, \ldots, y_{k_{\max}-1}$ that are not in $V$.

Iterative steps

Each step in the hypergraph uniformisation process includes three phases: an inflation phase, a merging phase and a concluding phase.

Inflation phase:

The inflation phase consists in increasing the cardinality of each hyperedge obtained from the hypergraph built at the former step, to reach the cardinality of the hyperedges of the second hypergraph used in the merging phase.

Definition 17.

The $y$-vertex-augmented hypergraph of a weighted hypergraph $\mathcal{H}_w = (V, E, w)$ is the weighted hypergraph $\hat{\mathcal{H}}_w = (\hat{V}, \hat{E}, \hat{w})$ obtained by the following rules:

  • $\hat{V} = V \cup \{y\}$, where $y \notin V$;

  • $\hat{E} = \{e \cup \{y\} : e \in E\}$;

  • Writing $\phi$ the map such that for $e \in E$: $\phi(e) = e \cup \{y\}$, it holds:

    • $\phi$ is a one-to-one map from $E$ onto $\hat{E}$;

    • $\hat{w}(\phi(e)) = w(e)$, $\forall e \in E$.

Proposition 3.

The vertex-augmented hypergraph of a $k$-uniform hypergraph is a $(k+1)$-uniform hypergraph.

The inflation phase at step $k$ generates from the current hypergraph its $y_k$-vertex-augmented hypergraph.

As the current hypergraph is $k$-uniform at step $k$, its augmented hypergraph is $(k+1)$-uniform.

Merging phase:

The merging phase generates the sum of two weighted hypergraphs called the merged hypergraph.

Definition 18.

The merged hypergraph of two weighted hypergraphs $\mathcal{H}_{w,1} = (V_1, E_1, w_1)$ and $\mathcal{H}_{w,2} = (V_2, E_2, w_2)$ with $E_1 \cap E_2 = \emptyset$ is the weighted hypergraph $\mathcal{H}_w = (V, E, w)$ defined as follows:

  • $V = V_1 \cup V_2$ and $E = E_1 \cup E_2$;

  • $w(e) = w_1(e)$ if $e \in E_1$ and $w(e) = w_2(e)$ if $e \in E_2$.

The merging phase at step $k$ generates the merged hypergraph from the inflated hypergraph and the weighted hypergraph of the next layer. As it is generated from two $(k+1)$-uniform hypergraphs, it is also a $(k+1)$-uniform hypergraph.

Step ending phase:

If $k + 1$ equals $k_{\max}$, the iterative part ends and returns the merged hypergraph.

Otherwise a next step is needed, taking the merged hypergraph as the input of the next inflation phase.

Termination:

We obtain by this algorithm a weighted $k_{\max}$-uniform hypergraph associated to $\mathcal{H}$, which is the hypergraph returned from the iterative part: we write it $\bar{\mathcal{H}}$.

Definition 19.

Writing $\bar{\mathcal{H}}$ the hypergraph obtained by the algorithm,

$\bar{\mathcal{H}}$ is called the layered uniform hypergraph of $\mathcal{H}$.

Proposition 4.

Let $\mathcal{H} = (V, E)$ be a hypergraph of range $k_{\max}$.

Let $\bar{\mathcal{H}} = (\bar{V}, \bar{E})$ be the layered uniform hypergraph of $\mathcal{H}$. Then:

  • $\left\{V, \{y_1, \ldots, y_{k_{\max}-1}\}\right\}$ is a partition of $\bar{V}$.

Proof.

The way the layered uniform hypergraph of $\mathcal{H}$ is generated justifies the result.

Proposition 5.

Let $\mathcal{H} = (V, E)$ be a hypergraph of range $k_{\max}$, and let $\bar{\mathcal{H}}$ be the layered uniform hypergraph of $\mathcal{H}$.

Then:

Vertices of $V$ that are e-adjacent in a hyperedge of $\mathcal{H}$ are e-adjacent, together with some of the added vertices $y_1, \ldots, y_{k_{\max}-1}$, in $\bar{\mathcal{H}}$.

Reciprocally, if vertices are e-adjacent in $\bar{\mathcal{H}}$, the ones that are not in $\{y_1, \ldots, y_{k_{\max}-1}\}$ are e-adjacent in $\mathcal{H}$.

As a consequence, $\bar{\mathcal{H}}$ captures the e-adjacency of $\mathcal{H}$.
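The overall effect of the uniformisation on the hyperedges can be sketched as follows; this is a simplification that only tracks the padding with the special vertices (written `'y1'`, `'y2'`, … below, with the indexing of the inflation steps as an assumption) and omits the weights and dilatation coefficients:

```python
def uniformise(hyperedges):
    """Sketch: pad each hyperedge with special vertices until every
    hyperedge reaches the maximum cardinality k_max. A hyperedge of
    cardinality c is padded with y_c, ..., y_{k_max - 1}: one special
    vertex per inflation step it traverses."""
    k_max = max(len(e) for e in hyperedges)
    special = [f"y{i}" for i in range(1, k_max)]
    return [set(e) | set(special[len(e) - 1:k_max - 1]) for e in hyperedges]

E = [{1, 2}, {1, 3, 4}, {2, 3, 4, 5}]
U = uniformise(E)
# Every hyperedge of U now has cardinality k_max = 4.
assert all(len(e) == 4 for e in U)
```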

3.4.2 Polynomial homogeneisation process

In the polynomial homogeneisation process, we build a new family of homogeneous polynomials of increasing degrees iteratively from the family of homogeneous polynomials attached to the layers, by following the subsequent steps, which respect the phases of construction in Figure 2. Each of these steps can be linked to the steps of the hypergraph uniformisation process.

Figure 2 summarises the different phases of the construction: the uniformization process decomposes the hypergraph into layers of uniform hypergraphs; a family of tensors and then a family of homogeneous polynomials are attached to these layers; while $k < k_{\max}$, the homogeneous polynomial of the aggregated layers of level 1 to $k$ is built iteratively; when $k = k_{\max}$, the global homogeneous polynomial of the hypergraph is obtained, and from it the e-adjacency tensor of the hypergraph.

Figure 2: Different phases of the construction of the e-adjacency tensor

Initialisation

Each polynomial, attached to the corresponding layer $k$-uniform hypergraph, is multiplied by a coefficient $c_k$ equal to the dilatation coefficient of the hypergraph uniformisation process. The resulting coefficiented polynomial represents the reduced homogeneous polynomial attached to the weighted hypergraph of the layer.

We initialise the iteration with the coefficiented polynomial of the lowest layer, and generate pairwise distinct variables $y_1, \ldots, y_{k_{\max}-1}$ that are also distinct from the $z_i$, $1 \leqslant i \leqslant n$.

Iterative steps

At each step, we sum the current polynomial with the next layer's coefficiented polynomial so as to obtain a homogeneous polynomial. To help the understanding, we describe the first step, then generalise to any step.

Case $k = 1$: a homogeneisation of the sum of the coefficiented polynomials of the two lowest layers is needed, as this sum is in general not homogeneous.

To achieve the homogeneisation, a new variable $y_1$ is introduced: each degree 1 term is multiplied by $y_1$ to reach degree 2.

By continuous prolongation at $y_1 = 0$, the resulting polynomial is defined for all values of $y_1$.

In this step, the degree 1 coefficiented polynomial attached to $\mathcal{H}_1$ is transformed into a degree 2 homogeneous polynomial: it corresponds to the homogeneous polynomial of the weighted $y_1$-vertex-augmented 1-uniform hypergraph.