 # Note on "The Complexity of Counting Surjective Homomorphisms and Compactions"

Focke, Goldberg, and Živný (arXiv 2017) prove a complexity dichotomy for the problem of counting surjective homomorphisms from a large input graph G without loops to a fixed graph H that may have loops. In this note, we give a short proof of a weaker result: Namely, we only prove the #P-hardness of the more general problem in which G may have loops. Our proof is an application of a powerful framework of Lovász (2012), and it is analogous to proofs of Curticapean, Dell, and Marx (STOC 2017) who studied the "dual" problem in which the pattern graph G is small and the host graph H is the input. Independently, Chen (arXiv 2017) used Lovász's framework to prove a complexity dichotomy for counting surjective homomorphisms to fixed finite structures.


## 1 Preliminaries

Let $\mathcal{G}$ be the set of all unlabeled, finite graphs that may have loops and multiple edges. Let $G,H\in\mathcal{G}$. We denote the vertex set of $H$ with $V(H)$, the set of its loops with $L(H)$, the set of its non-loop edges with $E(H)$, and the set of all edges with $E(H)\cup L(H)$. Let $\mathrm{Hom}(G,H)$ be the number of homomorphisms from $G$ to $H$. Let $\mathrm{Aut}(H)$ be the number of automorphisms of $H$. Let $\mathrm{VertSurj}(G,H)$ be the number of vertex-surjective homomorphisms from $G$ to $H$ (note that this is different from the notion in [2, 5], where surjectivity has to hold also for edges). Let $\mathrm{Comp}(G,H)$ be the number of “compactions” from $G$ to $H$, that is, the number of homomorphisms that are surjective on the vertices and non-loop edges of $H$. For a set $S\subseteq V(H)$, we denote the subgraph of $H$ induced by the vertices of $S$ with $H[S]$. Let $\mathrm{IndSub}(F,H)$ be the number of induced subgraphs of $H$ that are isomorphic to $F$. Let $\dot{\mathrm{Sub}}(F,H)$ be the number of subgraphs $F'$ isomorphic to $F$ that are obtained from $H$ by deleting vertices or non-loop edges; that is, we have $V(F')\subseteq V(H)$ and $E(F')\subseteq E(H)$, while $L(F')=L(H[V(F')])$ holds.
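These counting functions are easy to check by brute force on small graphs. The following is a minimal sketch with a hypothetical encoding of my own (not from the note): a graph is a pair `(n, edges)` with vertices `0..n-1`, a non-loop edge stored as `frozenset({u, v})` and a loop at `v` as `frozenset({v})`.

```python
from itertools import product

# Hypothetical encoding (mine, not the note's): a graph is (n, edges)
# with vertices 0..n-1; a non-loop edge {u,v} is frozenset({u, v})
# and a loop at v is frozenset({v}).

def maps(G, H):
    """Yield the homomorphisms from G to H as tuples h with h[v] = image of v."""
    (nG, eG), (nH, eH) = G, H
    for h in product(range(nH), repeat=nG):
        # every edge of G must map onto an edge or loop of H
        if all(frozenset(h[x] for x in e) in eH for e in eG):
            yield h

def hom(G, H):
    return sum(1 for _ in maps(G, H))

def vertsurj(G, H):
    return sum(1 for h in maps(G, H) if set(h) == set(range(H[0])))

def comp(G, H):
    """Homomorphisms surjective on the vertices and non-loop edges of H."""
    nonloops = {e for e in H[1] if len(e) == 2}
    count = 0
    for h in maps(G, H):
        images = {frozenset(h[x] for x in e) for e in G[1]}
        if set(h) == set(range(H[0])) and nonloops <= images:
            count += 1
    return count

P3 = (3, {frozenset({0, 1}), frozenset({1, 2})})  # path on 3 vertices
K2 = (2, {frozenset({0, 1})})                     # a single edge
print(hom(P3, K2), vertsurj(P3, K2), comp(P3, K2))  # -> 2 2 2
```

Note that a non-loop edge of $G$ may legitimately map onto a loop of $H$, which the `frozenset` image handles automatically.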

Analogous to the setup in [2, Section 3], we view these counting functions as infinite matrices indexed by graphs $F,H\in\mathcal{G}$, which are ordered by their total size $|V(H)|+|E(H)|+|L(H)|$. Then $\mathrm{VertSurj}$ and $\mathrm{Comp}$ are lower triangular matrices with diagonal entries $\mathrm{Aut}(H)\ge 1$, and $\mathrm{IndSub}$ and $\dot{\mathrm{Sub}}$ are upper triangular matrices with $1$s on their diagonals. In particular, these matrices are invertible.

## 2 Previous results

A graph is called reflexive if all loops are present, and a graph is called irreflexive if it has no loops. Let $\mathcal{A}$ be the family of all graphs that are disjoint unions of irreflexive bicliques and reflexive cliques.

###### Theorem 1 (Dyer & Greenhill).

If $H\in\mathcal{A}$, then $\mathrm{Hom}(\cdot,H)$ can be computed in polynomial time. Otherwise the problem is #P-hard, even when the input graphs are restricted to be irreflexive.

Let $\mathcal{C}$ be the family of all graphs that are disjoint unions of irreflexive stars and reflexive cliques of size at most two.

###### Theorem 2 (Focke, Goldberg, and Živný).

If $H\in\mathcal{A}$, then $\mathrm{VertSurj}(\cdot,H)$ can be computed in polynomial time. Otherwise the problem is #P-hard, even when the input graphs are restricted to be irreflexive.

###### Theorem 3 (Focke, Goldberg, and Živný).

If $H\in\mathcal{C}$, then $\mathrm{Comp}(\cdot,H)$ can be computed in polynomial time. Otherwise the problem is #P-hard, even when the input graphs are restricted to be irreflexive.

## 3 Proof of weaker versions of Theorems 2 and 3

In this section, we establish the algorithms of Theorems 2 and 3, and prove weaker versions of the hardness claims by a reduction from Theorem 1; that is, our reduction produces input graphs that may have loops. Every homomorphism from $G$ to $H$ is vertex-surjective onto the subgraph of $H$ induced by its image, so the following identities hold:

$$\mathrm{Hom}(G,H)=\sum_{S\subseteq V(H)}\mathrm{VertSurj}(G,H[S]),\qquad(1)$$
$$\mathrm{VertSurj}(G,H)=\sum_{S\subseteq V(H)}(-1)^{|V(H)\setminus S|}\cdot\mathrm{Hom}(G,H[S]).\qquad(2)$$
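Identity (2) can be checked numerically on small instances. The sketch below uses an encoding of my own (loops as singleton frozensets, not from the note) and compares a direct count of vertex-surjective homomorphisms against the inclusion–exclusion sum over induced subgraphs:

```python
from itertools import product, combinations

def hom(G, H):
    (nG, eG), (nH, eH) = G, H
    return sum(1 for h in product(range(nH), repeat=nG)
               if all(frozenset(h[x] for x in e) in eH for e in eG))

def vertsurj(G, H):
    (nG, eG), (nH, eH) = G, H
    return sum(1 for h in product(range(nH), repeat=nG)
               if all(frozenset(h[x] for x in e) in eH for e in eG)
               and set(h) == set(range(nH)))

def induced(H, S):
    """H[S], relabeled to vertices 0..|S|-1."""
    nH, eH = H
    idx = {v: i for i, v in enumerate(sorted(S))}
    return (len(S), {frozenset(idx[x] for x in e)
                     for e in eH if all(x in idx for x in e)})

def vertsurj_via_inclusion_exclusion(G, H):
    # right-hand side of identity (2)
    nH = H[0]
    total = 0
    for k in range(nH + 1):
        for S in combinations(range(nH), k):
            total += (-1) ** (nH - k) * hom(G, induced(H, S))
    return total

# G = 4-cycle, H = reflexive triangle (all loops present)
G = (4, {frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3}), frozenset({0, 3})})
H = (3, {frozenset({i}) for i in range(3)}
      | {frozenset({0, 1}), frozenset({0, 2}), frozenset({1, 2})})
assert vertsurj(G, H) == vertsurj_via_inclusion_exclusion(G, H)
```

Since $H$ is a reflexive complete graph, every map is a homomorphism, so the vertex-surjective ones are exactly the surjections from four vertices onto three, of which there are 36.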

The second equation is the inversion of the first one, and one way to obtain it is by an application of the principle of inclusion and exclusion. (Another way is to observe that (1) is equivalent to the matrix identity $\mathrm{Hom}=\mathrm{VertSurj}\cdot\mathrm{IndSub}$, analogously to how this was done in [2, Section 3]; inverting $\mathrm{IndSub}$ yields the matrix identity $\mathrm{VertSurj}=\mathrm{Hom}\cdot\mathrm{IndSub}^{-1}$, which is equivalent to (2).) For compactions, we get similar identities from the fact that every homomorphism from $G$ to $H$ is a compaction onto a subgraph of $H$ obtained by deleting vertices and non-loop edges. We obtain $\mathrm{Hom}=\mathrm{Comp}\cdot\dot{\mathrm{Sub}}$, and we expand this equation and its inversion for convenience. For all $G,H\in\mathcal{G}$, we have:

$$\mathrm{Hom}(G,H)=\sum_{F\in\mathcal{G}}\mathrm{Comp}(G,F)\cdot\dot{\mathrm{Sub}}(F,H),\qquad(3)$$
$$\mathrm{Comp}(G,H)=\sum_{F\in\mathcal{G}}\mathrm{Hom}(G,F)\cdot\dot{\mathrm{Sub}}^{-1}(F,H).\qquad(4)$$

Note that the sum in (3) is indeed finite since $\dot{\mathrm{Sub}}(F,H)\neq 0$ holds only for finitely many graphs $F$, namely certain subgraphs of $H$. Since $\dot{\mathrm{Sub}}$ is an infinite upper triangular matrix with $1$s on its diagonal, it has an inverse matrix $\dot{\mathrm{Sub}}^{-1}$, which is also upper triangular with $1$s on its diagonal, and so $\dot{\mathrm{Sub}}^{-1}(F,H)\neq 0$ holds only if $F$ is at most as large as $H$, so the sum in (4) is also finite.
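The decomposition behind (3) — every homomorphism into $H$ is a compaction onto exactly one subgraph obtained by deleting vertices and non-loop edges — can also be checked numerically. The sketch below (my own encoding and helper names, with loops as singleton frozensets) sums, over all concrete such subgraphs of $H$, the number of compactions of $G$ onto each, which is an unrolled form of (3):

```python
from itertools import product, combinations, chain

def powerset(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, k) for k in range(len(xs) + 1))

def hom(G, H):
    (nG, eG), (nH, eH) = G, H
    return sum(1 for h in product(range(nH), repeat=nG)
               if all(frozenset(h[x] for x in e) in eH for e in eG))

def comp_onto(G, H, S, E):
    """Count homomorphisms from G that are compactions onto the subgraph
    of H with vertex set S and non-loop edge set E; loops of H inside S
    are kept, matching the subgraph notion of Sub-dot."""
    (nG, eG), (nH, eH) = G, H
    S, E = set(S), set(E)
    loops = {e for e in eH if len(e) == 1 and set(e) <= S}
    eF = E | loops
    count = 0
    for h in product(range(nH), repeat=nG):
        images = [frozenset(h[x] for x in e) for e in eG]
        if all(i in eF for i in images) and set(h) == S and E <= set(images):
            count += 1
    return count

def hom_via_compactions(G, H):
    # unrolled right-hand side of (3): sum over concrete subgraphs of H
    nH, eH = H
    nonloops = {e for e in eH if len(e) == 2}
    total = 0
    for S in powerset(range(nH)):
        inside = {e for e in nonloops if e <= set(S)}
        for E in powerset(inside):
            total += comp_onto(G, H, S, E)
    return total

P3 = (3, {frozenset({0, 1}), frozenset({1, 2})})
# triangle with one loop at vertex 0
H = (3, {frozenset({0}), frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})
assert hom_via_compactions(P3, H) == hom(P3, H)
```

Each homomorphism is counted exactly once, for the pair $(S,E)$ given by its image vertices and image non-loop edges.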

### 3.1 Algorithms

The algorithms for Theorems 2 and 3 immediately follow from (2) and (4), since we want to compute the left sides of the equations and can, respectively, compute the right sides in polynomial time using Theorem 1. For the case of $\mathcal{A}$, note that deleting any vertices of $H\in\mathcal{A}$ again yields a graph in $\mathcal{A}$. For the case of $\mathcal{C}$, note that deleting any vertices and non-loop edges of $H\in\mathcal{C}$ yields a graph in $\mathcal{A}$, and that $\mathcal{C}\subseteq\mathcal{A}$ holds. (Indeed, $\mathcal{C}$ is the unique maximal subset of $\mathcal{A}$ that is closed under taking subgraphs in the sense of $\dot{\mathrm{Sub}}$.)

### 3.2 Hardness

We use two ingredients. The first is the following fact about the disjoint union $G\cup F$ of two graphs [5, (5.28)]:

$$\mathrm{Hom}(G\cup F,H)=\mathrm{Hom}(G,H)\cdot\mathrm{Hom}(F,H).\qquad(5)$$
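Multiplicativity over disjoint unions is straightforward to verify by brute force; a small sketch (my own encoding, loops as singleton frozensets):

```python
from itertools import product

def hom(G, H):
    (nG, eG), (nH, eH) = G, H
    return sum(1 for h in product(range(nH), repeat=nG)
               if all(frozenset(h[x] for x in e) in eH for e in eG))

def disjoint_union(G, F):
    # relabel F's vertices to live after G's
    (nG, eG), (nF, eF) = G, F
    return (nG + nF, eG | {frozenset(x + nG for x in e) for e in eF})

P3 = (3, {frozenset({0, 1}), frozenset({1, 2})})
C5 = (5, {frozenset({i, (i + 1) % 5}) for i in range(5)})
K3 = (3, {frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2})})

lhs = hom(disjoint_union(P3, C5), K3)
rhs = hom(P3, K3) * hom(C5, K3)
assert lhs == rhs  # 12 * 30 = 360
```

The identity holds because a homomorphism from a disjoint union is exactly a pair of independent homomorphisms from the two parts.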

The second is the following lemma proved by Lovász.

###### Lemma (Proposition 5.43 in [5]).

Let $\mathcal{S}$ be a finite set of unlabeled graphs that is closed in the sense that, for all $F\in\mathcal{S}$, the set $\mathcal{S}$ contains all homomorphic images of $F$. Then the $\mathcal{S}\times\mathcal{S}$-matrix $M$ with $M(F,H)=\mathrm{Hom}(F,H)$ is invertible.

The following lemma is completely analogous to its dual version in [2, Lemma 3.6].

###### Lemma.

Let $\alpha:\mathcal{G}\to\mathbb{Q}$ be a function of finite support and let $f$ be the graph parameter with

$$f(G)=\sum_{H\in\mathcal{G}}\alpha(H)\cdot\mathrm{Hom}(G,H).\qquad(6)$$

When given oracle access to $f$, we can compute $\mathrm{Hom}(G,H)$ in polynomial time for all graphs $G$ and all $H$ in the support of $\alpha$.

###### Proof.

Let $\mathcal{S}_0$ be the support of $\alpha$, that is, the set of all graphs $H$ with $\alpha(H)\neq 0$. Let $\mathcal{S}$ be the set of all homomorphic images of graphs in $\mathcal{S}_0$. For each $F\in\mathcal{S}$, we have:

$$f(G\cup F)=\sum_{H\in\mathcal{G}}\alpha(H)\cdot\mathrm{Hom}(G,H)\cdot\mathrm{Hom}(F,H).\qquad(7)$$

We define a vector $b\in\mathbb{Q}^{\mathcal{S}}$ with $b(F)=f(G\cup F)$. Let $M$ be the $\mathcal{S}\times\mathcal{S}$-matrix with $M(F,H)=\mathrm{Hom}(F,H)$. By the previous lemma, this matrix is invertible. Finally, let $x\in\mathbb{Q}^{\mathcal{S}}$ be the vector with $x(H)=\alpha(H)\cdot\mathrm{Hom}(G,H)$. Then (7) can be written as the matrix-vector product $b=M\cdot x$. Thus we have $x=M^{-1}\cdot b$. The vector $b$ can be computed in polynomial time by querying the oracle for the values $f(G\cup F)$. The matrix $M^{-1}$ can be computed in constant time since it only depends on the fixed function $\alpha$. Thus we can compute the entire vector $x$. In particular, for each $H\in\mathcal{S}_0$, we can determine $\mathrm{Hom}(G,H)$ via the identity $\mathrm{Hom}(G,H)=x(H)/\alpha(H)$ since $\alpha(H)\neq 0$.
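On a tiny instance, this proof can be carried out concretely. In the sketch below (my own encoding and a hand-picked $\alpha$, purely for illustration), the support is $\{K_2\}$ together with the single looped vertex $L_1$, so the homomorphic-image closure is $\mathcal{S}=\{K_2,L_1\}$, and we recover the homomorphism counts from oracle values $f(G\cup F)$ by solving the $2\times 2$ system $b=M\cdot x$:

```python
from fractions import Fraction
from itertools import product

def hom(G, H):
    (nG, eG), (nH, eH) = G, H
    return sum(1 for h in product(range(nH), repeat=nG)
               if all(frozenset(h[x] for x in e) in eH for e in eG))

def disjoint_union(G, F):
    (nG, eG), (nF, eF) = G, F
    return (nG + nF, eG | {frozenset(x + nG for x in e) for e in eF})

K2 = (2, {frozenset({0, 1})})   # one edge
L1 = (1, {frozenset({0})})      # one looped vertex, a homomorphic image of K2
S = [K2, L1]                    # closed under homomorphic images
alpha = [1, -1]                 # hand-picked coefficients on S (illustrative)

def f(G):
    # the oracle: f(G) = sum_H alpha(H) * Hom(G, H)
    return sum(a * hom(G, H) for a, H in zip(alpha, S))

def recover_hom(G):
    # b(F) = f(G u F) equals sum_H Hom(F, H) * [alpha(H) * Hom(G, H)] = (M x)(F)
    b = [Fraction(f(disjoint_union(G, F))) for F in S]
    M = [[Fraction(hom(F, H)) for H in S] for F in S]
    # solve the 2x2 system M x = b by Cramer's rule
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    x0 = (b[0] * M[1][1] - M[0][1] * b[1]) / det
    x1 = (M[0][0] * b[1] - M[1][0] * b[0]) / det
    return [x0 / alpha[0], x1 / alpha[1]]   # [Hom(G, K2), Hom(G, L1)]

P3 = (3, {frozenset({0, 1}), frozenset({1, 2})})
assert recover_hom(P3) == [hom(P3, K2), hom(P3, L1)]  # == [2, 1]
```

Here $M$ is invertible exactly as the previous lemma guarantees ($\mathrm{Hom}(L_1,K_2)=0$, so $\det M=2$), and the recovered values agree with direct counting.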

Applying this lemma to $\mathrm{VertSurj}(\cdot,H)$ and $\mathrm{Comp}(\cdot,H)$ yields the hardness parts of Theorems 2 and 3. The reason is that both functions $f$ can be written as a linear combination of homomorphism counts via (2) and (4), and in both cases a counting function that is hard by Theorem 1 appears in the support of $\alpha$.

For Theorem 2, note that $\alpha$ satisfies $\alpha(H)=1$ since $S=V(H)$ is the only term in (2) where $H[S]$ is isomorphic to $H$. Thus if $H\notin\mathcal{A}$, then $\mathrm{Hom}(\cdot,H)$ is #P-hard by Theorem 1, and so $\mathrm{VertSurj}(\cdot,H)$ is #P-hard by Lemma 3.2.

For Theorem 3, recall that $\mathcal{C}\subseteq\mathcal{A}$ holds. Thus if $H\notin\mathcal{A}$, then $\mathrm{Hom}(\cdot,H)$ is #P-hard by Theorem 1 and it reduces to $\mathrm{Comp}(\cdot,H)$ by Lemma 3.2, so the latter is hard as well. Now suppose $H\in\mathcal{A}\setminus\mathcal{C}$. Then there is a non-loop edge $e\in E(H)$ such that $H-e\notin\mathcal{A}$. Clearly $\mathrm{Hom}(\cdot,H-e)$ is #P-hard by Theorem 1. It remains to show that $\dot{\mathrm{Sub}}^{-1}(H-e,H)\neq 0$, so that Lemma 3.2 reduces $\mathrm{Hom}(\cdot,H-e)$ to $\mathrm{Comp}(\cdot,H)$. Using the fact that $\dot{\mathrm{Sub}}$ and $\dot{\mathrm{Sub}}^{-1}$ are upper triangular matrices and that $\dot{\mathrm{Sub}}\cdot\dot{\mathrm{Sub}}^{-1}$ is the infinite identity matrix, whose entry at $(H-e,H)$ is $0$, we have:

$$0=(\dot{\mathrm{Sub}}\cdot\dot{\mathrm{Sub}}^{-1})(H-e,H)=\underbrace{\dot{\mathrm{Sub}}(H-e,H-e)}_{=1}\cdot\dot{\mathrm{Sub}}^{-1}(H-e,H)+\dot{\mathrm{Sub}}(H-e,H)\cdot\underbrace{\dot{\mathrm{Sub}}^{-1}(H,H)}_{=1}.$$

This implies $\dot{\mathrm{Sub}}^{-1}(H-e,H)=-\dot{\mathrm{Sub}}(H-e,H)\neq 0$, as required.

#### Acknowledgments.

I thank Jacob Focke, Leslie Ann Goldberg, and Standa Živný for comments on an earlier version of this note, and for subsequent discussions at the Dagstuhl Seminar 17341 on “Computational Counting” in August 2017. I thank Radu Curticapean and Marc Roth for many discussions and comments.