# Algorithms for weighted independent transversals and strong colouring

An independent transversal (IT) in a graph with a given vertex partition is an independent set consisting of one vertex in each partition class. Several sufficient conditions are known for the existence of an IT in a given graph with a given vertex partition, which have been used over the years to solve many combinatorial problems. Some of these IT existence theorems have algorithmic proofs, but there remains a gap between the best bounds given by nonconstructive results, and those obtainable by efficient algorithms. Recently, Graf and Haxell (2018) described a new (deterministic) algorithm that asymptotically closes this gap, but there are limitations on its applicability. In this paper we develop a randomized version of this algorithm that is much more widely applicable, and demonstrate its use by giving efficient algorithms for two problems concerning the strong chromatic number of graphs.


## 1 Introduction

Let G be a graph with a partition V = {V_1, …, V_m} of its vertex set. The elements of V are non-empty subsets of V(G), which we refer to as blocks. We say that an independent set T of G is an independent transversal (IT) of G with respect to V if |T ∩ V_i| = 1 for all i; likewise, we say that T is a partial independent transversal (PIT) of G with respect to V if |T ∩ V_i| ≤ 1 for all i.
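These definitions translate directly into code. The following minimal sketch (the graph representation and function names are ours, purely for illustration) checks whether a vertex set is a PIT or an IT with respect to a given partition:

```python
# Minimal sketch: check the IT/PIT definitions for a partitioned graph.
# A graph is given by its edge set (2-element frozensets); a partition is
# a list of disjoint vertex blocks. All names here are illustrative.

def is_independent(vertices, edges):
    """True if no edge has both endpoints in `vertices`."""
    vs = set(vertices)
    return not any(e <= vs for e in edges)

def is_pit(vertices, blocks, edges):
    """Partial independent transversal: independent, at most one vertex per block."""
    return (is_independent(vertices, edges)
            and all(len(set(vertices) & set(B)) <= 1 for B in blocks))

def is_it(vertices, blocks, edges):
    """Independent transversal: independent, exactly one vertex per block."""
    return (is_independent(vertices, edges)
            and all(len(set(vertices) & set(B)) == 1 for B in blocks))
```

For example, with blocks {1, 2} and {3, 4} and a single edge between 1 and 3, the set {1, 4} is an IT while {1, 3} is not.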

Many combinatorial problems can be formulated in terms of the existence of an IT in a graph with respect to a given vertex partition (see e.g. [26]). Various known results give sufficient conditions for the existence of an IT (e.g. [2, 11, 22, 23, 5, 9]). Furthermore, some of these results have algorithmic versions, mostly relying on algorithmic versions of the Lovász Local Lemma (LLL), that can efficiently return an IT given a graph and vertex partition (see [10, 7, 18, 20, 13, 19]). However, the IT existence theorems (in particular [22, 23]) that give the best possible bounds do not give efficient algorithms for finding an IT.

Recently, a new algorithm, called FindITorBD [14], has been shown to find (for a large class of graphs) either an IT or a set of blocks that has a small dominating set in G. The algorithm uses ideas from the original (non-algorithmic) proof of the corresponding existence result due to Haxell [22, 23], together with modifications of several key notions (including “lazy updates”) introduced by Annamalai in [6, 7]. The bounds given by FindITorBD are asymptotically best possible, but it is applicable only in certain restricted circumstances. The aim of this paper is to obtain a new randomized algorithm that is more widely applicable and removes some of the technical limitations.

To describe FindITorBD and its features and limitations, we require a few definitions. A graph G is r-claw-free with respect to vertex partition V if no vertex of G has r independent neighbours in r distinct blocks. We say that a subset D ⊆ V(G) dominates a vertex set S in G if every v ∈ S has a neighbour in D. (This is often referred to as strong domination or total domination, but since it is the only notion of domination that we will refer to, we will use the simpler term.) For a subset S of the vertex partition V, we write V_S = ⋃_{U ∈ S} U. For S ⊆ V(G), we let G[S] denote the induced subgraph on S.
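The domination notion is equally simple to state operationally. A small sketch (edges represented as 2-element frozensets; names are ours, purely for illustration):

```python
# Minimal sketch of (strong) domination: D dominates S when every vertex
# of S has a neighbour in D. Edges are 2-element frozensets.

def dominates(D, S, edges):
    D = set(D)
    return all(any(frozenset({v, u}) in edges for u in D) for v in S)
```

Note that a vertex of S that also lies in D still needs a neighbour in D, matching the “strong/total” convention above.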

A constellation for B₀ ⊆ V is a vertex set K ⊆ V(G) such that the components of G[K] are stars with at least two vertices, each with a centre and a nonempty set of leaves distinct from its centre, with the additional property that the set of all leaves of K forms a PIT of G with respect to V of size |B₀| − 1. We denote the set of leaves of K by Leaf(K).

The following result was shown for FindITorBD in [14].

###### Theorem 1 ([14]).

The algorithm FindITorBD takes as input parameters ε > 0 and r, and a graph G with vertex partition V such that G is r-claw-free with respect to V, and finds either:

1. an IT in G, or

2. a non-empty set B ⊆ V and a vertex set D ⊆ V_B such that D dominates V_B in G and |D| ≤ (2+ε)(|B|−1). Moreover D contains a set K which is a constellation of some B₀ ⊇ B, and |D ∖ K| ≤ ε(|B|−1).

For fixed values of ε and r, the runtime is polynomial in the size of G.

Thus FindITorBD either finds an IT in G, or finds a set of blocks B and a relatively small set of vertices D such that D dominates V_B and the induced graph G[D] contains a nice structure.

The maximum degree Δ(G) of a graph G is the maximum cardinality of the set of neighbours of a vertex v over all vertices v ∈ V(G). Note that any graph with maximum degree Δ is r-claw-free with respect to any partition for r = Δ + 1. It is easy to show that for graphs G with vertex partition V such that |U| ≥ 2Δ + 1 for all U ∈ V, there cannot exist a set of blocks B such that V_B is dominated by a set of size less than (2 + 1/Δ)|B|. This leads to the following result.

###### Corollary 2 ([14]).

Let Δ ≥ 1 be given. Then for ε = 1/Δ and r = Δ + 1, the algorithm FindITorBD takes as input any graph G with maximum degree at most Δ and any vertex partition V such that |U| ≥ 2Δ + 1 for all U ∈ V and returns, in time polynomial in the size of G, an IT in G.

From a combinatorial point of view, this is within one of being best possible, since [32] showed that blocks of size 2Δ − 1 are not sufficient to guarantee the existence of an IT, while [22, 23] showed that blocks of size 2Δ are always sufficient. The IT existence theorems of [22, 23] have been used to solve many combinatorial problems. Thus Theorem 1 and Corollary 2 offer the possibility of obtaining new algorithmic proofs for these results. Some example applications were outlined in [14], with further details and more applications described in [15].

From an algorithmic point of view, however, the effectiveness of the algorithm is limited by its dependence on ε and r (or on Δ in the case of Corollary 2), making it efficient only when these parameters are constant. Many of the LLL-based algorithmic constructions give an IT under the condition that the block size b satisfies b ≥ cΔ, where c is a constant strictly larger than 2. The construction of [18] has the condition b ≥ 4Δ, which is the strongest known criterion of this form. These algorithms are polynomial time even for unbounded Δ.

The aim of this paper is to give a new (randomized) algorithm that overcomes this limitation of FindITorBD in the case of Corollary 2. In addition, we will strengthen it combinatorially by showing results for weighted ITs in the setting of vertex-weighted graphs. A number of combinatorial constructions depend on such weighted ITs, and Corollary 2 (which merely shows the existence of an IT without regard to weight) is not sufficient for these applications.

For a weight function w : V(G) → [0, ∞) and a subset S ⊆ V(G), we write w(S) = ∑_{v∈S} w(v). We also write w(G) = w(V(G)). Our main theorem is as follows.

###### Theorem 3.

There is a randomized algorithm which takes as inputs a parameter ε > 0, a graph G partitioned into blocks of size exactly b for some b ≥ (2+ε)Δ(G), and a weight function w : V(G) → [0, ∞), and finds an IT in G with weight at least w(G)/b. For fixed ε, the expected runtime is polynomial in the size of G.

If all vertices have weight 1, this gives the immediate corollary:

###### Corollary 4.

There is a randomized algorithm which takes as inputs a parameter ε > 0 and a graph G partitioned into blocks of size at least (2+ε)Δ(G), and finds an IT of G. For fixed ε, the expected runtime is polynomial in the size of G.

In particular, Corollary 4 has Corollary 2 as a special case (for constant Δ, we can take ε = 1/Δ), and is also stronger than all the previous LLL-based constructions (since we can also take ε to be an arbitrary fixed constant and allow Δ to vary freely).

The principal ingredients in the proof of Theorem 3 are the algorithm FindITorBD, a theorem of Aharoni, Berger, and Ziv [1] on PITs in weighted graphs, and an algorithmic version of the LLL from [30]. The overall proof of Theorem 3 has three phases, carried out in Sections 2–4. In the first phase, in Section 2, we develop an algorithm FindWeightPIT which gives an algorithmic version of the Aharoni–Berger–Ziv result for weighted PITs. This algorithm has two severe limitations: it only works for fixed block size b, and it is only pseudo-polynomial-time, i.e. its runtime is polynomial in the magnitude of the vertex weights but exponential in the number of bits of precision used to specify w.

In the second phase, discussed in Section 3, we use the LLL to sparsify the graph. Given G partitioned into blocks of size b ≥ (2+ε)Δ, where ε is some fixed constant, this effectively reduces the block size and the degree to constant values (depending on ε), at which point we can use FindWeightPIT. Unfortunately, a number of error terms accumulate in this process, including concentration losses as well as quantization errors needed to handle the pseudo-polynomial algorithm. As a consequence, this process only gives an IT of weight (1 − λ)w(G)/b, where λ is some arbitrarily small constant.

In the third phase, carried out in Section 4, we overcome this limitation by “oversampling” the high-weight vertices, giving the final result of Theorem 3. For maximum generality, we analyse this in terms of a linear programming (LP) formulation of weighted PITs considered in [1]. This also gives an efficient version of an LP rounding result of [1].

In Section 5, we demonstrate the use of Theorem 3 by providing algorithms for finding ITs which avoid a given set of vertices, as long as that set is suitably small. Such additional restrictions are required in a number of constructions [31, 27]. Our algorithms for weighted ITs lend themselves almost immediately to efficient algorithms for such constructions. Notably, these constructions do not themselves overtly involve the use of weighted ITs.

In Section 6, we provide algorithms for strong colourings of graphs. For a positive integer b, we say that G is strongly b-colourable with respect to a vertex partition V if there is a proper vertex colouring of G with b colours so that no two vertices in the same block receive the same colour. If G is strongly b-colourable with respect to every vertex partition of V(G) into blocks of size at most b, we say that G is strongly b-colourable. The strong chromatic number of a graph G, denoted sχ(G), is the minimum b such that G is strongly b-colourable. This notion was introduced independently by Alon [2, 4] and Fellows [11] and has been widely studied [12, 29, 24, 8, 1, 28, 25, 27].
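As a concrete check of the definition, the following sketch (our own helper, not from the paper) verifies that a colouring is proper, uses at most b colours, and gives every block all-distinct colours:

```python
# Minimal sketch: verify a strong b-colouring with respect to a partition.
# `colour` maps vertices to colours; edges are 2-element frozensets.

def is_strong_colouring(colour, blocks, edges, b):
    proper = all(len({colour[v] for v in e}) == 2 for e in edges)       # ends differ
    rainbow = all(len({colour[v] for v in B}) == len(B) for B in blocks)  # blocks rainbow
    return proper and rainbow and len(set(colour.values())) <= b
```

For instance, with edges 1-2 and 3-4 and blocks {1, 3} and {2, 4}, the colouring 1↦0, 2↦1, 3↦1, 4↦0 is a strong 2-colouring.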

The best general bound for the strong chromatic number in terms of the maximum degree Δ that is currently known is sχ(G) ≤ 3Δ − 1, proved in [24]. (See also [25] for an asymptotically better bound.) In [1], Aharoni, Berger, and Ziv give a simpler proof showing the bound sχ(G) ≤ 3Δ. However, none of these results give an efficient algorithm for strong colouring. It is conjectured (see e.g. [1]) that the correct general bound is sχ(G) ≤ 2Δ, which would be best possible if true [32].

Using the algorithm FindITorBD, an algorithm for strong colouring with (3+ε)Δ colours was given in [14], but, as before, this algorithm is efficient only when Δ is constant. In Section 6 we use Theorem 3 to give an efficient algorithm for strong colouring with (3+ε)Δ colours for any fixed ε > 0, with no restrictions on Δ.

There is a natural notion of fractional strong chromatic number (see Section 6.1), for which the corresponding fractional version of the above conjecture was proven in [1]. Again this proof is not algorithmic. We use Theorem 3 to give an algorithmic version of this result, with asymptotically the same upper bound.

### 1.1 Notation

Given a graph G partitioned into blocks and a vertex v, we let V(v) denote the unique block with v ∈ V(v). We let N(v) denote the set of neighbours of v in G, i.e. the set of vertices u with uv ∈ E(G).

For a weight function w, we define |w| = max_{v∈V(G)} |w(v)|. For a vertex set S ⊆ V(G), we define w⟨S⟩ = max_{v∈S} w(v). With a slight abuse of notation, if we have a vertex partition V on G, then for a vertex v we write w⟨v⟩ = w⟨V(v)⟩, i.e. w⟨v⟩ is the maximum weight of any vertex in the same block as v.
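In code, this notation amounts to a few one-liners; a sketch with illustrative names:

```python
# Sketch of the notation |w|, w<U>, and w<v> (names are ours).

def w_norm(w):
    """|w|: the maximum weight of any vertex."""
    return max(w.values())

def w_block_max(w, block):
    """w<U>: the maximum weight inside the block U."""
    return max(w[v] for v in block)

def w_angle(w, blocks, v):
    """w<v> = w<V(v)>: the maximum weight in v's own block."""
    (block,) = [B for B in blocks if v in B]
    return w_block_max(w, block)
```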

## 2 FindWeightPIT

Let us consider a graph G with a vertex partition V, and an integer-valued weight function w. The main building block for our algorithms is an algorithm FindWeightPIT to find a weighted PIT. This uses the main ideas from a construction of Aharoni, Berger, and Ziv in [1], and includes FindITorBD as a subroutine. Here, b is a parameter whose role we will discuss later.

1:function FindWeightPIT()
2:     Define and
3:     Apply to the graph with vertex partition and parameters
4:     if  returns an IT  then
5:         return
6:     else returns and containing constellation .
7:         for all  do
8:         Recursively call .
9:
10:         while there is some vertex with  do
11:              Choose such a vertex arbitrarily.
12:              Update
13:         return

This leads to the following algorithmic result.

###### Theorem 5.

The algorithm FindWeightPIT takes as inputs a graph G with Δ(G) ≤ (b−1)/2, with a vertex partition V whose blocks have size at most b, and an integer-valued weight function w, and finds a PIT in G of weight at least w(G)/b. For fixed b, the runtime is polynomial in n and |w|.

In the rest of this section, we analyse FindWeightPIT to show that it satisfies Theorem 5.

### 2.1 Analysis of FindWeightPIT

When FindWeightPIT is applied to a graph G, weight function w, and blocks V, it will generate a series of recursive calls. All of these calls use the same graph G and same blocks V, but use different weight functions w′. Thus, to simplify our notation, we assume that the graph G and blocks V are fixed, along with a fixed value b such that Δ(G) ≤ (b−1)/2 and |U| ≤ b for all blocks U ∈ V.

Define ε = 1/(8b²), which is the parameter used for FindITorBD, so ε is also a fixed parameter. Note that G is indeed r-claw-free (for any partition) for parameter r = ⌊(b−1)/2⌋ + 1, since Δ(G) ≤ (b−1)/2.

Let m = |V| and n = |V(G)|. Define the index of w to be Ind(w) = ∑_{U∈V} max(0, w⟨U⟩). Note that Ind(w) is a non-negative integer of size at most m|w|.
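The index is straightforward to compute; a minimal sketch (illustrative names):

```python
# Ind(w) = sum over blocks U of max(0, w<U>), where w<U> is the maximum
# weight inside U.

def ind(w, blocks):
    return sum(max(0, max(w[v] for v in U)) for U in blocks)
```

Blocks whose maximum weight is non-positive contribute nothing, which is what makes the index a useful progress measure for the recursion below.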

###### Proposition 6.

In line 8 we have Ind(w′) < Ind(w).

###### Proof.

In order to reach line 8, FindITorBD must return a non-empty set B of blocks and a vertex set D dominating V_B. Line 2 ensures that w(v) ≥ 1 for every vertex v of the graph to which FindITorBD is applied.

As D dominates V_B, each vertex v ∈ U for U ∈ B has |N(v) ∩ D| ≥ 1. Hence w′(v) ≤ w(v) − 1 for all such v (line 7). Since this holds for every vertex of the block V(v), this implies that

 w′⟨v⟩ = max_{u∈V(v)} w′(u) ≤ max_{u∈V(v)} w(u) − 1

Thus w′⟨U⟩ < w⟨U⟩ for all U ∈ B. As w′(u) ≤ w(u) for all u, and w⟨U⟩ ≥ 1 for U ∈ B, this implies

 Ind(w′) = ∑_{U∈V} max(0, w′⟨U⟩) < ∑_{U∈V} max(0, w⟨U⟩) = Ind(w). ∎
###### Lemma 7.

FindWeightPIT terminates in time polynomial in n and |w| (for fixed b).

###### Proof.

Recall that b is a fixed constant. By Proposition 6, each call generates a recursive subproblem with Ind(w′) < Ind(w) strictly. Since Ind(w) ≤ m|w|, the total number of recursive subproblems is at most m|w|. Each subproblem is on the same graph G and blocks V and some new weight function w′.

In line 7, the entries of w are changed by an additive term of at most Δ(G) < b. There are at most m|w| subproblems, so every resulting weight function w′ has magnitude polynomial in n and |w|, and in particular all arithmetic operations on the weights involve numbers of polynomial size. Therefore, it suffices to show that the execution of FindWeightPIT, aside from the recursive call at line 8, runs in polynomial time.

Most of the lines of FindWeightPIT are clearly polynomial time, and all the arithmetic operations on w can also be easily done in polynomial time. Note that the graph passed to FindITorBD is also r-claw-free with respect to its partition, and so FindITorBD (recalling that ε and r are defined in terms of the fixed parameter b) runs in polynomial time.

The only non-trivial thing to check is that the loop at line 10 terminates after at most n iterations. Each execution of line 12 moves a vertex of Y into M, and the vertex of M that is removed (if any) is not in Y, because Y is contained in the PIT Leaf(K) (and hence each block contains at most one element of Y). Thus each iteration of the loop increases |M ∩ Y| by one, so it terminates after at most n iterations. ∎

###### Lemma 8.

The set M returned by FindWeightPIT is a PIT of G with respect to V.

###### Proof.

If FindITorBD returns an IT, then M is defined in line 5 to be the IT returned by FindITorBD. Since FindITorBD is applied to an induced subgraph of G whose blocks are subsets of the blocks of G, this set is a PIT of G with respect to V.

When FindITorBD returns a set B of blocks and a set D of vertices, FindWeightPIT is recursively applied to obtain a set M (line 8). By Proposition 6 and induction on the index, we know that M is a PIT of G with respect to V.

The set M remains a partial transversal throughout the loop at line 10 because line 12 adds a vertex v to M and removes the vertex of M in the same block as v (if it exists).

To check that M is independent, note that each time M is modified in line 12, the new vertex v has no neighbours remaining in M. Thus M is a PIT of G with respect to V. ∎

This shows that FindWeightPIT terminates quickly and returns a PIT. It remains to show that the resulting PIT has high weight. We first show a few preliminary results.

###### Proposition 9.

The value of w′(M) does not decrease during any iteration of the loop at line 10.

###### Proof.

Let be the vertex chosen in line 11, and let . By the definition of (line 9), we know that and that has exactly one neighbour in . Thus and . Since is integer-valued, it must be that and .

In line 12, we update by adding and removing the vertex set . We need to show that . Since is a PIT, we know that .

Suppose that . In this case, we immediately have .

Suppose that for . In this case, since , we also have . Since dominates , this means that has at least one neighbour in , which implies that .

Suppose finally that for . So . Since is integer-valued, this implies that , and so . ∎

###### Proposition 10.

Suppose that FindWeightPIT reaches line 6, and let B and D be the resulting sets. Then the output M satisfies

 ∑_{v∈M} |N(v) ∩ D| > (1 − 1/(16b)) (|B| − 1).
###### Proof.

Let Y be the set generated in line 9. Because of the termination condition of the loop at line 10, we know that each vertex of Y has an edge to some vertex of M. Since Y ⊆ Leaf(K), we know that Y is independent and so each vertex of Y ∖ M has a neighbour in M ∖ Y. This implies that there are at least |Y ∖ M| edges from Y ∖ M to M ∖ Y. Since Y ⊆ D, this in turn shows that ∑_{v∈M∖Y} |N(v) ∩ D| ≥ |Y ∖ M|.

Next, consider some vertex v ∈ M ∩ Y. Since D dominates V_B and v ∈ V_B, there is at least one vertex in N(v) ∩ D, and hence |N(v) ∩ D| ≥ 1. This implies that ∑_{v∈M∩Y} |N(v) ∩ D| ≥ |Y ∩ M|.

Putting these bounds together, we have

 ∑_{v∈M} |N(v)∩D| = ∑_{v∈M∖Y} |N(v)∩D| + ∑_{v∈M∩Y} |N(v)∩D| ≥ |Y∖M| + |Y∩M| = |Y|.

To complete the proof, we will show that |Y| > (1 − 1/(16b))(|B| − 1). To see this, note that

 |Y| ≥ |Leaf(K)| − |{v ∈ Leaf(K) : V(v) ∉ B or |N(v) ∩ D| ≠ 1}|.

By Theorem 1, K is a constellation for some B₀ ⊇ B and thus |Leaf(K)| = |B₀| − 1. Since Leaf(K) is a PIT, the set {v ∈ Leaf(K) : V(v) ∉ B} has size at most |B₀| − |B|.

Each vertex in Leaf(K) has exactly one neighbour in K, namely the centre of its star. Thus, if a vertex v ∈ Leaf(K) has more than one neighbour in D, then it has a neighbour in D ∖ K. (Since D dominates V_B, it must have at least one neighbour in D.) Theorem 1 ensures that |D ∖ K| ≤ ε(|B| − 1). Since Δ(G) ≤ (b−1)/2, there are fewer than (bε/2)(|B| − 1) edges from D ∖ K to Leaf(K). Thus, there are fewer than (bε/2)(|B| − 1) vertices v ∈ Leaf(K) with |N(v) ∩ D| ≠ 1.

Recall ε = 1/(8b²). Thus, we have

 |Y| > (|B₀| − 1) − (|B₀| − |B|) − (bε/2)(|B| − 1) = (1 − 1/(16b))(|B| − 1). ∎

We are now ready to prove that the PIT returned by FindWeightPIT has the desired weight.

###### Lemma 11.

The set M returned by FindWeightPIT satisfies w(M) ≥ w(G)/b.

###### Proof.

We prove the statement by induction on Ind(w).

If Ind(w) = 0, then the graph constructed in line 2 is empty and so FindWeightPIT returns M = ∅ as its PIT. Hence w(M) = 0. However, Ind(w) = 0 implies that w⟨U⟩ ≤ 0 for all U ∈ V and so w(G) ≤ 0. Thus w(M) ≥ w(G)/b.

For the inductive step, suppose Ind(w) > 0. FindWeightPIT applies FindITorBD (line 3). There are two cases.

###### Case 1.

FindITorBD returns an IT M of the subgraph.

Then we have

 w(G)/b = ∑_{v∈V(G)} w(v)/b = (1/b) ∑_{U∈V} ∑_{v∈U} w(v) ≤ (1/b) ∑_{U∈V} |U|·w⟨U⟩ ≤ ∑_{U∈V} w⟨U⟩.

M is an IT on the subgraph with the vertex partition constructed in line 2. The blocks of this partition precisely correspond to the blocks U ∈ V such that w⟨U⟩ > 0, and each vertex of this subgraph has weight equal to the maximum weight w⟨U⟩ of its block. Hence w(M) = ∑_{U∈V} max(0, w⟨U⟩) ≥ ∑_{U∈V} w⟨U⟩. Thus M, which is returned by FindWeightPIT, is a PIT of G of weight at least w(G)/b.

###### Case 2.

FindITorBD returns B and D (i.e. lines 6–12 are executed).

Then D dominates V_B and contains the vertices of some constellation K for B₀, where B₀ ⊇ B. Let w′ be the weight vector defined in line 7. By Lemma 8, the recursive call returns a PIT M at line 8. By Proposition 6, we have Ind(w′) < Ind(w). Hence, by the induction hypothesis, we have w′(M) ≥ w′(G)/b.

By Proposition 9, the value of w′(M) does not decrease during the loop at line 10. So the final output M also satisfies

 w′(M) ≥ w′(G)/b = (1/b) ∑_v w′(v) = (1/b) ∑_v w(v) − (1/b) ∑_v |N(v)∩D| = (1/b)(w(G) − ∑_v |N(v)∩D|).

As Δ(G) ≤ (b−1)/2 and |D| ≤ (2+ε)(|B|−1), we have ∑_v |N(v)∩D| ≤ Δ(G)·|D| ≤ (2+ε)(b−1)(|B|−1)/2, and so

 w′(M) > w(G)/b − (2+ε)(b−1)(|B|−1)/(2b) = w(G)/b − (1 + 1/(16b²))(1 − 1/b)(|B| − 1).

Also, we have w(M) − w′(M) = ∑_{v∈M} |N(v)∩D|. By Proposition 10, this is greater than (1 − 1/(16b))(|B| − 1). Thus,

 w(M) = w′(M) + (w(M) − w′(M))
      > (w(G)/b − (1 + 1/(16b²))(1 − 1/b)(|B|−1)) + (1 − 1/(16b))(|B|−1)
      = w(G)/b + (|B|−1)·((1 − 1/(16b)) − (1 + 1/(16b²))(1 − 1/b)).

A simple calculation now shows that (1 − 1/(16b)) − (1 + 1/(16b²))(1 − 1/b) ≥ 0 and so w(M) ≥ w(G)/b. ∎

Theorem 5 follows from Lemmas 7, 8, and 11.

### 2.2 Finding an Independent Transversal

With a few preprocessing and quantization steps, we can use FindWeightPIT to find full ITs.

###### Proposition 12.

There is an algorithm that takes as inputs any graph G partitioned into blocks of size exactly b with Δ(G) ≤ (b−1)/2, and an integer-valued weight function w, and finds an IT in G of weight at least w(G)/b. For fixed b, the runtime is polynomial in n and |w|.

###### Proof.

Apply FindWeightPIT with parameter b to G with the weight function w′ defined by w′(v) = w(v) + m|w|. This returns a PIT T of weight at least w′(G)/b = w(G)/b + m²|w|. For fixed b the runtime is polynomial in n and |w′|; note that |w′| ≤ (m+1)|w|.

We claim that T is in fact a full transversal. If not, then

 w′(T) ≤ (m−1)·max_u w′(u) ≤ (m−1)(|w| + m|w|) = (m² − 1)|w|.

On the other hand, we have w′(T) ≥ w′(G)/b ≥ m²|w|. Thus T must be a full transversal.

Since T is a full transversal, we have w′(T) = w(T) + m²|w|. We have w′(T) ≥ w(G)/b + m²|w|, and so w(T) ≥ w(G)/b as desired. ∎

The next lemma removes the runtime dependence on |w| in Proposition 12, at the cost of weakening the conclusion slightly.

###### Lemma 13.

There is an algorithm that takes as inputs η > 0, any graph G partitioned into blocks of size exactly b, with Δ(G) ≤ (b−1)/2, and a weight function w, and finds an IT in G of weight at least (w(G)/b)(1 − η). For fixed b, the runtime is polynomial in n and 1/η.

###### Proof.

If w(G) = 0, then w(v) = 0 for all vertices v and so w is integral; in this case we can apply Proposition 12 directly. So suppose that w(G) > 0. Now define α = n/(η·w(G)). We define a new weight function w′ by w′(v) = ⌊α·w(v)⌋ for each v ∈ V(G). We apply the algorithm of Proposition 12 to G with the weight function w′. Note that |w′| ≤ α|w| ≤ n/η. So for fixed b, the runtime is polynomial in n and 1/η.

This algorithm generates an IT M of G with w′(M) ≥ w′(G)/b. Here

 w′(G) = ∑_{v∈V(G)} ⌊α·w(v)⌋ > ∑_{v∈V(G)} (α·w(v) − 1) = α·w(G) − n.

Therefore

 w(M) = ∑_{v∈M} w(v) ≥ ∑_{v∈M} w′(v)/α = w′(M)/α ≥ w′(G)/(bα) > w(G)/b − n/(bα) = (w(G)/b)(1 − η). ∎

Note that the algorithm of Lemma 13 takes as input a real-valued weight function. To be more rigorous, the weight function should be discretized to some finite precision, and the algorithm would have some dependence on the number of bits of precision used. We ignore such numerical issues here and in the remainder of the paper for simplicity of exposition.
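The quantization step of Lemma 13 is easy to sketch. Here the choice α = n/(η·w(G)) is our reading of the scaling factor, consistent with the bounds in the proof above but an assumption rather than a quotation:

```python
import math

# Sketch of the quantization in Lemma 13 (illustrative names; the choice
# alpha = n / (eta * w(G)) is an assumption consistent with the proof).

def quantize_weights(w, eta):
    """Return integer weights w'(v) = floor(alpha * w(v)), and alpha."""
    n = len(w)
    total = sum(w.values())        # w(G)
    alpha = n / (eta * total)
    return {v: math.floor(alpha * x) for v, x in w.items()}, alpha
```

With these choices w′(G) > α·w(G) − n, so an IT of w′-weight at least w′(G)/b has w-weight at least (w(G)/b)(1 − η).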

## 3 Degree Reduction

The next step in the proof is to remove the condition that Δ is constant. Our main tool for this is the LLL, in particular the constructive LLL algorithm of Moser and Tardos [30]. The basic idea will be to use the LLL for a “degree-splitting” algorithm: we reduce the degree, the block size, and the total vertex weight of the graph by a factor of approximately half. By iterating this O(log Δ) times, we scale down the original graph to a graph with constant degree, at which point we can apply Lemma 13 to find an IT.
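The underlying random experiment of the degree-splitting step can be illustrated directly: keep a uniformly random half of each block and observe that induced degrees drop by roughly half. This toy sketch (our own names) shows only the sampling step, not the Moser–Tardos resampling that enforces the degree bound for every vertex:

```python
import random

# Toy illustration of degree splitting: keep a uniformly random half of
# each block. This is the plain random experiment, not Moser-Tardos.

def split_blocks(blocks, seed=0):
    rng = random.Random(seed)
    return [set(rng.sample(sorted(B), len(B) // 2)) for B in blocks]

def induced_max_degree(kept, edges):
    vs = set().union(*kept)
    deg = {v: 0 for v in vs}
    for e in edges:
        if e <= vs:                 # edge survives in the induced subgraph
            for v in e:
                deg[v] += 1
    return max(deg.values(), default=0)
```

For two blocks of size 4 joined completely (every vertex has degree 4), keeping half of each block leaves every surviving vertex with at most 2 surviving neighbours.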

Let us begin by reviewing the algorithm of Moser and Tardos.

###### Theorem 14 ([30]).

There is a randomized algorithm which takes as input a probability space Ω in independent variables X₁, …, X_N, along with a collection B of “bad” events in that space, wherein each bad-event B ∈ B is a Boolean function of a subset var(B) of the variables X_i.

If the algorithm terminates, it outputs a configuration X such that all bad-events are false on X.

If, for an input, there are parameters p, d such that ep(d+1) ≤ 1 and such that each bad-event has probability at most p and has at most d other bad-events B′ such that var(B) ∩ var(B′) ≠ ∅, then the algorithm terminates with probability one and the expected runtime is polynomial in N and |B|.

One additional feature of this algorithm is critical for our application to weighted ITs. The Moser–Tardos algorithm is a randomized process which terminates with probability one in some configuration X. This configuration can be regarded as a random variable, and we refer to its distribution as the MT-distribution. This was first analysed in [16], with additional bounds shown in [21]. One result of [21], which we present in a simplified form, is the following.

###### Theorem 15 ([21]).

Suppose that B, p, d satisfy the conditions of Theorem 14. Let E be an event in the probability space which is a Boolean function of a subset var(E) of the variables X_i. Suppose that there are k bad-events B ∈ B with the property that var(B) ∩ var(E) ≠ ∅. Then the probability that event E holds in the MT-distribution is at most Pr(E)·(1 + ep)^k.

Using the Moser-Tardos algorithm, we get the following degree-splitting algorithm.

###### Lemma 16.

There is a randomized algorithm that takes as input a parameter Δ̂, a graph G with blocks of size exactly b and Δ(G) ≤ Δ̂, and a weight function w. It generates a vertex set X ⊆ V(G) such that for every block U we have |X ∩ U| = b′, the induced subgraph G[X] has maximum degree at most (1+δ)Δ̂b′/b for a suitably small δ > 0, and w(X) ≥ (b′/b)(1 − 2Δ̂^{−11})·w(G). The expected runtime is polynomial in the size of G.

###### Proof.

We will use Theorem 14 for this construction. The probability space has a variable X_U for each block U; the distribution of X_U is to select uniformly at random a subset of U of size exactly b′. For each vertex v, we have a bad-event B_v that v has more than (1+δ)μ̂ neighbours in X = ⋃_U X_U, where μ̂ = Δ̂b′/b.

We need to calculate the parameters p and d. Consider some vertex v and the bad-event B_v. This bad-event is affected by the choice of X_{V(u)} for each neighbour u of v. Each choice of X_U, in turn, affects B_{v′} for any vertex v′ with a neighbour in U. In total, B_v can affect at most Δ̂·bΔ̂ other vertices. Hence the bounds on b and Δ̂ ensure that this is at most 3Δ̂³.

Next, we examine the probability of B_v. If v has neighbours u₁, …, u_k, then the degree of v in G[X] is the sum Y = I₁ + ⋯ + I_k, where I_i is the indicator that u_i ∈ X. It is not hard to see that the random variables I_i are negatively correlated, and that Y has mean k·b′/b. Thus k ≤ Δ̂ ensures that this is at most μ̂.

We will apply Chernoff’s bound (which applies to sums of negatively correlated random variables) to compute the probability that Y ≥ (1+δ)μ̂. Note that μ̂ is an upper bound on the true mean of Y, and the value (1+δ)μ̂ represents a multiplicative deviation of (1+δ) over the value μ̂. One can easily verify that δ ≤ 1 in the relevant range, so we use the cruder Chernoff bound formula:

 Pr(Y ≥ μ̂(1+δ)) ≤ e^{−μ̂δ²/3},

where μ̂ is an upper bound on E[Y]. Here, simple analysis shows that μ̂δ²/3 ≥ Δ̂/66, and so overall the bad-event B_v has probability at most p = e^{−Δ̂/66}.

Thus p = e^{−Δ̂/66} and d = 3Δ̂³, and so ep(d+1) ≤ 1 for Δ̂ larger than some constant. By Theorem 14, then, the Moser–Tardos algorithm generates, in polynomial time, a configuration X avoiding all such bad-events B_v. By construction, the graph G′ = G[X] is partitioned into blocks of size exactly b′ and has maximum degree at most (1+δ)μ̂.

It remains to analyse w(G′). By Theorem 15, for any block U and fixed set A, the probability of the event X_U = A in the MT-distribution is at most (1 + ep)^k times its probability in the original probability space, where k is the number of bad-events affecting the event X_U = A. The original sampling probability is C(b, b′)^{−1} from the uniform distribution, and we have already seen that the number of bad-events affecting X_U is at most 3Δ̂³. Thus,

 Pr(X_U = A) ≤ (1 + e·e^{−Δ̂/66})^{3Δ̂³} · C(b, b′)^{−1}.

Simple analysis shows that for Δ̂ larger than some constant, we have (1 + e·e^{−Δ̂/66})^{3Δ̂³} ≤ 1 + Δ̂^{−11}. Therefore, we have

 E[w(X_U)] = w(U) − ∑_{A⊆U, |A|=b′} Pr(X_U = A)·w(U∖A)
      ≥ w(U) − (1 + Δ̂^{−11})·C(b, b′)^{−1} ∑_{A⊆U, |A|=b′} w(U∖A)
      = w(U) − (1 + Δ̂^{−11})·C(b, b′)^{−1} ∑_{u∈U} w(u)·|{A ⊆ U : |A| = b′, u ∉ A}|
      = w(U) − (1 + Δ̂^{−11})·C(b, b′)^{−1}·w(U)·C(b−1, b′)
      = w(U)·(1 − (1 − b′/b)(1 + Δ̂^{−11})) ≥ w(U)·(b′/b − Δ̂^{−11}).

Summing over all blocks U, and noting b′/b ≥ 1/2, this gives

 E[w(G′)] ≥ w(G)·(b′/b − Δ̂^{−11}) ≥ (b′/b)·w(G)·(1 − 2Δ̂^{−11}).

We can repeat this process until we get a configuration with the desired weight w(G′). Since w(G′) is bounded and its expectation is close to the maximum possible value, a standard Markov-type argument shows that the expected number of repetitions needed until this occurs is polynomial in Δ̂. Each iteration of this procedure has expected runtime polynomial in the size of G. ∎

###### Lemma 17.

There is a randomized algorithm that takes as input parameters ε, λ > 0, a graph G partitioned into blocks of size exactly b for some b ≥ (2+ε)Δ(G), and a weight function w. It generates an IT M of G with

 w(M) ≥ (w(G)/b)(1 − λ).

For fixed ε and λ, the expected runtime is polynomial in the size of G.

###### Proof.

Let us define Δ = Δ(G). Our strategy will be to repeatedly apply Lemma 16 for