    # Applying Convex Integer Programming: Sum Multicoloring and Bounded Neighborhood Diversity

In the past 30 years, results regarding special classes of integer linear (and, more generally, convex) programs have flourished. Applications in the field of parameterized complexity were called for, and the call has been answered, demonstrating the importance of connecting the two fields. The classical result due to Lenstra states that solving Integer Linear Programming in fixed dimension is polynomial. Later, Khachiyan and Porkolab extended this result to optimizing a quasiconvex function over a convex set. While applications of the former result have been known for over 10 years, it seems the latter result has not yet been applied much in the parameterized setting. We give one such application. Specifically, we deal with the Sum Coloring problem and a generalization thereof called Sum-Total Multicoloring, which is similar to the preemptive Sum Multicoloring problem. In Sum Coloring, we are given a graph G = (V,E) and the goal is to find a proper coloring c: V → ℕ minimizing ∑_{v∈V} c(v). By formulating these problems as convex integer programming in small dimension, we show fixed-parameter tractability results for these problems when parameterized by the neighborhood diversity of G, a parameter generalizing the vertex cover number of G.



## 1 Introduction

Our focus is on modeling various problems as integer programming (IP), and then obtaining algorithms by applying known algorithms for IP. IP is the problem

 min { f(x) ∣ x ∈ S ∩ ℤⁿ }, where S ⊆ ℝⁿ is convex. (IP)

We give special attention to two restrictions of IP. First, when S is a polyhedron, we get

 min { f(x) ∣ Ax ≤ b, x ∈ ℤⁿ }, (LinIP)

where A ∈ ℤ^{m×n} and b ∈ ℤ^m; we call this problem linearly-constrained IP, or LinIP. Further restricting f to be a linear function gives Integer Linear Programming (ILP):

 min { wx ∣ Ax ≤ b, x ∈ ℤⁿ }, (ILP)

where w ∈ ℤⁿ. The function f is called the objective function, S is the feasible set (defined by constraints or various oracles), and x is a vector of (decision) variables. By ⟨·⟩ we denote the binary encoding length of numbers, vectors and matrices.
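To make the (ILP) form concrete, here is a toy brute-force sketch (entirely our own illustration; real algorithms such as Lenstra's are vastly more sophisticated) that minimizes w·x over the integer points of a box subject to Ax ≤ b:

```python
from itertools import product

def brute_force_ilp(A, b, w, box):
    """Minimize w.x subject to A x <= b over integer points of a box.

    A, b, w are plain lists; box is a list of (lo, hi) bounds per variable.
    Exponential in the dimension -- only meant to illustrate the (ILP) form.
    """
    best_x, best_val = None, None
    for x in product(*(range(lo, hi + 1) for lo, hi in box)):
        # check every linear constraint A x <= b
        if all(sum(a_i * x_i for a_i, x_i in zip(row, x)) <= b_r
               for row, b_r in zip(A, b)):
            val = sum(w_i * x_i for w_i, x_i in zip(w, x))
            if best_val is None or val < best_val:
                best_x, best_val = x, val
    return best_x, best_val

# min x1 + x2  s.t.  x1 + 2*x2 >= 4 (written as -x1 - 2*x2 <= -4), 0 <= x <= 5
x, val = brute_force_ilp([[-1, -2]], [-4], [1, 1], [(0, 5), (0, 5)])
```

Even this toy makes the roles visible: A and b define the feasible set, w the objective, and the box the variable domains.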

In 1983 Lenstra showed that ILP is polynomial in fixed dimension; including later improvements [30, 50, 60], it is solvable in time n^{O(n)}·poly(⟨A, b⟩). Two decades later this algorithm’s potential for applications in parameterized complexity was recognized, e.g. by Niedermeier:

[…] It remains to investigate further examples besides Closest String where the described ILP approach turns out to be applicable. More generally, it would be interesting to discover more connections between fixed-parameter algorithms and (integer) linear programming.

This call has been answered in the following years, for example in the context of graph algorithms [27, 28, 33, 58], scheduling [42, 49, 52, 67] or computational social choice.

In the meantime, many other powerful algorithms for IP have been devised; however, it seemed unclear exactly how these tools could be used, as Lokshtanov states in his PhD thesis, referring to algorithms for convex IP in fixed dimension:

It would be interesting to see if these even more general results can be useful for showing problems fixed parameter tractable.

Similarly, Downey and Fellows highlight the algorithm for so-called n-fold IP:

Conceivably, [Minimum Linear Arrangement] might also be approached by the recent (and deep) FPT results of Hemmecke, Onn and Romanchuk  concerning nonlinear optimization.

Interestingly, Minimum Linear Arrangement was shown to be fixed-parameter tractable by yet another new algorithm for IP due to Lokshtanov.

In the last 3 years we have seen a surge of interest in, and an increased understanding of, these IP techniques beyond Lenstra’s algorithm, allowing significant advances in fields such as parameterized scheduling [11, 42, 47, 52, 67], computational social choice [53, 54, 56], multichoice optimization, and stringology. This has increased our understanding of the strengths and limitations of each tool as well as the modeling patterns and tricks which are typically applicable and used.

### 1.1 Our Results

We start by giving a quick overview of existing techniques in Section 2, which we hope to be an accessible reference guide for parameterized complexity researchers. Then, we resolve the parameterized complexity of three problems when parameterized by the neighborhood diversity of a graph (we defer the definitions to the relevant sections). However, since our complexity results follow by applying an appropriate algorithm for IP, we also highlight our modeling results. Moreover, in the spirit of the optimality program (introduced by Marx), we are not content with obtaining some algorithm, but we attempt to decrease the dependence on the parameter as much as possible. This sometimes has the unintended consequence of increasing the polynomial dependence on the graph size n. We note this and, by combining several ideas, get the “best of both worlds”. Driving down the poly(n) factor is in the spirit of “minding the poly(n)” of Lokshtanov et al.

We denote by n the number of vertices of the graph and by k = nd(G) its neighborhood diversity; graphs of neighborhood diversity k have a succinct representation (constructible in linear time) with O(k² + k log n) bits, and we assume such a representation is given on input.

Capacitated Dominating Set

1. Has a convex IP model in O(k²) variables and can be solved in FPT time parameterized by k, albeit in exponential space.

2. Has an ILP model in O(k²) variables and O(n) constraints, and can be solved in FPT time parameterized by k in polynomial space.

3. Can be solved, using model a and a proximity argument, with a smaller dependence on k at the cost of a larger polynomial dependence on n.

4. Has a polynomial-time algorithm with additive error at most k, obtained by rounding a relaxation of a.

Sum Coloring

1. Has an n-fold IP model in O(nk) variables and O(nk²) constraints, and can be solved in time single-exponential in k with a polynomial dependence on n.

2. Has a LinIP model in at most 2^k variables with a non-separable convex objective, and can be solved in time double-exponential in k with only a logarithmic dependence on n.

3. Has a LinIP model in at most 2^k variables whose constraint matrix has small dual treewidth and whose objective is separable convex, and can be solved in time single-exponential in k with a logarithmic dependence on n.

Max-q-Cut has a LinIP model with an indefinite quadratic objective and can be solved in time f(q, k)·poly(n) for some computable function f.

### 1.2 Related Work

Graphs of neighborhood diversity constitute an important stepping stone in the design of algorithms for dense graphs, because they are in a sense the simplest of dense graphs [2, 3, 7, 28, 33, 35, 66]. Studying the complexity of Capacitated Dominating Set on graphs of bounded neighborhood diversity is especially interesting because it was shown to be W[1]-hard parameterized by treewidth by Dom et al. Sum Coloring was shown to be FPT parameterized by treewidth; its complexity parameterized by clique-width is open as far as we know. Max-q-Cut is FPT parameterized by q and treewidth (by reduction to CSP), but W[1]-hard parameterized by clique-width.

### 1.3 Preliminaries

For positive integers m, n with m ≤ n we set [m : n] = {m, …, n} and [n] = [1 : n]. We write vectors in boldface (e.g., x, b) and their entries in normal font (e.g., the i-th entry of x is x_i). For an integer a ∈ ℤ, we denote by ⟨a⟩ its binary encoding length; we extend this notation to vectors, matrices and tuples of these objects, so e.g. ⟨A, b⟩ = ⟨A⟩ + ⟨b⟩. For a graph G we denote by V(G) its set of vertices, by E(G) the set of its edges, and by N(v) the (open) neighborhood of a vertex v. For a matrix A ∈ ℤ^{m×n} we define

• the primal graph G_P(A), which has a vertex for each column of A, and two vertices are connected if there exists a row of A in which both corresponding columns are non-zero, and,

• the dual graph G_D(A) = G_P(Aᵀ), which is the above with the roles of rows and columns swapped.

We call the treedepth and treewidth of G_P(A) the primal treedepth td_P(A) and the primal treewidth tw_P(A), and analogously for the dual treedepth td_D(A) and the dual treewidth tw_D(A).

We define a partial order ⊑ on ℝⁿ as follows: for x, y ∈ ℝⁿ we write x ⊑ y and say that x is conformal to y if x_i·y_i ≥ 0 for all i (that is, x and y lie in the same orthant) and |x_i| ≤ |y_i| for all i ∈ [n]. It is well known that every subset of ℤⁿ has finitely many ⊑-minimal elements.

[Graver basis] The Graver basis of A ∈ ℤ^{m×n} is the finite set G(A) of ⊑-minimal elements of {x ∈ ℤⁿ ∖ {0} ∣ Ax = 0}.

#### Neighborhood Diversity.

Two vertices u, v ∈ V(G) are called twins if N(u) ∖ {v} = N(v) ∖ {u}. The twin equivalence is the relation on the vertices of a graph where two vertices are equivalent if and only if they are twins. [Lampis] The neighborhood diversity of a graph G, denoted by nd(G), is the minimum number k of classes (called types) of the twin equivalence of G.

We denote by V₁, …, V_k the classes of the twin equivalence on G, for k = nd(G). A graph G with nd(G) = k can be described in a compressed way using only O(k² + k log n) space by its type graph, which is computable in linear time: The type graph T(G) of G is a graph on k vertices [k], where each i ∈ [k] is assigned weight |V_i|, and where ij is an edge or a loop in T(G) if and only if two distinct vertices of V_i and V_j are adjacent.
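For illustration, the twin classes (and hence nd(G)) can be computed by directly testing the twin condition pairwise; a quadratic-time sketch (the edge-list encoding and helper names are ours, not from the paper):

```python
from itertools import combinations

def twin_classes(n, edges):
    """Partition vertices 0..n-1 into twin classes.

    u, v are twins iff N(u) \\ {v} == N(v) \\ {u}; the number of classes
    equals the neighborhood diversity nd(G).
    """
    nbrs = [set() for _ in range(n)]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)

    parent = list(range(n))          # union-find over the twin equivalence
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in combinations(range(n), 2):
        if nbrs[u] - {v} == nbrs[v] - {u}:
            parent[find(u)] = find(v)

    classes = {}
    for v in range(n):
        classes.setdefault(find(v), []).append(v)
    return sorted(classes.values())
```

For example, a path on four vertices has four singleton types, while a triangle has a single type.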

#### Modeling.

Loosely speaking, by modeling an optimization problem P as a different problem Q we mean encoding the features of P by the features of Q, such that the optima of Q encode at least some optima of P. Modeling differs from reduction by highlighting which features of P are captured by which features of Q.

In particular, when modeling P as an integer program, the same feature of P can often be encoded in several ways by the variables, constraints or the objective. For example, an objective of P may be encoded as a convex objective of the IP, or as a linear objective which is lower bounded by a convex constraint; similarly, a constraint of P may be modeled as a linear constraint of the IP, or by minimizing a penalty objective function expressing how much the constraint is violated. Such choices greatly influence which algorithms are applicable to solve the resulting model. Specifically, in our models we focus on the parameters #variables (dimension), #constraints, the largest coefficient in the constraints ‖A‖_∞ (abusing the notation slightly when the constraints are not linear), the largest right-hand side ‖b‖_∞, the largest domain ‖u − l‖_∞, and the largest coefficient of the objective function (‖w‖_∞ for linear objectives, ‖Q‖_∞ for quadratic objectives, f_max in general), noting other relevant features.

#### Solution structure.

We concur with Downey and Fellows that FPT and structure are essentially one. Here, this typically means restricting our attention to certain structured solutions and showing that such structured solutions nevertheless contain optima of the problem at hand. We always discuss these structural properties before formulating a model.

## 2 Integer Programming Toolbox

We give a list of the most relevant algorithms solving IP, for each highlighting its fastest known runtime, typical use cases and strengths, limitations, and references to the algorithms and to their most illustrative applications, both in chronological order. We are deliberately terse here and defer a more nuanced discussion to Appendix A.

### 2.1 Small Dimension

The following tools generally rely on results from discrete geometry. Consider for example Lenstra’s theorem: it can be (hugely) simplified as follows. Let S ⊆ ℝⁿ be a convex set; then we can decide whether S ∩ ℤⁿ = ∅ by the following recursive argument:

1. Either the volume of S is too large not to contain an integer point by Minkowski’s first theorem,

2. or the volume of S is small and S must be “flat” in some direction by the flatness theorem; then, we can cut S up into few lower-dimensional slices and recurse into these.

Being able to optimize an objective then follows from testing feasibility by binary search.
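The feasibility-to-optimization step is a standard binary search; a sketch (the oracle interface below is our own illustration):

```python
def minimize_by_binary_search(is_feasible, lo, hi):
    """Find the least integer t in [lo, hi] with is_feasible(t) True.

    is_feasible(t) decides whether {x : f(x) <= t} contains an integer
    point (monotone in t), mirroring how a Lenstra-type feasibility test
    yields optimization of the objective f.
    """
    assert is_feasible(hi), "no feasible point within the given range"
    while lo < hi:
        mid = (lo + hi) // 2
        if is_feasible(mid):
            hi = mid          # an integer point with f(x) <= mid exists
        else:
            lo = mid + 1      # optimum must be above mid
    return lo

# toy objective: the minimum of x^2 over integers x >= 3 is 9
best = minimize_by_binary_search(
    lambda t: any(x * x <= t for x in range(3, 100)), 0, 10_000)
```

Each call costs one feasibility test, so optimization adds only a log factor over pure feasibility.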

ILP in small dimension. Problem (ILP) with small n.

• Runtime: n^{O(n)}·poly(⟨A, b⟩) [50, 30].
• Strengths: Can use large coefficients, which allows encoding logical connectives using Big-M coefficients. Runs in polynomial space. Most people are familiar with ILP.
• Limitations: Small dimension can be an obstacle in modeling polynomially many “types” of objects [8, Challenge #2]. Models often use exponentially many variables in the parameter, leading to double-exponential runtimes (applies to all small-dimension techniques below). Encoding a convex objective or constraint requires many constraints (cf. Model 3). Big-M coefficients are impractical.
• Algorithms: Lenstra, Kannan, Frank and Tardos.
• Applications: Niedermeier (Closest String), Fellows et al. (graph layout problems), Jansen and Solis-Oba (scheduling; MILP column generation technique), Fiala et al. (graph coloring), Faliszewski et al. (computational social choice; Big-M coefficients to express logical connectives).

Convex IP in small dimension. Problem (IP) with f a convex function; S can be represented by polynomial inequalities, a first-order oracle, a separation oracle, or as a semialgebraic set.

• Runtime: n^{O(n)}·poly(⟨R⟩), where S is contained in a ball of radius R.
• Strengths: Strictly stronger than ILP. Representing constraints implicitly by an oracle allows better dependence on the instance size (cf. Model 3).
• Limitations: Exponential space. Algorithms usually impractical. Proving convexity can be difficult.
• Algorithms: Grötschel, Lovász, and Schrijver [36, Theorem 6.7.10] (weak separation oracle), Khachiyan and Porkolab (semialgebraic sets), Heinz, whose algorithm is superseded by Hildebrand and Köppe (polynomials), Dadush, Peikert and Vempala (randomized) and Dadush and Vempala (strong separation oracle), Oertel, Wagner, and Weismantel (reduction to Mixed ILP subproblems; first-order oracle).
• Applications: Hermelin et al. (multiagent scheduling; convex constraints), Bredereck et al. (bribery; convex objective), Mnich and Wiese, Knop and Koutecký (scheduling; convex objective) [67, 52], Knop et al. (various problems; convex objectives), Model 3.

Indefinite quadratic IP in small dimension. Problem (LinIP) with f an indefinite (non-convex) quadratic.

• Runtime: FPT parameterized by n and the largest coefficients ‖A‖_∞ and ‖Q‖_∞.
• Strengths: Currently the only tractable indefinite objective.
• Limitations: Limiting parameterization.
• Algorithms: Lokshtanov, Zemmer.
• Applications: Lokshtanov (Minimum Linear Arrangement), Model 4.

Parametric ILP in small dimension. Given a matrix A ∈ ℤ^{m×n} and a polyhedron Q ⊆ ℝ^m, decide

 ∀b ∈ Q ∩ ℤ^m ∃x ∈ ℤⁿ : Ax ≤ b.

• Runtime: FPT parameterized by n and m when the input is given in unary.
• Strengths: Models one quantifier alternation. Useful in expressing game-like constraints (e.g., “for every move there exists a counter-move”). Allows unary Big-M coefficients to model logic [56, Theorem 4.5].
• Limitations: Input has to be given in unary (vs. e.g. Lenstra’s algorithm).
• Algorithms: Eisenbrand and Shmonin [24, Theorem 4.2], Crampton et al. [15, Corollary 1].
• Applications: Crampton et al. (resiliency), Knop et al. (Dodgson bribery).

### 2.2 Variable Dimension

In this section it will be more natural to consider the following standard form of (LinIP):

 min { f(x) ∣ Ax = b, l ≤ x ≤ u, x ∈ ℤⁿ }, (SLinIP)

where A ∈ ℤ^{m×n}, b ∈ ℤ^m, and l, u ∈ ℤⁿ. Let f_max = max_{l ≤ x ≤ u} |f(x)|. In contrast with the previous section, the following algorithms typically rely on algebraic arguments and dynamic programming. The large family of algorithms based on Graver bases (see below) can be described as iterative augmentation methods: we start with a feasible integer solution x and iteratively find an augmenting step g such that x + g is still feasible and improves the objective. Under a few additional assumptions on f it is possible to prove quick convergence of such methods.

ILP with few rows. Problem (SLinIP) with small m and a linear objective f(x) = wx for w ∈ ℤⁿ.

• Runtime: (m·‖A‖_∞)^{O(m)}·poly(⟨b⟩) in general, and faster in the case without upper bounds u.
• Strengths: Useful for configuration IPs with small coefficients, leading to exponential speed-ups. Best runtime in the case without upper bounds. Linear dependence on n.
• Limitations: Limited modeling power. Requires small coefficients.
• Algorithms: Papadimitriou, Eisenbrand and Weismantel, Jansen and Rohwedder.
• Applications: Jansen and Rohwedder (scheduling).

 A_nfold = ⎛ A₁ A₁ ⋯ A₁ ⎞   A_stoch = ⎛ B₁ B₂ 0 ⋯ 0 ⎞
           ⎜ A₂ 0  ⋯ 0  ⎟             ⎜ B₁ 0 B₂ ⋯ 0 ⎟
           ⎜ 0  A₂ ⋯ 0  ⎟             ⎜ ⋮  ⋮  ⋮ ⋱ ⋮ ⎟
           ⎜ ⋮  ⋮  ⋱  ⋮ ⎟             ⎝ B₁ 0 0 ⋯ B₂ ⎠
           ⎝ 0  0  ⋯ A₂ ⎠

n-fold IP, tree-fold IP, and dual treedepth. n-fold IP is problem (SLinIP) in dimension nt, with A = A_nfold composed of two blocks A₁ ∈ ℤ^{r×t} and A₂ ∈ ℤ^{s×t}, with l, u ∈ ℤ^{nt}, b ∈ ℤ^{r+ns}, and with f a separable convex function, i.e., f(x) = Σᵢ fᵢ(xᵢ) with each fᵢ convex. Tree-fold IP is a generalization of n-fold IP where the block A₂ is itself replaced by an n-fold matrix, and so on, recursively, τ times. Tree-fold IP has bounded dual treedepth td_D(A).

• Runtime: FPT parameterized by the block dimensions r, s, t and ‖A‖_∞, with near-linear dependence on n [1, 23].
• Strengths: Variable dimension is useful in modeling many “types” of objects [54, 56]. Useful for obtaining exponential speed-ups (not only configuration IPs). The seemingly rigid format is in fact not problematic (blocks can be different provided coefficients and dimensions are small).
• Limitations: Requires small coefficients.
• Algorithms: Hemmecke et al., Knop et al., Chen and Marx, Eisenbrand et al., Altmanová et al., Koutecký et al.
• Applications: Knop and Koutecký (scheduling with many machine types), Knop et al. (bribery with many voter types) [54, 53], Chen and Marx (scheduling; tree-fold IP), Jansen et al. (scheduling EPTAS), Model 5.1.
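The n-fold block structure is easy to materialize explicitly; a sketch (plain nested lists, function name ours) assembling A_nfold from blocks A₁ and A₂:

```python
def nfold_matrix(A1, A2, n):
    """Assemble the n-fold matrix: A1 repeated n times across the top
    block-row, and A2 along a block diagonal below it."""
    t = len(A1[0])                       # shared column count of A1, A2
    top = [row * n for row in A1]        # ( A1 A1 ... A1 )
    diag = [[0] * (i * t) + row + [0] * ((n - 1 - i) * t)
            for i in range(n) for row in A2]   # block-diagonal A2 part
    return top + diag

# tiny example: A1 = (1 1), A2 = identity, n = 3 bricks
A = nfold_matrix([[1, 1]], [[1, 0], [0, 1]], 3)
```

With A1 of r rows and A2 of s rows, the result has r + n·s rows and n·t columns, matching the (SLinIP) dimensions stated above.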

2-stage and multi-stage stochastic IP, and primal treedepth. 2-stage stochastic IP is problem (SLinIP) with A = A_stoch and f a separable convex function; multi-stage stochastic IP is problem (SLinIP) with A a multi-stage stochastic matrix, which is the transpose of a tree-fold matrix; multi-stage stochastic IP is in turn generalized by IP with small primal treedepth td_P(A).

• Runtime: FPT, with a computable parameter dependence.
• Strengths: Similar to Parametric ILP in fixed dimension, but quantification is now over a polynomial-sized yet possibly non-convex set of explicitly given right-hand sides.
• Limitations: Not clear which problems are captured. Requires small coefficients. Parameter dependence is possibly non-elementary; no upper bounds are known, only computability.
• Algorithms: Hemmecke and Schultz, Aschenbrenner and Hemmecke, Koutecký et al.
• Applications: N/A.

Small treewidth and Graver norms. Let g₁(A) and g_∞(A) be the maximum ℓ₁- and ℓ_∞-norms of elements of G(A).

• Runtime: FPT parameterized by g₁(A) or g_∞(A) together with the primal or dual treewidth of A.
• Strengths: Captures IPs beyond the classes defined above (cf. Section 5.3).
• Limitations: Bounding g₁(A) and g_∞(A) is often hard or impossible.
• Algorithms: Koutecký et al.
• Applications: Model 5.3.

## 3 Convex Constraints: Capacitated Dominating Set

Capacitated Dominating Set
Input: A graph G and a capacity function c: V(G) → ℕ.
Task: Find a smallest possible set D ⊆ V(G) and a mapping δ: V(G) ∖ D → D such that δ(u) ∈ N(u) for each u ∈ V(G) ∖ D and |δ⁻¹(v)| ≤ c(v) for each v ∈ D.

#### Solution Structure.

Let ⪯ be a linear extension of the ordering of V(G) by non-increasing vertex capacities, i.e., u ⪯ v implies c(u) ≥ c(v). For i ∈ T(G) and ℓ ∈ [|V_i|], let V_i^ℓ be the set of the first ℓ vertices of V_i in this ordering, and let f_i(ℓ) = Σ_{v∈V_i^ℓ} c(v); for ℓ = 0 let f_i(0) = 0. Let D be a solution and D_i = D ∩ V_i. We call the functions f_i the domination capacity functions. Intuitively, f_i(ℓ) is the maximum number of vertices dominated by ℓ vertices of V_i. Observe that since f_i is a partial sum of a non-increasing sequence of numbers, it is a piecewise linear concave function. We say that D is capacity-ordered if, for each i ∈ T(G), D_i = V_i^{|D_i|}. The following observation allows us to restrict our attention to such solutions; the proof goes by a simple exchange argument. There is a capacity-ordered optimal solution.

###### Proof.

Consider any solution D together with a mapping δ witnessing that D is a solution. Our goal is to construct a capacity-ordered solution D′ which is at least as good as D. If D itself is capacity-ordered, we are done. Assume the contrary; thus, there exists an index i and a vertex u ∈ D_i with u ∉ V_i^{|D_i|}, and consequently there exists a vertex v ∈ V_i^{|D_i|} ∖ D_i with c(v) ≥ c(u).

Let D′ be defined by setting D′_i = (D_i ∖ {u}) ∪ {v} and D′_j = D_j for each j ≠ i. We shall define a mapping δ′ witnessing that D′ is again a solution. Let δ′(w) = v iff δ(w) = u and w ≠ v, let δ′(w) = δ(w) whenever δ(w) ≠ u, and let δ′(u) = δ(v). Clearly |δ′⁻¹(w)| ≤ c(w) for each w ∈ D′, because the load of δ(v) is unchanged (u takes the place of v), the load of every other vertex of D ∖ {u} does not increase, and the new load of v is at most |δ⁻¹(u)| ≤ c(u) ≤ c(v).

If D′ itself is not yet a capacity-ordered solution, we repeat the same swapping argument. Observe that |D′_i ∖ V_i^{|D′_i|}| < |D_i ∖ V_i^{|D_i|}|, i.e., D′ is closer than D to being capacity-ordered, and the size of D′ compared to D does not increase. Finally, when no such index i remains, we have our desired capacity-ordered solution D′. ∎

Observe that a capacity-ordered solution is fully determined by the sizes |D_1|, …, |D_k| rather than the actual sets D_i, which allows modeling CDS in small dimension.
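Since the domination capacity functions f_i are just partial sums of capacities taken in non-increasing order, they are easy to tabulate; a minimal sketch (function name ours):

```python
def domination_capacity(capacities):
    """Given the capacities of the vertices of one type V_i, return the
    table f_i(0), ..., f_i(|V_i|), where f_i(l) is the sum of the l
    largest capacities, i.e., the maximum number of vertices dominated
    by l vertices of this type."""
    caps = sorted(capacities, reverse=True)   # capacity ordering of V_i
    f = [0]
    for c in caps:
        f.append(f[-1] + c)                   # partial sums
    return f

f = domination_capacity([2, 5, 3])            # sorted: 5, 3, 2
```

The increments 5 ≥ 3 ≥ 2 are non-increasing, which is exactly the concavity used by constraint (cds:cap) below.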

###### Model (Capacitated Dominating Set as convex IP in fixed dimension).

Variables & notation:

* x_i = |D_i| for each i ∈ T(G)
* y_{ij} = #vertices of V_j dominated by vertices of D_i
* f_i(x_i) = maximum #vertices dominated by D_i if |D_i| = x_i

Objective & Constraints:

 min Σ_{i∈T(G)} x_i  (min |D| = Σ_{i∈T(G)} |D_i|) (cds:cds-obj)
 Σ_{j∈N_{T(G)}(i)} y_{ij} ≤ f_i(x_i)  ∀i ∈ T(G)  (respect capacities) (cds:cap)
 Σ_{i∈N_{T(G)}(j)} y_{ij} ≥ |V_j| − x_j  ∀j ∈ T(G)  (every v ∈ V_j ∖ D_j dominated) (cds:dom)
 0 ≤ x_i ≤ |V_i|  ∀i ∈ T(G) (cds:bounds)

Parameters & Notes:

• #vars k + k², #constraints 3k
• constraint (cds:cap) is convex, since it bounds the area under a concave function, and f_i is piecewise linear. ∎

Then, applying for example Dadush’s algorithm to Model 3 yields Theorem 1.1a. We can trade the non-linearity of the previous model for an increase in the number of constraints and in the largest coefficient. That, combined with Lenstra’s algorithm, yields Theorem 1.1b, where we get a larger dependence on k but require only polynomial space.

###### Model (Capacitated Dominating Set as ILP in fixed dimension).

Exactly as Model 3, but replace the constraints (cds:cap) with the following equivalent set of linear constraints:

 Σ_{j∈N_{T(G)}(i)} y_{ij} ≤ f_i(ℓ − 1) + c(v_ℓ^i)·(x_i − ℓ + 1)  ∀i ∈ T(G), ∀ℓ ∈ [|V_i|], (cds:cap-lin)

where v_ℓ^i denotes the ℓ-th vertex of V_i in the capacity ordering. Since f_i is concave and piecewise linear, it is the pointwise minimum of the |V_i| linear functions on the right-hand sides, so (cds:cap-lin) is equivalent to (cds:cap). The parameters then become: #vars k + k², #constraints 2k + n.

###### [Additive approximation] Proof of Theorem 1.1d.

Let (x*, y*) be an optimal solution to the continuous relaxation of Model 3, i.e., we relax the requirement that x and y are integral; note that such a solution can be computed in polynomial time using the ellipsoid method, or by applying a polynomial LP algorithm to Model 3. We would like to round x* up to an integral x̂ to obtain a feasible integer solution which would be an approximation of an integer optimum. Ideally, we would take x̂_i = ⌈x*_i⌉ and compute ŷ accordingly, i.e., set ŷ_{ij} to be smallest possible such that (cds:dom) holds; note that this changes each demand |V_j| − x_j by less than one, since we add at most one vertex (to be dominated) per type in the neighborhood of each type. However, this might result in an infeasible solution if, for some i, constraint (cds:cap) becomes violated. In such a case, we solve the relaxation again with the additional constraint x_i ≥ ⌈x*_i⌉ and try rounding again, repeating this fixing procedure if rounding fails, and so on. After at most k repetitions this rounding results in a feasible integer solution (x̂, ŷ), in which case we have Σ_i x̂_i ≤ Σ_i x*_i + k and thus the solution represented by x̂ has value at most OPT + k; the relaxation must eventually become feasible, as setting x_i = |V_i| for all i yields a feasible solution. ∎

###### [Speed trade-offs] Proof of Theorem 1.1c.

Notice that on our way to proving Theorem 1.1d we have shown that Model 3 has integrality gap at most k, i.e., the value of the continuous optimum is at most k less than the value of the integer optimum. This implies that there is an integer optimum x which satisfies, for each i ∈ T(G), |x_i − x*_i| ≤ k, where x* is the continuous optimum.

We can exploit this to improve Theorem 1.1a in terms of the parameter dependence at the cost of the dependence on n. Let us assume that we have a way to test, for a given integer vector x, whether it models a capacity-ordered solution, that is, whether there exists a capacitated dominating set D with |D_i| = x_i for each i ∈ T(G). Then we can simply go over all possible choices of x and choose the best. So we are left with the task of, given a vector x, deciding if it models a capacity-ordered solution.

But this is easy. Let ⪯ be the assumed capacity ordering and define D_i = V_i^{x_i} as above. Now, we construct an auxiliary bipartite matching problem, where we put c(v) copies of each vertex v from D = ⋃_i D_i on one side of the graph, and all vertices of V(G) ∖ D on the other side, and connect a copy of v to u if uv ∈ E(G). Then, D is a capacitated dominating set if and only if all vertices in V(G) ∖ D can be matched. The algorithm is then simply to compute the continuous optimum x*, and go over all integer vectors x with |x_i − x*_i| ≤ k for each i, verifying whether they model a solution and choosing the smallest (best) one. ∎
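The verification step can be sketched directly: the following augmenting-path check (our own encoding: vertices are 0..n−1, `cap` the capacity function, `D` the candidate set) tests whether all vertices outside D can be matched to adjacent vertices of D within their capacities:

```python
def is_capacitated_dominating_set(n, edges, cap, D):
    """Bipartite-matching test from the proof sketch: each vertex outside
    D must be assigned to an adjacent v in D, at most cap[v] per v."""
    nbrs = [set() for _ in range(n)]
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    # left side: cap[v] copy-slots of each v in D; right side: V \ D
    copies = [v for v in D for _ in range(cap[v])]
    outside = [u for u in range(n) if u not in D]
    match = {}                      # outside vertex -> index into copies

    def augment(u, seen):
        # Kuhn's augmenting-path search over the copy slots
        for i, v in enumerate(copies):
            if u in nbrs[v] and i not in seen:
                seen.add(i)
                owner = next((w for w, j in match.items() if j == i), None)
                if owner is None or augment(owner, seen):
                    match[u] = i
                    return True
        return False

    return all(augment(u, set()) for u in outside)
```

For example, a star center with capacity 2 cannot dominate three leaves alone, but adding one leaf to D fixes it.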

## 4 Indefinite Quadratics: Max q-Cut

Max-q-Cut
Input: A graph G.
Task: A partition W₁ ∪ ⋯ ∪ W_q of V(G) maximizing the number of edges between distinct parts, i.e., Σ_{α<β} |{uv ∈ E(G) ∣ u ∈ W_α, v ∈ W_β}|.

#### Solution structure.

As before, it is enough to describe how many vertices of each type V_i belong to each part W_α for i ∈ T(G) and α ∈ [q], and their specific choice does not matter; this gives us a small-dimensional encoding of the solutions.

###### Model (Max-q-Cut as LinIP with indefinite quadratic objective).

Variables & Notation:

* x_{iα} = |V_i ∩ W_α|; for ij ∈ E(T(G)), x_{iα}·x_{jβ} = #edges between V_i ∩ W_α and V_j ∩ W_β

Objective & Constraints:

 max Σ_{α,β∈[q]: α≠β} Σ_{ij∈E(T(G))} x_{iα}·x_{jβ}  (#edges across parts) (mc:obj)
 Σ_{α∈[q]} x_{iα} = |V_i|  ∀i ∈ T(G)  ((V_i ∩ W_α)_{α∈[q]} partitions V_i) (mc:part)

Parameters & Notes:

• #vars qk, #constraints k
• objective (mc:obj) is indefinite quadratic. ∎

Applying Lokshtanov’s or Zemmer’s algorithm to Model 4 yields Theorem 1.1. Note that since we do not know anything about the objective except that it is quadratic, we have to make sure that both the dimension and the largest coefficients are small.
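To see what the model computes, the quadratic objective can be evaluated directly from the counts x_{iα} and the type graph; a sketch with our own encoding (a loop (i, i) marks a clique type):

```python
def cut_value(type_edges, x):
    """#edges across parts, where x[i][a] = |V_i ∩ W_a| and type_edges
    lists the edges of T(G), with a loop (i, i) for clique types."""
    q = len(next(iter(x.values())))      # number of parts
    total = 0
    for i, j in type_edges:
        if i == j:   # clique type: every cross-part pair inside V_i is cut
            total += sum(x[i][a] * x[i][b]
                         for a in range(q) for b in range(a + 1, q))
        else:        # complete bipartite between V_i and V_j
            total += sum(x[i][a] * x[j][b]
                         for a in range(q) for b in range(q) if a != b)
    return total

# a triangle (one clique type of 3 vertices) split 2 + 1 cuts 2 edges
triangle_cut = cut_value([(0, 0)], {0: [2, 1]})
```

The value is a quadratic in the x_{iα} with both positive cross-terms and no convexity guarantee, which is why the indefinite-quadratic machinery is needed.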

## 5 Convex Objective: Sum Coloring

Sum Coloring
Input: A graph G.
Task: A proper coloring c: V(G) → ℕ minimizing Σ_{v∈V(G)} c(v).

In the following we first give a single-exponential algorithm for Sum Coloring with a polynomial dependence on n, then a double-exponential algorithm with a logarithmic dependence on n, and finally show how to combine the two ideas to obtain a single-exponential algorithm with a logarithmic dependence on n.

### 5.1 Sum Coloring via n-fold IP

#### Structure of Solution.

The following observation was made by Lampis for the Coloring problem, and it holds also for the Sum Coloring problem: every color class intersects each clique type in at most one vertex, and each independent type in either none or all of its vertices. The first claim follows simply from the fact that a clique type is a clique; the second from the fact that if two colors α < β are both used on an independent type, then recoloring all vertices of color β to color α remains a valid coloring and decreases its cost. We call a coloring with this structure an essential coloring.

###### Model (Sum Coloring as n-fold IP).

Variables & Notation:

* x_i^α = 1 if color α intersects V_i, and 0 otherwise
* α·x_i^α = cost of color α at a clique type V_i
* α·|V_i|·x_i^α = cost of color α at an independent type V_i
* S_nfold(x) = total cost of x, i.e., the sum of these costs over all α ∈ [|G|] and i ∈ T(G)

Objective & Constraints:

 min S_nfold(x) (sc:nf:obj)
 Σ_{α=1}^{|G|} x_i^α = |V_i|  ∀i ∈ T(G), V_i is a clique  (V_i is colored) (sc:nf:cliques)
 Σ_{α=1}^{|G|} x_i^α = 1  ∀i ∈ T(G), V_i is independent  (V_i is colored) (sc:nf:indeps)
 x_i^α + x_j^α ≤ 1  ∀α ∈ [|G|], ∀ij ∈ E(T(G))  (x^α is an independent set of T(G)) (sc:nf:xi-indep)

Parameters & Notes:

• #vars nk, #constraints k + n·|E(T(G))|
• Constraints have an n-fold format: (sc:nf:cliques) and (sc:nf:indeps) form the A₁ block and (sc:nf:xi-indep) forms the A₂ blocks; see the parameters above. Observe that the matrix A₁ is the identity matrix and the matrix A₂ is the incidence matrix of T(G), transposed. ∎

Applying the algorithm of Altmanová et al. to Model 5.1 yields Theorem 1.1a. Model 5.1 is a typical use case of n-fold IP: we have a vector of multiplicities (modeling V₁, …, V_k) and we optimize over its decompositions into independent sets of T(G). A clever objective function models the objective of Sum Coloring. The main drawback is the large number of bricks in this model.

### 5.2 Sum Coloring via Convex Minimization in Fixed Dimension

#### Structure of Solution.

The previous observations also allow us to encode a solution in a different way. Let 𝓘 be the set of all independent sets of T(G); note that |𝓘| ≤ 2^k. Then we can encode an essential coloring of G by a vector (x_I)_{I∈𝓘} of multiplicities of elements of 𝓘, such that there are x_I colors which color exactly the types contained in I. The difficulty with Sum Coloring lies in the formulation of its objective function. Observe that, given an I ∈ 𝓘, the number of vertices every color class of this type contains is independent of the actual multiplicity x_I. Define the size of a color class of type I as s(I) = Σ_{i∈I: V_i clique} 1 + Σ_{i∈I: V_i independent} |V_i|.

Let G be a graph and let c be a proper coloring of G minimizing Σ_{v∈V} c(v). Let μ(α) denote the quantity |c⁻¹(α)|, i.e., the number of vertices with color α. Then μ(p) ≥ μ(q) for every p ≤ q.

###### Proof.

Suppose for contradiction that we have p < q with μ(p) < μ(q). We now construct a proper coloring c′ of G as follows:

 c′(v) = { p if c(v) = q;  q if c(v) = p;  c(v) otherwise. }

Clearly c′ is a proper coloring. Now we have

 Σ_{v∈V} c(v) = (Σ_{v∈V} c′(v)) − p·μ(q) − q·μ(p) + p·μ(p) + q·μ(q)
 = (Σ_{v∈V} c′(v)) − p·(μ(q) − μ(p)) + q·(μ(q) − μ(p))
 = (Σ_{v∈V} c′(v)) + (μ(q) − μ(p))·(q − p) > Σ_{v∈V} c′(v).

Here the last inequality holds since both factors following the sum are positive by our assumptions. Thus we arrive at a contradiction with c being a coloring minimizing the first sum. ∎
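The rearrangement identity used in the last step, that swapping colors p < q changes the cost by exactly (q − p)·(μ(q) − μ(p)), can be sanity-checked numerically (the data below is arbitrary):

```python
def coloring_cost(colors):
    """Sum-coloring objective: total of the assigned color values."""
    return sum(colors)

# colors drawn from {p, q, other}; swap p and q as in the proof
p, q = 2, 5
colors = [2, 5, 5, 5, 1, 3]          # mu(p) = 1, mu(q) = 3
swapped = [q if c == p else p if c == q else c for c in colors]

diff = coloring_cost(colors) - coloring_cost(swapped)
mu_p = colors.count(p)
mu_q = colors.count(q)
assert diff == (q - p) * (mu_q - mu_p)   # (5 - 2) * (3 - 1) = 6
```

Since μ(q) > μ(p) forces diff > 0, the swapped coloring is strictly cheaper, exactly as the proof concludes.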