An Exact Approach for the Balanced k-Way Partitioning Problem with Weight Constraints and its Application to Sports Team Realignment

09/05/2017 · by Diego Recalde, et al. · Universidad Nacional de Rosario

In this work, a balanced k-way partitioning problem with weight constraints is defined to model sports team realignment. Sports teams must be partitioned into a fixed number of groups according to some regulations, so that the total distance of the road trips that all teams must travel to play a Double Round Robin Tournament in each group is minimized. Two integer programming formulations for this problem are introduced, and the validity of three families of inequalities associated with the polytope of these formulations is proved. The performance of a tabu search procedure and a Branch & Cut algorithm, which uses the valid inequalities as cuts, is evaluated on simulated and real-world instances. In particular, an optimal solution for the realignment of the Ecuadorian football league is reported, and the methodology can be suitably adapted for the realignment of other sports leagues.







1 Introduction

A fundamental problem in Combinatorial Optimization is to partition a graph into several parts. There are many different versions of graph partitioning problems, depending on the number of parts required, the type of weights on the edges or nodes, and the inclusion of further constraints such as restricting the number of nodes in each part. Usually, the objective of these problems is to divide the set of nodes into subsets with strong internal connectivity and weak connectivity between them. Most versions of these problems are known to be NP-hard. In this paper, we introduce a problem that consists of partitioning a complete or general graph into a fixed number of subsets of nodes whose cardinalities differ by at most one and whose total weights are bounded. The objective is to minimize the total cost of the edges with both end-nodes in the same subset. The motivation for this problem was the realignment of the second category of the Ecuadorian football league.

Sports team realignment deals with partitioning a fixed number of professional sports teams into groups or divisions of similar characteristics, in which a tournament is played. Commonly, a geographical criterion is used to construct the divisions in order to minimize the sum of intra-divisional travel costs. The Ecuadorian football federation (FEF) adapts this criterion and imposes that the realignment of the second category of the professional football league be made by considering provinces instead of teams. Thus, in the realignment, the provinces are divided into four geographical zones. In each zone, two subgroups of teams are randomly constructed from the two best teams of each province, satisfying that two teams of the same province do not belong to the same subgroup and that every subgroup has the same number of best and second-best teams whenever possible. The eight subgroups play a Double Round Robin Tournament, where the champion of each subgroup and the four second-best teams with the highest scores advance to the next stage of the championship. In 2014, FEF managers asked the authors of this work how to realign the second category of the Ecuadorian football league in an optimal way, considering 21 provinces and 42 teams to be partitioned into 4 groups. Prior to this request, they had made a realignment in which the total road travel distance was 39830.4 km. The optimal solution reduced this road travel distance by 1532.4 km, with highly relevant logistical and economic benefits for the large number of people involved. A graphical representation is depicted in Figure 1. In a preliminary work (ISCO2016), the problem was modeled as a k-clique partitioning problem with constraints on the sizes and weights of the cliques and formulated as an integer program.
Unfortunately, there was an instance, related to a proposal to change the design of the realignment, that could not be solved to optimality. Consequently, the present paper aims to improve the previous practical results and to provide more theoretical insight into this problem.

Figure 1: Empirical vs optimal solution, 2014 edition

The proposal for realigning the second category of the Ecuadorian football league addressed in the previous work, and revisited in this paper, consists of making the divisions directly from teams instead of provinces, as is done in other international leagues. During the 2015 season of the second category of the Ecuadorian football league, 44 teams participated (22 provincial associations) and the set of teams was divided into groups with teams and groups with teams. In the context of this problem, the distance between two teams is defined as the road trip distance between their venues. The strength or weakness of a team is quantified by a parameter that measures its football level, considering the historical performance of each team. Thus, the problem studied in this paper consists of partitioning the set of teams into groups such that the number of teams in each group differs by at most one, a certain homogeneity of football performance holds among the teams in each class of the partition, and the total intra-group geographical distance is minimized.

The sports team realignment problem has been modeled in different ways for different leagues in other countries. A quadratic binary programming model was set up to divide the 30 teams of the National Football League (NFL) in the United States into 6 compact divisions of 5 teams each (Saltzman1996). The results, obtained directly from a nonlinear programming solver, are considerably less expensive for the teams in terms of total intra-divisional travel than the realignment of the 1995 edition of this league. On the other hand, MacDonaldPulleyblank2014 propose a geometric method to construct realignments for several sports leagues in the United States: NHL, MLB, NFL and NBA. The authors claim that with their approach they always find the optimal solution. To prove this, they solve mixed integer programming problems corresponding to practical instances, using CPLEX.

In the case that it is possible to divide the teams into divisions of equal size, this problem can be modeled as a k-way equipartition problem: given an undirected graph with n nodes and edge costs, the problem consists of finding a k-partition of the set of nodes into subsets of the same size, such that the total cost of the edges with both endpoints in one of the subsets of the partition is minimized. An example of this case was given by Mitchell2001, who optimally solved the realignment of the NFL in the United States by using a branch-and-cut algorithm. Moreover, the author shows that the 2002 edition of the NFL could have reduced the sum of intra-divisional travel distances.

The sports team realignment problem can also be modeled as a Clique Partitioning Problem (CPP) with constraints on the sizes of the cliques (Mitchell2007), as we will see in the next section. The CPP has been extensively studied in the literature. This graph optimization problem was introduced by GROTSCHEL to formulate a clustering problem. They studied it from a polyhedral point of view, and their theoretical results have been used in a cutting-plane algorithm that includes heuristic separation routines for some classes of facet-defining inequalities.

Ferreira analyzed the problem of partitioning a graph subject to capacity constraints on the sum of the node weights in each subset of the partition. Jaehn proposed a branch-and-bound algorithm in which tighter bounds for each node of the search tree are reported. Additionally, a variant of the CPP with constraints on the sizes of the cliques is studied by LABBE, and a competitive branch-and-cut algorithm for a particular problem based on it has been implemented (LABBE2). Regarding applications of the Graph Partitioning Problem, it is widely known that the canonical application is the distribution of work to the processors of a parallel machine (HENDRICKSON_2000). Other well-known applications include VLSI circuit design (VLSI_2011), mobile wireless communications (Fairbrother_EtAl_2017) and the already mentioned sports team realignment (Mitchell_2003). For a complete survey of applications and recent advances in graph partitioning, see Buluc2016.

This paper proposes two integer programming formulations for the problem, which are formally defined in Section 2. Moreover, valid inequalities for the polytopes of these formulations are derived in Section 3; they are integrated in a Branch & Bound & Cut scheme to solve to optimality instances of 54 teams and, in particular, a hard real-world instance of 44 teams not solved in a previous work. Additionally, a tabu search method for finding feasible high-quality solutions is presented in Section 4. In Section 5, the tabu search method and the valid inequalities are combined in several strategies to solve the real-world as well as the simulated instances. Finally, Section 6 concludes the paper with some remarks.

2 Problem definition and integer programming formulations

By associating the venues of the teams with the nodes of a graph, the distances between venues with costs on the edges, and the football performance of the teams with weights on the nodes, a typical realignment problem can be modeled as a k-way partitioning problem with constraints on the size (the number of nodes in each subset differs by at most one) and the weight of the cliques (the total sum of node weights in each clique). From now on, we refer to this problem as the balanced k-way partitioning problem with weight constraints (B-WWC).

Let G = (V, E) be an undirected complete graph with node set V (with n = |V|), edge set E, costs c_e on the edges, weights w_v on the nodes, and a fixed number k of parts. A k-partition of G is a collection of subgraphs G_i = (V_i, E_i), i = 1, ..., k, of G, where V_i ∩ V_j = ∅ for all i ≠ j, V_1 ∪ ... ∪ V_k = V, and E_i is the set of edges with both end nodes in V_i. Note that all subgraphs G_i are cliques since G is a complete graph. Moreover, let L and U be the lower and upper bounds, respectively, for the weight of each clique (both part of the input of our problem). The weight of a clique is the total sum of the node weights in the clique. Then, B-WWC consists of finding a k-partition such that

⌊n/k⌋ ≤ |V_i| ≤ ⌈n/k⌉ and L ≤ w(V_i) ≤ U, for i = 1, ..., k,

and the total edge cost over all cliques is minimized. In a previous work (ISCO2016), the NP-hardness of B-WWC was proved by a polynomial transformation from the 3-Equitable Coloring Problem.
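To make the side constraints concrete, the following sketch (illustrative names, not the paper's notation) checks whether a candidate partition is feasible for B-WWC:

```python
def is_feasible_partition(parts, weights, k, L, U):
    """Check the B-WWC side constraints for a candidate partition:
    balanced sizes (cardinalities differ by at most one) and clique
    weights within [L, U]."""
    if len(parts) != k:
        return False
    n = sum(len(p) for p in parts)
    lo, hi = n // k, -(-n // k)          # floor(n/k) and ceil(n/k)
    for p in parts:
        if not lo <= len(p) <= hi:       # balanced cardinalities
            return False
        if not L <= sum(weights[v] for v in p) <= U:   # weight window
            return False
    return True
```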

Algorithms based on integer programming techniques have proved to be among the best tools for dealing with problems such as B-WWC. As mentioned in the introduction, a related CPP was successfully addressed by LABBE2, using an integer programming formulation for the size-constrained clique partitioning problem given by LABBE. In that formulation, a binary variable is defined for each edge: x_ij is the variable associated with the edge ij ∈ E, with x_ij = 1 if nodes i and j belong to the same clique, and x_ij = 0 otherwise. The formulation is stated as follows:


The objective function (3) seeks to minimize the total edge cost of the cliques. Constraints (4), (5), and (6) are the so-called triangle inequalities introduced by GROTSCHEL, which guarantee that if two of the three edges among nodes i, j, l ∈ V are chosen, say x_ij = x_il = 1, then the third edge must also be included in the solution, i.e. x_jl = 1. Constraints (7) ensure that the cardinality of each clique lies between the given lower and upper size bounds, and constraints (8) state that all variables are binary.
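The transitivity enforced by the triangle inequalities can be checked directly on a 0/1 edge vector; the following sketch (an illustration, not the paper's separation routine) enumerates violated triangles:

```python
from itertools import combinations

def violated_triangles(x, n):
    """Enumerate node triples (i, j, l) for which some triangle
    inequality x_ij + x_il - x_jl <= 1 fails for a 0/1 edge vector x,
    given as a dict keyed by sorted node pairs."""
    def e(a, b):
        return x[min(a, b), max(a, b)]
    bad = []
    for i, j, l in combinations(range(n), 3):
        a, b, c = e(i, j), e(i, l), e(j, l)
        # the three rotations of the triangle inequality
        if a + b - c > 1 or a + c - b > 1 or b + c - a > 1:
            bad.append((i, j, l))
    return bad
```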

In our case, in order to model the B-WWC, the cardinality of any two cliques must differ by at most one; i.e., from now on, the size bounds in (7) are ⌊n/k⌋ and ⌈n/k⌉. Moreover, additional constraints that impose the weight requirements on each clique are included:


As the cardinality of each subset in the partition depends on n and k, when k divides n the formulation (3)-(9) returns cliques with exactly n/k nodes, and the problem corresponds to the k-way equipartition problem (Mitchell2001). In this case, constraints (7) can be rewritten as:


When k does not divide n, the previous constraints are not enough to guarantee that integer solutions represent partitions into k cliques. In fact, observe that formulation (3)-(9) may admit feasible solutions for different numbers of parts; for example, for a suitable instance of B-WWC on a complete graph, a feasible solution could be a partition consisting of one subset of the larger cardinality and several subsets of the smaller cardinality, which corresponds to a non-desired value of k.

Two alternatives are proposed to overcome this issue. On the one hand, dummy nodes are added to the graph until k divides the number of nodes; i.e., a set D of dummy nodes with |D| = k⌈n/k⌉ − n is defined. Moreover, for every dummy node, zero costs on the edges to all other nodes and suitable weights are fixed. After that, L must be updated accordingly, and the same for U. Finally, observe that two dummy nodes must not be assigned to the same clique in the partition. In order to avoid this, the following constraint is considered:
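The padding step can be sketched as follows; the zero weight assigned to dummy nodes is an assumption made for illustration, since the text only states that the bounds L and U must then be updated:

```python
from math import ceil

def pad_with_dummies(n, k, costs, weights):
    """Pad the instance so that k divides the number of nodes.  Dummy
    nodes get zero edge costs; giving them zero weight is an assumption
    (the paper updates the bounds L and U accordingly).  `costs` maps
    sorted node pairs to distances."""
    d = k * ceil(n / k) - n              # number of dummy nodes needed
    padded_costs = dict(costs)
    padded_weights = dict(weights)
    for u in range(n, n + d):
        padded_weights[u] = 0            # assumed dummy weight
        for v in range(u):
            padded_costs[(v, u)] = 0.0   # dummies are "free" to place
    return n + d, padded_costs, padded_weights
```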


We call the first formulation of B-WWC the one composed of the objective function (3) and constraints (4)-(6) and (8)-(11).

On the other hand, an alternative to the inclusion of dummy nodes (as in the first formulation) is to consider a new constraint as follows. Note that in a balanced k-partition of a graph there exist r and k − r disjoint subsets of cardinality ⌈n/k⌉ and ⌊n/k⌋, respectively, where r = n mod k. Note also that the total number of intra-clique edges is then r·C(⌈n/k⌉, 2) + (k − r)·C(⌊n/k⌋, 2); let m denote this number. It is easy to see that fixing the total number of chosen edges to m implies that the partition has exactly k parts, and therefore the following equality forces the partition to have size k:
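The constant on the right-hand side of this cardinality equation can be computed as below (a reconstruction of the dropped formula from the sizes stated in the text):

```python
from math import comb

def balanced_edge_count(n, k):
    """Total number of intra-clique edges in any balanced k-partition of
    n nodes: r = n mod k parts of size ceil(n/k) and k - r parts of size
    floor(n/k).  Fixing the sum of all x_e to this constant is the
    cardinality equation of the second formulation."""
    size, r = divmod(n, k)
    return r * comb(size + 1, 2) + (k - r) * comb(size, 2)
```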


We call the second formulation the one composed of the objective function (3), constraints (4)-(9), and (12).

In some situations, the graph may not be complete. For example, in real-world instances there may be an extra requirement that certain pairs of nodes must not participate in the same clique. This can be modeled simply by deleting the edges uv where u and v are nodes that should be placed in different cliques. Such a problem will be called the generalized balanced k-way clique partitioning problem with weight constraints (GB-WWC). The latter problem is not harder to solve than B-WWC. In fact, given an instance of GB-WWC defined by an arbitrary graph G, costs on the edges, weights on the nodes, positive numbers L and U, and a fixed integer k, an instance of B-WWC can be constructed as follows: take a complete graph with the same set of nodes, the same weights and bounds, and distances

where M is a big number. It is then obvious that GB-WWC has an optimal solution if and only if the optimum of B-WWC does not exceed M. In practice, one does not have to deal with M: as in the case of dummy nodes, fixing the corresponding variables x_uv to zero is enough.
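The reduction can be sketched as follows, with the default choice of M (one more than the sum of all costs) being an illustrative assumption:

```python
def complete_with_big_m(n, costs, M=None):
    """Turn a GB-WWC instance on an arbitrary graph into a B-WWC
    instance on a complete graph: missing (forbidden) pairs receive a
    big cost M, so any solution of cost below M avoids them.  The
    default M = 1 + sum of all costs is an illustrative choice."""
    if M is None:
        M = 1 + sum(costs.values())
    full = {(i, j): costs.get((i, j), M)
            for i in range(n) for j in range(i + 1, n)}
    return full, M
```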

3 Valid inequalities

Let P be the polytope defined by the convex hull of the integer solutions of the corresponding formulation. P is uniquely determined by the parameters n, k, L, U and w_v for all v ∈ V. If the weight constraints are redundant, the dimension of this polytope is given by Theorem 3.1 of LABBE, and the equations (10) or (12) are enough to describe the minimal system of P. On the other hand, the weight constraints could make this polytope empty. In order to avoid such cases, a necessary condition on the weights is established in the following result:

Lemma 1

A necessary condition for the feasibility of B-WWC is

k·L ≤ w(V) ≤ k·U.

Indeed, summing the weight constraints (9) over the k cliques of any feasible k-partition gives k·L ≤ Σ_i w(V_i) = w(V) ≤ k·U, from which the result follows.
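As a quick sanity check, Lemma 1 can be applied before attempting any optimization; a sketch:

```python
def weights_admit_partition(weights, k, L, U):
    """Lemma 1's necessary condition: summing the k clique-weight
    windows of any feasible solution gives k*L <= w(V) <= k*U, so an
    instance violating this is infeasible outright (the condition is
    necessary, not sufficient)."""
    total = sum(weights)
    return k * L <= total <= k * U
```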

Observe that P is contained in the polytope given by LABBE. Hence, linear relaxations of our formulations can be strengthened by means of known classes of valid and facet-defining inequalities described in previous works. This is the case of the 2-partition inequalities:

x(S : T) − x(E(S)) − x(E(T)) ≤ min{|S|, |T|}, (13)

for every two disjoint nonempty subsets S, T of V, where x(S : T) denotes the sum of the variables on the edges with one end node in S and the other in T. These inequalities were introduced in the nineties by GROTSCHEL for the Clique Partitioning Polytope and, in recent years, LABBE explored them for their polytope. Based on the computational experiments reported in these preceding works, the usage of 2-partition inequalities as cuts proved effective in solving the IP model, and they are also considered in the present paper.
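A small sketch of checking a 2-partition inequality for given disjoint sets S and T (fractional values allowed):

```python
from itertools import combinations, product

def two_partition_lhs(x, S, T):
    """Left-hand side x(S:T) - x(E(S)) - x(E(T)) of the 2-partition
    inequality for disjoint node sets S and T; `x` maps sorted node
    pairs to (possibly fractional) values."""
    def e(a, b):
        return x[min(a, b), max(a, b)]
    cross = sum(e(s, t) for s, t in product(S, T))
    inner = sum(e(a, b) for side in (S, T) for a, b in combinations(side, 2))
    return cross - inner

def violates_two_partition(x, S, T):
    """True if x(S:T) - x(E(S)) - x(E(T)) <= min(|S|, |T|) is violated."""
    return two_partition_lhs(x, S, T) > min(len(S), len(T))
```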

In addition, new valid equations and inequalities arise from the introduction of the weight constraints (9). Below, three families of valid inequalities for P are proposed. For any S ⊆ V, define w(S) = Σ_{v∈S} w_v and let E(S) be the set of edges of E with both end nodes in S; for a set of edges F, x(F) = Σ_{e∈F} x_e. Finally, any integer solution lying in a polyhedron is called a root of it.

Proposition 1

Let S ⊆ V be such that w(S) > U. Then, the following S-Weight-Cover inequality is valid for P:

x(E(S)) ≤ C(|S| − 1, 2).

Let x be a root of P representing a k-partition. The left side of the inequality is x(E(S)). Since w(S) > U, the nodes in S cannot all belong to the same clique of the partition. That is, S contains nodes from two or more cliques, and the largest value of x(E(S)) is reached when S has |S| − 1 nodes belonging to one clique and just one node belonging to another. In that case, x(E(S)) = C(|S| − 1, 2), and the inequality follows.
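The inequality, as reconstructed from this proof, can be verified numerically; a sketch:

```python
from itertools import combinations

def weight_cover_satisfied(x, S, weights, U):
    """Check the S-Weight-Cover inequality, as reconstructed from the
    proof of Proposition 1: if w(S) > U, the nodes of S cannot share a
    clique, hence x(E(S)) <= binom(|S| - 1, 2)."""
    assert sum(weights[v] for v in S) > U, "inequality requires w(S) > U"
    lhs = sum(x[min(a, b), max(a, b)] for a, b in combinations(S, 2))
    s = len(S)
    return lhs <= (s - 1) * (s - 2) // 2
```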

Corollary 1

Let i, j ∈ V be such that w_i + w_j > U. Then, x_ij = 0 is a valid equation for P.

Proposition 2

Let such that , and . Then, for all , the following -Weight-Lowerbound inequality is valid for :


Let be a root of representing a -partition and w.l.o.g. suppose that . The left side of the inequality is . If , we have since , and the inequality is valid. If then .

Proposition 3

Let such that , and . Then, for all , the following -Weight-Upperbound inequality is valid for :


Let be a root of representing a -partition and w.l.o.g. suppose that . The left side of the inequality is . If , we have and the inequality is valid. If , then .

The previous results only give conditions for the inequalities to be valid, but we are also interested in finding those inequalities that define faces of high dimension, preferably facets of P, since linear relaxations can be reinforced with them. This would require a deeper polyhedral study of P. However, for practical purposes, it is enough to propose necessary conditions guaranteeing that the faces defined by such inequalities have as many roots as possible (or at least are non-empty). These conditions can further be used for the proper separation of the inequalities involved.

Proposition 4

Let F be the face of P defined by an S-Weight-Cover inequality. If F ≠ ∅, then there exists u ∈ S such that w(S \ {u}) ≤ U.


Let x ∈ F be a root representing a k-partition. Since x satisfies the S-Weight-Cover inequality with equality, the partition restricted to S must have two components: a clique of size |S| − 1 and an isolated node u. Clearly, x(E(S \ {u})) = C(|S| − 1, 2). Now, suppose that w(S \ {u}) > U. Hence, the (S \ {u})-Weight-Cover inequality is also valid and we obtain C(|S| − 1, 2) = x(E(S \ {u})) ≤ C(|S| − 2, 2), which is absurd.

The previous result suggests that, in order to obtain a good Weight-Cover inequality, we should impose that w(S \ {v}) ≤ U for all v ∈ S (i.e., S is "minimal" with respect to the condition w(S) > U).

Proposition 5

Let be the face defined by a -Weight-Lowerbound inequality. If , then .


Let be a root of representing a -partition and suppose that . Also, let . We have . Therefore,

Since , we have . Therefore, .

This result simply suggests to consider only Weight-Lowerbound inequalities such that .

Regarding the Weight-Upperbound inequalities, and for the sake of clarity, the roots of the faces defined by such inequalities are classified into two types. Let F be the face defined by a Weight-Upperbound inequality and let x be a root of F representing a k-partition where, w.l.o.g., the relevant clique is the first one. Depending on which case of the proof applies, we say that x is of Type 1 or of Type 2. Now, define R_t as the set of roots of F of Type t, for t ∈ {1, 2}.

Proposition 6

Let be the face defined by a -Weight-Upperbound inequality.
(i) If , then .
(ii) If , then .


We recall that .
(i) If , we obtain implying that . Therefore, .
(ii) If , add to both sides of the equation. We obtain . Since , and, therefore, . On the other hand, implying that . Hence, , and . Let be the unique element from . Then, . Therefore, .

This result suggests discarding those Weight-Upperbound inequalities for which the stated condition does not hold.

4 Deriving upper bounds: a tabu search

Consider an optimization problem in which, given a graph G and a number k, the objective is to obtain a partition of the set of nodes into k parts whose cardinalities differ by at most one, minimizing the number of edges with both end nodes in the same part. This problem, called k-ECP, is iteratively used by the state-of-the-art tabu search algorithm TabuEqCol for solving the Equitable Coloring Problem (TABUEQCOL; TABUIMPROVED).

The k-ECP is closely related to the problem presented in this paper. In fact, it is a particular case of B-WWC: simply consider a complete graph with suitable edge costs and node weights, and trivial weight bounds. In this section, we propose a two-phase algorithm based on TabuEqCol for solving B-WWC, which incorporates an additional mechanism to deal with weights.

Tabu search is a metaheuristic method proposed by GLOVER. Basically, it is a local search algorithm equipped with additional mechanisms that prevent it from visiting a solution twice and from getting stuck in a local optimum. The design of a tabu search algorithm involves defining the search space of feasible solutions, an objective function, the neighborhood of each solution, the stopping criterion, the aspiration criterion, the features, how to choose one of them to be stored in the tabu list, and how to compute the tabu tenure. The reader is referred to the work by TABUEQCOL for the definitions of these concepts and the notation used for them.

Below, the details of our algorithm are presented:

  • Search space of solutions. A solution is a partition of the set of nodes such that for all . For the sake of efficiency, solutions are stored in memory as tuples where , and .

  • Objective function. For a given solution , let be the sum of the distances of every edge in for all and . The objective function is defined as where is a big value. Note that solutions satisfying are feasible but penalized in .

  • Stopping criterion. The algorithm stops when a maximum number of iterations is reached.

  • Aspiration criterion. Let be the current solution and be the best known solution so far. When , replaces .

  • Set of features. A solution presents a feature if and only if .

  • Initial solution. For all , do .

  • Neighborhood of a solution. For a given solution , , , a neighbor , , of is generated with two schemes:

    • 1-move (only applicable when does not divide ). For a given such that and a given , consider such that node is moved from to . Formally, , , , , and for all , and .

    • 2-exchange. For a given and such that , consider such that is moved to and is moved to . Formally, , , , , and for all , and .

  • Selection of a feature to add in the tabu list. Once a movement from to is performed, is stored on the tabu list.

  • Tabu tenure. Once an element is added to the tabu list, it remains there for the next t iterations, where t is an integer randomly generated with a uniform distribution (one of the criteria used by TABUEQCOL).

Since the algorithm is intended to be as fast as possible, the objective value of a neighbor should be computed incrementally, by adding or subtracting the corresponding difference. Also, the penalty value should not be too high, in order to avoid round-off errors.

A difference between TabuEqCol and our algorithm is that we are interested in feasible solutions of the B-WWC, whereas TabuEqCol does not contemplate weight constraints. For that reason, the entire process is divided into two stages: the first one searches for a solution satisfying the weight constraints, while the second one focuses on minimizing the total distance. We observed that, during the first stage, the search needs to be more diversified; therefore, different ranges of values for the tabu tenure are used in each stage.

This method can also be used for obtaining feasible solutions of GB-WWC: simply consider for those edges ; if then is a feasible solution of GB-WWC. However, if the density of edges in the graph is not high, it would be convenient to exploit the structure of in the computation of the neighborhood of a solution and tabu tenure, as in the case of TabuEqCol.
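The two neighborhood schemes can be sketched as follows (illustrative data structures; the paper stores solutions as tuples for efficiency):

```python
import random

def one_move(parts, lo, hi, rng=random):
    """1-move neighbor: move one node from a part of maximum allowed
    size hi to a part of minimum allowed size lo, preserving balance
    (only applicable when k does not divide n)."""
    src = rng.choice([i for i, p in enumerate(parts) if len(p) == hi])
    dst = rng.choice([i for i, p in enumerate(parts) if len(p) == lo])
    nbr = [list(p) for p in parts]
    v = rng.choice(nbr[src])
    nbr[src].remove(v)
    nbr[dst].append(v)
    return nbr

def two_exchange(parts, rng=random):
    """2-exchange neighbor: swap one node between two distinct parts,
    so all cardinalities are preserved."""
    i, j = rng.sample(range(len(parts)), 2)
    nbr = [list(p) for p in parts]
    u, v = rng.choice(nbr[i]), rng.choice(nbr[j])
    nbr[i].remove(u)
    nbr[j].remove(v)
    nbr[i].append(v)
    nbr[j].append(u)
    return nbr
```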

5 Computational experiments

In this section, some computational experiments are presented. They consist of comparing different ways of solving the B-WWC, called strategies. Comparisons are carried out over random instances and, at the end, the resolution of a real-world instance is addressed. The improvements in the results are shown incrementally; that is, each strategy outperforms the previous one. All the experiments were carried out on a machine equipped with an Intel i5 2.67GHz CPU, 4 GB of memory, the operating system Ubuntu 16.04 and the solver GuRoBi 6.5. All instances as well as the source code of the implementation can be downloaded from:

Random instances were generated by computing the coordinates of points with a uniform distribution in a square domain. Then, for every pair of points, the edge cost is the Euclidean distance between them. Weights are random values generated with a uniform distribution in a range determined by the average and standard deviation of the weights of all points. Combinations of n and k were chosen so that k does not divide n, similar to the values of real instances.
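A generator in the spirit of this description; the side length of the square and the weight range are assumptions, since the original values were lost in extraction:

```python
import random
from math import hypot

def random_instance(n, seed=0, side=100.0):
    """Generate a random instance in the spirit of Section 5: n uniform
    points in a square (the side length is an assumption), Euclidean
    edge costs keyed by sorted node pairs, and uniform node weights
    (the range [1, 10] is likewise an assumption)."""
    rng = random.Random(seed)
    pts = [(rng.uniform(0, side), rng.uniform(0, side)) for _ in range(n)]
    costs = {(i, j): hypot(pts[i][0] - pts[j][0], pts[i][1] - pts[j][1])
             for i in range(n) for j in range(i + 1, n)}
    weights = [rng.uniform(1, 10) for _ in range(n)]
    return costs, weights
```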

The results of the experiments are summarized in Tables 2 to 6, whose format is as follows: the first and second columns display the number of the instance and its optimal value, and the remaining columns show the number of nodes evaluated and the time elapsed in seconds during the optimization. A time limit of one hour is imposed. For those instances that are not solved within this limit, the gap percentage between the best lower and upper bounds found is reported. The last row displays the average over all instances, except for Tables 4 and 6 (marked with a dagger), where it shows the average over those instances solved by all strategies being compared (i.e., 21, 23, 24, 26, 28, 29 and 30 for Table 4; 42, 43, 45, 47, 49 and 50 for Table 6). Boldface indicates the best results.

First formulation (with dummy nodes) vs. second formulation. Strategy 1 consists of the resolution of the first formulation, after the addition of dummy nodes, while Strategy 2 is the direct resolution of the second formulation. Both use GuRoBi at its default settings. Instead of using a single constraint such as (11), variables are directly fixed to zero in a straightforward manner. In fact, there are three cases where variables x_uv are set to zero:

  • (only when dummy nodes are present, see constraint (11)).

  • (only when is not complete).

  • (see Corollary 1).

Note that, according to the results reported in the tables, Strategy 2 performs better than Strategy 1 for larger instances. In particular, Strategy 2 solves instance 27 to optimality and reports better gaps than Strategy 1 for instances 22 and 25.

Tabu search vs. GuRoBi built-in heuristics.

The next step is to use the tabu search method proposed in the previous section. This metaheuristic generates good initial solutions. In particular, it gives the optimal one for almost all instances reported in the tables within a reasonable number of iterations (the unique exception was instance 19, where it could not reach the optimal solution after 1000000 iterations for two different seeds). Based on experimentation with several random instances and initial seeds, we obtained a formula by linear regression for the limit on the number of iterations needed:

Now, in Strategy 3, iterations of the tabu search are executed and the best solution found is provided as an initial integer solution to GuRoBi. Then, the formulation is solved with GuRoBi primal heuristics turned off (the Heuristics, PumpPasses and RINS parameters are set to zero). In order to make a fair comparison, the time reported in the tables includes the time spent by the tabu search. Clearly, this strategy outperforms the previous one. In particular, it was able to solve instance 25 to optimality and presents a lower gap for instance 22.

Addition of triangular inequalities on demand. The formulation has a large number of constraints due to the triangular inequalities. Although in the presence of variables fixed to zero some of them become redundant and can be omitted from the initial relaxation (e.g., for a given triangle, if one of its variables is fixed to zero then inequalities (4) and (5) are redundant but (6) is not), their number is still high. Since the number of variables is much smaller, it is expected that several triangular inequalities are not active in the fractional solution of the initial relaxation. We noticed that there exists a relationship between an inequality being active and the distances of the positive variables in its support: the lower the value, the higher the probability that the inequality is active. The following experiment reveals this relationship.

Let T be the set of triangular inequalities and let v(t) be a value assigned to each t ∈ T as follows:

We first order all triangular inequalities according to the value v(t) from lowest to highest and make an equitable partition of the ordered set into 10 deciles. Then, we solve the formulation without these inequalities in its initial relaxation and, whenever GuRoBi obtains a solution (fractional or integer) violating some of them, they are added to the current relaxation: if the solution is fractional, they are added as cuts, and if the solution is integer, they are added as lazy constraints. Histograms with the averages (over 10 instances of 44 nodes) of the percentages of triangular inequalities generated per decile are shown in Figures 2 and 3. In the former, only those inequalities added at the root node of the tree are considered. In Figure 3, all inequalities are counted; in particular, if the same inequality is generated in two different nodes, it is counted twice.

Note that, at the root node, most of the generated inequalities come from the first deciles, while during the B&B process inequalities from the last deciles are rarely violated. Based on these observations and additional experimentation, we define the next strategy as the previous one with the following differences:

  • Only inequalities from the first deciles are considered in the initial relaxation.

  • Each time an integer solution is found, inequalities from with are checked and violated ones are added as lazy constraints.

  • Each time a fractional solution is found, inequalities from with are checked and those that are violated by at least 0.1 units, are added as cuts. If the current node is root, , otherwise .

  • Some GuRoBi cuts are disabled (Clique, FlowCover, FlowPath, Implied, MIPSep, Network and ModK).

Observe that this strategy behaves much better than the previous one: it needs less than half of the time used by the preceding one. In addition, it can solve instance 22 to optimality.

Separation of valid inequalities. Here, we experiment with additional custom separation routines embedded in our code, where 2-partition inequalities and the new families of valid inequalities presented in Section 3 are considered. Two experiments are carried out, detailed below.

Experiment 1: In this experiment, we analyze the effectiveness, in terms of the reduction in the number of B&B nodes, of the Weight-Cover, Weight-Lowerbound and Weight-Upperbound inequalities. We also gather helpful information that is further used in the design of the separation routines. For each family, random instances of 44 nodes are solved and, during the optimization, the inequalities satisfying the conditions given in Section 3 are enumerated exhaustively and added when they are sufficiently violated relative to the r.h.s.
Regarding Weight-Cover inequalities, we restrict the enumeration to small sets, since for larger ones they are seldom violated. Regarding Weight-Upperbound inequalities, we impose an additional limit of 1500 nodes, since their enumeration in each node consumes a fairly long time. This limit is reached on instances 22, 25 and 27.
Table 1 reports the total number of cuts generated and the number of B&B nodes evaluated for each family of valid inequalities, and the relative gap when 1500 nodes are reached (only for Weight-Upperbound). The three columns entitled “only GuRoBi” display the same parameters (number of cuts, B&B nodes and relative gap at 1500 nodes) generated by strategy . The last three rows show the averages over all instances, the averages over instances 21, 23, 24, 26, 28, 29 and 30 (marked with a dagger), and the average of gap over instances 22, 25 and 27 (marked with a double dagger).
We conclude that the addition of Weight-Cover and Weight-Upperbound cuts makes a substantial reduction in the number of nodes, whereas Weight-Lowerbound cuts occur less frequently and the reduction in the number of nodes is marginal. We also noticed that violated Weight-Cover inequalities are usually composed of nodes with high values of the expression , where is the current fractional solution. However, violated Weight-Upperbound inequalities do not seem to have an obvious structure that can be exploited. We only consider Weight-Cover inequalities in the next experiment.

Experiment 2: The goal is to compare the performance of when different combinations of separation routines are used. For each combination, random instances of 48 and 54 nodes are solved (see Tables 5 and 6). The execution of these routines is performed only when no triangular inequalities were generated for the current fractional solution, denoted by . In particular, the separation of 2-partition inequalities is based on the procedure given by GROTSCHEL and LABBE2.

  • 2-partition. The following procedure is repeated for each . First, compute . If , then stop. Otherwise, set and repeat the following 5 times, whenever possible. Pick two random nodes from and set . Keep picking nodes from such that and add them to until no more nodes are found. Then, check whether the 2-partition inequality (13) with and is violated and, in that case, add it as a cut. Finally, make (even when the inequality is not violated). The set (of “forbidden nodes”) prevents the routine from generating cuts with similar support.

  • Weight-Cover. First, order nodes according to the value from highest to lowest, i.e. such that for all . For each do the following. Consider every composed of nodes from and 2 more nodes from (note that and the procedure explores combinations). If and for all , check the -Weight-Cover inequality. If it is violated, add it as a cut.
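The core of the 2-partition routine above is the violation check for a given pair of node sets. A minimal sketch, assuming inequality (13) has the standard 2-partition form x(S,T) − x(S) − x(T) ≤ min(|S|,|T|) used for clique-partitioning formulations; the dictionary `x` of fractional edge values and the function name are illustrative, not taken from the paper:

```python
from itertools import combinations

def two_partition_violation(x, S, T):
    """Amount by which the 2-partition inequality
    x(S,T) - x(S) - x(T) <= min(|S|, |T|) is violated
    by the fractional point x (a positive value means violated)."""
    def val(u, v):
        # x maps unordered node pairs to fractional edge values;
        # absent pairs are treated as 0.
        return x.get(frozenset((u, v)), 0.0)
    cross = sum(val(s, t) for s in S for t in T)
    within = sum(val(u, v) for u, v in combinations(S, 2))
    within += sum(val(u, v) for u, v in combinations(T, 2))
    return cross - within - min(len(S), len(T))
```

In the routines above, a cut would be added only when the returned value meets the tolerance stated in the text.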

In the given procedures, an inequality is considered violated if the amount of violation is at least . As one can see from the tables, 2-partition together with Weight-Cover cuts is the best choice. We define the strategy as with both separation routines enabled.

Resolution of a real-world instance. As mentioned in the introduction, in the zonal stage of the second category of the Ecuadorian football league, a championship including the two best teams of each provincial association is played. The set of teams must be partitioned into 8 groups to play a Round Robin Tournament in each one of them. A regulation imposes that two teams of the same provincial association must not belong to the same group. During the 2015 season, 44 teams (from 22 provincial associations) participated in the tournament, and they were divided into 4 groups of 6 teams and 4 groups of 5 teams.

The nodes of a graph are associated with the venues of the teams. We denote the nodes by and , corresponding to the venues of the best two teams of each provincial association , for all , and edge is included if and only if the teams associated to nodes could potentially belong to the same group. In order to satisfy the regulation mentioned before, edges of the form do not appear in the graph. Thus, our realignment instance consists of teams which must be partitioned into groups, and the graph is a particular complement of a matching, i.e.  and .
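The complement-of-a-matching structure described above can be built directly. A small sketch, where `realignment_graph` is an illustrative helper rather than code from the paper:

```python
from itertools import combinations

def realignment_graph(n_assoc):
    """Nodes 2i-1 and 2i are the venues of the two best teams of
    provincial association i; every pair of nodes is joined by an
    edge except the pairs {2i-1, 2i} of the same association."""
    nodes = list(range(1, 2 * n_assoc + 1))
    forbidden = {frozenset((2 * i - 1, 2 * i)) for i in range(1, n_assoc + 1)}
    edges = [e for e in combinations(nodes, 2) if frozenset(e) not in forbidden]
    return nodes, edges

# The 2015 instance: 22 associations give a graph with 44 nodes and
# 44*43/2 - 22 = 924 edges.
nodes, edges = realignment_graph(22)
```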

For solving our instance, we made a preliminary test of our two best strategies (i.e.  vs. ). A time limit of one hour was imposed on both. Neither was able to solve the instance within this limit, but the relative gap reported was 3.22% for against 2.36% for . Hence, was chosen for solving the instance without a time limit. Below, we summarize some highlights of the optimization:

  • Instance: , , and .

  • Iterations performed/time spent by tabu search: 113352 iterations (6.8 sec.).

  • Variables and constraints of the initial relaxation: 913 vars., 7444 constr.

  • Cuts generated: Gomory (18), Cover (65), MIR (1620), GUB cover (98), Zero-half (1278), triangular (1935), 2-partition (3965), Weight-Cover (14).

  • Nodes explored and total time elapsed: 34573 (68374 sec.).

  • Optimal value: 21523 km, found by tabu search at iter. 3549 (0.21 sec.).

  • Gap evolution: 2.36% after 1 hour, 1.08% after 4 hours.

Since a Double Round Robin Tournament is considered, each intra-group distance is traveled four times (a round trip by each of the two teams), so the total distance is 4 × 21523 = 86092 km. In contrast, the best solution found in our previous work (ISCO2016) had 86192 km, and the gap reported was 12.6% after 4 hours of execution.

6 Conclusion and future work

In this paper, a balanced -way partitioning problem with weight constraints is defined. The problem consists in partitioning a complete or a general graph into a fixed number of subsets of nodes such that their cardinalities differ by at most one and the total weight of each subset is bounded. The objective is to minimize the total cost of the edges with both end-nodes in the same subset. The problem was formulated as an integer program, and several strategies based on exact and metaheuristic methods were evaluated. The motivation to state, formulate and solve this problem was the realignment of the second category of the Ecuadorian football league. The solution of this case study is based on real-world data, and the methodology may be suitably applied to the realignment of other sports leagues with similar characteristics.

In order to solve the problem, one of the key decisions was to use a modification of a state-of-the-art tabu search to obtain good feasible solutions. In particular, our algorithm found the optimal solution of the real-world instance in less than a second, whereas the previous approach given by ISCO2016 was unable to obtain it within 4 hours of CPU time. Another key decision was to include a portion of the triangular inequalities (20% of them, under a specific ordering) in the initial relaxation and to manage the remaining ones as cuts or lazy constraints. These decisions, in conjunction with other minor ones, allowed us to comfortably solve random instances of 48 nodes in less than half an hour and the real-world instance in 19 hours (here, almost all the time was spent certifying optimality).

In addition, two formulations are presented and one of them () is chosen based on experimentation. A possible cause of the poor performance of is the existence of symmetrical solutions due to the indistinguishability of the dummy nodes. For example, the addition of 4 dummy nodes implies that, for each integer solution of , there are 4! = 24 symmetrical integer solutions in .

We also proposed three families of valid inequalities, two of which have proven very effective at reducing the number of B&B nodes when used as cuts. One of them (Weight-Cover), in conjunction with the well-known 2-partition inequalities, reduces the CPU time by 51% for (see Table 5) and by 56% for (see Table 6). Moreover, it is able to solve one more instance (48), and the gap reported for those instances not solved within one hour (41, 44, 46) is smaller. Like other state-of-the-art exact algorithms for the -way graph partitioning problem Fairbrother_EtAl_2017; Anjos2013, the best strategies provided here attain optimal solutions for graphs with around 50 nodes.

Future work could include a separation routine for Weight-Upperbound inequalities and the exploration of other valid inequalities (for example, by adapting those presented by LABBE2). Finally, at the theoretical level, it could be useful to carry out a polyhedral study of , the convex hull of integer solutions of , and to propose facet-defining inequalities that can be used as cuts.

This research was partially supported by the 15-MathAmSud-06 “PACK-COVER: Packing and covering, structural aspects” trilateral cooperation project.