Representations of quadratic combinatorial optimization problems: A case study using the quadratic set covering problem

02/03/2018 · Abraham P. Punnen et al. · Simon Fraser University

The objective function of a quadratic combinatorial optimization problem (QCOP) can be represented by two data objects, a quadratic cost matrix Q and a linear cost vector c. Different, but equivalent, representations of the pair (Q, c) for the same QCOP are well known in the literature. Research papers often state that, without loss of generality, Q is assumed to be symmetric, upper triangular, or positive semidefinite, etc. These representations, however, have inherently different properties. Popular general purpose 0-1 QCOP solvers such as GUROBI and CPLEX do not suggest a preferred representation of Q and c. Our experimental analysis discloses that GUROBI prefers the upper triangular representation of the matrix Q while CPLEX prefers the symmetric representation, in a statistically significant manner. Although equivalent representations preserve optimality, they could alter the lower bound values obtained by various lower bounding schemes. For the natural lower bound of a QCOP, the symmetric representation produced tighter bounds in general. The effect of equivalent representations when CPLEX and GUROBI run in heuristic mode is also explored. Further, we review various equivalent representations of a QCOP from the literature that have a theoretical basis to be viewed as strong, and we provide new theoretical insights for generating such equivalent representations, making use of the constant value property and diagonalization (linearization) of QCOP instances.


1 Introduction

Let E = {1, 2, …, n} be a finite set and 𝓕 be a family of subsets of E. For each pair (i, j) ∈ E × E, a cost q_ij is prescribed. Further, for each i ∈ E, a cost c_i is also prescribed. Note that any S ∈ 𝓕 can be represented by its incidence vector x ∈ {0, 1}^n, where x_i = 1 if and only if i ∈ S. Thus 𝓕 can be represented by the family F ⊆ {0, 1}^n of incidence vectors of its members. Let Q be the n × n matrix whose (i, j)th element is q_ij, and let c = (c_1, c_2, …, c_n)^T. Then, the quadratic combinatorial optimization problem (QCOP) is to

Minimize x^T Q x + c^T x
Subject to x ∈ F,

or equivalently

Minimize Σ_{i∈E} Σ_{j∈E} q_ij x_i x_j + Σ_{i∈E} c_i x_i
Subject to x ∈ F.

The well-known quadratic assignment problem [15, 37], the quadratic unconstrained binary optimization problem (QUBO) [36], and the quadratic knapsack problem [42] are special cases of QCOP. Other examples of QCOP include the quadratic travelling salesman problem [21, 39, 45], the quadratic shortest path problem [30, 46], the general quadratic programming problem [7, 8, 9, 10, 23, 43, 44, 47], the quadratic spanning tree problem [3, 18], the quadratic set covering problem [20, 29, 40], 0-1 bilinear programs [19, 26], and combinatorial optimization problems with interaction costs [38].

When the elements of F are represented by a collection of linear constraints in binary variables, QCOP can be solved using general purpose binary quadratic programming solvers such as CPLEX [32] or GUROBI [27]. The matrix Q associated with a QCOP can be represented in many different but equivalent forms using appropriate transformations on Q and c. For example, it is possible to force Q to have properties such as being positive semidefinite [28], negative semidefinite [28], symmetric with diagonal entries zero [28], upper (lower) triangular [23], etc., so that the resulting problem is equivalent to the given QCOP. Many authors use one of these equivalent representations to define a QCOP. This raises the question: “Which representation of Q is better from a computational point of view?” The answer to this question depends on how one defines a ‘better representation’. Extending the work of Hammer and Rubin [28], Billionnet et al. [8] used diagonal perturbations in an ‘optimal’ way to create strong reformulations of QUBO. Billionnet [7], Billionnet et al. [9, 10], and Pörn et al. [44] extended this further to include perturbations involving non-diagonal elements by making use of linear equality constraints, if any, associated with a QCOP. These reformulations force Q to be symmetric and positive semidefinite, yielding strong continuous relaxations. Galli and Letchford [23] obtained strong reformulations using quadratic constraints of equality type. Although all these representations are very interesting in terms of obtaining strong lower bounds at the root node of a branch-and-bound search tree, they require additional computational effort that is not readily available within general purpose solvers such as CPLEX or GUROBI. To the best of our knowledge, neither CPLEX nor GUROBI makes a recommendation regarding a simple and specific representation of the matrix Q that is normally more effective for its solver.

It is not difficult to construct examples where one representation works well for CPLEX while the same representation does not work well for GUROBI, and vice versa. For example, GUROBI solved a quadratic set covering instance involving 511 constraints and 210 variables in 4933 milliseconds on a PC with the Windows 7 operating system, an Intel 4790 i7 3.60 GHz processor, and 32 GB of RAM. When the same problem was represented in an equivalent form with symmetry forced, GUROBI could not solve it within 3 hours. CPLEX solved the problem in 23674 milliseconds, and for an equivalent representation with symmetry forced, it solved the problem in 21588 milliseconds. For another class of problems, GUROBI solved random non-diagonal reformulations efficiently, while it could not solve many of the problems in this class when given structured equivalent formulations having properties such as symmetry, triangularity, positive semidefiniteness, or negative semidefiniteness (see Table 4 and Table 5). CPLEX, however, solved all these reformulations, although the time taken was larger than that of GUROBI for random perturbations.

We also could not find anything in the literature regarding a preferred representation of Q for solving QCOP established through systematic experimental analysis. Motivated by this, we investigate the representation of the matrix Q for a QCOP. Unlike the interesting theoretical works reported in [7, 8, 9, 10, 23, 44], we are not attempting to develop an optimal representation based on some desirability criteria. Our experimental results in Table 4 and Table 5 substantiate the merit of investigating this line of reasoning as well. Consequently, we present various transformations that provide equivalent representations of the problem, not necessarily ‘optimal’ ones. From these representations, we identify six simple and well-known classes that are compared using CPLEX and GUROBI. The experimental study discloses that CPLEX prefers a symmetric matrix, or a symmetric matrix with a diagonal perturbation that yields a positive semidefinite matrix, whereas GUROBI prefers an upper triangular matrix. Although there are outliers, the statistical significance of these observations is established through the Wilcoxon test [48]. We also propose ways to construct strong reformulations making use of the constant value property [16, 17] associated with linear combinatorial optimization problems and the concept of diagonalizable (linearizable) cost matrices associated with a QCOP [18, 30, 35, 45].

Equivalent representations of the data could also influence lower bound calculations for a QCOP. To demonstrate this impact, we used a generalization of the well-known Gilmore-Lawler lower bound [24, 37] and its variations [3]. Our experiments show that, for most of the test problems we used, the strongest lower bound was obtained when using an equivalent representation in which Q is forced to be symmetric, except for one class of test problems for which the upper triangular structure produced tighter bounds.

To conduct the experimental analysis, we selected the quadratic set covering problem (QSCP). The QSCP model has applications in wireless local area network planning and in the problem of locating access points to guarantee full coverage [2]. Other application areas of QSCP include logical analysis of data [29], medicine [13, 20], facility layout problems [5], line planning in public transport [14, 33], etc. Another motivation for selecting the QSCP as our test case is that relatively few computational studies are available for this model. Thus, this work also contributes to the experimental analysis of exact and heuristic algorithms for the QSCP.

The paper is organized as follows. In Section 2, we discuss various equivalent representations of the QCOP. Some of these representations are generated using diagonalizable (linearizable) quadratic cost matrices and linear cost vectors satisfying the constant value property. Characterizations of diagonalizable cost matrices for the general QCOP, and for a restricted version where all feasible solutions have the same cardinality, are also given. We also present a natural lower bound for QCOP that is valid under the equivalent transformations. Section 3 discusses details of the experimental platform, the generation of test data, and experimental results on QSCP using CPLEX 12.5 and GUROBI 6.0.5, comparing selected equivalent representations for exact and heuristic solutions. Experimental analysis using the natural lower bound is also given in this section, followed by concluding remarks in Section 4.

Throughout the paper, we use the following notation. For a given pair (Q, c), a QCOP instance is represented by P(Q, c). The matrix Q is called the quadratic cost matrix and the vector c is called the linear cost vector. For an instance P(Q, c) of a QCOP and an x ∈ F, f(x) = x^T Q x + c^T x. P̄(Q, c) is the continuous relaxation of P(Q, c); i.e., P̄(Q, c) is obtained by replacing the constraints x ∈ {0, 1}^n in the definition of P(Q, c) by 0 ≤ x ≤ 1. For any matrix A, Diag(A) is the diagonal matrix of the same size as A whose ith diagonal element is a_ii, and diag(A) represents the vector (a_11, a_22, …, a_nn). For any vector b, Diag(b) is the diagonal matrix whose (i, i)th element is b_i. All matrices are represented using capital letters and the elements of a matrix are represented by the corresponding double-subscripted lower-case letters, where the subscripts denote row and column indices. Vectors in R^n are represented by bold lower-case letters. The ith component of vector x is x_i, of vector y is y_i, etc. The transpose of a matrix A is represented by A^T. The vector space of all real-valued n × n matrices with standard matrix addition and scalar multiplication is denoted by M_n.

2 Equivalent representations

Let (Q, c) be an instance of a QCOP. Then (Q′, c′) is an equivalent representation of (Q, c) if x^T Q′ x + c′^T x = x^T Q x + c^T x for all x ∈ F. The following remark is well known.

Remark 2.1.

(Q^T, c) is an equivalent representation of (Q, c).

Theorem 2.2.

If (Q1, c1) and (Q2, c2) are equivalent representations of an instance (Q, c) of a QCOP, then (λQ1 + (1 − λ)Q2, λc1 + (1 − λ)c2) is also an equivalent representation of (Q, c) whenever λ ∈ R.

Proof.

Let Q′ = λQ1 + (1 − λ)Q2 and c′ = λc1 + (1 − λ)c2. Then, for any x ∈ F,

x^T Q′ x + c′^T x = λ(x^T Q1 x + c1^T x) + (1 − λ)(x^T Q2 x + c2^T x) = λ(x^T Q x + c^T x) + (1 − λ)(x^T Q x + c^T x) = x^T Q x + c^T x. ∎

From Remark 2.1 and Theorem 2.2 we have the following well-known corollary.

Corollary 2.3.

((Q + Q^T)/2, c) is an equivalent representation of (Q, c).

We call the equivalent representation given in Corollary 2.3 the symmetrization. This representation is well known and used extensively in the literature. Since (Q + Q^T)/2 is a symmetric matrix, it is sometimes viewed as a desirable representation. However, symmetrization could also result in a matrix with increased or decreased rank. Thus the equivalent representation obtained by symmetrization could have properties different from those of the original representation, and this could impact the computational performance of different algorithms. Note that x^T Q x = x^T ((Q + Q^T)/2) x for all x ∈ R^n. Thus, the symmetrization operation also preserves the objective function value of the continuous relaxation of a QCOP. This property no longer holds for some other equivalent representations discussed later.
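The preservation of the quadratic form under symmetrization, and the possible change in rank, are easy to check numerically. The following sketch (using numpy on a small random matrix; the instance data is purely illustrative) verifies that the quadratic form is unchanged for arbitrary continuous vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = rng.integers(-5, 6, size=(4, 4)).astype(float)  # arbitrary, generally non-symmetric
Qs = (Q + Q.T) / 2.0                                 # symmetrization

# The quadratic form is unchanged for every x, binary or continuous,
# so symmetrization also preserves the continuous relaxation.
for _ in range(100):
    x = rng.random(4)
    assert abs(x @ Q @ x - x @ Qs @ x) < 1e-9

# The rank of the representation, however, is not necessarily preserved.
print(np.linalg.matrix_rank(Q), np.linalg.matrix_rank(Qs))
```

Note that only the symmetric part of Q contributes to the quadratic form; the skew-symmetric part (Q − Q^T)/2 always vanishes.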

If an element of Q, say q_ij with i ≠ j, is perturbed by δ and this is adjusted by subtracting δ from q_ji (or, when i = j, by subtracting δ from c_i), we immediately get an equivalent representation of (Q, c). Equivalent representations obtained this way have the structure of (Q′, c′) discussed in the theorem below.

Theorem 2.4.

If M is a skew-symmetric matrix, D is a diagonal matrix, Q′ = Q + M + D, and c′ = c − diag(D), then (Q′, c′) is an equivalent representation of (Q, c).

Proof.

Since x^T M x = 0 and x_i^2 = x_i for all x ∈ F, it follows that x^T Q′ x + c′^T x = x^T Q x + x^T D x + c^T x − diag(D)^T x = x^T Q x + c^T x. ∎

In the above theorem, if Q is symmetric and if we want Q′ also to be symmetric, then M must be the zero matrix. In this case, we can choose D = λI for a sufficiently large λ to make Q′ a symmetric positive semidefinite matrix, and hence the objective function of the continuous relaxation becomes convex. Hammer and Rubin [28] suggested choosing λ as the negative of the smallest eigenvalue of Q. Billionnet et al. [8] proposed an ‘optimal’ choice of the matrix D in the case of quadratic unconstrained binary optimization (QUBO) problems. Their selection of D is ‘optimal’ in the sense that the resulting optimal objective function value of the continuous relaxation is as large as possible, yielding tight lower bounds. This method extends to QCOP with appropriate restrictions on the representation of Q.
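As a concrete illustration of the eigenvalue-based diagonal perturbation, the sketch below (numpy; the random instance is ours, purely illustrative) shifts a symmetric Q by λI with λ equal to the negative of its smallest eigenvalue, compensates the linear term using x_i^2 = x_i, and checks both positive semidefiniteness and objective equivalence over all binary vectors:

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Q = (A + A.T) / 2.0                  # symmetric quadratic cost matrix
c = rng.standard_normal(4)

lam = max(0.0, -np.linalg.eigvalsh(Q).min())  # eigenvalue-based shift
Qp = Q + lam * np.eye(4)             # perturbed matrix, now positive semidefinite
cp = c - lam * np.ones(4)            # compensation valid since x_i^2 = x_i on binaries

assert np.linalg.eigvalsh(Qp).min() >= -1e-9
for bits in product([0.0, 1.0], repeat=4):
    x = np.array(bits)
    assert abs((x @ Q @ x + c @ x) - (x @ Qp @ x + cp @ x)) < 1e-9
```

The continuous relaxation of the perturbed instance is convex, which is the point of the transformation, although its optimal value can be weaker than that of the original relaxation.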

A quadratic cost matrix B associated with a QCOP is said to be diagonalizable with respect to F if there exists a diagonal matrix D such that x^T B x = x^T D x for all x ∈ F. The matrix D is called a diagonalization of B with respect to F. Hereafter, the terminology ‘diagonalizable’ (‘diagonalization’) means diagonalizable (diagonalization) with respect to the underlying family F. Recall that x_i^2 = x_i for all x ∈ F and hence x^T D x = d^T x, where d is a vector of size n with its ith element equal to the ith diagonal entry of D. Diagonalizable matrices form a subspace of the vector space of all real-valued n × n matrices. The concept of diagonalization indicated here is closely related to the linearization of some quadratic combinatorial optimization problems discussed in [18, 30, 35, 45], and for the case of binary variables, these two notions are the same. Since the terminology “linearization” is also used in another context in the case of QCOP [1, 47], we prefer to use the more natural and intuitive terminology diagonalization. Note that if B is diagonalizable then B^T and (B + B^T)/2 are also diagonalizable. Also, any skew-symmetric matrix is diagonalizable, and the zero matrix of the same dimension is a diagonalization of a skew-symmetric matrix.

Theorem 2.5.

(Q + B, c − d) is an equivalent representation of the QCOP instance (Q, c), where B is any diagonalizable matrix associated with the QCOP, D is a diagonalization of B, and d = diag(D).

Proof.

Since B is diagonalizable, x^T B x = d^T x for all x ∈ F. Thus, x^T (Q + B) x + (c − d)^T x = x^T Q x + c^T x for all x ∈ F. ∎

Corollary 2.6.

If B1, B2, …, Bk are diagonalizable matrices associated with a QCOP and λ1, λ2, …, λk are scalars, then (Q + λ1B1 + ⋯ + λkBk, c − λ1d1 − ⋯ − λkdk) is an equivalent representation of the QCOP instance (Q, c), where Di is a diagonalization of Bi and di = diag(Di), for i = 1, 2, …, k.

We can strengthen the equivalent representation given in Corollary 2.6 using a result by Galli and Letchford [23]. Since B1, B2, …, Bk are diagonalizable with respective diagonalizations D1, D2, …, Dk, our QCOP satisfies the constraints

x^T Bi x − di^T x = 0, i = 1, 2, …, k, (1)

where di = diag(Di). Since Bi is diagonalizable with diagonalization Di, (Bi + Bi^T)/2 is a symmetric diagonalizable matrix with Di as its diagonalization. Thus we can assume that Bi in equation (1) is symmetric for all i. Hence, we can apply the quadratic convex reformulation (QCR) technique discussed in [23] to yield a strong reformulation whose continuous relaxation is convex. Note that symmetric diagonalizable matrices form a subspace of the vector space M_n. We can use a basis of this subspace in place of B1, B2, …, Bk and apply the QCR reformulation [23] to yield stronger equivalent representations. A recent related work is by Hu and Sotirov [31], who used diagonalizability to obtain strong lower bounds for the quadratic shortest path problem on acyclic digraphs.

To generate equivalent representations of a QCOP using Theorem 2.5, Corollary 2.6, or the QCR method [23] as discussed above, we need to identify associated diagonalizable matrices. The characterization of diagonalizable quadratic cost matrices associated with a QCOP has been studied by different authors for specific cases, exploiting the underlying structure of F. These include quadratic assignment problems [35], special quadratic shortest path problems [30], the quadratic spanning tree problem [18], and the quadratic traveling salesman problem [45]. However, for the general QCOP without restricting the structure of F, the characterization of diagonalizable quadratic cost matrices does not yield rich classes like those indicated for the special problems mentioned above. This is because QUBO is a special case of QCOP where any subset of E is feasible.

Theorem 2.7.

A quadratic cost matrix B associated with a QCOP is diagonalizable if and only if B = M + D, where M is a skew-symmetric matrix and D is a diagonal matrix.

Proof.

Since M is skew-symmetric, x^T M x = 0 for any x, and hence B = M + D is diagonalizable. Further, a diagonalization of such a B is D. To prove the converse, it is enough to show that for the quadratic unconstrained binary optimization problem (QUBO), if B is diagonalizable, then B must be of the form M + D. Note that the family of feasible solutions for QUBO is F = {0, 1}^n. First, we prove that for QUBO, if a quadratic cost matrix R that is symmetric with diagonal entries zero is diagonalizable, then R must be the zero matrix. Suppose this is not true, and let r_ij ≠ 0 for some i ≠ j. By symmetry, r_ji = r_ij ≠ 0. Now consider the solution x = e_i, the ith unit vector. Let D be a diagonalization of R. Then x^T R x = r_ii = 0, which implies d_ii = 0. Since i is arbitrary, D must be a zero matrix. Now consider the solution x = e_i + e_j. Then x^T R x = 2r_ij. Since R is diagonalizable with the zero matrix as its diagonalization, 2r_ij = 0, which implies r_ij = 0, a contradiction. Thus, for any symmetric cost matrix R with diagonal entries zero, if R is diagonalizable then R must be zero. Now take any cost matrix B of a QCOP that is diagonalizable. Let R = B + B^T − 2 Diag(diag(B)). Then B^T is diagonalizable and hence R is diagonalizable. But R is symmetric with diagonal entries zero. Then R must be a zero matrix, and hence B + B^T = 2 Diag(diag(B)). Thus B − Diag(diag(B)) is skew-symmetric. But B = (B − Diag(diag(B))) + Diag(diag(B)), and the result follows. ∎
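The easy direction of Theorem 2.7 can be checked numerically: a skew-symmetric matrix contributes nothing to the quadratic form, so a matrix of the form M + D acts like its diagonal on binary vectors. A sketch (numpy; the random data is purely illustrative):

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
M = A - A.T                      # skew-symmetric: x^T M x = 0 for every x
D = np.diag(rng.standard_normal(4))
Q = M + D                        # the form characterized by Theorem 2.7
d = np.diag(D)                   # its diagonalization, as a vector

# Over binary vectors (the QUBO feasible set), x^T Q x equals d^T x.
for bits in product([0.0, 1.0], repeat=4):
    x = np.array(bits)
    assert abs(x @ Q @ x - d @ x) < 1e-9
```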

Note that Theorem 2.4 follows as a corollary of Theorems 2.5 and 2.7.

As observed earlier, by imposing additional restrictions on the family of feasible solutions, more interesting characterizations of diagonalizability can be obtained [18, 30, 35, 45]. Let us now add the simple restriction that all elements of the underlying family F have the same cardinality. The resulting QCOP is called the cardinality constrained quadratic combinatorial optimization problem (QCOP-CC).

A matrix W is said to be a weak-sum matrix [16] if there exist vectors a and b such that w_ij = a_i + b_j for i ≠ j. Here a and b are called the generator vectors of W. Note that the sum of a weak-sum matrix and a diagonal matrix is a weak-sum matrix. For QCOP-CC, we have the following characterization of diagonalizability.

Theorem 2.8.

A quadratic cost matrix B associated with a QCOP-CC is diagonalizable if and only if B = W + M, where W is a weak-sum matrix and M is a skew-symmetric matrix.

Proof.

Let k be the cardinality of the elements of the underlying family F defining the QCOP-CC instances. Suppose B = W + M, where W is a weak-sum matrix with generator vectors a and b, and M is a skew-symmetric matrix. Then, for any x ∈ F,

x^T B x = x^T W x = Σ_i w_ii x_i + Σ_{i≠j} (a_i + b_j) x_i x_j = Σ_i (w_ii + (k − 1)(a_i + b_i)) x_i = x^T D x,

where D is a diagonal matrix with d_ii = w_ii + (k − 1)(a_i + b_i). Thus B is diagonalizable.

Conversely, suppose B is diagonalizable. We will show that B is of the required form given in the theorem. To establish this necessary condition, it is enough to establish it for a special case of QCOP-CC. So, consider the quadratic minimum spanning tree problem (QMST) on a complete graph. Custic and Punnen [18] showed that a symmetric quadratic cost matrix associated with a QMST is diagonalizable if and only if it is a weak-sum matrix. Consider a quadratic cost matrix B for the QMST. Now, B is diagonalizable if and only if (B + B^T)/2 is diagonalizable. Since (B + B^T)/2 is symmetric, it follows from [18] that it is a weak-sum matrix. But B = (B + B^T)/2 + (B − B^T)/2. Since (B − B^T)/2 is skew-symmetric, the result follows. ∎

Corollary 2.9.

Let M be a skew-symmetric matrix, D be a diagonal matrix, and W be a weak-sum matrix with generator vectors a and b. If Q′ = Q + M + D + W and c′_i = c_i − d_ii − w_ii − (k − 1)(a_i + b_i) for i = 1, 2, …, n, then (Q′, c′) is an equivalent representation of (Q, c) for QCOP-CC, where k is the fixed cardinality of the elements of F defining the QCOP-CC instances.

A cost vector b of a linear combinatorial optimization problem (LCOP) satisfies the constant value property (CVP) [16] if b^T x = K for all x ∈ F and some constant K. The characterization of cost vectors associated with a linear combinatorial optimization problem satisfying CVP has been studied extensively in the literature for various special cases. These include the travelling salesman problem [11, 12, 22, 34], the assignment problem [22], the spanning tree problem [16], the shortest path problem [16], the multidimensional assignment problem [16, 17], etc. Consider a QCOP with family of feasible solutions F. Suppose each of the vectors b1, b2, …, bm satisfies CVP with respect to F. Then bi^T x = Ki for all x ∈ F, where K1, K2, …, Km are some constants. Using these natural equalities, we can apply the QCR method of Billionnet et al. [7, 10] or Pörn et al. [44] to generate strong reformulations of QCOP with appropriate restrictions on Q. It may be noted that the vectors satisfying CVP for an LCOP form a subspace of R^n. Thus, in particular, if we choose b1, b2, …, bm as a basis for this subspace, strong reformulations can be obtained using the QCR method of Billionnet et al. [10] or of Pörn et al. [44].

Vectors satisfying CVP can also be used to generate diagonalizable matrices in a natural way, which in turn can be used to generate strong reformulations as discussed earlier. To see this, let b1, b2, …, bn be a collection of cost vectors (not necessarily distinct) satisfying CVP with respect to F with respective constant values K1, K2, …, Kn. Let B be the matrix whose ith row is bi. Then B is diagonalizable, since x^T B x = Σ_i x_i (bi^T x) = Σ_i Ki x_i for all x ∈ F, and the diagonal matrix D with d_ii = Ki is its diagonalization. Diagonalizable matrices generated this way can be used to obtain equivalent representations as discussed in Theorem 2.5. Let us now observe that vectors satisfying CVP can also be used to obtain Billionnet et al. [10] type equivalent representations for QCOP.

Suppose b satisfies CVP for the family F of feasible solutions of a QCOP with K as the constant value. Then b^T x = K for all x ∈ F. Create n copies of this equation and multiply both sides of the ith equation by λ_i x_i, for i = 1, 2, …, n, where λ_i is a scalar. Adding these equations gives Σ_i λ_i x_i (b^T x) = K Σ_i λ_i x_i. This can be written as x^T B x = K λ^T x, where B = λ b^T and λ = (λ_1, λ_2, …, λ_n)^T.

Theorem 2.10.

Let Q′ = Q + λb^T + D and c′ = c − Kλ − diag(D). Then (Q′, c′) is an equivalent representation of (Q, c), where D is any diagonal matrix.

The proof of the theorem follows from the previous discussions. As a corollary, we have

Corollary 2.11.

Let b1, b2, …, bm be vectors satisfying CVP for solutions in F with respective constant values K1, K2, …, Km, and let λ1, λ2, …, λm be vectors in R^n. Let Q′ = Q + Σ_i λi (bi)^T + D and c′ = c − Σ_i Ki λi − diag(D) for some diagonal matrix D. Then (Q′, c′) is an equivalent representation of (Q, c).

It can be verified that the equivalent representations given by Corollary 2.11 are precisely of the Billionnet et al. [10] type. Following the ideas of Billionnet et al. [10], when Q′ is symmetric, the best values of λ1, λ2, …, λm and D (in terms of strong continuous relaxations) can be identified by solving an appropriate semidefinite program and its dual, with suitable assumptions on the representation of Q. Further, by choosing b1, b2, …, bm as a basis of the subspace of R^n formed by vectors satisfying CVP, strong Billionnet et al. [10] type representations can be obtained.

Let us now examine some simple equivalent representations generated by various choices of M and D in Theorem 2.4, along with their associated properties. Many of these representations are well known in the context of various special cases of QCOP. We summarize them below with some elucidating remarks.

  1. Diagonal annihilation: In Theorem 2.4, choose D such that d_ii = −q_ii for i = 1, 2, …, n, and M as the zero matrix. Then the diagonal elements of the resulting Q′ are zeros. We call this operation of constructing the equivalent representation (Q′, c′) from (Q, c) diagonal annihilation.

    Although diagonal annihilation is a simple operation, some important properties of Q and Q′ could be very different. For example, the difference between the ranks of these matrices could be arbitrarily large. One matrix could be positive semidefinite while the other could be negative semidefinite or indefinite. The symmetry property, if it exists, is preserved under this transformation. If q_ii ≥ 0 for all i, then it can be verified that the optimal objective function value of the continuous relaxation of P(Q′, c′) is at least that of P(Q, c). Similarly, if q_ii ≤ 0 for all i, then it is at most that of P(Q, c).

    Since some of the crucial properties of could be altered by diagonal annihilation, the computational impact of this transformation needs to be analyzed carefully.

  2. Linear term annihilation: In Theorem 2.4, choose D such that d_ii = c_i for i = 1, 2, …, n, and M as the zero matrix. Then the resulting c′ is the zero vector of size n. We call this operation of constructing the equivalent representation (Q′, c′) from (Q, c) linear term annihilation.

    As in the case of diagonal annihilation, under this simple transformation, if Q is symmetric then Q′ is also symmetric. However, properties such as rank and positive (negative) semidefiniteness could be altered by the transformation. If c_i ≥ 0 for all i, then it can be verified that the optimal objective function value of the continuous relaxation of P(Q′, c′) is at most that of P(Q, c). Similarly, if c_i ≤ 0 for all i, then it is at least that of P(Q, c).

    Again, the impact of this transformation on computational performance is not obvious and needs to be analyzed carefully.

  3. Convexification: In Theorem 2.4, choose D = λI, where λ is a nonnegative number, and M as the zero matrix. We call this operation of constructing the equivalent representation (Q′, c′) from (Q, c) convexification.

    Note that by choosing λ sufficiently large, we can make Q′ positive semidefinite, and hence the objective function of the continuous relaxation of the QCOP instance P(Q′, c′) is convex. In this case, it can be verified that the optimal objective function value of the continuous relaxation of P(Q′, c′) is at most that of P(Q, c). However, we can solve the continuous relaxation of the instance P(Q′, c′) in polynomial time whenever F has a compact representation using linear inequalities or an associated separation problem can be solved in polynomial time. The transformation could alter the rank, and for smaller values of λ it could affect properties of Q such as positive or negative semidefiniteness, if they existed.

    If Q is symmetric and not positive semidefinite, choosing λ to be the negative of its smallest eigenvalue is sufficient to make Q′ positive semidefinite [28]. To choose such a λ, additional computational effort is required. We also discussed earlier various other convexification strategies. However, by the transformation “convexification” we simply mean the simple operation indicated above with λ chosen sufficiently large.

  4. Concavification: In Theorem 2.4, choose D = −λI, where λ is a nonnegative number, and M as the zero matrix. We call this operation of constructing the equivalent representation (Q′, c′) from (Q, c) concavification.

    Note that by choosing λ sufficiently large, we can make Q′ negative semidefinite, and hence the objective function of the continuous relaxation of the QCOP instance P(Q′, c′) is concave. In this case, QCOP is equivalent to its continuous relaxation. To see this, consider the quadratic programming problem QPP(λ):

       Minimize x^T (Q − λI) x + (c + λ1)^T x
    Subject to x ∈ Conv(F).

    Since λ is large, there always exists a binary optimal solution to QPP(λ) when the feasible set of QPP(λ) is polyhedral. Thus QPP(λ) is equivalent to the problem obtained from QPP(λ) by replacing its feasible set with F. Now, replacing x_i^2 by x_i (which is valid for binary variables) in the objective function and simplifying, we get the instance P(Q, c) of QCOP.

    This observation shows that solving the continuous relaxation of P(Q′, c′) is as hard as solving the QCOP.

  5. Triangularization: In Theorem 2.4, choose D such that d_ii = −q_ii for i = 1, 2, …, n, and M such that m_ij = q_ji and m_ji = −q_ji for i < j, with m_ii = 0.

    Then the resulting matrix Q′ is upper triangular with zero diagonal elements; its entries are q′_ij = q_ij + q_ji for i < j and q′_ij = 0 otherwise. We call this operation of constructing the equivalent representation (Q′, c′) from (Q, c) triangularization. Again, triangularization could affect the rank and properties such as positive (negative) semidefiniteness, if they exist for Q.
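The simple transformations above reduce to small matrix manipulations. The sketch below (numpy; the function names are ours and the random instance is purely illustrative) implements diagonal annihilation, linear term annihilation, and triangularization, and checks that each preserves the objective value over all binary vectors:

```python
import numpy as np
from itertools import product

def diagonal_annihilation(Q, c):
    # move q_ii onto c_i (valid since x_i^2 = x_i for binary x)
    return Q - np.diag(np.diag(Q)), c + np.diag(Q)

def linear_annihilation(Q, c):
    # absorb c into the diagonal, leaving a zero linear term
    return Q + np.diag(c), np.zeros_like(c)

def triangularization(Q, c):
    # fold the strict lower triangle into the upper one and
    # the diagonal into c, giving a zero-diagonal upper-triangular matrix
    U = np.triu(Q, 1) + np.tril(Q, -1).T
    return U, c + np.diag(Q)

rng = np.random.default_rng(2)
Q = rng.integers(-3, 4, (4, 4)).astype(float)
c = rng.integers(-3, 4, 4).astype(float)

obj = lambda Q, c, x: x @ Q @ x + c @ x
for f in (diagonal_annihilation, linear_annihilation, triangularization):
    Qp, cp = f(Q, c)
    for bits in product([0.0, 1.0], repeat=4):
        x = np.array(bits)
        assert abs(obj(Q, c, x) - obj(Qp, cp, x)) < 1e-9
```

Note that equivalence holds only on binary vectors; over the continuous relaxation, the transformed and original objectives generally differ.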

Applying the theorems discussed above in different combinations, many simple equivalent representations of a QCOP can be developed and studied. Since our focus in this paper is on the computational effects of simple and commonly used equivalent representations, to manage the study effectively we restrict ourselves to six equivalent transformations obtained by symmetrization, diagonal annihilation, linear term annihilation, convexification, concavification, and triangularization in appropriate combinations.

2.1 Equivalent representations and the natural lower bound

The effects of equivalent representations on lower bounds obtained by continuous relaxations of a QCOP were discussed earlier in this section. Let us now consider another lower bound, which is a generalization of the well-known Gilmore-Lawler lower bound [24, 37] for the quadratic assignment problem and its variations [3]. We call this bound the natural lower bound for QCOP.

For i ∈ E, let z_i = min{ Σ_{j∈E} q_ij x_j : x ∈ F, x_i = 1 }.

Also, let NLB(Q, c) = min{ Σ_{i∈E} (z_i + c_i) x_i : x ∈ F }.

Theorem 2.12.

NLB(Q, c) is a lower bound for the optimal objective function value of the QCOP.

Proof.

Let x* be an optimal solution to the QCOP instance P(Q, c) and S* = {i ∈ E : x*_i = 1}. Then

x*^T Q x* + c^T x* = Σ_{i∈S*} ( Σ_{j∈E} q_ij x*_j + c_i ) ≥ Σ_{i∈S*} (z_i + c_i) ≥ NLB(Q, c),

and the result follows. ∎

Note that each of the values z_i for i ∈ E can be identified by solving an associated linear combinatorial optimization problem (LCOP). Thus, to identify the natural lower bound for a QCOP, we need to solve n + 1 LCOPs. If Q is symmetric, some of these computations coincide, and fewer LCOPs need to be solved. For problems such as the quadratic assignment problem or the quadratic spanning tree problem, this LCOP can be solved efficiently in polynomial time. However, for some other examples, such as the quadratic traveling salesman problem and the quadratic set covering problem, this LCOP itself is NP-hard. In such cases, one may be interested in using lower bounds on the values z_i and/or a lower bound for the outer minimization problem. More specifically,

for i ∈ E, let z̄_i be the optimal objective function value of the continuous relaxation of the LCOP defining z_i.

Also, let NLB1(Q, c) = min{ Σ_{i∈E} (z̄_i + c_i) x_i : x ∈ F },

and let NLB2(Q, c) be the optimal objective function value of the continuous relaxation of this problem.

Theorem 2.13.

NLB1(Q, c) and NLB2(Q, c) are lower bounds for the optimal objective function value of the QCOP. Further, NLB2(Q, c) ≤ NLB1(Q, c) ≤ NLB(Q, c).

Note that the lower bound NLB2(Q, c) can be identified in polynomial time, since we are solving at most n + 1 linear programming problems under suitable assumptions on F. The bound NLB1(Q, c) is better than NLB2(Q, c), but it requires solving the (possibly NP-hard) outer LCOP in addition to n linear programs.

Corollary 2.14.

The natural lower bound and its relaxations discussed in Theorem 2.13, obtained using any of the equivalent representations of a QCOP, are lower bounds on the optimal objective function value of the QCOP.

It is not difficult to construct examples where different equivalent representations of the same QCOP have different natural lower bound values. This makes it interesting to identify which equivalent representation is preferred in terms of obtaining stronger lower bounds. The effectiveness of these lower bounds and their relative computational benefits will be discussed in Section 3.2.3. Various extensions of the Gilmore-Lawler lower bound for the QAP are known in the literature [24, 37, 41]. For the sake of brevity, we do not study them here and restrict our experiments to the basic natural lower bound.

3 Computational Experiments

In this section we present the results of extensive computational experiments carried out using common and well-known equivalent representations of QCOP. These include selected representations generated by symmetrization, diagonal annihilation, linear term annihilation, convexification, concavification, and triangularization, and their combinations. The quadratic set covering problem is used to generate test instances. We want to emphasize that our experimental study considers representations that are commonly used and that can be identified without significant computational effort. Our goal is to identify a preferred representation among such equivalent representations for the general purpose quadratic 0-1 programming solvers within CPLEX and GUROBI. Consequently, representations that require solving semidefinite programs or equivalent Lagrangian problems are not considered in this experimental analysis. We used the 0-1 quadratic programming solvers of CPLEX 12.5 and GUROBI 6.0.5 to solve the test instances. The programs are coded in C++ and tested on a PC with the Windows 7 operating system, an Intel 4790 i7 3.60 GHz processor, and 32 GB of RAM. For CPLEX and GUROBI, the time limit parameter is set to 3 hours and all other parameters are set to their default values. For statistical analysis, we use the non-parametric Wilcoxon signed rank test [48] with SPSS, a commercial statistical software package. For all experiments, we use quadratic set covering instances as test problems.

The quadratic set covering problem (QSCP) can be defined as follows. Let E = {1, 2, …, m} be a finite set and F_1, F_2, …, F_n be a collection of subsets of E. Let N = {1, 2, …, n} be the index set of this collection. For each element j ∈ N, a cost c_j is assigned, and for each pair (j, k) ∈ N × N, a cost q_jk is also assigned. A subset S of N is a cover of E if ∪_{j∈S} F_j = E. The quadratic set covering problem is to select a cover S such that Σ_{j∈S} Σ_{k∈S} q_jk + Σ_{j∈S} c_j is minimized. Choosing the costs accordingly and F as the family of incidence vectors of all covers of E, QSCP can be viewed as a special case of QCOP.
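For intuition, a tiny QSCP instance can be solved by brute-force enumeration of covers. The sketch below (numpy; the subsets and costs are made-up toy data, not from the paper's test set) enumerates all covers of a three-element universe and reports the one minimizing the quadratic objective:

```python
import numpy as np
from itertools import combinations

# Toy QSCP: cover the universe {0, 1, 2} by subsets, minimizing the sum of
# linear costs c_j plus pairwise interaction costs q_jk over chosen subsets.
subsets = [{0, 1}, {1, 2}, {0, 2}, {2}]       # the collection F_1, ..., F_4 (hypothetical)
c = np.array([3.0, 2.0, 2.0, 1.0])
q = np.array([[0, 4, 1, 0],
              [4, 0, 5, 1],
              [1, 5, 0, 2],
              [0, 1, 2, 0]], dtype=float)
universe = {0, 1, 2}

best, best_cost = None, float("inf")
for r in range(1, len(subsets) + 1):
    for S in combinations(range(len(subsets)), r):
        if set().union(*(subsets[j] for j in S)) == universe:   # S is a cover
            x = np.zeros(len(subsets)); x[list(S)] = 1.0
            cost = x @ q @ x + c @ x                            # QCOP objective
            if cost < best_cost:
                best, best_cost = S, cost
print(best, best_cost)   # -> (0, 3) 4.0
```

Enumeration is of course exponential in n; the experiments in this section instead rely on the 0-1 quadratic programming solvers of CPLEX and GUROBI.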

Let be an