1 Introduction
In software testing, software products may produce system faults due to unexpected interactions between their components. Ideally, one should test all possible combinations of the components. However, the number of total combinations is typically too large to test exhaustively. Empirical studies suggest that a large number of software errors in practice are due to interactions between a relatively small number of parameters [12, 10]. As such, pairwise interaction coverage among the components, and more generally t-wise coverage, has become an attractive technique in software testing to ensure high quality without the need for exhaustive testing.
As a running example, consider a software developer who wants to ensure that an application at hand runs as expected under different configurations of five binary components: an Operating System (Windows or MacOS), a Browser (Explorer or Firefox), a network Protocol, a CPU, and a Database.
An exhaustive test suite, T, would require |T| = 2^5 = 32 tests in total. It is, however, possible to cover the pairwise interactions of all parameters using only 6 tests:
In this test suite, every column corresponds to a parameter and each row represents a particular test configuration. When the first two parameters, namely Operating System and Browser, are considered, all four possible interactions {Windows, Explorer}, {Windows, Firefox}, {MacOS, Explorer}, {MacOS, Firefox} are present in this test suite. In fact, a closer look reveals that the same holds for any pairwise combination of these parameters. In other words, the above test suite guarantees pairwise coverage using only 6 tests. Given the empirical result that many software errors are due to interactions between a small number of parameters, generating the minimum test suite with pairwise coverage becomes an important problem. Once pairwise coverage is ensured, to increase the strength of the test suite, one could next search for a set of tests that ensures triplewise coverage, i.e., interaction between any three parameters, and so on. The Covering Array Problem (CAP), CSPLib-045, generalizes this concept:
Definition 1 (Covering Array Problem)
A covering array CA(t, k, g_1, …, g_k, b) of size b is defined by the coverage strength t and a set of k parameters, each taking one of g_i possible values from an alphabet G_i.¹ The covering array has the property that for any t distinct columns, all possible combinations of values between the corresponding parameters exist at least once among the b rows. The covering array number, CAN(t, k, g_1, …, g_k), is the minimum such b that satisfies this property. Our running example corresponds to CA(2, 5, 2, 2, 2, 2, 2, 6) with CAN = 6. [¹ Our definition differs slightly from the original as it swaps the meaning of rows and columns. We found this to work better in the general software development setting, as each row then corresponds to a test configuration to run.]
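To make the coverage property concrete, the sketch below checks t-wise coverage of a candidate suite. The 6-test suite shown is a hypothetical valid pairwise covering array for five binary parameters, constructed for illustration; it is not necessarily the suite from the paper's table.

```python
from itertools import combinations, product

# A hypothetical 6-test suite for 5 binary parameters (0/1 stand for the two
# values of each component); one valid pairwise covering array, not the paper's.
suite = [
    (1, 1, 1, 1, 1),
    (1, 0, 1, 0, 1),
    (1, 0, 0, 1, 0),
    (0, 1, 1, 0, 0),
    (0, 1, 0, 1, 1),
    (0, 0, 0, 0, 0),
]

def is_covering(tests, strength=2, alphabet=(0, 1)):
    """True iff every value combination appears for every t-subset of columns."""
    k = len(tests[0])
    for cols in combinations(range(k), strength):
        seen = {tuple(row[c] for c in cols) for row in tests}
        if seen != set(product(alphabet, repeat=strength)):
            return False
    return True

print(is_covering(suite))  # True: 6 tests achieve pairwise coverage
```

Since CAN = 6 for this instance, removing any row must break the property; for example, `is_covering(suite[:5])` returns False.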
There exists a rich literature on combinatorial software testing. Apart from algebraic methods that can construct covering arrays for some special cases, the approaches for generating test cases broadly fall into two main categories: i) (greedy) heuristic solutions, and ii) declarative methods that take advantage of constraint solving systems, such as SAT or Constraint Programming (CP) solvers. This paper presents an approach that belongs to the second category. In particular, we propose a novel solution based on Column Generation, a well-known technique from Operations Research (OR) that allows solving large-scale optimization problems.
Conceptually, the Covering Array Problem lends itself naturally to Column Generation (CG). Each test configuration (i.e., each row) contributes partially to the coverage of interactions between parameters. Assuming that we can represent all possible test configurations, the optimization task then becomes selecting the minimum number of tests that would satisfy a given coverage strength. When stated in this way, the problem resembles the standard Set Covering Problem. Surprisingly, this connection has not been studied before in the CAP literature. The main contribution of this paper is to close this gap. Our contributions can be summarized as follows:

We propose a set covering formulation for the covering array problem (Section 2) and then show how to solve it using Column Generation (Section 3). To the best of our knowledge, this is the first attempt to employ CG for solving the CAP.

Unlike some of the existing work that is designed for special cases (Section 4), our approach is not restricted by the number of values allowed for the test parameters and makes no assumption on the coverage strength.

We reflect on our experience in productizing a system based on the proposed algorithm for general usage and highlight practical settings from the real world where parameterized testing brings value (Section 6).
2 Integer Programming (IP) Formulation for the CAP
Given an instance CA(t, k, g_1, …, g_k, b), let C denote the set of combinations of the k parameters taken t at a time, whose size is given by the standard choose operator, |C| = C(k, t). Now, let c = (c_1, …, c_t) ∈ C be a particular combination of t parameters. For this particular selection of parameters, the number of possible interactions, I_c, equals the size of the Cartesian product of the value sets of these parameters:

    I_c = g_{c_1} × g_{c_2} × … × g_{c_t}
Then, the total number of possible interactions, P, can be found as the sum over all combinations of t parameters:

    P = Σ_{c ∈ C} I_c
To make the presentation more concrete, let us turn to our running example. Figure 1 depicts the possible combinations, C, and the parameter interactions, I_c and P. In this case, there are |C| = C(5, 2) = 10 possible pairwise combinations between the five parameters. Since each parameter can take a value from a binary domain, each pairwise combination results in I_c = 2 × 2 = 4 possible interactions. Then, the total number of interactions equals P = 10 × 4 = 40.
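The counting above can be sketched for the running example. A uniform alphabet is assumed here for brevity; with heterogeneous alphabets, g**t would be replaced by the product of the relevant g_i values.

```python
from math import comb

t, k, g = 2, 5, 2                     # strength, parameters, alphabet size
num_combinations = comb(k, t)         # |C| = C(5, 2) = 10 parameter pairs
interactions_per_comb = g ** t        # I_c = 2 * 2 = 4 value pairs each
P = num_combinations * interactions_per_comb
print(num_combinations, interactions_per_comb, P)  # 10 4 40
```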
The key observation behind the Integer Programming (IP) formulation of the CAP is that, given a particular test configuration j ∈ T, we can identify which of the P interactions this test covers. This information can be represented by a {0, 1} columnar pattern array, denoted by a_j, where the value at index p, a_j[p], equals 1 if test j covers interaction p.
To illustrate the idea, consider the first test configuration, {Windows, Explorer, IPv4, Intel, Oracle DB}, from our running example. Figure 1 presents the coverage pattern, a_1, of this test configuration. As seen in the Test 1 column of Figure 1, the first test configuration covers 10 of the 40 pairwise interactions.
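A coverage pattern can be computed by fixing a global ordering of the interactions and flagging the ones a test matches. A minimal sketch follows; the 0/1 encoding of parameter values is an assumption made for illustration.

```python
from itertools import combinations, product

def coverage_pattern(test, alphabets, strength=2):
    """One {0, 1} entry per interaction, ordered by column subset then values."""
    pattern = []
    for cols in combinations(range(len(test)), strength):
        for values in product(*(alphabets[c] for c in cols)):
            covered = all(test[c] == v for c, v in zip(cols, values))
            pattern.append(1 if covered else 0)
    return pattern

# First test of the running example, with 0 encoding the first value of each
# component (Windows, Explorer, IPv4, Intel, Oracle DB).
a1 = coverage_pattern((0, 0, 0, 0, 0), [(0, 1)] * 5)
print(len(a1), sum(a1))  # 40 interactions in total, 10 covered by this test
```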
Given the coverage pattern, a_j, of every test from the exhaustive test set T, the IP formulation of the CAP asks for the selection of a minimum subset of patterns such that each interaction is covered at least once. More precisely, the IP model introduces a binary decision variable, x_j, for every test configuration j ∈ T, denoting whether it is selected in the test suite, and enforces a greater-than-or-equal-to-one constraint for each interaction to ensure its coverage. This leads to the standard Set Covering formulation:
    minimize    Σ_{j ∈ T} x_j
    subject to  Σ_{j ∈ T} a_j[p] · x_j ≥ 1    for all p ∈ {1, …, P}
                x_j ∈ {0, 1}                   for all j ∈ T              (1)
The inclusion of each test incurs a cost of one; hence, more precisely, we are dealing with the unicost set covering problem. While this formulation neatly captures the CAP, notice that the set T grows exponentially with the number of parameters. It would be prohibitively expensive in practice to enumerate this set upfront. Instead, we propose to use Column Generation to optimize the linear relaxation of this IP model.
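For the running example, T is small enough (2^5 = 32 tests) to enumerate, so the unicost set covering instance can be materialized directly. The sketch below solves it with a simple greedy heuristic, not the column generation method of this paper, just to show the formulation's ingredients.

```python
from itertools import combinations, product

K, STRENGTH = 5, 2
all_tests = list(product((0, 1), repeat=K))     # the exhaustive set T, 32 tests

def covered(test):
    """Interactions (column subset, value tuple) covered by a single test."""
    return {(cols, tuple(test[c] for c in cols))
            for cols in combinations(range(K), STRENGTH)}

universe = set().union(*(covered(t) for t in all_tests))  # all 40 interactions

# Greedy unicost set cover: repeatedly pick the test that covers the most
# still-uncovered interactions. A heuristic upper bound, not the CG optimum.
remaining, chosen = set(universe), []
while remaining:
    best = max(all_tests, key=lambda t: len(covered(t) & remaining))
    chosen.append(best)
    remaining -= covered(best)

print(len(universe), len(chosen))  # 40 interactions covered by a handful of tests
```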
3 The Column Generation Framework
Column Generation is a decomposition technique that allows solving large-scale linear programming problems to optimality. It involves solving a restricted master problem and then iterating by adding one or more columns. A column is a candidate for being added to the restricted master problem if its inclusion can improve the objective function. If no such improving column exists, then the optimal solution of the restricted master problem is also the optimal solution of the original linear program.

3.1 The Master Problem and Its Dual
Let us first consider the linear programming relaxation of (1):
    minimize    Σ_{j ∈ T} x_j
    subject to  Σ_{j ∈ T} a_j[p] · x_j ≥ 1    for all p ∈ {1, …, P}
                x_j ≥ 0                        for all j ∈ T              (2)
The information on whether a column can improve the objective value can be derived from the dual problem. Let y_p ≥ 0 indicate the dual values corresponding to the coverage constraints in (2). Then, the dual of the master problem presented in (2) is:
    maximize    Σ_{p=1}^{P} y_p
    subject to  Σ_{p=1}^{P} a_j[p] · y_p ≤ 1    for all j ∈ T
                y_p ≥ 0                          for all p                (3)
The main idea of column generation is to start with a small subset T′ ⊆ T such that the restricted master problem in (2) is feasible. Then, new columns (i.e., new patterns or test configurations) are incrementally added to the master problem until the linear relaxation becomes provably optimal. Our next task is to find a column in T \ T′ that could improve the current optimal solution of the linear relaxation.
3.2 The Pricing Sub-Problem: Generating New Columns
Given the optimal dual solution y* of (3), the reduced cost of a not-yet-considered column j ∈ T \ T′ is:
    r_j = 1 − Σ_{p=1}^{P} a_j[p] · y*_p                                   (4)
We need to determine whether there exist columns for which the quantity in (4) is less than 0, i.e., columns with negative reduced cost. If no such column can be found, the current solution is optimal. Finding a column with negative reduced cost is called pricing. While a pricing routine can return any column with a negative reduced cost, one typically searches for the one with the smallest reduced cost. In our case, the pricing sub-problem becomes:
    min_{j ∈ T \ T′}  ( 1 − Σ_{p=1}^{P} a_j[p] · y*_p )                   (5)
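On an instance as small as the running example, pricing can even be brute-forced over the 32 candidate tests instead of using a CP model. A minimal sketch with hypothetical uniform duals y_p = 1/6 (illustrative values, not an actual LP solution):

```python
from itertools import combinations, product

K, STRENGTH = 5, 2
interactions = [(cols, vals)
                for cols in combinations(range(K), STRENGTH)
                for vals in product((0, 1), repeat=STRENGTH)]   # P = 40

def pattern(test):
    return [1 if all(test[c] == v for c, v in zip(cols, vals)) else 0
            for cols, vals in interactions]

# Hypothetical duals from a restricted master: uniform y_p = 1/6. Since every
# test covers exactly 10 interactions, its reduced cost is 1 - 10/6 < 0.
y = [1.0 / 6.0] * len(interactions)

def reduced_cost(test):
    return 1.0 - sum(yp * ap for yp, ap in zip(y, pattern(test)))

best = min(product((0, 1), repeat=K), key=reduced_cost)
print(reduced_cost(best) < 0)  # True: an improving column exists
```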
Accordingly, we solve the following maximization problem to generate new columns:
    maximize    Σ_{p=1}^{P} y*_p · a[p]
    subject to  enforce[p][i] ↔ (test[c_i(p)] = v_i(p))       for all p, i = 1, …, t
                a[p] ↔ (enforce[p][1] ∧ … ∧ enforce[p][t])    for all p
                a[p], enforce[p][i] ∈ {0, 1},  test[m] ∈ G_m  for all m   (6)

where c_i(p) denotes the i-th parameter of the combination underlying interaction p, and v_i(p) the value that interaction requires for it.
The main decision variables of this constraint model are a and test, which correspond to the new pattern and its associated test configuration. These variables are linked together via the Boolean enforce expressions as follows. Consider a particular value for a combination c, interaction p, and parameter i; for example, “Windows” as the first parameter of the first interaction in Figure 1. When the first variable of the test is set to zero (meaning “Windows”), the first enforce Boolean variable becomes true. Then, the pattern index corresponding to this interaction can be set to one (meaning covered) if and only if all of its values are enforced in the test, and vice versa.
The objective is to maximize the gain from the generated pattern, weighted by the dual information. Conceptually, while the objective function tries to generate as many ones as possible in the pattern array, the enforce variables tie the pattern back to the actual test configuration, allowing coverage only for the interactions that are present in the test.
Upon solving the pricing problem given in (6), if there exists a pattern with objective value strictly greater than 1 (i.e., with negative reduced cost), its flattened-out version becomes the new column that is added to the restricted master problem. Then, the restricted master problem is solved again, this time with the new column. Consequently, updated dual information becomes available, which can be fed into the pricing problem to seek other patterns. The process iterates until no such pattern can be found, in which case we reach the LP optimum. Finally, we take the columns of the optimal LP and solve the restricted problem as an IP, which gives us a solution to the original problem. Notice, however, that the IP solution obtained at the root node from the optimal LP columns is not necessarily the optimal integer solution.
Overall, this is a hybrid decomposition in which we use Mathematical Programming (MP) to solve the master problem and Constraint Programming to solve the pricing problem. The former drives the minimization objective while the latter applies logical inference. In combination, the two approaches work hand in hand on the parts of the problem for which they are best suited: optimization for MP and filtering for CP. Notice that the formulation is parameterized such that it can accommodate different values of the coverage strength t, and allows heterogeneous alphabets where parameters can have different numbers of values, g_i. Moreover, the use of CP in the pricing problem allows introducing application-specific rules, the so-called side constraints. For example, certain interactions might not be allowed for the application; e.g., MacOS and Explorer need not be considered for testing.
4 Related Work
There is a vast literature on combinatorial software testing, and the problem of determining minimum-size covering arrays has been studied extensively. We discuss only parts of this rich literature due to space limits; a comprehensive survey can be found in [3, 9]. Unlike our approach, some of this work is specifically designed for binary/uniform domains and/or for pairwise coverage [5, 6, 2]. The closest to our approach are declarative models that take advantage of constraint solvers. Formulations based on constraint programming are given in [7, 8, 2]. Complementary to those are heuristic approaches such as AETG [1] and IPO [11]. Our work falls in between complete and incomplete methods: it uses a hybrid decomposition that brings together exact MP and CP formulations. While the overall approach is not exact, it still takes advantage of declarative models instead of implementing heuristic solutions from scratch.
Table 1: Sizes of the covering arrays found by each approach (“–” indicates unsolved within the time limit).

Strength: 2
 k   g   CG   CP   HR   SAT
 3   3    9    –    9    9
 3   4   16    –   16   16
 3   5   27    –   25   25
 3   6   38    –   36   36
 5   2    6    –    6    6
 5   3   11    –   15   11
 6   3   13    –   15   12

Strength: 3
 k   g   CG   CP   HR   SAT
 4   2    8    8    8    8
 5   2   10   10   12   10
 6   2   12   12   12   12
 7   2   12   12   13   12
 8   2   13   12   13   12
 9   2   17   12   18   12
10   2   18   12   18   12
11   2   19   12   18   12
12   2   21    –   18   15
13   2   22    –   19   16
14   2   23    –   19   17
15   2   24    –   19   18

Strength: 4
 k   g   CG   CP   HR   SAT
 5   2   16   16   24   16
 6   2   24   21   28   21
 7   2   26    –   38   24
 8   2   32    –   42   24
 9   2   37    –   50   24
10   2   40    –   50   24
 5   3  104    –  135   81
5 Numerical Results
We now present preliminary experiments to evaluate the performance of our CG approach in generating covering arrays.
Instances: We consider a mixed set of CA instances with coverage strength t ∈ {2, 3, 4}, number of parameters k ranging from 3 to 15, and alphabet size g ranging from 2 to 6.
Comparisons: We compare our solution with both complete and incomplete methods. In particular, we consider the declarative CP approach proposed in [8]. This constraint formulation is quite involved, as it combines two different matrix representations of the problem into one integrated model. It also includes dedicated search goals and a set of symmetry-breaking constraints to speed up the process. The incomplete method presented in [8] is based on local search/WalkSAT operating on a SAT encoding of the problem. The other incomplete method, HR, is from [4]; it first constructs a covering array and then heuristically reorders columns.
Setup: CG experiments were run on a Dell laptop with an Intel Core i3 CPU @ 1.4 GHz and 6 GB RAM, with a 60-second runtime limit as dictated by the cloud application. The CP results from [8] were obtained on a Pentium M @ 1.7 GHz with a 1-hour time limit, and the SAT results were obtained on a Pentium III @ 733 MHz, again with a 1-hour time limit.
Initialization of CG: We initialize the CG framework with artificial identity columns, each of which covers exactly one of the P interactions. This ensures the feasibility of the relaxed master problem. These columns are associated with a high cost; as such, they are not selected in the final solution.
Table 1 presents our preliminary results. The optimal solutions are given in bold, and “–” indicates unsolved cases within the time limit. At first glance, the incomplete SAT approach dominates the results, as it produces optimal solutions for the majority of instances. Our CG approach is able to find a solution for all instances, in some cases with better bounds than the other incomplete approach, HR. In contrast, the CP model struggles to find solutions when the alphabet is non-binary, even for a relatively small number of parameters. It is, however, quite effective with binary domains up to 11 parameters.
These preliminary results are promising for two particular reasons. First, the different runtime limits might partially explain the quality gap. More importantly, our current implementation uses a naïve CG initialization with artificial identity columns, whose number equals the total number of interactions, P, which is quite large; for example, in CA(4, 10, 2), there are more than 10000 artificial columns to start with. Despite that, CG quickly brings that number down to 40 columns within 60 seconds.
Typically, competitive CG approaches employ a good heuristic solution as the starting point. In that sense, we do not consider incomplete algorithms (such as the effective SAT local search here) as competitors to CG; on the contrary, we can benefit from existing good heuristic solutions. Finally, our results are for the IP solution at the root node, which can still be embedded into Branch-and-Price. Overall, this formulation opens the door to bringing together the efficiency of heuristics with the completeness of exact methods. Equipped with the best warm-starting heuristic and embedded into Branch-and-Price, column generation holds the potential to uncover new optimal solutions. This remains to be seen and is our ongoing work.
6 Cloud Service
Figure 2 presents part of our web service, which exposes declarative constraint models to broader audiences. As one of the five winners of the Oracle Fusion Cloud Application Challenge, the tool lets developers specify a number of components that they want to test, together with their possible values, and in return download a JUnit-compliant, ready-to-run parameterized test suite based on the minimum test set found. The following reflects our experience using the web service:

While the search space of a CA grows exponentially, CAN grows much more slowly, as can be observed in Table 1. In most cases, the optimal value remains the same even though k increases considerably. This is a desirable property in practice.

As shown in Figure 2, the individual contribution of each test to the t-wise coverage reveals an interesting diminishing-returns property. While the first two tests bring in 25% coverage each, the third test only covers an additional 20% due to duplicate interactions. The contributions continue to decrease as tests are added, which restates that more tests do not necessarily mean better coverage. This led to a post-investigation of existing test suites. As a result, tests that did not improve coverage were identified and eliminated, which further reduced testing costs.

The generated test cases might be counterintuitive. In this example, the tests {0,0,0,0,0} and {1,1,1,1,1} are not present in the solution, which puzzled developers. A hybrid approach that combines black-box testing with domain knowledge is necessary.

Prime candidates for parameterized testing include: i) UI tests with language and internationalization components, and ii) configuration of patches to ensure the fastest regression upon bug fixes.

There is further interest in embedding the tool within popular IDEs such as NetBeans or Eclipse as a parameterized test suite creation widget.
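The diminishing-returns behaviour noted above is easy to reproduce. The sketch below tracks the incremental pairwise coverage of each test in a hypothetical 6-test suite (not the tool's actual output, so the exact percentages differ from Figure 2):

```python
from itertools import combinations

def covered(test, strength=2):
    """Interactions (column subset, value tuple) covered by a single test."""
    return {(cols, tuple(test[c] for c in cols))
            for cols in combinations(range(len(test)), strength)}

# Hypothetical pairwise-covering suite for five binary parameters.
suite = [(1, 1, 1, 1, 1), (1, 0, 1, 0, 1), (1, 0, 0, 1, 0),
         (0, 1, 1, 0, 0), (0, 1, 0, 1, 1), (0, 0, 0, 0, 0)]

seen, gains = set(), []
for test in suite:
    new = covered(test) - seen
    gains.append(len(new))       # interactions this test adds on top of the rest
    seen |= new
print(gains, sum(gains))  # later tests contribute fewer new interactions
```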
References
 [1] M. L. Fredman, G. C. Patton, S. R. Dalal, and D. M. Cohen. The AETG system: An approach to testing based on combinatorial design. IEEE Transactions on Software Engineering, 23:437–444, 1997.
 [2] A. Gotlieb, A. Hervieu, and B. Baudry. Minimum pairwise coverage using constraint programming techniques. In Fifth IEEE International Conference on Software Testing, Verification and Validation, ICST 2012, Montreal, QC, Canada, April 17-21, 2012, pages 773–774, 2012.
 [3] A. Hartman. Software and hardware testing using combinatorial covering suites. In Graph Theory, Combinatorics and Algorithms. Operations Research/Computer Science Interfaces Series, volume 34. Springer, 2005.
 [4] A. Hartman and L. Raskin. Problems and algorithms for covering arrays. Discrete Mathematics, 284(1-3):149–156, 2004.
 [5] A. Hervieu, B. Baudry, and A. Gotlieb. PACOGEN: automatic generation of pairwise test configurations from feature models. In IEEE 22nd International Symposium on Software Reliability Engineering, ISSRE, pages 120–129, 2011.
 [6] A. Hervieu, D. Marijan, A. Gotlieb, and B. Baudry. Practical minimization of pairwisecovering test configurations using constraint programming. Information & Software Technology, 71:129–146, 2016.
 [7] B. Hnich, S. D. Prestwich, and E. Selensky. Constraint-based approaches to the covering test problem. In Recent Advances in Constraints, pages 172–186, 2004.
 [8] B. Hnich, S. D. Prestwich, E. Selensky, and B. M. Smith. Constraint models for the covering test problem. Constraints, 11(2-3):199–219, 2006.
 [9] D. R. Kuhn, R. C. Bryce, F. Duan, L. S. G. Ghandehari, Y. Lei, and R. N. Kacker. Combinatorial testing: Theory and practice. Advances in Computers, 99:1–66, 2015.
 [10] R. Kuhn, R. Kacker, Y. Lei, and J. Hunter. Combinatorial software testing. IEEE Computer, 42(8):94–96, 2009.
 [11] Y. Lei, R. Kacker, D. R. Kuhn, V. Okun, and J. Lawrence. IPOG/IPOG-D: efficient test generation for multi-way combinatorial testing. Softw. Test., Verif. Reliab., 18(3):125–148, 2008.
 [12] C. Nie and H. Leung. A survey of combinatorial testing. ACM Comput. Surv., 43(2):11:1–11:29, 2011.