1 Introduction
The set of decision problems is the set of all problems for which any instance has an answer of YES or NO. Among this set is the subset of nondeterministic polynomial time problems, denoted NP, which are decision problems with the following property. Consider any instance of an NP problem. If the instance has the answer YES, it should be possible to provide evidence of that answer which can be checked by a deterministic Turing machine in time bounded above by a polynomial function of the input size of the instance. Such evidence is called a certificate.
One of the most important NP problems is boolean satisfiability, which asks whether a set of boolean variables can be assigned values of TRUE or FALSE so as to make a given logical expression evaluate to TRUE. Certainly the problem is in NP, because if the answer is YES, simply providing such an assignment of values for each boolean variable suffices as a certificate. Any logical expression can be written in conjunctive normal form, that is, as a conjunction of clauses which, in turn, only involve OR and NOT connectives.
In 1971, the famous Cook–Levin theorem [1] demonstrated that any NP problem can be reduced to boolean satisfiability in conjunctive normal form (SAT) in the following sense. Consider any NP problem $A$. Then there exists a polynomial-time algorithm which accepts, as input, any instance $a$ of $A$, and outputs a new instance $s$ of SAT with input size bounded above by a polynomial function of the input size of $a$. Then $a$ and $s$ have the same answer, and if the answer is YES, there exists another polynomial-time algorithm which accepts, as input, any valid certificate of $s$, and outputs a valid certificate of $a$.
A corollary of the Cook–Levin theorem is that SAT is, in the worst case, at least as difficult, up to polynomial equivalence, as any other NP problem. There is another set of problems, called NP-hard problems, which is the set of problems that are at least as difficult as the most difficult NP problem(s). The intersection of NP and the set of NP-hard problems is called the set of NP-complete problems, which therefore includes boolean satisfiability, as well as many other problems. If a reduction can be constructed from any problem known to be NP-complete to another problem in NP, then that second problem is thereby proved to be NP-complete as well.
The first major study into this field was by Karp [4], who in 1972 provided a list of 21 NP-complete problems (including SAT) by describing twenty such reductions, starting by reducing SAT to three other problems, then reducing those problems to yet more problems, and so forth. A visualisation of the reduction tree is shown in Figure 1. We note that Karp's set of 21 problems includes many that have been studied intensively in the context of optimisation, even though in the present context they are cast as decision or feasibility problems. In the time since Karp's paper, interest in NP-complete problems has exploded. We refer interested readers to Papadimitriou's book [5] on the topic.
The interest in NP-complete problems stems from the open question of whether there exist any polynomial-time algorithms to solve them. Since any NP problem can be reduced to any NP-complete problem, a polynomial-time algorithm for even one NP-complete problem would prove the existence of polynomial-time algorithms for all NP problems. However, to date no such algorithm has been found. This question is captured in the famous P vs NP problem, which asks whether NP, and the set P of decision problems which are decidable in polynomial time, are equivalent.
Irrespective of whether a polynomial-time algorithm could be discovered for NP-complete problems, there is a second concern which, to date, has been largely ignored: specifically, the question of how large the resultant instance is after a reduction is performed. Although it is, by definition, bounded above by a polynomial, the leading coefficient and the order of the polynomial may be large. Indeed, even if they are relatively low, the reduction from one problem to another may require several intermediate reductions, which compounds the input size of the final instance. In this context, the efficiency of performing the reductions is less important than the resultant size, as the total time taken to perform the reductions is merely the sum of the individual reduction times.
If P $\neq$ NP, all NP-complete problems have exponential solving time in the worst case, and the merit of reducing any one of them to another problem is therefore lost if the input size grows too rapidly. Reductions usually only need to be performed for relatively large instances, since smaller instances can typically be solved by existing (exponential-time) algorithms. Even if P $=$ NP, it is vital that a reduction does not inflate the input size of the instance too dramatically. Consider the following extremely optimistic prospect: a polynomial-time algorithm is discovered for an NP-complete problem that is guaranteed to conclude after $n^2$ iterations, where $n$ is the input size. Suppose you then wish to solve another NP-complete problem, but the reduction results in quadratic growth in input size. Even for a very modestly sized problem, say $n = 10^4$, the resultant instance would have input size roughly $10^8$, and would take roughly $10^{16}$ iterations to solve, which is likely to be impractical. Hence, the order of the polynomial that bounds the input size of the new instance should be as small as possible; ideally, the polynomial should be linear, or at worst quasilinear.
The above argument is the motivation for introducing the following definition.
Definition 1.1.
Consider two problems $A$ and $B$. If a reduction exists from $A$ to $B$ such that the input size of the resultant instance of $B$ is bounded above by a linear function of the input size of the instance of $A$, then we say that $A$ lies in the linear orbit of $B$.
Obviously, if $A$ is in the linear orbit of $B$, and in turn $B$ is in the linear orbit of $C$, then $A$ is also in the linear orbit of $C$, so the property is transitive. However, it is not necessarily symmetric. For example, boolean satisfiability with three literals per clause (3SAT) is known to be in the linear orbit of the Hamiltonian cycle problem (HCP), but HCP is not known to be in the linear orbit of 3SAT, and indeed, it seems unlikely that it is. For completeness, we say that a problem is in its own linear orbit.
Then, consider a subset of NP, called $K$, defined in such a way that any problem in NP is in the linear orbit of at least one problem in $K$, and $K$ is minimal. Certainly it seems reasonable that research efforts should be primarily focused on problems in $K$, since these are the problems with the most potential scope for practical use. Indeed, if an efficient algorithm is developed for a problem with a large linear orbit, then all of the problems within its linear orbit can take advantage of this algorithm as well, without suffering such explosive growth as in the example given earlier. Of course, a natural question to ask is whether $K$ is finite. Alternatively, if $K$ is not finite, what proportion of NP does it occupy?
In this manuscript, we focus on a more modest question, as a case study: if we consider solely the set, $\mathcal{K}$, of Karp's 21 NP-complete problems, how small a kernel subset $K \subseteq \mathcal{K}$ can we identify which possesses the property that all 21 problems lie in the linear orbit of at least one problem in $K$? The 21 NP-complete problems described by Karp are as follows:
(1) Boolean satisfiability in conjunctive normal form (SAT)
(2) 0–1 Integer Programming
(3) Clique
(4) Set Packing
(5) Vertex Cover
(6) Set Covering
(7) Feedback Node Set
(8) Feedback Arc Set
(9) Directed Hamiltonian cycle problem (DHCP)
(10) Undirected Hamiltonian cycle problem (HCP)
(11) SAT with at most 3 literals per clause (3SAT)
(12) Chromatic Number
(13) Clique Cover
(14) Exact Cover
(15) Hitting Set
(16) Steiner Tree
(17) 3-Dimensional Matching
(18) Knapsack
(19) Job Sequencing
(20) Partition
(21) Max Cut
We provide reductions to demonstrate that $K$ can be reduced to cardinality six, specifically problems (2), (7), (10), (12), (13) and (19). We also discuss an ambiguity in the definition of input size that makes it unclear whether we should consider (12) and (13) as lying in each other's linear orbit. If so, then $K$ can be reduced to cardinality five. From an optimisation perspective, it is natural to expect that optimisation versions of problems in $K$ (for instance, see the discussion of complexity in Cook et al. [2]) may also be good surrogates for the optimisation versions of problems in their respective linear orbits.
Dealing with inequality constraints in 0–1 Integer Programming
Throughout this manuscript, the majority of conversions will be to 0–1 Integer Programming. Using the definition given by Karp, only equality constraints are permitted. However, it will often be convenient to use inequality constraints. Of course, it is always possible to reduce an inequality-constrained integer program to an equality-constrained integer program through the use of slack and surplus variables. However, since we only permit binary variables, sometimes many slacks and surpluses will be needed. It is important to check very carefully how many slacks are required, to ensure we do not exceed linear growth in the input size.
Consider the following example:

$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n \le b,$

where $a_1, \ldots, a_n, b$ are positive integers. Assuming that $x_1, \ldots, x_n$ are binary variables, it is clear the LHS must be between 0 and $\sum_{i=1}^{n} a_i$. When converting this constraint to an equality constraint, we must first ask ourselves: what is the maximum difference between the LHS and RHS for which the inequality is still satisfied? It is clear that this situation occurs when the LHS is 0, in which case the difference is $b$. Then we need to use as many slack variables as necessary to handle this situation. Define $p = \lfloor \log_2 b \rfloor$. Then we can rewrite the above constraint as:

$a_1 x_1 + a_2 x_2 + \cdots + a_n x_n + \sum_{s=0}^{p-1} 2^s y_s + (b - 2^p + 1) y_p = b.$

It is easy to check that this constraint can be satisfied if and only if $x_1, \ldots, x_n$ are chosen to satisfy the original inequality constraint. In the process of converting to an equality constraint, we introduced $p + 1$ new variables, and $p + 1$ new nonzero entries into the constraints coefficients matrix. The nonzero entries are $1, 2, 4, \ldots, 2^{p-1}, b - 2^p + 1$ respectively. These can be encoded in binary using $1, 2, 3, \ldots, p, O(p)$ bits respectively. Hence, converting such an inequality constraint to an equality constraint increases the input size by $O(\log_2^2 b)$.
For problems where $b$ can grow with the size of the problem, care needs to be taken to ensure that this has not rendered the conversion superlinear. For example, suppose the input size of the original problem is $N$. If $b = 2^{\Theta(N)}$ then the above conversion produces a constraint that requires $\Theta(N^2)$ memory to encode, and hence is not linearly-growing. Likewise, if $b = 2^{\Theta(\sqrt{N})}$ and there are $\Theta(\sqrt{N})$ inequality constraints like the above, then after converting we require $\Theta(N^{3/2})$ memory to encode them all, and hence it also is not linearly-growing.
In the conversions that follow, we will consider these situations on a case-by-case basis to confirm that no such issues arise. Obviously, the same argument as above can be made when converting greater-than inequality constraints, using surplus variables, as well.
Note that if the maximum difference between the LHS and RHS in a valid solution is bounded above by a fixed constant, then the input size is increased by only $O(1)$ per constraint, and therefore the conversion remains linearly-growing in any situation. In such a case we will say that the inequality constraint is constant-bounded.
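To make the bookkeeping concrete, the binary-weighted slack construction can be sketched as follows (a minimal Python illustration; the function names are ours, and the brute-force check is for illustration only):

```python
from itertools import product

def slack_coefficients(b):
    """Binary-weighted coefficients for binary slack variables that can
    represent every integer slack value in [0, b] (assumes b >= 1)."""
    p = b.bit_length() - 1          # p = floor(log2(b))
    return [2 ** s for s in range(p)] + [b - 2 ** p + 1]

def to_equality(a, b):
    """Convert sum(a_i x_i) <= b over binary x_i into the equality
    sum(a_i x_i) + sum(c_s y_s) = b with binary slack variables y_s."""
    return a + slack_coefficients(b), b

# Sanity check: the O(log b) slacks reach every value in [0, b], so the
# equality is satisfiable exactly when the original inequality is.
coeffs = slack_coefficients(11)
reachable = {sum(c * y for c, y in zip(coeffs, ys))
             for ys in product([0, 1], repeat=len(coeffs))}
```

Only $\lfloor \log_2 b \rfloor + 1$ slack variables are introduced, in line with the logarithmic growth discussed above.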
Linearly-Growing Reductions
Unless otherwise stated, all reductions in this paper are original. Those reductions which are not original are marked as such, and all come from Karp's paper [4].
Satisfiability to 3SAT (Karp)
Satisfiability: Can a set of literals be assigned values of TRUE or FALSE so as to satisfy a set of clauses?
Input: $m$ clauses $C_1, \ldots, C_m$ and $n$ literals. Each clause $C_i$ is of size $k_i$.
Input size: $N = \sum_{i=1}^{m} k_i$.
3SAT: Can a set of literals be assigned values of TRUE or FALSE so as to satisfy a set of clauses which all have cardinality at most 3?
Conversion: Produce a new instance of 3SAT by constructing new clauses for each $C_i$, where $C_i = (\ell_1 \vee \ell_2 \vee \cdots \vee \ell_{k_i})$, as follows:
If $k_i \le 3$: Simply repeat $C_i$.
If $k_i > 3$: Introduce $k_i - 3$ new variables $y_1, \ldots, y_{k_i - 3}$ and construct $k_i - 2$ new clauses: $(\ell_1 \vee \ell_2 \vee y_1), (\bar{y}_1 \vee \ell_3 \vee y_2), (\bar{y}_2 \vee \ell_4 \vee y_3), \ldots, (\bar{y}_{k_i - 3} \vee \ell_{k_i - 1} \vee \ell_{k_i})$.
Explanation: The intention is for the set of new clauses to all be satisfiable if and only if the original clause is satisfiable. Consider the case where one of the literals in $C_i$, say $\ell_j$, is assigned a value of TRUE. Then it can be seen that assigning $y_s = $ TRUE for $s \le j - 2$, and $y_s = $ FALSE for $s > j - 2$, satisfies all the clauses.
Next, consider the case where all of the literals in $C_i$ are assigned a value of FALSE. From $(\ell_1 \vee \ell_2 \vee y_1)$ it is clear that $y_1$ must be assigned a value of TRUE. However, then from $(\bar{y}_1 \vee \ell_3 \vee y_2)$, we see that $y_2$ must also be assigned a value of TRUE. Inductively, it follows that $y_s = $ TRUE for $s = 1, \ldots, k_i - 3$. However, this implies that the final clause $(\bar{y}_{k_i - 3} \vee \ell_{k_i - 1} \vee \ell_{k_i})$ is not satisfied. Hence we conclude that the new clauses are all satisfiable if and only if the original clause was satisfiable.
Final Input Size: Consider each clause $C_i$ with $k_i > 3$. It is clear that in the new instance this has been replaced with $k_i - 2$ new clauses which each contain 3 literals. This implies that each such clause (with input size $k_i$) has been replaced by new clauses with total input size $3(k_i - 2) < 3k_i$. Therefore an upper bound on the input size of the converted problem is $3N$. It is clear that the input size of the converted problem is a linear function of the original input size.
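The clause-splitting rule can be sketched as follows (Python, DIMACS-style integer literals; the exponential satisfiability checker is included purely to illustrate equisatisfiability on tiny instances, and all names are ours):

```python
from itertools import product

def clause_to_3sat(clause, next_var):
    """Split one DIMACS-style clause (ints: v or -v) into clauses of size
    at most 3, taking fresh variable numbers from next_var upward."""
    k = len(clause)
    if k <= 3:
        return [clause], next_var
    out = [[clause[0], clause[1], next_var]]
    for j in range(2, k - 2):
        out.append([-next_var, clause[j], next_var + 1])
        next_var += 1
    out.append([-next_var, clause[k - 2], clause[k - 1]])
    return out, next_var + 1

def satisfiable(clauses, n_vars):
    """Brute-force SAT check (exponential; testing only)."""
    return any(all(any(bits[abs(l) - 1] == (l > 0) for l in c)
                   for c in clauses)
               for bits in product([False, True], repeat=n_vars))
```

A clause of size $k$ becomes $k - 2$ clauses over $k - 3$ fresh variables, matching the count used in the size analysis above.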
Clique to 0–1 Integer Programming
Clique: Does a graph contain a clique (i.e., a set of mutually adjacent vertices) of size $k$?
Input: Graph $G = (V, E)$ containing $n$ vertices and $m$ edges. Positive integer $k$.
Input size: $O(n + m)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$y_{uv}$

for each edge $(u,v)$ in the graph,

$x_v$

for each vertex $v$ in the graph.
Then the 0–1 Integer Program is described by the following constraints:
(1) $y_{uv} - x_u \le 0$, for each edge $(u,v) \in E$,
(2) $y_{uv} - x_v \le 0$, for each edge $(u,v) \in E$,
(3) $x_u + x_v - y_{uv} \le 1$, for each edge $(u,v) \in E$,
(4) $\sum_{v \in V} x_v = k$,
(5) $\sum_{(u,v) \in E} y_{uv} \ge k(k-1)/2$.
Explanation: The intention is for variables $x_v$ to be 1 if vertex $v$ is included in the clique, and 0 otherwise. Likewise, variables $y_{uv}$ will be 1 if edge $(u,v)$ is included in the clique, and 0 otherwise. Constraints (1)–(3) ensure that $y_{uv} = 1$ if and only if both $x_u = 1$ and $x_v = 1$. Therefore the variables associated with edges between vertices in the clique are set to 1. Constraint (4) ensures that exactly $k$ vertices are in the clique, and constraint (5) ensures that there are sufficiently many edges between those vertices to constitute a clique.
Final Input Size: It can be checked that the constraints coefficients matrix will contain $O(n + m)$ nonzero entries, and the RHS will contain $O(m)$ entries. Note also that the RHS of constraint (5) will be less than $n^2$ and can therefore be encoded in at most $2\log_2 n$ bits. All of the inequality constraints are constant-bounded. It is clear that the input size of the converted problem is a linear function of the original input size.
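A sketch of this conversion, together with a brute-force feasibility check usable on small instances (names ours; not intended as an efficient implementation):

```python
from itertools import product

def clique_ip(n, edges, k):
    """Build constraints (coeffs, sense, rhs) over variables ordered
    x_0 .. x_{n-1} (vertices) then y_e for each edge e."""
    nv = n + len(edges)
    cons = []
    for idx, (u, v) in enumerate(edges):
        y = n + idx
        for end in (u, v):              # y_uv <= x_u and y_uv <= x_v
            c = [0] * nv
            c[y], c[end] = 1, -1
            cons.append((c, "<=", 0))
        c = [0] * nv                    # x_u + x_v - y_uv <= 1
        c[u], c[v], c[y] = 1, 1, -1
        cons.append((c, "<=", 1))
    c = [0] * nv                        # exactly k vertices chosen
    for v in range(n):
        c[v] = 1
    cons.append((c, "==", k))
    c = [0] * nv                        # enough edges inside the clique
    for idx in range(len(edges)):
        c[n + idx] = 1
    cons.append((c, ">=", k * (k - 1) // 2))
    return nv, cons

def ip_feasible(nv, cons):
    """Brute-force 0-1 feasibility check (exponential; testing only)."""
    ops = {"<=": int.__le__, ">=": int.__ge__, "==": int.__eq__}
    return any(all(ops[s](sum(a * x for a, x in zip(c, xs)), r)
                   for c, s, r in cons)
               for xs in product([0, 1], repeat=nv))
```

For instance, a triangle with a pendant vertex contains a clique of size 3 but not of size 4, and the integer program is feasible in exactly those cases.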
Set Packing to 0–1 Integer Programming
Set Packing: Is it possible to select $k$ mutually disjoint sets from a family of sets?
Input: Family of sets $\mathcal{S} = \{S_1, \ldots, S_N\}$, with $N$ total sets and each set containing entries from a universe set $U = \{u_1, \ldots, u_t\}$. Positive integer $k$.
Input size: $O\left(\sum_{j=1}^{N} |S_j|\right)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$x_j$

for $j = 1, \ldots, N$.
Define $a_{ij}$ to be 1 if set $S_j$ contains $u_i$, and 0 otherwise. Then the 0–1 Integer Program is described by the following constraints:
(6) $\sum_{j=1}^{N} a_{ij} x_j \le 1$, for $i = 1, \ldots, t$,
(7) $\sum_{j=1}^{N} x_j = k$.
Explanation: The intention is for variables $x_j$ to be 1 if set $S_j$ is one of the $k$ mutually disjoint sets, and 0 otherwise. Constraint (7) ensures exactly $k$ sets are selected. Constraints (6) ensure that each entry of $U$ appears no more than once in the selected sets.
Final Input Size: It is easy to check that the number of nonzeros in the constraints coefficients matrix will be $\sum_{j=1}^{N} |S_j| + N$, and the RHS will contain $t + 1$ entries. Constraints (6) are constant-bounded inequalities. It is clear that the input size of the converted problem is a linear function of the original input size.
Node Cover to Set Covering (Karp)
Node Cover: Is it possible to select no more than $k$ nodes in a graph $G$ such that every edge in $G$ is incident with at least one of the selected nodes?
Input: Graph $G = (V, E)$ containing $n$ vertices and $m$ edges. Positive integer $k$.
Input size: $O(n + m)$.
Set Covering: Is it possible to select no more than $l$ sets from a family of sets $\mathcal{S}$, such that the union of the selected sets is equal to the union of all sets in $\mathcal{S}$?
Conversion: Produce a new instance of Set Covering, in which each set contains elements taken from the set of edges in $G$, in the following manner. For each $v \in V$, the set $S_v$ contains the edges incident with node $v$. Finally, assign $l$ (the constant in Set Covering) to be equal to $k$.
Explanation: Each set $S_v$ corresponds to a node $v$ in $G$. Since $l = k$, we can only select as many sets as we can select nodes. Then once the sets are selected, a set covering is obtained if and only if every element in the sets is covered. This corresponds to the situation where every edge in the graph is incident with at least one of the selected nodes.
Final Input Size: The number of entries over all of the sets will be $2m$, and the one constant input is precisely equal to $k$. It is clear that the input size of the converted problem is a linear function of the original input size.
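This conversion is simple enough to sketch directly (Python; the names are ours, and the brute-force cover check is for illustration only):

```python
from itertools import combinations

def node_cover_to_set_covering(n, edges, k):
    """Each node v becomes the set of edges incident with v; the
    selection budget is unchanged."""
    return [frozenset(e for e in edges if v in e) for v in range(n)], k

def has_set_cover(family, budget):
    """Brute-force Set Covering check (exponential; testing only)."""
    universe = frozenset().union(*family)
    return any(frozenset().union(*sel) == universe
               for r in range(budget + 1)
               for sel in combinations(family, r))
```

On the path 0–1–2–3, the nodes {1, 2} cover all three edges, so the converted instance has a cover within budget 2 but not within budget 1.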
Set Covering to 0–1 Integer Programming
Set Covering: From a family of sets, is it possible to select no more than $k$ sets such that their union is equal to the union of all sets in the family?
Input: Family of sets $\mathcal{S} = \{S_1, \ldots, S_N\}$, with $N$ total sets and each set containing entries from a universe set $U = \{u_1, \ldots, u_t\}$. Positive integer $k$.
Input size: $O\left(\sum_{j=1}^{N} |S_j|\right)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$x_j$

for $j = 1, \ldots, N$.
Define $a_{ij}$ to be 1 if set $S_j$ contains $u_i$, and 0 otherwise. Then the 0–1 Integer Program is described by the following constraints:
(8) $\sum_{j=1}^{N} a_{ij} x_j \ge 1$, for $i = 1, \ldots, t$,
(9) $\sum_{j=1}^{N} x_j = k$.
Explanation: Although we only require at most $k$ sets, the problem generalises to choosing exactly $k$ sets, since adding further sets to a valid cover never uncovers any element. Then, the intention is for variables $x_j$ to be 1 if set $S_j$ is to be one of the $k$ sets chosen, and 0 otherwise. Constraint (9) ensures exactly $k$ sets are selected. Constraints (8) ensure that each entry of $U$ appears in at least one of the selected sets.
Final Input Size: It is easy to see that the number of nonzeros in the constraints coefficients matrix will be $\sum_{j=1}^{N} |S_j| + N$, and the RHS will contain $t + 1$ entries. Constraints (8) are not constant-bounded, so we consider them individually. For a particular choice of $i$, the difference between the LHS and RHS of the constraint could be as large as $d_i - 1$, where $d_i$ is the number of sets containing $u_i$. It is then clear that converting all constraints in (8) to equality constraints increases the size of the conversion by $O\left(\sum_{i=1}^{t} \log_2^2 d_i\right)$, which is certainly no bigger than a linear function of the input size. It is clear that the input size of the converted problem is a linear function of the original input size.
Feedback Arc Set to Feedback Node Set
Feedback Arc Set: Given a directed graph, is it possible to select a set of no more than $k$ arcs such that every (directed) cycle in the graph travels through at least one of the selected arcs?
Input: Graph $G$ containing $n$ vertices and $m$ (directed) arcs. Positive integer $k$.
Input size: $O(n + m)$.
Feedback Node Set: Given a directed graph, is it possible to select a set of no more than $k$ nodes such that every (directed) cycle in the graph travels through at least one of the selected nodes?
Conversion: We first convert the instance to an equivalent instance with nicer properties. Define the "path graph" $P_s$ to be a graph containing vertices $p_1, \ldots, p_s$ and arcs $(p_i, p_{i+1})$ for every pair satisfying $1 \le i < s$. Now consider the original graph, say $G$, and expand it in the following way. Suppose vertex $v$ has indegree $d^-_v$ and outdegree $d^+_v$. Then replace vertex $v$ with $P_{d^-_v + d^+_v}$, ensuring that each arc formerly entering $v$ is now incident on a unique vertex among the first $d^-_v$ vertices of the path graph, and each arc formerly departing $v$ now departs a unique vertex among the last $d^+_v$ vertices. If there is a feedback arc set of size no more than $k$ in the original graph, there will be an equivalent one in this new graph, and vice versa. The new graph has indegree and outdegree bounded by a small constant, and no more than $2m$ vertices. Hence there will be no more than $3m$ arcs in the new graph. Call the new graph $G'$.
Then the line graph of $G'$ constitutes an instance of Feedback Node Set with the identical choice of $k$.
Explanation: It is clear that a feedback node set in the line graph corresponds to a feedback arc set in $G'$. The reason the conversion to $G'$ is performed first is to obtain a sparse graph. This ensures the line graph is also sparse, and has size which is a linear function of the number of arcs in $G'$, which in turn is linear in $m$. Hence we only need to show that $G'$ has a feedback arc set of size no more than $k$ if and only if $G$ does.
It is clear that we can obtain a feedback arc set of $G'$ from any feedback arc set of $G$ by simply selecting the corresponding arcs in $G'$, so the proof in one direction is trivial. Now consider the other direction. We will now view a feedback arc set as a set of arcs that may be removed, leaving a directed acyclic graph. Hence we can restrict our consideration to the cycles in $G'$. The only "new" cycles in $G'$ (i.e., those that do not have a corresponding cycle in $G$) are those that are created by being allowed to visit one or more path graphs multiple times. These effectively correspond to a union of cycles in $G$. Hence if there is no feedback arc set of size $k$ that removes all the cycles in $G$, there is definitely none of size $k$ in $G'$ either.
Final Input Size: As argued above, the line graph of $G'$ will be sparse (i.e., have indegree and outdegree bounded by a constant) and will have a vertex set of cardinality which is a linear function of $m$. It is clear that the input size of the converted problem is a linear function of the original input size.
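The core correspondence between feedback arc sets of a digraph and feedback node sets of its line graph can be illustrated as follows (Python sketch, names ours; the degree-bounding expansion is omitted here since it only affects the size of the construction, and the brute-force searches are for tiny test instances only):

```python
from itertools import combinations

def line_digraph(arcs):
    """Node set = arcs of G; arc e -> f whenever head(e) = tail(f)."""
    return {e: [f for f in arcs if e[1] == f[0]] for e in arcs}

def is_acyclic(nodes, succ):
    """DFS cycle detection on a directed graph given by a successor map."""
    seen, onpath = set(), set()
    def dfs(v):
        seen.add(v)
        onpath.add(v)
        for w in succ(v):
            if w in onpath or (w not in seen and not dfs(w)):
                return False
        onpath.discard(v)
        return True
    return all(v in seen or dfs(v) for v in list(nodes))

def min_feedback_arc_set(n, arcs):
    """Smallest number of arcs whose removal leaves G acyclic (brute force)."""
    for r in range(len(arcs) + 1):
        for rem in combinations(arcs, r):
            keep = [a for a in arcs if a not in rem]
            if is_acyclic(range(n), lambda v: [b for a, b in keep if a == v]):
                return r

def min_feedback_node_set(nodes, succ_map):
    """Smallest number of nodes whose removal leaves the graph acyclic."""
    for r in range(len(nodes) + 1):
        for rem in combinations(nodes, r):
            if is_acyclic([v for v in nodes if v not in rem],
                          lambda v: [w for w in succ_map[v] if w not in rem]):
                return r
```

On a digraph with two cycles sharing the arc $(0,1)$, both minima equal 1: removing that arc (or the corresponding line-graph node) breaks every cycle.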
Directed HCP to Undirected HCP (Karp)
Directed HCP: Does a given directed graph contain a simple cycle that visits every vertex?
Input: Graph $G$ containing $n$ vertices and $m$ (directed) arcs.
Input size: $O(n + m)$.
Undirected HCP: Does a given undirected graph contain a simple cycle that visits every vertex?
Conversion: Define subgraphs $H_i$ for $i = 1, \ldots, n$ to be 3-vertex subgraphs, with vertex set $\{v_i^1, v_i^2, v_i^3\}$, each containing edges $(v_i^1, v_i^2)$ and $(v_i^2, v_i^3)$. We then construct a new instance of Undirected HCP by replacing each vertex $i$ in $G$ with $H_i$. Then for each directed arc $(i, j)$, the new instance contains an edge going from vertex $v_i^3$ of $H_i$ to vertex $v_j^1$ of $H_j$.
Explanation: The second vertex in each $H_i$ is a degree 2 vertex, and so it is clear that any time vertex $v_i^1$ of any $H_i$ is reached, vertices $v_i^2$ and then $v_i^3$ of the same $H_i$ must immediately follow. Then it is only possible to exit each $H_i$ via an edge incident on the third vertex, which corresponds to an arc that departs vertex $i$ in $G$. Likewise, each time an $H_j$ is entered, it is entered via vertex $v_j^1$, which corresponds to an arc that enters vertex $j$ in $G$. Then it is clear that Hamiltonian cycles in $G$ have a 1–1 correspondence with the Hamiltonian cycles in the converted instance.
Final Input Size: Each directed arc in $G$ now has a corresponding edge in the converted instance. In addition, there are two extra edges for each $H_i$, so the new instance contains $3n$ vertices and $m + 2n$ edges. It is clear that the input size of the converted problem is a linear function of the original input size.
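The gadget construction can be sketched as follows (Python; the brute-force Hamiltonicity checks are included purely to illustrate the correspondence on tiny instances, and all names are ours):

```python
from itertools import permutations

def dhcp_to_hcp(n, arcs):
    """Vertex i becomes the path (i,1)-(i,2)-(i,3); arc (i,j) becomes the
    undirected edge between (i,3) and (j,1)."""
    edges = set()
    for i in range(n):
        edges.add(((i, 1), (i, 2)))
        edges.add(((i, 2), (i, 3)))
    for i, j in arcs:
        edges.add(((i, 3), (j, 1)))
    return edges

def has_dir_ham_cycle(n, arcs):
    """Brute force (testing only)."""
    A = set(arcs)
    return any(all((p[i], p[(i + 1) % n]) in A for i in range(n))
               for p in permutations(range(n)))

def has_undir_ham_cycle(vertices, edges):
    """Brute force (testing only); fixes the first vertex to cut symmetry."""
    E = {frozenset(e) for e in edges}
    vs = sorted(vertices)
    return any(all(frozenset((c[i], c[(i + 1) % len(c)])) in E
                   for i in range(len(c)))
               for p in permutations(vs[1:])
               for c in [[vs[0]] + list(p)])
```

A directed triangle converts to a 9-vertex undirected graph that is Hamiltonian, while an acyclic digraph converts to a non-Hamiltonian one.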
3SAT to 0–1 Integer Programming (Karp)
3SAT: Can a set of literals be assigned values of TRUE or FALSE so as to satisfy a set of clauses $C_1, \ldots, C_m$, where $|C_i| \le 3$ for all $i$?
Input: $m$ clauses and $n$ literals. Each clause $C_i$ is of size $k_i \le 3$.
Input size: $N = \sum_{i=1}^{m} k_i$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$x_j$

for $j = 1, \ldots, n$.
Suppose the entries in clause $C_i$ are literals over the variable indices $J_i^+ \cup J_i^-$, where $J_i^+$ and $J_i^-$ index the uncomplemented and complemented variables in $C_i$ respectively. Define $n_i = |J_i^-|$ to be the number of complemented variables in $C_i$, and define $b_i$ as follows:

$b_i = 1 - n_i.$

Then the 0–1 Integer Program is described by the following constraints:
(10) $\sum_{j \in J_i^+} x_j - \sum_{j \in J_i^-} x_j \ge b_i$, for $i = 1, \ldots, m$.
Explanation: The intention is for variables $x_j$ to be 1 if literal $j$ is to be assigned TRUE, and 0 if it is to be assigned FALSE. Then for each clause we want at least one of the literals to have the desired value. If the literal is not complemented, we include the term $x_j$. If it is complemented, then we include $1 - x_j$. Demanding that these terms sum to at least 1, and rearranging, we see that for each clause, we must satisfy constraint (10). Likewise, if constraints (10) are satisfied, then there is a valid assignment of literals that satisfies all of the clauses.
Final Input Size: The number of nonzeros in the constraints coefficients matrix is $N$, and there are $m$ RHS entries. Constraints (10) are all constant-bounded. It is clear that the input size of the converted problem is a linear function of the original input size.
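The per-clause constraint can be generated mechanically (Python sketch, names ours; the exhaustive check illustrates that the constraint is satisfied exactly when the clause is):

```python
from itertools import product

def clause_constraint(clause):
    """DIMACS-style clause (e.g. [1, -2, 3]) -> (coeffs, rhs), meaning
    sum(coeffs[v] * x_v) >= rhs, as in constraint (10)."""
    coeffs, rhs = {}, 1
    for lit in clause:
        v = abs(lit)
        coeffs[v] = coeffs.get(v, 0) + (1 if lit > 0 else -1)
        if lit < 0:
            rhs -= 1          # each complemented literal contributes 1 - x_v
    return coeffs, rhs

def check_equivalence(clauses, n):
    """Exhaustively confirm: an assignment satisfies the clauses iff it
    satisfies the generated constraints (exponential; testing only)."""
    for bits in product([0, 1], repeat=n):
        sat = all(any((lit > 0) == bool(bits[abs(lit) - 1]) for lit in c)
                  for c in clauses)
        ip = all(sum(a * bits[v - 1] for v, a in co.items()) >= r
                 for co, r in map(clause_constraint, clauses))
        if sat != ip:
            return False
    return True
```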
Exact Cover to 0–1 Integer Programming
Exact Cover: Given a family of sets, is it possible to select a subfamily of mutually disjoint sets whose union is equal to the union of all sets in the family?
Input: Family of sets $\mathcal{S} = \{S_1, \ldots, S_N\}$, with $N$ total sets and each set containing entries from a universe set $U = \{u_1, \ldots, u_t\}$.
Input size: $O\left(\sum_{j=1}^{N} |S_j|\right)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$x_j$

for $j = 1, \ldots, N$.
Define $a_{ij}$ to be 1 if set $S_j$ contains $u_i$, and 0 otherwise. Then the 0–1 Integer Program is described by the following constraints:
(11) $\sum_{j=1}^{N} a_{ij} x_j = 1$, for $i = 1, \ldots, t$.
Explanation: The intention is for variables $x_j$ to be 1 if set $S_j$ is to be included in the exact cover, and 0 otherwise. Then constraints (11) ensure that each entry in $U$ appears precisely once in the selected sets.
Final Input Size: The number of nonzeros in the constraints coefficients matrix is precisely equal to $\sum_{j=1}^{N} |S_j|$, and there are $t$ RHS entries. It is clear that the input size of the converted problem is a linear function of the original input size.
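A sketch of the conversion, with a brute-force feasibility check for small instances (names ours):

```python
from itertools import product

def exact_cover_matrix(family, universe):
    """a[i][j] = 1 iff universe[i] is in family[j]; constraint (11) asks
    each row to sum to exactly 1 over the selected columns."""
    return [[int(u in s) for s in family] for u in universe]

def exact_cover_exists(family, universe):
    """Brute-force feasibility of the 0-1 program (testing only)."""
    a = exact_cover_matrix(family, universe)
    return any(all(sum(row[j] * x[j] for j in range(len(family))) == 1
                   for row in a)
               for x in product([0, 1], repeat=len(family)))
```

For example, $\{1,2\}$ and $\{3,4\}$ exactly cover $\{1,2,3,4\}$, whereas no subfamily of $\{1,2\}, \{2,3\}$ exactly covers $\{1,2,3\}$.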
Hitting Set to 0–1 Integer Programming
Hitting Set: Given a family of sets, is it possible to construct a new set $W$ such that the intersection between $W$ and each of the given sets has cardinality 1?
Input: A family of sets $\mathcal{S} = \{S_1, \ldots, S_N\}$, with $N$ total sets and each set containing entries from a universe set $U = \{u_1, \ldots, u_t\}$.
Input size: $O\left(\sum_{j=1}^{N} |S_j|\right)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$x_i$

for $i = 1, \ldots, t$.
Define $a_{ji}$ to be 1 if set $S_j$ contains $u_i$, and 0 otherwise. Then the 0–1 Integer Program is described by the following constraints:
(12) $\sum_{i=1}^{t} a_{ji} x_i = 1$, for $j = 1, \ldots, N$.
Explanation: The intention is for variables $x_i$ to be 1 if element $u_i$ is to be included in $W$, and 0 otherwise. Then constraints (12) ensure that each set contains precisely one of the selected elements.
Final Input Size: The number of nonzeros in the constraints coefficients matrix is precisely equal to $\sum_{j=1}^{N} |S_j|$, and there are $N$ RHS entries. It is clear that the input size of the converted problem is a linear function of the original input size.
Steiner Tree to 0–1 Integer Programming
Steiner Tree: Given a graph $G$, weights $w_{uv}$ for each edge, and a set of vertices $T$ in the graph, is it possible to find a subtree of $G$ containing all vertices in $T$ such that the total weight of the tree is no more than a given value $k$?
Input: Graph $G = (V, E)$ containing $n$ vertices and $m$ edges. Set of vertices $T \subseteq V$. Set of weights made up of values $w_{uv}$ for each edge $(u,v) \in E$. Positive integer $k$.
Input size: $O\left(n + \sum_{(u,v) \in E} \log_2 w_{uv} + \log_2 k\right)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$y_{uv}$ and $y_{vu}$

for each edge $(u,v) \in E$,

$r_v$

for each $v \in V$,

$m_v$

for each $v \in V$.
Define $A(v)$ to be the set of vertices adjacent to $v$, that is, $u \in A(v)$ if and only if edge $(u,v)$ exists in $E$. Then the 0–1 Integer Program is described by the following constraints:
(13) $\sum_{v \in V} r_v = 1$,
(14) $r_v + m_v \ge 1$, for each $v \in T$,
(15) $r_v + m_v \le 1$, for each $v \in V$,
(16) $\sum_{u \in A(v)} y_{uv} = m_v$, for each $v \in V$,
(17) $y_{vu} - r_v - m_v \le 0$, for each $v \in V$ and $u \in A(v)$,
(18) $y_{uv} + r_v \le 1$, for each $v \in V$ and $u \in A(v)$,
(19) $\sum_{(u,v) \in E} w_{uv}\,(y_{uv} + y_{vu}) \le k$.
Explanation: The intention is for variables $y_{uv}$ to be 1 if edge $(u,v)$ is to be used in the subtree, oriented from $u$ to $v$, and 0 otherwise. The variables are oriented in the sense that edges must emanate from vertices lower in the tree, so for example, if vertex $u$ is the root vertex and edge $(u,v)$ appears in the tree, then $y_{uv} = 1$ but $y_{vu} = 0$. Variable $r_v$ is designed to be 1 if vertex $v$ is the root of the subtree, and 0 otherwise. Similarly, variable $m_v$ is designed to be 1 if vertex $v$ is any member of the subtree other than the root vertex, and 0 otherwise.
Constraint (13) ensures there is only one root. Constraints (14) ensure that every vertex in $T$ is either the root of the subtree, or another member of the subtree. Constraints (14)–(15) ensure that no vertex is viewed as being both the root of the subtree and also another member of the subtree. Constraints (16) ensure that a vertex is only seen as a (non-root) member of the subtree if a single edge enters it (as in the definition of a tree). Constraints (17) ensure that any edge emanating from vertex $v$ may only be used if vertex $v$ is a member of the subtree. Constraints (18) ensure that any edges incident on the root vertex and contained in the subtree must only emanate from the root vertex, rather than go to the root vertex.
Constraints (13)–(18) combine to ensure that the set of $y$ variables corresponds to a valid, connected tree that contains $T$. Then, finally, constraint (19) ensures that the weight of the subtree does not exceed $k$.
Final Input Size: It can be checked that the number of nonzeros in the 0–1 Integer Program is $O(n + m)$. Inequality constraints (15), (17) and (18) are all constant-bounded, but constraint (19) is not. The maximum difference between the LHS and RHS of constraint (19) is $k$, which theoretically can grow infinitely large. However, if $k$ is larger than the sum of all weights in the graph, then constraint (19) is satisfied automatically and can be ignored. Hence, we may assume that $k$ is not larger than the sum of weights, and so converting (19) to an equality constraint will certainly increase the problem size by less than the original input size. It is clear that the input size of the converted problem is a linear function of the original input size.
3-Dimensional Matching to 0–1 Integer Programming
3-Dimensional Matching: Given $U \subseteq T \times T \times T$, is it possible to find $W \subseteq U$ such that $W$ contains $|T|$ entries, and no two entries of $W$ agree in any coordinate?
Input: A family $U$, containing $N$ triples (the $j$th triple being called $u_j$), each of which contains three entries from a finite set $T$ of size $q$.
Input size: $O(N \log_2 q)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$x_j$

for $j = 1, \ldots, N$.
Define $a^c_{ij}$ to be 1 if triple $u_j$ contains element $t_i$ in coordinate $c$, and 0 otherwise. Then the 0–1 Integer Program is described by the following constraints:
(20) $\sum_{j=1}^{N} a^c_{ij} x_j = 1$, for $i = 1, \ldots, q$ and $c = 1, 2, 3$.
Explanation: The intention is for variables to be 1 if set is included in , and 0 otherwise. Since there should be sets included in , and no two entries of are to agree in any coordinate, it is clear that the sets of will cover every single entry in for all three coordinates precisely once. Conversely, if every entry in each coordinate appears precisely once, then it must be the case that , as desired. To that end, constraints (20) request that each entry appears precisely once, which will only be possible if a 3dimensional matching can be found.
Final Input Size: The number of nonzeros in the 01 Integer Program is precisely equal to , and there are RHS entries. For any meaningful instance of 3Dimensional Matching, . It it clear that the input size of the converted problem is a linear function of the original input size.
Knapsack to 0–1 Integer Programming
Knapsack: Given a set of integers and a target value $b$, is it possible to choose some integers from the set so that their sum is equal to $b$?
Input: A set $A$ containing $N$ integers, and a target value $b$.
Input size: $O\left(\sum_{i=1}^{N} \log_2 a_i + \log_2 b\right)$.
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Produce a new instance of 0–1 Integer Programming by introducing binary variables:

$x_i$

for $i = 1, \ldots, N$.
Denote by $a_i$ the $i$th entry of $A$. Then the 0–1 Integer Program is described by the following constraint:
(21) $\sum_{i=1}^{N} a_i x_i = b$.
Explanation: The intention is for variables $x_i$ to be 1 if integer $a_i$ is chosen in the sum, and 0 otherwise. Then, it is clear by definition that the sole constraint (21) describes the Knapsack problem perfectly.
Final Input Size: There are exactly $N$ nonzero entries in the 0–1 Integer Program, and a single RHS entry. Then it is clear that the input size of the converted problem is precisely equal to the input size of the original problem.
Partition to Knapsack
Partition: Given a set of integers, is it possible to choose some integers from the set so that their sum is equal to the sum of the integers not selected?
Input: A set $A$ containing $N$ integers.
Input size: $O\left(\sum_{i=1}^{N} \log_2 a_i\right)$.
Knapsack: Given a set of integers and a target value $b$, is it possible to choose some integers from the set so that their sum is equal to $b$?
Conversion: Denote by the th entry of . Then the new instance of Knapsack is produced by simply as the new set of integers, and setting .
Explanation: Clearly, if integers can be chosen from $S$ such that their total is half of the sum of all integers in $S$, then the remaining integers will also sum to the same value, satisfying the Partition condition. (If the sum of all integers in $S$ is odd, it cannot be split evenly, and both instances have the answer NO.)
Final Input Size: Since $S$ is provided for both Partition and Knapsack, the only additional input is $T$. Since $T$ is a number less than the sum of all entries of $S$, it can certainly be encoded in no more bits than the input $S$ itself. It is clear that the input size of the converted problem is a linear function of the original input size.
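The Partition conversion is equally short to sketch. The name `partition_to_knapsack` below is hypothetical, and the standard reachable-sums subset-sum check is included only to demonstrate that the two instances agree on small inputs.

```python
def partition_to_knapsack(a):
    """Partition -> Knapsack: keep the same integers and set T to half
    the total. If the total is odd, no equal split exists, so return a
    target no subset can reach (making the Knapsack answer NO)."""
    s = sum(a)
    return (list(a), s + 1) if s % 2 else (list(a), s // 2)

def has_subset_sum(a, T):
    """Standard reachable-sums check: which totals can a subset of a form?"""
    reachable = {0}
    for v in a:
        reachable |= {r + v for r in reachable}
    return T in reachable
```

For example, `[1, 5, 11, 5]` splits as `{11}` versus `{1, 5, 5}`, whereas `[1, 2, 5]` admits no even split.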
Max Cut to 0–1 Integer Programming
Max Cut: Given a graph with weights on each edge, and a positive integer $k$, is it possible to select a set $S$ of vertices such that the sum of weights on edges with exactly one vertex in $S$ is at least as big as $k$?
Input: Graph $G$ containing $n$ vertices and $m$ edges. Weights $w_{ij}$ for all edges $(i,j)$. Positive integer $k$.
Input Size:
0–1 Integer Programming: Is it possible to satisfy a set of linear equations in binary variables?
Conversion: Define $W = \sum_{(i,j) \in E} w_{ij}$, the total weight of all edges. Then produce a new instance of 0–1 Integer Programming by introducing binary variables
$x_i \in \{0, 1\}$
for $i = 1, \dots, n$, and
$e_{ij} \in \{0, 1\}$
for each edge $(i,j) \in E$.
Then the 0–1 Integer Program is described by the following constraints:
(22) $e_{ij} \ge x_i + x_j - 1$ for each edge $(i,j) \in E$,
(23) $e_{ij} \ge 1 - x_i - x_j$ for each edge $(i,j) \in E$,
(24) $e_{ij} \le 1 - x_i + x_j$ for each edge $(i,j) \in E$,
(25) $e_{ij} \le 1 + x_i - x_j$ for each edge $(i,j) \in E$,
(26) $\sum_{(i,j) \in E} w_{ij} e_{ij} \le W - k.$
Explanation: The intention is for variable $x_i$ to be 0 if vertex $i$ is to be selected in $S$, and 1 otherwise. Also, $e_{ij}$ is to be set to 0 if edge $(i,j)$ is added in the sum, and 1 otherwise. The variables are chosen this way so the problem can be reformulated with a less-than inequality rather than a greater-than inequality. To that end, constraints (22)–(25) are designed in such a way that they can only all be satisfied if the following condition is true: $e_{ij} = 1$ if and only if $x_i = x_j$. Finally, constraint (26) ensures that the total weight of all edges that do not have exactly one vertex in $S$ is no bigger than $W - k$, which is equivalent to the desired condition on $k$.
Final Input Size: It can be checked that the number of nonzeros in the 0–1 Integer Program is linear in $m$, as is the number of RHS entries. Constraints (22)–(25) are all constant-bounded. Constraint (26) is not constant-bounded, and its RHS is a potentially large number not used as input in the original problem, so we consider this constraint individually. Consider first encoding the number $W - k$. This number is no larger than the sum of all weights, and so it can be encoded in fewer bits than it takes to encode the weights in the original input. Then, it is clear that the maximum difference between the LHS and RHS of (26) is no larger than $W$, so the increase in problem size after converting (26) to an equality constraint is smaller than the original input size as well. It is clear that the input size of the converted problem is a linear function of the original input size.
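The converted program can be checked by brute force on small graphs. In the sketch below (hypothetical names, illustration only), we use the fact that constraints (22)–(25) force each edge variable to equal 1 exactly when its endpoints receive equal $x$-values, so the check substitutes that forced value directly rather than enumerating the edge variables separately.

```python
from itertools import product

def maxcut_ip_feasible(n, edges, k):
    """Brute-force check of the converted 0-1 program (small graphs only).
    edges is a list of (u, v, w) triples. The linearization forces each
    edge variable to be 1 exactly when its endpoints get equal x-values,
    so we substitute that value and test constraint (26):
    total weight of uncut edges <= W - k."""
    W = sum(w for _, _, w in edges)
    for x in product((0, 1), repeat=n):
        uncut = sum(w for u, v, w in edges if x[u] == x[v])
        if uncut <= W - k:
            return True
    return False
```

On a unit-weight triangle the maximum cut has weight 2, so the check succeeds for $k = 2$ but fails for $k = 3$.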
Ambiguity in Input Size
In the previous section we have determined the input size of each problem by considering the amount of data required to store the information that describes the problem instance. However, it is not always clear how this should be computed. For example, is it reasonable to assume a problem is stored in a format that requires preprocessing to be used by an algorithm? Alternatively, is it reasonable to think of input size as the amount of information required to be stored in memory by a standard algorithm for that problem? We highlight this ambiguity with the following example.
Chromatic Number to Clique Cover (Karp)
Chromatic Number: Is it possible to assign a colour to each vertex of a graph such that no two adjacent vertices have the same colour, and the total number of colours is no bigger than $k$?
Input: A graph $G$ containing $n$ vertices and $m$ edges. A positive integer $k$.
Input size:
Clique Cover: Is it possible to select no more than $k$ cliques in a graph such that none of the cliques overlap and their union covers all vertices?
In the following, we provide a conversion for which the input size is ambiguously defined.
Conversion: Produce a new instance of Clique Cover by defining:
$G' = \bar{G}$, the complement of $G$,
$k' = k$.
Explanation: Suppose there is a clique cover of $G'$ containing cliques $C_1, \dots, C_j$ for $j \le k$. Then consider the vertices in clique $C_i$. One can safely colour these vertices the same colour in $G$, as by definition of $G'$ none will be adjacent to any other in $G$. It is clear that the two problems are then equivalent.
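The conversion itself is a straightforward graph complement; the helper below (a hypothetical name, vertices numbered $0$ to $n-1$) makes the density issue concrete, since a sparse edge list can come back quadratically larger.

```python
def chromatic_to_clique_cover(n, edges, k):
    """Chromatic Number -> Clique Cover: complement the graph, keep k.
    Vertices are 0..n-1; edges is a list of (u, v) pairs with u < v."""
    present = {(u, v) for u, v in edges} | {(v, u) for u, v in edges}
    complement = [(u, v) for u in range(n) for v in range(u + 1, n)
                  if (u, v) not in present]
    return complement, k
```

For a sparse input on $n$ vertices, the returned edge list has close to $n(n-1)/2$ entries, which is exactly the ambiguity at issue here.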
Potential Issue: It is unclear how to define the input size of the resultant problem. Technically, if $G$ is a sparse graph, then $G'$ is a dense graph. In this case, the input size of the new Clique Cover instance would be $O(n^2)$. However, it is possible to define $G'$ by storing a sparse amount of information (i.e., by storing $G$) and then taking the complement once $G$ has been read in. Should the size of the problem be described in terms of the most efficient method of storage, or the number of elements in the problem? If we imagine that we had a black box solver for Clique Cover that required us to input a graph directly, we would not be able to submit a different graph and demand that the solver first complement it. Alternatively, if we consider the input size to be the number of elements that will be manipulated by any potential solver, then we would be required to consider the dense graph $G'$ explicitly.
If we consider the size of the problem to be purely the number of bits it takes to encode the problem in its most compressed form, the above constitutes a linearly-growing reduction. However, if we do not permit special processing of the instance, then it does not.
If $G$ is a dense graph, then the above conversion is linearly-growing regardless of how we store the data.
Discussion and Future Work
In this paper, we introduced the notion of a linear orbit of an NP-complete problem $P$. Namely, the orbit consists of the set of problems in NP that can be converted to $P$ by a conversion that results in linear growth in the input size. In particular, we showed that there exists a kernel subset $K$ of the set of the 21 classical NP-complete problems stated in Karp's seminal 1972 paper [4]. Every one of the 21 problems belongs to the linear orbit of at least one of the six problems in $K$.
These results suggest that efficient algorithms to solve problems in $K$ may offer opportunities to more efficiently solve the problems in their linear orbits. It is hoped that this can be extended from problems in their decision framework to the more practical optimisation frameworks.
It is worth contemplating how much smaller $K$ would be if we were to permit reductions with larger growth. For example, if we permit reductions that result in quasilinear growth, it is possible to reduce HCP to SAT [3]. If we further permit slightly larger growth, the reduction of Chromatic Number to Clique Cover above is permitted, and it is also possible to show that Clique Cover can be reduced to 0–1 Integer Programming. If quadratic growth is permitted, Job Sequencing can be reduced to 0–1 Integer Programming. Clearly this is a topic ripe for future research.
References

[1] S. Cook, The complexity of theorem proving procedures, Proceedings of the Third Annual ACM Symposium on Theory of Computing, (1971), 151–158.
 [2] (MR1490579) W. J. Cook, W. H. Cunningham, W. R. Pulleyblank and A. Schrijver, Combinatorial Optimization, Wiley, New York, 1998.
[3] A. Johnson, Quasi-Linear Reduction of Hamiltonian Cycle Problem (HCP) to Satisfiability Problem (SAT), IP.com, Disclosure Number: IPCOM000237123D, 2014.
 [4] (MR0378476) R. M. Karp, Reducibility Among Combinatorial Problems, Springer, New York, 1972.
 [5] (MR1251285) C. H. Papadimitriou, Computational Complexity, AddisonWesley, 1994.