1 Introduction
Given an undirected graph G, an independent set in G is a subset of its vertices such that no two of these vertices are connected by an edge in G. The problem of finding an independent set of maximum cardinality in a graph, the Maximum Independent Set problem (MIS), is one of the most fundamental NP-hard combinatorial optimization problems. Already Karp proved in his famous paper
[Karp72] that the decision version of the Maximum Clique problem, which is equivalent to MIS on the complement graph, is NP-complete. Because of its hardness of computation, we are interested in polynomial-time approximation algorithms for the MIS problem. We say that a polynomial-time algorithm for MIS is an approximation algorithm if it finds an independent set in the input graph whose size is at least the size of a maximum independent set divided by some number; this number, which may be constant or may depend on the input graph's parameters, is called the approximation ratio, guarantee or factor. We are interested in MIS on graphs with maximum degree bounded by Δ. This problem is known for its inherent hardness of approximation. Even for Δ = 3, MIS is known to be APX-hard, see [AlimontiK00]. There are also explicit constant hardness ratios known for small constant values of Δ [Chlebiks2003]. As Δ grows, stronger, asymptotic, hardness of approximation results are known: under the Unique Games Conjecture [AKS2011], and assuming that P ≠ NP [SOChan16]. The best known polynomial-time approximation ratios for this problem for small values of Δ are achieved by a local search approach [Berman1999, Berman:1994:AMI:314464.314570, Chlebik2004], at the expense of a huge running time, e.g., [HalldorssonRadha1994], where n is the number of vertices in the graph. The best known asymptotic polynomial-time approximation ratio for MIS is based on a semidefinite programming relaxation [Halperin02]; an even better asymptotic ratio is known if a larger running time is allowed [Bansal:2015:LTF:2746539.2746607]. In this paper we are primarily interested in MIS on graphs with small to moderate values of Δ.
Probably the best known algorithmic paradigm to find large independent sets is the minimum-degree greedy method, which repeatedly chooses a minimum-degree vertex in the current graph as part of the solution and deletes it and its neighbors until the remaining graph is empty. This basic algorithm is profoundly simple and time-efficient and can be implemented to run in linear time. The first published approximation guarantee of this greedy algorithm for MIS that we are aware of can be inferred from the proof of the following conjecture of Erdős, due to Hajnal and Szemerédi [HajnalS1970, Berge1973]: every graph with n vertices and maximum degree Δ can be partitioned into Δ + 1 disjoint independent sets of almost equal sizes. The approximation ratio of greedy was improved by Simon [Simon90]. The best known analysis of greedy, by Halldórsson and Radhakrishnan [Halldorsson1997, HalldorssonR1994], implies an approximation ratio of (Δ + 2)/3 for MIS, and better ratios are known for small values of Δ.
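As a concrete illustration of this paradigm (our sketch, not tied to any particular paper's pseudocode), the minimum-degree greedy can be written in a few lines, representing the graph as a dict mapping each vertex to the set of its neighbors:

```python
def greedy_mis(graph):
    """Minimum-degree greedy for independent set.

    graph: dict mapping each vertex to the set of its neighbors.
    Returns a maximal independent set as a list, in the order chosen.
    """
    adj = {v: set(nbrs) for v, nbrs in graph.items()}  # work on a copy
    solution = []
    while adj:
        # choose a vertex of minimum degree in the current graph
        v = min(adj, key=lambda u: len(adj[u]))
        solution.append(v)
        # delete v together with all of its neighbors
        for u in adj[v] | {v}:
            for w in adj.pop(u):
                if w in adj:
                    adj[w].discard(u)
    return solution
```

For example, on a five-vertex path this finds an optimal independent set of size three; on worst-case bounded-degree instances the ratio degrades toward the bounds discussed above.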
Halldórsson and Yoshihara [10.1007/BFb0015418] asked in their paper the following fundamental question: what is the power of the greedy algorithm when we augment it with advice, that is, a fast method that tells greedy which minimum-degree vertex to choose if there are many? They, for instance, proved that no advice can give greedy a better than 5/4 approximation for MIS with Δ = 3. On the other hand, they provide advice for greedy that implies a 3/2 approximation, see algorithm MoreEdges in [10.1007/BFb0015418], and an improved approximation, see algorithm Simplicial in [10.1007/BFb0015418].^{1}^{1}1We have found a counterexample to the ratio claimed for Simplicial, see the example in Figure 4. Simplicial may recursively choose the top vertex in those instances, which leads to solutions whose approximation ratio degrades as the instances grow. This counterexample has also been verified and confirmed by Halldórsson [Halldorsson2019]. Using our new techniques, we can prove an approximation ratio for Simplicial for MIS on subcubic graphs. Technically, however, Simplicial is not a greedy algorithm, because due to its branchy reduction it might have iterations where it does not choose the current minimum-degree vertex [Halldorsson2019]. In fact, the claimed Simplicial result has been retracted by Halldórsson, see [Halldorsson2019]. Thus, 3/2 is the best previously known bound on the approximation ratio of greedy on graphs with maximum degree at most 3, which are also called subcubic graphs.
Motivation. In addition to its simplicity and time efficiency, the greedy algorithm for MIS is also important in its own right. Following Halldórsson and Radhakrishnan [Halldorsson1997], the greedy algorithm is known to have several important properties: it achieves the celebrated Turán bound [Turan1941, Erdos1970] and its generalization in terms of degree sequences [Wei1981], and it achieves a good graph coloring approximation when applied iteratively as a coloring method [DSJohnson1974]. Finally, the greedy algorithm finds optimal independent sets in trees, complete graphs, series-parallel graphs, cographs, split graphs, regular bipartite graphs, and graphs with maximum degree at most 2 [Halldorsson1997, BODLAENDER1997101]. Another important but non-explicit class of graphs for which greedy is optimal is the class of well-covered graphs, introduced by Plummer [Plummer70] and widely studied, see [Plummer93] for a survey. A graph is well-covered if all its maximal independent sets have the same size. In particular, because any greedy set is maximal, the greedy algorithm is optimal on such graphs. Furthermore, the greedy algorithm finds frequent applications in graph theory, helping to prove that certain classes of graphs have large independent sets: e.g., it almost always finds a 2-approximation to MIS in a random graph [McDiarmid84], and it provides an independent set of guaranteed linear size in random cubic graphs with probability tending to 1 as n, the number of vertices, tends to infinity [FriezeS94].
1.1 Our new results
Positive results: upper bounds. We study the design and analysis of greedy approximation algorithms with advice for MIS on bounded-degree graphs. Our main technical contribution is a new class of payment schemes for proving improved and tight approximation ratios of greedy with advice. With our new payment schemes we obtain the best known analyses of the greedy algorithm on bounded-degree graphs, which significantly improve on the previously known analyses. As a warm-up, we first apply these new techniques to MIS on graphs with maximum degree bounded by any Δ to obtain the following results:

A simple and short proof of the (Δ + 2)/3 approximation ratio of any greedy algorithm (i.e., without any advice), a result proved previously by Halldórsson and Radhakrishnan [Halldorsson1997, HalldorssonR1994]. We extend a lower bound construction of Halldórsson and Radhakrishnan [Halldorsson1997] to prove a lower bound on the approximation ratio of any greedy algorithm (with any, even exponential-time, advice).

A simple proof of an improved approximation ratio for any greedy algorithm on triangle-free graphs with maximum degree Δ, which improves the previous best known greedy ratio [HalldorssonR1994] for MIS on triangle-free graphs. Compared to the proof in [HalldorssonR1994], which uses a technique of Shearer [Shearer83], our proof is extremely simple and short.
We see that as Δ increases, there is no hope of obtaining a significantly better approximation by using any, even exponential-time, advice for greedy. This motivates us to focus on small values of Δ. Indeed, we have to develop our payment scheme techniques significantly further compared to the above applications to MIS on graphs with maximum degree Δ. In particular, we obtain the following results for the MIS and Minimum Vertex Cover (MVC) problems:

We completely resolve the open problem from the paper of Halldórsson and Yoshihara [10.1007/BFb0015418] and design a fast, ultimate advice for greedy obtaining a 5/4 approximation, that is, the best possible greedy ratio for MIS on subcubic graphs. A lower bound of 5/4 on the ratio of greedy with any, even exponential-time, advice on such graphs was proved in [10.1007/BFb0015418], and the best previously known ratio of greedy was 3/2 [10.1007/BFb0015418]. Halldórsson and Radhakrishnan [Halldorsson1997] also prove a lower bound for any greedy algorithm that does not use any advice for MIS on subcubic graphs. Our new greedy approximation algorithm has a low-order polynomial running time in n, the number of vertices in the graph. For comparison, the best known algorithm for this problem is a local search approximation algorithm of Berman and Fujito [Berman1999], which, with an analysis from [HalldorssonRadha1994], has a huge running time; in particular, fixing the approximation ratio of this local search algorithm close to its best value results in a very high-degree polynomial running time, see [Chlebik2004].

We obtain a greedy approximation algorithm for MIS on subcubic graphs with linear running time. By using our payment scheme, we can also provide a simple proof of the 3/2 approximation ratio of the greedy algorithm called MoreEdges in [HalldorssonRadha1994], which was the best previously known approximation ratio of greedy for MIS on subcubic graphs.

Then, we also obtain a fast approximation algorithm for the MVC problem on subcubic graphs. The previous best algorithm for this problem achieved a comparable ratio only with a much larger running time [HalldorssonRadha1994], and even the guarantee for MVC on subcubic graphs from [Chlebik2004] required a large running time.
To prove these results we develop a payment technique that pays for the greedy solution via a specially defined class of potential functions. For this new class of potentials on subcubic graphs, we develop a very specific inductive process, which takes into account the "parities" and priorities of the reductions performed by greedy, to prove that the value of the potential is kept locally above a fixed threshold. An additional, global argument is required to lower-bound the global potential. For more details about our new techniques, see Section 1.2.
Negative results: lower bounds. We complement our positive upper bound results with impossibility (lower bound) results, which suggest that our upper bounds on the design of good advice for greedy are essentially (close to) best possible, or pose non-trivial computational problems. We believe that this also suggests that the design of good advice for greedy is a non-trivial task in its own right.
Let us first observe that a solution output by greedy is a maximal independent set. A graph is called well-covered if all of its maximal independent sets are of the same size, see [Plummer70, Plummer93]. Caro et al. [CARO1996137] study the computational complexity of the problem of deciding if a given graph is well-covered. They prove that this problem is co-NP-complete even on restricted classes of graphs.
To prove our lower bounds we resort to a notion which captures the essence of greedy (the well-covered property reveals only a very restricted feature of greedy). Namely, we study the computational complexity of computing good advice for the greedy algorithm for MIS. Towards this goal, Bodlaender et al. [BODLAENDER1997101] defined a problem called MaxGreedy, which, given an input graph, asks for the largest possible independent set obtainable by any greedy algorithm. Thus, MaxGreedy asks for computing the best advice for greedy, i.e., one that leads to the largest possible greedy independent set. They proved that computing an advice which finds an approximate solution to the MaxGreedy problem is co-NP-hard for any fixed rational approximation ratio, and that this problem remains NP-complete [BODLAENDER1997101].
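To make the MaxGreedy problem concrete, here is a brute-force sketch (ours, purely illustrative) that branches over every minimum-degree choice and returns the size of the largest independent set that any greedy execution, i.e., any advice, can produce; its exponential running time is consistent with the hardness results discussed next.

```python
def max_greedy(adj):
    """Largest independent set size reachable by some greedy execution.

    adj: dict mapping each vertex to the set of its neighbors.
    Branches over all minimum-degree vertices, hence exponential time.
    """
    if not adj:
        return 0
    dmin = min(len(nbrs) for nbrs in adj.values())
    best = 0
    for v in [u for u in adj if len(adj[u]) == dmin]:
        ground = adj[v] | {v}  # vertices a greedy step on v removes
        rest = {u: adj[u] - ground for u in adj if u not in ground}
        best = max(best, 1 + max_greedy(rest))
    return best
```

Every recursion path corresponds to one legal greedy execution, so the returned value is exactly the objective of MaxGreedy on the instance.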
We significantly improve the previously known results on the hardness of computing good advice for greedy, by obtaining the following new results:

We prove that the MaxGreedy problem is NP-complete even on cubic planar graphs. This significantly strengthens the NP-completeness result of Bodlaender et al. [BODLAENDER1997101], who prove it on arbitrary, not even bounded-degree, graphs. This result suggests that the problem of designing and analysing good advice for greedy even on cubic planar graphs is difficult.

We further prove that MaxGreedy is even NP-hard to approximate to within certain explicit ratios by a reduction from 3-SAT, and hard to approximate to within a stronger ratio under the Exponential Time Hypothesis. We extend this construction to the class of graphs with bounded degree Δ. We prove that MaxGreedy remains hard to approximate on this class, to within a factor nearly matching the approximation ratio of the greedy algorithm on this class.

We prove that the MaxGreedy problem remains hard to approximate on bipartite graphs. This is quite interesting because it is well known that the MIS problem is polynomially solvable on bipartite graphs.
Finally, we extend a lower bound construction of Halldórsson and Radhakrishnan [Halldorsson1997] to prove a lower bound on the approximation ratio of any greedy algorithm (with any, even exponential-time, advice) on graphs with maximum degree Δ.
1.2 Our technical contributions
Our main technical contributions are a class of potential functions and payment schemes, together with an inductive proof technique, used to pay for solutions of greedy algorithms for MIS. These new techniques lead to very precise and tight analyses of the approximation ratios of greedy algorithms.
Here we explain the intuition behind our proof of the approximation ratio of the greedy algorithms on subcubic graphs, which uses the full technical machinery of our approach. Let G be a given input graph with an optimal independent set OPT. The greedy algorithm executes reductions on G; a reduction picks a minimum-degree vertex in the current graph (the root of the reduction) into the solution and removes its neighbors, see Figure 1 for examples of reductions. Suppose the first reduction executed by greedy is bad: its root has degree 2, and both neighbors of the root are in OPT. Then, locally, the approximation ratio is 2. To bring the approximation ratio down to 5/4, we must prove that, in the future, there will exist the equivalent of at least three reductions, called good, each of which adds one vertex to the solution and removes only one vertex from OPT. Moreover, for each executed bad reduction, there must exist a unique (equivalent of) three good reductions.
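In code, this local accounting can be sketched as follows (our simplified illustration: it only compares the one vertex greedy gains against the optimal vertices destroyed, ignoring the paper's loan/debt bookkeeping); `opt` stands for a fixed maximum independent set:

```python
def reduction_balance(adj, root, opt):
    """Return (gain, loss): greedy gains the root, and loses from opt
    every black vertex in the reduction's ground N[root]."""
    ground = adj[root] | {root}
    return 1, len(ground & opt)

def is_bad(adj, root, opt):
    # e.g. a white degree-2 root with both neighbors black: loss 2 > gain 1
    gain, loss = reduction_balance(adj, root, opt)
    return loss > gain
```

For instance, on the path 1–2–3 with opt = {1, 3}, rooting the reduction at 2 destroys two optimal vertices for one greedy vertex (bad), while rooting it at 1 is good.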
Consider, for example, the family of instances of MIS in Figure 4. There, black vertices belong to the optimal solution, while white ones do not. This class of instances is due to Halldórsson and Yoshihara [10.1007/BFb0015418]. Any greedy algorithm executes many bad reductions on such an instance, and only at the very end are good reductions, triangles, executed. It can easily be checked that there are just enough good reductions to uniquely map three of them to each executed bad reduction (in fact, in the whole process there is exactly one good reduction that is unused). This essentially shows a lower bound of 5/4 on the ratio of any greedy algorithm as the instance size tends to infinity. We see that the "payment" for bad reductions arrives, but very late! Such a "payment" may not only be late; we also do not know when the "good" reductions providing it are executed. Thus good reductions might be very irregularly distributed. For instance, suppose that the first reduction in the base graph of Figure 4 has two of its contact edges (these are the four edges going down from its two black vertices) going to the same white vertex, creating a follow-up reduction of degree one. Then that degree-one reduction is good and, when executed, it can immediately (partially) pay for the bad reduction.
Question: How do we prove the existence of such a highly non-local and irregular payment scheme? We will define a special potential of a reduction, see Section 3.1, which will imply the existence of two sources of "payments": in the past, from the very first executed reduction, and in the future, from the executed good reductions. Moreover, our potential will be defined in such a way that each executed reduction can in some sense be "almost paid for" locally, so that at every point in time the potential of each connected component is kept above a fixed threshold. For example, in the instance from Figure 4, the execution of the first bad reduction creates connected components, in each of which greedy then executes reductions independently.
We will have an intricate inductive argument, see the Inductive Low-debt Lemma 12, showing that an execution of a sequence of reductions in a connected input graph keeps the total potential above the required threshold. In the induction step, some reductions may create multiple components with deficient potential. In such cases, when we cannot locally obtain enough potential, we will make sure that, even before the execution of such a reduction, these components contain reductions with strictly higher greedy priority, thus leading to a contradiction; or the reduction itself can pay for such components. Having lower-bounded the total potential of an execution in the (connected) input graph, we will finally pay for it via the very first executed reduction, for which we show that it always possesses an extra saving (this is the payment from the past).
Intuitively, our potential will imply that we can ship the payments from good reductions executed anywhere in the graph to the places where bad reductions need those payments. Such a shipment is unique, in the above sense that there exist three (equivalent) good reductions per single bad reduction. We will realize this shipment by deferring the need for payment into the future along edges, called contact edges, which are incident to the neighbors of the reduction's root vertex. The contact edges created by a bad reduction will be called loan edges. Each loan edge will have a "dual" edge (physically identical to it), called a debt edge, which will be inherited by the future reductions directly created by the bad reduction via its contact edges.
The potential of a reduction will account for the number of vertices chosen into the solution and the number of vertices removed from the optimal solution, plus the number of loan edges, minus the number of debt edges. Thus, we will account for such edges very precisely. This process is complicated by the fact that vertices can be black (in a fixed maximum independent set) or white (outside of it), and whether a reduction is "bad" depends on the distribution of black and white vertices in the reduction. For instance, a reduction like the first one in the base graph of Figure 4 might have a black root, in which case the root's two neighbors are white. Such a reduction will in fact be "good" when executed.
Some reductions are "bad" and they create "many" contact edges, like the first reduction in the base graph of Figure 4 (with a white root). Those "many" contact edges, called loan edges, will in the future create good reductions that pay for the bad one. Observe that such a reduction has more loan edges than debt edges, so it creates a surplus of credit. On the other hand, when the greedy process ends, it can end only with terminal reductions. These terminal reductions do not have any contact edges, but they have the property that for any white vertex added to the solution, they remove only one black vertex. That is why they can "pay" for the previous bad reductions.
This explains only some of the intuition for why such a highly non-local and irregular payment scheme can have a chance to succeed. Based on this intuition, our guess was (indeed confirmed by our final proof) that a single such contact edge, a loan edge, translates in a one-to-one way to a single (equivalent of a) good reduction. Furthermore, and most importantly, this approach enables us to "predict" the precise future graph structure by using the contact edges. It also enables us to keep track of the past reductions, by keeping track of the debt edges and the current state of the savings. And indeed, we have succeeded in building a theory that delivers a complete and precise such payment scheme.
This approach allows us to achieve a very interesting kind of result here, namely, to (essentially) characterize all graphs that can have negative potential! See Definition 11 and Lemma 12.
These ideas lead to an analysis which is extremely tight, essentially up to an additive unit, in the following sense. We prove that (a version of) our approximate greedy algorithm attains our bound on any subcubic graph, whereas when run on the lower bound instances of Halldórsson and Yoshihara [10.1007/BFb0015418], it finds a solution whose size matches that bound almost exactly. A somewhat unusual aspect of our result is that we can prove that any lower bound example showing exact tightness of our guarantee must be an infinite family of graphs, see the remark after the proof of Theorem 8.
Motivation. To motivate our upper bound results further, we study the computational complexity of designing good advice for greedy. Given a class of graphs, there always exists a family of graphs in it that forces the worst possible approximation performance of greedy with any, even exponential-time, advice.
Question: To what extent can we reach this worst-case performance by designing a polynomial-time advice for greedy? As an example, we have answered this question completely and very precisely in the case of the class of all subcubic graphs and the family of recursively constructed instances in Figure 4, due to Halldórsson and Yoshihara [10.1007/BFb0015418]. Interestingly, our algorithm outputs the best greedy independent set on those instances, and we prove that this independent set is precisely a 4/5 fraction of the maximum independent set of the instance. However, we prove that in many cases this problem is highly computationally intractable: NP-hard, NP-complete, APX-hard and even hard to approximate with respect to the approximation ratios achievable by greedy with advice, depending on the class of graphs, see Section 6 for details. For instance, the problem of designing the best advice for greedy is NP-complete even on planar cubic graphs, see Theorem 26.
These computational complexity results suggest that the design and analysis of efficient advice for the greedy algorithm is a non-trivial task. On the other hand, indeed, our ultimate analysis of greedy on subcubic graphs, in Section 4, is quite complex.
Differences between our proof and that of Halldórsson and Yoshihara [10.1007/BFb0015418]. The success of our approach depends on three parts: the definition of the potential, the design of the greedy rules, and the analysis that excludes certain reductions and pays for some of them. These three parts interact in a very intricate way. Our definition of the potential not only captures the graph structure of problematic reductions by itself; through its interaction with the other two parts, it also captures how the current reduction relates to the reductions executed before it and after it. This is captured by considering the different types of edges, called contact edges, which connect those reductions or sets of reductions.
According to our potential, there are two kinds of reductions that are particularly problematic to deal with. These are odd isolated cycles containing maximum independent sets, and reductions like reduction (d) in Figure 2, which we call odd-backbone reductions. Their potential is deficient by one unit (for the cycle it can be worse, but we can prevent that case). This means that each such reduction, when executed, needs a unit of payment originating from some good reduction. Consider an instance in Figure 4 as its size tends to infinity. Suppose that greedy executes the top bad reduction and then recursively executes the four bad reductions this creates. Then at the very end it reaches a collection of cycles, and each such cycle needs a payment of one unit. As we see, this can only worsen the ratio. Already on any odd cycle, the potential of [10.1007/BFb0015418] tells us that it needs a payment one unit larger than under our potential; we mention, however, that it is not possible to pay that much to such odd cycles. Our approach is to either prevent greedy from ever ending up with such isolated problematic odd cycles, or to show that we can actually pay for such cycles in some cases. The key to a solution is to carefully prioritize certain reductions that "break" the cycle before it becomes isolated, or to pay for it when there is a spare reduction that can do so. For the bad odd-backbone reductions in Figure 2(d), observe that we could wisely execute them on a black root, which would make them good. But then how do we know which of the two adjacent candidate root vertices is black or white? In fact, a "branchy" reduction of the Simplicial algorithm in [10.1007/BFb0015418] deals with such reductions, but this makes the algorithm non-greedy. We do not know how to use branchy reductions to obtain our approximation ratio, because they would introduce new reductions which would then need to be analyzed. One way, pursued in [10.1007/BFb0015418], is to try to pay for such odd cycles or odd-backbone reductions by some kind of local analysis which collects locally good reductions that can pay. We can show that such a local analysis/payment is not possible, and that a global payment or an explicit exclusion of such reductions is necessary.
Instead, what we do is impose a special greedy order on such odd-backbone reductions, and with this order we prove that we can pay for them whenever they are executed as bad reductions. The source of these payments, however, is non-local, and our scheme proves their unique existence.
Now, to achieve the above payments or avoid bad reductions, we introduce a powerful analysis tool, a special kind of reductions called black and white reductions, see Definition 10. We also introduce an inductive process to argue about the existence of such reductions in Lemma 12. These techniques let us prove that when a reduction that cannot pay is executed, there already exists a strictly higher priority reduction (black or white) in the graph before its execution. This leads to a contradiction with the greedy order. This argument is quite delicate, because the existence of these reductions depends crucially on what kinds of contact edges the executed reduction has, and also on the previously executed reductions.
Most importantly, our definition of the potential is in perfect harmony with our inductive proof that the potential can be kept locally above the required threshold. This lets us link the potential directly to the graph structure of the reductions, see Definition 11, and then characterize the potentially problematic graphs, that is, those with negative potential, which is the core of our proof. The main tool that helps us in this task is our Inductive Low-debt Lemma, see Lemma 12, which enables us to design the greedy order and characterizes the problematic graphs.
Note that we have managed to prove the existence of appropriate payments coming from good reductions by using only "local" inductive arguments, but our payment scheme is inherently non-local. This means that the payment, i.e., the good reductions, can reside very far from the bad reductions for which they pay.
1.3 Further related work
In this section we survey some further related work on MIS; it is selective, because of the vast existing literature on approximating MIS. The MIS problem is known for its notorious approximation hardness. Håstad [hastad1999] proved a strong lower bound of n^{1−ε} on the approximation ratio of the general MIS problem, for any ε > 0, under the assumption that NP ≠ ZPP, where n is the number of vertices of the input graph. This hardness result was derandomized by Zuckerman [Zuckerman:2006:LDE:1132516.1132612], who showed that general MIS is not approximable to within a factor of n^{1−ε} for any ε > 0, assuming that P ≠ NP. The best known approximation ratios for the general MIS problem are O(n/log² n), by Boppana and Halldórsson [Boppana1992], improved to roughly n/log³ n by Feige [Feige04] (this ratio has some additional lower-order factors).
The first known non-trivial approximation ratio for MIS on graphs with maximum degree Δ is obtained from Lovász's algorithmic proof [LOVASZ1975269] of Brooks' coloring theorem.
The best known asymptotic polynomial-time approximation ratio for MIS, i.e., when Δ is large, is based on a semidefinite programming relaxation [Halperin02]. If we allow for some extra time, there exist asymptotically better approximation algorithms with larger running times [Bansal15, Bansal:2015:LTF:2746539.2746607]. For small to moderate values of Δ, Halldórsson and Radhakrishnan [Halldorsson1994Removal, HalldorssonRadha1994], via subgraph removal techniques, obtain improved asymptotic ratios, with a larger running time for relatively small Δ and with linear running time for larger Δ.
Demange and Paschos [DEMANGE1997105] prove that a certain approximation ratio can be obtained in polynomial time, but this ratio is asymptotic as Δ grows. They also prove that an improved approximation ratio can be achieved at the cost of a larger running time, for any fixed integer parameter; this second ratio is also asymptotic. Those algorithms do not apply to small Δ, but only to quite large values of Δ. Khanna et al. [Khanna1998] obtained an approximation for small Δ by local search.
For any value of Δ, Hochbaum [HOCHBAUM1983243], using a coloring technique accompanied by a method of Nemhauser and Trotter [Nemhauser1975], obtained an algorithm with a ratio of Δ/2. Berman and Fürer [Berman:1994:AMI:314464.314570] designed a new algorithm whose performance ratios are arbitrarily close to (Δ + 3)/5 for even Δ and (Δ + 3.25)/5 for odd Δ. Berman and Fujito [Berman1999] obtained a better ratio. Finally, in the latest results of Chlebík and Chlebíková [Chlebik2004], the approximation ratio is slightly better than in the previous results. These algorithms are based on local search and have huge running times.
For the case of subcubic graphs (Δ = 3), MIS is known to be NP-hard to approximate to within an explicit constant [Chlebiks2003]. Hochbaum [HOCHBAUM1983243] presented an algorithm with ratio 3/2. Berman and Fujito [Berman1999] obtain a better ratio with a huge running time; even with a tighter analysis from [HalldorssonRadha1994], the time complexity appears to be very high. Chlebík and Chlebíková [Chlebik2004] show that their approximation ratio is slightly better, and the time complexity of their algorithm is also better. Halldórsson and Radhakrishnan [Halldorsson1994Removal] provide another local search approach based on [Berman:1994:AMI:314464.314570] and obtain one ratio in linear time and a better one in larger polynomial time. Halldórsson and Yoshihara [10.1007/BFb0015418] present a linear-time greedy algorithm with an approximation ratio of 3/2.^{2}^{2}2They also claimed a better ratio of 5/4 in linear time; however, they have retracted this result [Halldorsson2019].
For the minimum vertex cover (MVC) problem, Garey and Johnson [GAREY1976237] presented an approximation algorithm on general graphs. For MVC on subcubic graphs, Hochbaum [HOCHBAUM1983243] provided an improved approximation ratio by using the method of Nemhauser and Trotter [Nemhauser1975]. Berman and Fürer [Berman:1994:AMI:314464.314570] give better ratios by the same approach, and [Chlebik2004] shows that a slightly better ratio can be obtained. These algorithms are based on local search and have huge running times.
2 Definitions and preliminaries
Given a graph G, we denote by V(G) its vertex set and by E(G) its edge set. For a vertex v, let N(v) and N[v] denote, respectively, the open and closed neighborhoods of v in G. The degree of v in G, denoted deg(v), is the size of its open neighborhood. More generally, we define the closed (resp. open) neighborhood of a subset of vertices as the union of the closed (resp. open) neighborhoods of its vertices.
A graph is called subcubic if its maximum degree is at most . If the degree of each vertex in a graph is exactly then it is called cubic.
Given an independent set in , we call a vertex black if it belongs to and white otherwise. We denote by the independence number of , that is, the number of black vertices when is of maximum size in .
3 Greedy
The greedy algorithm, called the basic greedy, or just Greedy , on a graph proceeds as follows. It starts with an empty set . While the graph is nonempty, it finds a vertex of minimum degree in the remaining graph, adds this vertex to and removes and its neighbors from . It is clear that at the end, is an independent set. Let be the ordered output. Let denote the graph after removing vertex and its neighboring vertices. More precisely, and , where is a vertex in that satisfies .
Each iteration of the algorithm is called a basic reduction, denoted by , which can be described by a pair . An execution of our greedy algorithm is the ordered sequence of basic reductions performed by the algorithm.
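As an illustration, the basic greedy and its basic reductions can be sketched as follows; the adjacency-dict representation and all function names are our own, not from the paper.

```python
def basic_greedy(adj):
    """Basic greedy: repeatedly pick a minimum-degree vertex (the root
    of the next basic reduction), add it to the independent set, and
    remove its closed neighborhood (the reduction's ground)."""
    adj = {v: set(nbrs) for v, nbrs in adj.items()}  # work on a copy
    independent = []
    while adj:
        root = min(adj, key=lambda u: len(adj[u]))   # minimum degree
        independent.append(root)
        # remove the ground: the root together with its middle vertices
        for mid in adj.pop(root):
            for contact in adj.pop(mid, set()):
                adj.get(contact, set()).discard(mid)
    return independent
```

On the path a-b-c this returns {a, c}; on graphs of degree at most two (paths and cycles) greedy is optimal, a fact used later in the analysis of cycle reductions.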
To analyse an execution, we will only require local information for each basic reduction. Given a basic reduction , we call its root vertex, its neighbors the middle vertices, and together they form the ground of the reduction, namely the set of vertices which are removed when the reduction is executed, written . Vertices at distance two from the root are the contact vertices. The set of contact vertices is denoted by . Then, the edges between middle and contact vertices are called contact edges.
From now on, we will consider that two basic reductions and are isomorphic if there exists a one-to-one function such that , and are adjacent in if and only if and are adjacent in , and each middle vertex is incident in to the same number of contact edges as in . Finally, the degree of a basic reduction is defined as the degree of its root vertex.
Figure 1 presents a table of all possible basic reductions of degree at most two in subcubic graphs. Notice that the middle vertices must have degrees equal to or greater than the degree of the reduction.
3.1 Potential function of reductions
Suppose that we are given an independent set in a connected graph and an execution of a greedy algorithm on the input graph . This execution is associated to a decreasing sequence of subgraphs of :
where is the induced subgraph of on the set of vertices .
Given a basic reduction , we define the loan edges of as all contact edges with a white contact vertex. Notice that the middle vertex of a loan edge can be either black or white. The loan of reduction , denoted by , corresponds to its total number of loan edges.
We also define the debt of a white vertex in the ground of as the number of times this vertex was incident to a loan edge, let us call it , of a previously executed reduction . Such a loan edge is also called a debt edge of reduction . It turns out that the debt of a white vertex corresponds exactly to the difference between its degree in the original graph and its degree in the current graph . Similarly, we define the debt of a reduction as the sum of the debts of the vertices of its ground.
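To make the bookkeeping concrete, here is a small sketch (our own illustration, not from the paper) that computes the loan and the debt of a single basic reduction, using the fact stated above that the debt of a white ground vertex equals its degree loss between the original graph and the current graph.

```python
def loan_and_debt(G, Gi, root, black):
    """Loan and debt of the basic reduction rooted at `root`, executed
    in the current graph Gi (a subgraph of the original graph G).
    Graphs are adjacency dicts; `black` is the fixed independent set
    used in the analysis (its members are the black vertices)."""
    black = set(black)
    ground = {root} | Gi[root]                # root plus middle vertices
    loan = 0
    for mid in Gi[root]:                      # middle vertices
        for contact in Gi[mid] - ground:      # contact vertices
            if contact not in black:          # loan edge: white contact
                loan += 1
    # debt of a white ground vertex = degree lost since the original graph
    debt = sum(len(G[v]) - len(Gi[v]) for v in ground if v not in black)
    return loan, debt
```

On the path 0-1-2-3-4 with black vertices {1, 4}, the reduction rooted at 0 has loan 1 and debt 0; after its ground is removed, the reduction rooted at 2 has loan 0 and debt 1, so the total loan and total debt of the execution agree, as in Proposition 1.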
Given two parameters , we now define the exact potential of a reduction , for , as
The exact potential of an execution is the sum of the exact potential of all reductions:
Since the independent set produced by the greedy algorithm is maximal and the total debt and the total loan are equal we obtain the following property.
Proposition 1.
Given an execution , we have: .
Proof.
By the definition of the exact potential, this can easily be seen by a simple counting argument. ∎
Suppose we want to analyse the approximation ratio of a greedy algorithm for a given class of graphs . If we manage to find suitable values such that all possible reductions have nonnegative potential, then a direct corollary of Proposition 1 is that Greedy is an approximation algorithm in .
In order to measure the potential of each reduction, we now define a new potential, called simply potential that is a lower bound on the exact potential. This lower bound is obtained by supposing that the debt of each white vertex is maximal, or equivalently, that its degree in the original graph was equal exactly to .
Then we define the potential of reduction , which is now independent of the original graph, as
To evaluate the potential of a reduction, we no longer need to know the set of previously executed reductions, but only the structure of the graph formed by the vertices within distance two of the root, together with which vertices are black or white; this reduces the analysis to a relatively small number of cases. We define the potential of an execution similarly. Obviously, this new potential is a lower bound on the exact potential defined previously, and more precisely, we have the following fact.
Claim 2.
Let be a graph with maximum degree , an independent set in and an execution in . Then,
Proof.
By the definition of the two potentials, this can easily be seen by a simple counting argument. ∎
3.2 Warmup I: New proof of ratio for greedy on degree graphs
Halldórsson and Radhakrishnan [Halldorsson1997, HalldorssonR1994] proved that for any graph with maximum degree , the basic greedy algorithm achieves a approximation ratio. Here, we present an alternative proof of the same result using our payment scheme. Our proof is simpler and shorter than the proof in [Halldorsson1997].
Let us use the potential from the previous section, with parameters and , where if , and otherwise. The choice of the value simply ensures that the potential value is an integer. As we remarked above, if we can prove that the potential of any reduction is nonnegative, then the approximation ratio of Greedy on graphs with maximum degree is .
Lemma 3.
Let be a graph with maximum degree . For any basic reduction and any independent set we have
where if , and otherwise.
Proof.
Let be the set of all possible basic reductions, and let be a maximum independent set in the input graph. We note that although there are many types of reductions in , their structure is highly regular. The idea of the proof is to find the worst type of reduction and show that its potential is nonnegative. Observe that if we want to find a reduction minimizing the potential , such a reduction intuitively needs more debt edges and vertices in and fewer loan edges. Also, if is the root of reduction , then for each , if , then , by the greedy rule. For any reduction , let be the number of vertices in and let be the number of vertices in . We have the following formulas:
We will now justify these bounds. Let be the current graph just before is executed. Note first that the degree of the root of is . The lower bound on depends, by definition, on the vertices in . By the greedy order, for each vertex , . There are at most vertices not in which can be connected to ; thus, the total number of loan edges of is at least , and we have such vertices. Note that in this argument we have possibly missed all loan edges that are contact edges of with both end vertices from . The upper bound on depends on , the degree of the root vertex, and the number of vertices not in . The number of debt edges is at most , as otherwise the greedy order would be violated, and we have vertices not in .
Let . The question now is to find the minimum value of under the constraints . We will first prove that for any . For any fixed and , let us treat the function as a function of . We know that it is a parabola with its global minimum at the point such that , which gives us . Plugging into , we obtain the following function:
Similarly as above for any fixed , we see that the function as a function of is a parabola with the global minimum for such that , which gives us that . Plugging in we obtain the following:
From the above we have that for any .
Now, let us observe that if , then with is an integer whenever and are integers. This means that in those cases we have which implies that . In case when , we have that with is an integer whenever and are integers. This again means that in those cases , again implying . ∎
Corollary 4 ([Halldorsson1997]).
For MIS on a graph with maximum degree , Greedy achieves an approximation ratio of .
This theorem implies only an approximation of for subcubic graphs. To do significantly better we need a stronger potential, better advice for greedy and a new method of analysis.
3.3 Warmup II: Proof of ratio for greedy on degree trianglefree graphs
Let us use the potential from Subsection 3.2, but with parameters and . For any basic reduction and any independent set we have:
If we prove that for any reduction , and any independent set , then the approximation ratio of Greedy on trianglefree graphs with maximum degree is . Let us first assume that the root vertex of is white. Then, in analogy to Subsection 3.2 we have:
Note that the upper bound on debt edges is the same; however, the lower bound on loan edges is significantly greater than the bound in Subsection 3.2. We will justify this lower bound. Observe that for a trianglefree graph, for any reduction , no two middle vertices of are adjacent. Note first that the degree of the root of is . We will obtain a lower bound on by counting the number of contact edges incident to any middle vertex of . By the greedy order, for each such vertex , , where is the current graph. Because the root of was assumed to be white, there is at most one vertex (the root vertex) not in which can be connected to . Thus, the total number of loan edges of is at least , and we have such vertices. We then obtain further that:
Let . Then, by using the same approach as in Subsection 3.2 we obtain that for any . Let now be the degree of the root of . If , then and . Suppose now that , then obviously . In such case, because , , thus:
where the first inequality follows by the fact the quadratic function is minimized for .
Let us now assume that the root of is black. Noting that and , we obtain:
where the last inequality follows by the fact that the quadratic function is minimized for . Thus we proved:
Theorem 5.
For MIS on a trianglefree graph with maximum degree , any greedy algorithm achieves an approximation ratio of .
4 Subcubic graphs
The exact potential that we use for subcubic graphs is given by the values and . The table in Figure 2 shows the potential of several basic reductions for some different independent sets. Unfortunately, as one can see in Figure 2, there exist reductions with negative potential. The goal of our additional advice for greedy will be to deal with these cases. The first step is to collect some consecutive basic reductions into one extended reduction, so that the potential of some basic reductions is balanced by others. For instance, one way to deal with the basic reduction 2.d in Figure 1, which can have potential (see in Figure 2), is to force Greedy to prioritize a vertex of degree two with a neighbor of degree three. Therefore, if at some point the reduction 2.d is executed, it means that the current graph is a disjoint union of cycles. This allows us to consider that the whole cycle forms an extended reduction — which we will call a cycle reduction — and we will see later that its potential is now at least . This advised greedy algorithm, called MoreEdges in [10.1007/BFb0015418], improves the approximation ratio from to in subcubic graphs. This result can easily be proved by using our potential function with parameters ; the approximation guarantee simply follows from the fact that all reductions now have nonnegative potential.
A useful observation for defining an appropriate extended reduction is that the (basic) path reduction (1.b from Figure 1) has potential at least zero. This observation suggests the following notion. Given a graph , we will say that the set is a backbone if the induced subgraph is a path and if and both have degree three. In this case, and are called the endpoints of the backbone . Moreover, when is odd (resp. even), we will say that is an even (resp. odd) backbone — notice the asymmetry — which corresponds to the parity of the number of edges between the endpoints. As an example, the grounds of the basic evenbackbone reductions (2.f and 2.c in Figure 1) are special cases of an even-length backbone (of edge-length two).
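The backbones of a subcubic graph can be enumerated by walking along chains of degree-two vertices between degree-three endpoints. The following sketch is our own, with illustrative names; it also reports the edge length, whose parity distinguishes even from odd backbones, and covers the degenerate case where both endpoints coincide (a loop).

```python
def find_backbones(adj):
    """Return (endpoint, endpoint, edge_length) for every backbone:
    an induced path whose interior vertices have degree two and whose
    endpoints have degree three (possibly the same vertex)."""
    deg3 = {v for v in adj if len(adj[v]) == 3}
    found = {}
    for a in deg3:
        for first in adj[a]:
            path = [a, first]
            # walk along the chain of degree-two interior vertices
            while path[-1] not in deg3 and len(adj[path[-1]]) == 2:
                nxt = next(u for u in adj[path[-1]] if u != path[-2])
                path.append(nxt)
            if path[-1] in deg3:              # reached a degree-3 endpoint
                found[frozenset(path)] = (path[0], path[-1], len(path) - 1)
    return list(found.values())
```

For two triangles joined by a two-edge path, this finds one even backbone (edge length two, as for reductions 2.c and 2.f) and two "loop" backbones of edge length three whose endpoints coincide.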
4.1 Extended reductions
An extended reduction is a sequence of basic reductions of a special type that we describe precisely in the next paragraph. All different extended reductions are summarised in Proposition 6. To facilitate the discussion, when there is no risk of confusion, we will simply call it a reduction. The size of an extended reduction , written , is the number of executed basic reductions. Its ground naturally corresponds to the union of the grounds of its basic reductions, and its root is the root of the first basic reduction. Finally, the contact vertices correspond to all contact vertices of its basic reductions that are not in . The degree of a reduction is the degree of the first executed basic reduction. All basic reductions of Figure 1 except 2.d will be considered as (extended) reductions of size one. In particular, all (extended) reductions of degree one considered by the algorithm have size one. The other considered (extended) reductions of degree two have a ground which is a backbone (except in the case of an oddbackbone, where one endpoint is excluded). When the two endpoints of the backbone are the same vertex, it corresponds to a loop reduction. Otherwise, reductions associated to an even and odd backbone are respectively called evenbackbone and oddbackbone reductions. When these reductions have size at least two, they correspond to a sequence of basic reductions where:

The first (basic) reduction is 2.e from Figure 1.

Intermediate reductions , with , are basic path reductions (1.b from Figure 1), where the root vertex of is the contact vertex of .

The final (basic) reduction corresponds to:

branching (1.c) or path (1.b), when is an evenbackbone reduction. The path case occurs when the endpoints are adjacent.

path (1.b), when is an oddbackbone reduction.

point (0.a) or edge (1.a), when is a loop reduction, depending on the parity of the length of the backbone. Recall that the two endpoints are identical in this case.

We give examples of different types of extended reductions of degree two in Figure 3. Some further remarks are in place here:

The following basic reductions in Figure 1 are special cases of (extended) reductions of size one.
 2.a :

cycle reduction.
 2.c and 2.f :

evenbackbone reduction.
 2.e :

oddbackbone reduction. This applies only when the rightmost contact vertex has degree three.
 2.b :

loop reduction.

The root of an evenbackbone, oddbackbone or loop reduction is always the neighbor of one of the endpoints of the backbone. For evenbackbone reductions and loop reductions of even length, either of the two choices leads to the same solution. In the case of an odd-length loop reduction, the size of the solution — and therefore also the potential — and the ground of the reduction are exactly the same. For loop and evenbackbone reductions, the ground of the reduction is the full backbone. However, for oddbackbone reductions, given one backbone, there are two distinct possible roots, associated with two distinct grounds. For each oddbackbone reduction, only one endpoint is contained in the ground. See Figure 3.

All basic reductions of an extended reduction, except the first one, have degree at most one, so that at any given moment, any executed basic reduction has minimum degree in the current graph. This means that we are allowed to execute the full extended reduction without violating our original greedy rule.
In what follows, we will refer to an extended reduction in two different (and equivalent) ways: we will either write its name with the first letter capitalized, or we will write its name in lowercase followed by the word “reduction”. Thus, for example, we will say a loop reduction or just Loop, an evenbackbone reduction or just Evenbackbone, etc. Note that basic reductions are special cases of extended reductions, and therefore they also follow this convention.
4.2 Ultimate advice for Greedy
We now describe the additional rules used to reach the best possible approximation. This advised greedy algorithm will be called Greedy . The first of these rules is to execute basic reductions such that the obtained sequence can be grouped into a sequence of extended reductions as described above. This is justified by Proposition 6. This choice is always possible since all basic reductions from Figure 1, except 2.d, are special cases of extended reductions. In the case where every minimum degree vertex is the root of a basic reduction 2.d, the graph must be a disjoint union of cycles; in this case we are able to execute Greedy so that its execution corresponds to a sequence of cycle reductions. This argument leads to Proposition 6.
Proposition 6.
For each subcubic graph with minimum degree at most two, it is always possible to execute one of the following (extended) reductions:
Point  Edge  Path  Branching  Loop  Cycle  Evenbackbone  Oddbackbone.
Greedy order.
When several choices of reductions are possible, Greedy will have to select one with the highest priority, according to the following order from the highest to lowest priority:

Point, Edge, Path, Branching,

Cycle or Loop,

Evenbackbone,

Oddbackbone.
Any two reductions among the first group, or any two reductions among the second group, can be executed in either order, provided both have minimum degree. We say that a reduction is a priority reduction if there exists no reduction in the same graph with strictly higher priority; thus a priority reduction is one of the highest priority reductions in the current graph. One implication of this order is that when an Evenbackbone is executed, the current graph contains neither degree one vertices nor any loop reduction. Additionally, when the priority reduction is an Oddbackbone, the graph does not contain any Evenbackbone. These structural observations will be useful later.
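The priority order can be captured by a tiny selection rule; the reduction names are the paper's, while the ranking table and helper are our own sketch.

```python
# group 0 has the highest priority, group 3 the lowest
PRIORITY = {
    'Point': 0, 'Edge': 0, 'Path': 0, 'Branching': 0,
    'Cycle': 1, 'Loop': 1,
    'Evenbackbone': 2,
    'Oddbackbone': 3,
}

def priority_reduction(applicable):
    """Return an applicable reduction of highest priority; ties within
    a group may be broken arbitrarily."""
    return min(applicable, key=PRIORITY.__getitem__)
```

In particular, an Evenbackbone is only selected when no reduction of the first group and no Cycle or Loop is applicable, matching the structural observations above.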
When the priority reduction is an evenbackbone or an oddbackbone reduction, Greedy applies the following two additional rules.
Evenbackbone rule.
Suppose that the priority reduction in the current graph is the evenbackbone reduction, and several choices are possible. Unfortunately, picking one of these reductions arbitrarily can lead to a solution with a poor approximation ratio. For instance, consider the graph in Figure 4, strongly inspired by [10.1007/9783540277965_5].
It turns out that the difficulty comes from the fact that executing an evenbackbone reduction can split the graph into several connected components, each of them having a negative potential. To address this issue, we want to make sure that we are able to “control” the potential of almost all of these connected components. The right choice, made by Greedy , is given by the following lemma. For any reduction in a graph , we will say that creates connected components if they are the connected components of the graph . Intuitively, it suffices to execute an evenbackbone reduction such that all other evenbackbone reductions are present in the same connected component created by .
Lemma 7.
Let be a connected graph, with no degree one vertices, and no loop reduction. Let be the set of all evenbackbone reductions in . Each evenbackbone reduction has two root vertices and . In the case when has only one root, we set . Then, there exists one evenbackbone reduction, say that satisfies the following property. Let be the connected components created by , with . Then either , or and then the following is true. If there exist for some and , such that and (in words: and belong to two different connected components among ), then at least one of is a contact vertex of .
Proof.
(Lemma 7) Let be any degree three vertex. Consider a graph obtained from by replacing each backbone from to by a single degree two vertex which is also called . On this contracted graph, let denote the shortest path distance (i.e. with minimum number of edges) between vertices in . Now, let us pick the root in that has the largest distance from . Without loss of generality this is : . Denote by the connected components created after executing the corresponding evenbackbone reduction . At most one connected component, say , contains . Suppose that there is another connected component, i.e., , that contains a vertex . Any path from to intersects , including the shortest one, and . It follows that and have one common neighbor , so that . In particular, in the original graph , (or ) is at distance two from (or ) which means that this vertex is a contact vertex of . ∎
Notice that this proof is constructive and allows us to find the appropriate Evenbackbone in time .
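The constructive step of the proof (pick the candidate root farthest from an arbitrary fixed degree-three vertex) is a single breadth-first search. The sketch below is our own and, for simplicity, measures distances in the given graph rather than in the backbone-contracted graph used in the proof.

```python
from collections import deque

def farthest_root(adj, candidate_roots, start):
    """BFS from the fixed vertex `start` and return the candidate root
    at maximum shortest-path distance (ties broken by list order)."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return max(candidate_roots, key=lambda r: dist[r])
```

A single BFS visits each edge a constant number of times, which is consistent with the linear running time claimed above.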
Oddbackbone rule.
Suppose now that the priority reduction is the oddbackbone reduction. In this case, Greedy chooses the one that was created latest. More formally, suppose that we are given a partial execution in a graph such that the priority reduction in is an oddbackbone reduction, where is the subgraph of obtained from after the execution of , for . We associate to each vertex of degree two a creation time , where is the greatest integer such that had degree three in . Moreover, if already had degree two in the original graph , then we set . When , this means that was a contact vertex of the th reduction. Then, the creation time of an (odd) backbone is the greatest creation time over all vertices of degree two in .
Greedy picks the oddbackbone reduction that has the greatest creation time, among all possible oddbackbone reductions. If several choices are possible, it can pick any of them.
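Creation times can be maintained by replaying the execution. The sketch below is our own reading of the definition (a vertex's creation time is the last step at which it still had degree three, and 0 if it never did), with an illustrative representation of reductions by their grounds.

```python
def creation_times(original_adj, grounds):
    """Replay an execution, given as the list of grounds of its
    reductions, and record for each vertex the last step at which it
    still had degree three (0 if it never had degree three)."""
    adj = {v: set(nbrs) for v, nbrs in original_adj.items()}
    created = {v: 0 for v in adj}
    for step, ground in enumerate(grounds, start=1):
        for v in adj:
            if len(adj[v]) == 3:
                created[v] = step         # still degree three at this step
        for g in ground:                  # execute the reduction
            for w in adj.pop(g, set()):
                adj.get(w, set()).discard(g)
    return created
```

A backbone's creation time is then the maximum of these values over its degree-two vertices.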
We believe that this rule is not necessary, in the sense that it does not improve the approximation ratio. However, this rule makes the algorithm easier to analyse. Intuitively, with this rule, we cannot have several successive reductions with negative potential within the same connected component.
Rule for cubic graphs.
When the input graph is cubic, i.e. each vertex has degree exactly three, then the first reduction has degree three. However, this is the only degree three reduction executed during the whole execution since there will always be a vertex with degree at most two. In such a situation, we guess the first degree three vertex to pick, so that the potential of the associated execution is positive. By guessing, we mean choosing any single fixed vertex and then trying all four executions, each starting from a vertex in the closed neighborhood of vertex . We show later that the first step can only increase the total potential of the whole sequence. After this step, all reductions have degree at most two, and therefore in the following sections, we will always consider graphs with at least one vertex of degree at most two.
It is clear that the set returned by Algorithm 1 is an independent set. In the next section, we establish its approximation ratio.
4.3 Analysis of the approximation ratio
Theorem 8.
Greedy is a approximation algorithm for MIS in subcubic graphs.
Let be the sequence of extended reductions performed by Greedy on an input graph . In order to analyse the approximation ratio of Greedy , we use our potential function in subcubic graphs () with parameters . Given an independent set in , the potential of an (extended) reduction is
We start by looking at the potential of (extended) reductions.
4.3.1 Potential of extended reductions
Claim 9.
For any independent set
we have the following potential estimates for the reductions:
Path  
Oddbackbone  Evenbackbone 
For basic reductions, one can easily check all possible cases by hand. Notice that the worst case potential always arises when is maximum in the ground of a reduction. Figure 2 presents these worst cases for the basic reductions Edge, Cycle and Oddbackbone. Figure 6 shows the worst potential cases of the remaining basic reductions: Path, Evenbackbone, Loop, Point, and Branching. From the worst case potentials of basic reductions, we can lower-bound the potential of (extended) reductions.
Proof.
(Claim 9) It remains to prove lower bounds for reductions of arbitrary size. See Figure 5. An oddbackbone reduction is a sequence of basic reductions starting with 2.e (in Figure 1), which has potential at least (Figure 2 (d)), followed by a certain number of path reductions, with potential at least zero (Figure 6 (a) and (b)), so that the total potential is at least . More generally, the potential of an extended reduction is lower-bounded by the sum of the potentials of its first and last basic reductions. For an Evenbackbone with nonadjacent backbone endpoints, these first and last basic reductions are 2.e () and Branching (, see Figure 6 (g) and (h)), so that the sum is nonnegative.
Consider now a cycle reduction of length . Let us denote by and , respectively, the number of black and white vertices, i.e., and . Since Greedy is optimal on graphs of degree at most two and the size of is at most , we have:
(1) 
For a Loop or an Evenbackbone with adjacent endpoints, simply observe that the ground is a cycle with one or two additional edges. Each of these edges is either a debt edge — the loan increases by one — or the corresponding middle white vertex now has degree three — the debt decreases by one. In either case, the potential increases by the number of added edges, which proves the claim. ∎
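The bound used in inequality (1) relies on the independence number of a cycle: a cycle on n vertices has independence number floor(n/2). This small brute-force check is our own illustration and confirms the value for small n.

```python
def cycle_independence_number(n):
    """Brute force the maximum independent set size in the cycle C_n."""
    best = 0
    for mask in range(1 << n):
        chosen = {i for i in range(n) if mask >> i & 1}
        # independent iff no two chosen vertices are cycle-adjacent
        if all((i + 1) % n not in chosen for i in chosen):
            best = max(best, len(chosen))
    return best
```

The result equals n // 2 for every n at least 3, which is exactly the upper bound on the number of black vertices in the ground of a cycle reduction.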