1 Introduction
Given a graph $G=(V,E)$, where $V$ is the set of vertices, of cardinality $N$, and $E$ the set of edges, of cardinality $M$, finding the maximum set of sites no two of which are adjacent is a very difficult task. This problem is known as the maximum independent set problem (MIS). It was shown to be NP-hard, and no polynomial-time algorithm is expected to solve it. In other words, finding a set of vertices $S \subseteq V$, with the maximum cardinality, such that for every two vertices $i, j \in S$ there is no edge connecting the two, i.e. $(i,j) \notin E$, needs a time which is superpolynomial. For example, the best exact algorithm that solves it in polynomial space needs a time $O^*(1.1996^N)$ [1].
This problem is important for applications in Computer Science, Operations Research, and Engineering, like graph coloring, assigning channels to radio stations, register allocation in a compiler, etc. Besides having several direct applications [2], MIS is closely related to another well-known optimization problem, the maximum clique problem [3, 4]. For finding the maximum clique (the largest complete subgraph) in a graph $G$, it suffices to search for the maximum independent set in the complement graph $\bar{G}$.
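This reduction can be made concrete with a small brute-force sketch (the function names are ours, and the exhaustive enumeration runs in exponential time, so it is feasible only on toy graphs; it is not the linear-time algorithm of this paper):

```python
from itertools import combinations

def complement_edges(n, edges):
    """Edges of the complement graph on vertices 0..n-1."""
    present = set(map(frozenset, edges))
    return [(u, v) for u, v in combinations(range(n), 2)
            if frozenset((u, v)) not in present]

def max_independent_set(n, edges):
    """Brute-force MIS: try all subsets, from largest to smallest."""
    adj = set(map(frozenset, edges))
    for k in range(n, 0, -1):
        for cand in combinations(range(n), k):
            if all(frozenset(p) not in adj for p in combinations(cand, 2)):
                return set(cand)
    return set()

def max_clique(n, edges):
    """A maximum clique of G is a maximum independent set of G's complement."""
    return max_independent_set(n, complement_edges(n, edges))
```

For instance, on a triangle with a pendant vertex attached, the maximum clique returned is the triangle itself.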
The maximum independent set has been studied on many different random structures, like Erdős–Rényi graphs (ER), regular graphs (RG), random regular graphs (RRG), etc. For the Erdős–Rényi class $G(N,p)$, where $p$ is the probability that two distinct vertices are connected to each other, polynomial algorithms can find solutions only up to half the size of the maximum independent set present [5, 6]. This behavior also appears for the class of regular and random regular graphs $G_d$, where $d$ is the connectivity of each node. In these cases, no greedy algorithm is known to find an independence ratio $\alpha > (1+\epsilon)\ln d/d$, for any $\epsilon > 0$, when $N \to \infty$ (after the limit $d \to \infty$) [5]. The independence ratio is defined as the density of the independent set, thus $\alpha = |S|/N$. Moreover, Gamarnik and Sudan [6] showed that, for a sufficiently large value of $d$, local algorithms cannot approximate the size of the largest independent set in a $d$-regular graph of large girth to within a multiplicative factor better than $1/2 + 1/(2\sqrt{2})$. The approximation gap was improved by Rahman and Virág [7]. Analyzing the intersection densities of many independent sets in random regular graphs, the authors show that, with high probability, those densities must satisfy various inequalities. With the help of those inequalities they prove that, for any $\epsilon > 0$, local algorithms cannot find independent sets in random regular graphs with an independence ratio larger than $(1+\epsilon)\ln d/d$ if $d$ is sufficiently large. However, these results say nothing about small and fixed $d$. When $d$ is small and fixed, e.g. $d = 3$ or $d = 4$, indeed, only lower and upper limits, expressed in terms of independence ratios, are known. The first upper bound for such a problem was given in 1981 by Bollobás [8]. He showed that the supremum of the independence ratios of cubic graphs with large girth is bounded away from $1/2$, in the limit $N \to \infty$. McKay, in 1987, improved and generalized this result to $d$-regular graphs with large girth [9], by using the same technique and a much more careful calculation. For example, for the cubic graph ($3$-regular graph), he was able to push Bollobás' upper bound down to $0.45537$. Since then, however, only for cubic graphs has the upper bound been improved, by Balogh et al. [10], namely to $0.454$.
Remarkable results for lower bounds were obtained mainly by Wormald in 1995 [11]. By solving a system of differential equations for computing the independence ratios returned by a prioritized algorithm, he was able to improve the lower bound for the cubic graph given by Shearer [12]. Moreover, he was able to compute lower bounds for any fixed $d$. This algorithm is called prioritized because there exists a priority in choosing the vertices that are added to the independent set [13]. It follows the procedure of choosing vertices for the independent set $S$ one by one, with the condition that the next vertex is chosen randomly from those with the maximum number of neighbors adjacent to vertices already in $S$.
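The priority rule just described can be sketched as follows (our own simplified rendering, not Wormald's analyzed procedure: a vertex is "free" while it is neither in $S$ nor adjacent to $S$, and ties are broken uniformly at random):

```python
import random

def prioritized_greedy_is(adj, seed=0):
    """Sketch of a prioritized greedy rule: repeatedly add to S a free
    vertex with the most neighbours already adjacent to S.
    `adj` maps each vertex to the set of its neighbours."""
    rng = random.Random(seed)
    S, blocked = set(), set()          # blocked = vertices adjacent to S
    free = set(adj)
    while free:
        top = max(len(adj[v] & blocked) for v in free)
        candidates = [v for v in free if len(adj[v] & blocked) == top]
        v = rng.choice(candidates)     # random tie-breaking
        S.add(v)
        free.discard(v)
        newly_blocked = adj[v] - blocked
        blocked |= newly_blocked       # neighbours of S can never enter S
        free -= newly_blocked
    return S
```

On a 5-cycle this always returns an independent set of the maximum size 2, whatever the random choices.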
Improvements over this algorithm were achieved by Duckworth and Zito [14]. Since then, however, new lower bounds have been achieved only at small values of $d$, e.g. $d=3$ and $d=4$. Interesting results at $d=3$ have been achieved by Csóka, Gerencsér, Harangi and Virág [15]. They were able to find an independent set of density up to $0.4361$ using invariant Gaussian processes on the infinite regular tree. This result was once again improved by Csóka [16] alone, who was able to increase the density of the independent set on large-girth $3$-regular graphs up to $0.445327$ and on large-girth $4$-regular graphs up to $0.404070$.
These improvements were obtained by observing, broadly speaking, that the size of the structure produced by the algorithm is almost the same for regular graphs of very large girth as it is for a random regular graph [17]. We recall in Tab. 1 the best upper and lower bounds for several values of $d$, first and second columns respectively.^{1} ^{1}Recently, it has been shown in [18] that Monte Carlo methods can outperform any algorithm in finding a large independent set in random regular graphs, in (using the words of the authors) a "running time growing more than linearly in N" [18]. The authors present lower-bound improvements only for a few values of $d$, and those results are obtained experimentally from random regular graphs of finite order. In this work, however, we are interested in results for greedy algorithms, and, therefore, we compare our results only with the state of the art of this kind of algorithm, and not with the ones presented in [18].
In this paper, we study how to compute large independent sets in random regular graphs, presenting experimental results of a greedy algorithm, built upon existing heuristic strategies, which leads to improvements on known lower bounds [11, 19, 14]. This new algorithm runs in linear time and melds Wormald's, Duckworth and Zito's, and Csóka's ideas of prioritized algorithms [11, 14, 19, 16].

d   | Best upper bound | Best lower bound | This work
3   | 0.454000 | 0.445327 | 0.445321(3)
4   | 0.416350 | 0.404070 | 0.400846(5)
5   | 0.384430 | 0.359300 |
6   | 0.357990 | 0.332960 |
7   | 0.335670 | 0.310680 |
8   | 0.316520 | 0.288000 |
9   | 0.299870 | 0.271600 |
10  | 0.285210 | 0.257300 |
20  | 0.197320 | 0.173800 |
50  | 0.110790 | 0.095100 |
100 | 0.067870 | 0.057200 |
The paper is structured as follows: we present the algorithm for $d=3$ in Sec. 2, and we introduce experimental results obtained on random regular graphs of increasing order.^{2} ^{2}We recall that the order of a graph is the cardinality of its vertex set $V$, while the size of a graph is the cardinality of its edge set $E$. In Sec. 3, we present our algorithm for $d>3$, together with the experimental results associated with it, obtained by a finite-size analysis on random regular graphs (see the fourth column of Tab. 1). The list of data used for the analysis is presented in the tables at the end of the paper.
2 The local algorithm for $d=3$
In this section, we present our algorithm, simpler than and slightly different from the one in [16], but based on the same idea, for finding large independent sets in random regular graphs with connectivity $d=3$, i.e. cubic graphs. It will also be the core of the algorithm developed in Sec. 3. As mentioned in the introduction, the algorithms discussed in this paper are basically prioritized algorithms, i.e. algorithms that make local choices in which there is a priority in selecting a certain site. Our algorithm belongs to this class.
We define two separate sets $S$ and $VC$, where $S$ identifies the set of graph nodes satisfying the property that no two of them are adjacent, and $VC$ its complement, the vertex cover set. The algorithm takes as INPUT the set of nodes and the set of edges, builds a random regular graph of connectivity $d=3$ and, at run time, returns as output the partition of the graph nodes into $S$ and $VC$, where the cardinality of $S$ is maximal. Its density is equal to $0.445321(3)$ and agrees with the lower bound in [16].
When a site is set into $S$, it is labelled as an independent site. In contrast, when a site is set into $VC$, it is labelled as a covered site. We define the operation $\textsc{Move}$ as deleting a site from the set of unassigned nodes and inserting it into $S$ or $VC$, according to its label. We denote by $\rho_i$ the degree of a vertex $i$, i.e. the number of links that site $i$ is already connected to, and by $\phi_i$ the antidegree of a vertex $i$, i.e. the number of free connections that $i$ still needs to complete during the graph building process. Of course, the constraint $\rho_i + \phi_i = d$ is always preserved. At the beginning of the graph building process all sites have $\phi_i = d$. At the end of the graph building process all graph nodes will have $\phi_i = 0$.
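The bookkeeping of this paragraph amounts to a simple invariant (class and attribute names are ours, purely illustrative):

```python
class Site:
    """Track the degree rho (links already made) and antidegree phi
    (free connections left) of one site; rho + phi == d at all times."""
    def __init__(self, d):
        self.d, self.rho, self.phi = d, 0, d

    def add_link(self):
        # completing one connection moves one unit from phi to rho
        assert self.phi > 0, "no free connections left"
        self.rho += 1
        self.phi -= 1
        assert self.rho + self.phi == self.d
```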
The algorithm starts by randomly taking a site $i$ from the set of nodes, and by completing its connections in a random way, avoiding self-loops. For building the graph we used the pairing method described in [11], and introduced in [8].^{3} ^{3}More precisely, we take $3N$ points, where $3N$ must be even, in $N$ urns labelled $1, \dots, N$, with $3$ points in each urn, and choose a random pairing of the points. Each point is in only one pair, and no pair contains two points in the same urn. Moreover, no two pairs contain four points from just two urns. For building the graph, then, we connect two distinct vertices $u$ and $v$ if some pair has a point in urn $u$ and one in urn $v$. The conditions on the pairing prevent the formation of loops and multiple edges [11]. This method can be easily generalized to any random regular graph, by substituting the value $3$ with $d$. Once all its connections are completed, site $i$ has $\rho_i = d$ and $\phi_i = 0$. It is labelled as independent, erased from the set of unassigned nodes, and set into $S$. In other words, the operation $\textsc{Move}$ is applied on it. Each neighbor $j$ of $i$, i.e. $j \in \partial i$, where the set $\partial i$ contains all neighbors of $i$, has degree $\rho_j = 1$ and antidegree $\phi_j = d-1$. We label as covered each site adjacent to $S$, and we put all of them into a set $W$. In general, $W$ defines the set of covered sites which satisfy the property of having $\phi > 0$.
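The pairing construction of the footnote can be sketched with rejection sampling (a simplified variant: rather than steering the pairing away from loops and multiple edges, we resample the whole pairing until it yields a simple graph, which for fixed $d$ takes $O(1)$ expected attempts):

```python
import random

def random_regular_graph(n, d, seed=0):
    """Pairing-model sketch: d*n points in n urns, paired uniformly at
    random; resample whenever the pairing yields a self-loop or a
    multiple edge.  Requires d*n even."""
    assert (n * d) % 2 == 0
    rng = random.Random(seed)
    while True:
        points = [v for v in range(n) for _ in range(d)]  # d copies per urn
        rng.shuffle(points)
        edges, ok = set(), True
        for u, v in zip(points[::2], points[1::2]):
            if u == v or frozenset((u, v)) in edges:
                ok = False                 # loop or multi-edge: restart
                break
            edges.add(frozenset((u, v)))
        if ok:
            return [tuple(sorted(e)) for e in edges]
```

Conditioning the whole pairing on simplicity in this way is equivalent in distribution to the constrained pairing of the footnote, at the price of occasional restarts.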
Then, the algorithm picks a site from $W$, starting from the one with minimum antidegree. If the site has $\phi = 0$, it is set into $VC$, and it is removed from $W$ and from the set of unassigned nodes. If it has $\phi > 0$, the algorithm completes all its connections, and removes it from $W$. Each site connected with a covered site is automatically labelled as covered. If a site connects to another site $j$ with $\phi_j = 1$, that connection is $j$'s last free one, so $j$ is removed from $W$ and remains labelled as covered.
A structure of covered sites glued together is called ComPaCt, and is equivalent to a single virtual site $c$, which has an antidegree equal to the sum of all the antidegrees of the sites that compose it, i.e. $\phi_c = \sum_{i \in c} \phi_i$. The number of sites in $c$ is equal to the cardinality of the structure, and the degree of $c$ is the sum of the degrees of its sites. Fig. 1 shows how a ComPaCt structure is created from a site $u$ with $\phi_u = 2$ and two sites $v$ and $w$ with $\phi_v = \phi_w = 2$. During the graph building process the two free connections of site $u$ are completed by connecting sites $v$ and $w$. By definition, sites $v$ and $w$ are labelled as covered, and their antidegrees are reduced by one unit each. The resultant structure is a virtual node with $\phi = 2$.
Each virtual site with $\phi > 0$ is put into the set $\mathcal{C}$, which identifies the set of virtual sites. During the graph building process two ComPaCt structures may merge together, creating a new virtual site. An example is shown in Fig. 3. Let us imagine that we are at a general moment of the algorithm where two sites $i$ and $j$, with antidegree $\phi = 2$, need to be covered. We pick the first, for instance site $i$, and we complete all of its connections. During this covering, let us imagine that we touch one site with $\phi = 2$ and a virtual node with $\phi = 2$. This covering drops the antidegrees by one unit each, and merges the nodes together in a new structure with $\phi = 2$. Let us now imagine that, when we cover site $j$, we touch two virtual nodes, one with $\phi = 2$ and one with $\phi = 3$. Again, this covering drops the antidegrees, but merges all the nodes together, creating a new structure with $\phi = 3$. ComPaCt structures should be understood as objects that allow breaking the symmetry of the graph. In other words, the creation of those virtual nodes transforms a random 3-regular graph into a sparse random graph, on which it is simpler to identify the nodes to put into the vertex cover set $VC$, or into the independent set $S$.
Once the set $W$ is empty, we pick a virtual site $c$ with the largest antidegree $\phi_c$, i.e. a ComPaCt structure where the sum of the connections not yet completed is maximal. After having completed all the connections of $c$, we apply the operation $\textsc{Flush}$ on it. The operation $\textsc{Flush}$ deletes the virtual site $c$ from the set $\mathcal{C}$, deletes all its sites from the set of unassigned nodes, puts all its sites labelled as independent into $S$, and puts all its sites labelled as covered into $VC$.
However, if virtual nodes with small antidegree exist in $\mathcal{C}$, those sites must be completed first. If there are virtual sites with $\phi = 1$, they have the highest priority, and we apply the operation $\textsc{Swap}$ on all of them. $\textsc{Swap}$ exchanges the labels of all sites in the virtual node: independent sites become covered and covered sites become independent (see Fig. 2). Once $\textsc{Swap}$ has acted upon the virtual site, the operation $\textsc{Flush}$ can be applied. The $\textsc{Swap}$ allows increasing the cardinality of the independent set for each virtual site that has $\phi = 1$.
If no virtual sites with $\phi = 1$ are present, then we look for those that have $\phi = 2$. Once again, we apply the operation $\textsc{Swap}$ on the first of them. Then, $\textsc{Flush}$ is applied, and we complete the connections of the last site labelled as independent, if needed. If sites with $\phi = 1$ have been created, we give priority to those sites; otherwise we proceed forward.
If no virtual sites with $\phi = 1$ or $\phi = 2$ are present, then we look for those that have $\phi = 3$. As for the case $\phi = 2$, we apply the operation $\textsc{Swap}$ on the first site that we meet, we erase it from $\mathcal{C}$, and we put it into $W$. Because the set $W$ is not empty anymore, we complete the connections of those nodes by following the priority law of choosing the site (or virtual site) that has the minimum antidegree $\phi$. The algorithm proceeds until all antidegrees are zero, returning a maximal set $S$.
This algorithm differs from the one in [16] in the fact that, once a site is chosen and $\textsc{Move}$ is applied on it, if the set $W$ is not empty anymore, the algorithm always imposes the highest priority on the vertices in $W$. In contrast, the algorithm in [16] chooses and deletes (deleting, for [16], means applying our $\textsc{Flush}$) one virtual node at each step, because virtual nodes have the highest priority in being chosen with respect to vertices in $W$. This small difference in imposing priority on the vertices to choose does not affect the performance of the algorithm, but allows its generalization to any $d$, as we will show in the next section.
As previously said in the introduction, this article aims to verify and build up an algorithm that can run on real random regular graphs. To achieve this goal, we coded the algorithm, and we tested it on a class of different random regular graphs for different values of $d$ (the code can be downloaded at [20]).
We are interested in reaching numerical results that agree with the theoretical ones at least up to the last quoted digit. For this reason, we performed an accurate analysis on random regular graphs of increasing order. Because the algorithm runs in linear time, we perform an average over different samples to reduce the standard deviation on the last digit, and thus to have a better comparison with theoretical lower bounds. Moreover, by observing an increasing behavior of the average value of the independence ratio $\alpha(N)$, as described in Fig. 4, we opt for a finite-size analysis in order to extrapolate the asymptotic value of the independence ratio $\alpha_\infty$. For doing so, we choose a fitting function of $N$ (blue line in Fig. 4). The analysis, performed on the data reported in the tables at the end of the paper, shows that the independent set ratio reaches the asymptotic value $0.445321(3)$. This value agrees with the theoretical value proposed in [16], confirming that a finite-size analysis must be performed in order to reach the asymptotic independent set ratio.
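The extrapolation step can be sketched as an ordinary least-squares fit; purely for illustration we assume the simplest finite-size form $\alpha(N) = \alpha_\infty - a/N$ (the paper fits its own scaling function, shown as the blue line in Fig. 4):

```python
def extrapolate(Ns, alphas):
    """Linear least-squares fit of alpha against 1/N; the intercept at
    1/N -> 0 is the extrapolated asymptotic ratio alpha_inf."""
    xs = [1.0 / n for n in Ns]
    k = len(xs)
    mx, my = sum(xs) / k, sum(alphas) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, alphas))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx  # intercept, i.e. alpha at N -> infinity
```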
3 The local algorithm for $d>3$
In this section, we present a new prioritized algorithm. Like the one previously described in Sec. 2, it builds the random regular graph and, at the same time, tries to maximize the independent set cardinality $|S|$. The main idea that we propose is to meld two existing algorithms, namely the one in [11] and the one described above, into a new prioritized algorithm which is able to maximize the independent set cardinality, improving existing lower bounds. This idea has been developed as a generalization of the above algorithm to each value of $d$. The new lower bounds come from an accurate finite-size analysis on random regular graphs of increasing order.
As before, we define four sets, namely $S$, $VC$, $W$ and $\mathcal{C}$. The first two identify the independent set and the vertex cover set, which we want to maximize and minimize, respectively; sites in them are labelled as independent and covered, respectively. $W$ and $\mathcal{C}$ identify, instead, the set of sites (or virtual sites) that have antidegree $\phi > 0$ and the set of virtual sites, respectively. All the operations defined in Sec. 2 remain valid. Moreover, we define a new operation $\textsc{Expand}$. This operation sets a site into $S$, labels it as independent, removes it from the set of unassigned nodes, completes its connections, i.e. brings its antidegree to $0$ and its degree to $d$, sets all its neighbors into the set $VC$, labels them as covered, completes all their connections, and erases all of them from the set of unassigned nodes.
The algorithm starts by selecting a site $i$ randomly from the set of all nodes $V$, i.e. $i \in V$, and applies $\textsc{Expand}$ on it (see Fig. 5). This operation will create nodes with different degrees and antidegrees. The algorithm proceeds by choosing the node, from those created before and not yet labelled, which has the minimum antidegree $\phi$. Depending on its antidegree, either the operation $\textsc{Expand}$ is applied on it, or it is inserted into $W$ and labelled as covered. Once $W$ is not empty, the sites in $W$ have the highest priority in being covered. Until $W$ is empty, the algorithm builds ComPaCt structures, which are set into $\mathcal{C}$. We recall that when a site labelled as covered reaches $\phi = 0$ after being covered, it is removed from $W$. Once $W$ is empty, the highest priority in being covered is placed on the virtual sites contained in $\mathcal{C}$, as follows:

- If there are virtual sites with $\phi = 1$, the algorithm applies the operation $\textsc{Swap}$ on all of them, and then the operation $\textsc{Flush}$ (in the case $\phi = 2$, the algorithm first completes their connections, and then applies the two operations one by one, giving the highest priority in being covered to sites with $\phi = 1$, if created). We recall that the operation $\textsc{Swap}$ exchanges the labels inside a virtual site, turning independent sites into covered ones and vice versa (see Fig. 2), while $\textsc{Flush}$ deletes the virtual site from the set $\mathcal{C}$, deletes all its sites from the set of unassigned nodes, puts all its sites labelled as independent into $S$, and puts all its sites labelled as covered into $VC$.

- Otherwise, the site to choose with the highest priority is the virtual site with minimum antidegree $\phi$. The algorithm applies the operation $\textsc{Swap}$ on it, erases it from $\mathcal{C}$, and puts it into $W$. At this point, the priority in choosing the new site to be covered is placed on the set $W$, with the rule of choosing an element with the minimum $\phi$.

- If $W$ remains empty, the rule for choosing the site where the operation $\textsc{Flush}$ will be applied is to take the virtual site with the maximum $\phi$.

- In the case $\mathcal{C}$ is empty as well, the algorithm takes a site with minimum $\phi$, and $\textsc{Expand}$ is applied on it.
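The four cases above can be condensed into a small selector (a hypothetical sketch reflecting our reading of the rules: sites waiting in $W$ are always served first, then virtual sites with $\phi = 1$, then the virtual site with the largest antidegree; the exact thresholds in the paper may differ):

```python
def next_site(W, virtuals):
    """Pick the next site to process.  W is a set of ordinary sites with
    free connections; `virtuals` is a list of (site, phi) pairs."""
    if W:
        return min(W)                          # W always has top priority
    if not virtuals:
        return None                            # nothing left to cover
    ones = [s for s, phi in virtuals if phi == 1]
    if ones:
        return ones[0]                         # phi == 1 served first
    return max(virtuals, key=lambda sp: sp[1])[0]  # else largest phi
```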
The algorithm works until the following condition is true: every node of the graph has been assigned either to $S$ or to $VC$. Then, it checks that all sites in $S$ are adjacent only to sites labelled as covered, i.e. that $S$ is indeed an independent set. The results obtained by the algorithm at different values of $d$, and for different graph orders $N$, are presented in the tables at the end of the paper. The asymptotic values of the independent set ratios, obtained by finite-size analysis, are presented in Tab. 1.
From our analysis, we observe that the numerical results, as far as we know, overcome the best theoretical lower bounds given by greedy algorithms. Those improvements are obtained because we are breaking the symmetry of the graph, by allowing a transformation of the random regular graph into a sparse random graph during the graph building process. This transformation is due to the appearance of nodes with $\phi > 0$, which allows the creation of ComPaCt structures with different antidegrees. Those structures allow grouping together nodes of the graph with the aim of labelling them at a different instant with respect to their creation. However, this improvement decreases as $d$ becomes large, and disappears when $d \to \infty$ (see Fig. 5). This means that our algorithm, for $d \to \infty$, becomes the algorithm described in [11] and reaches the same asymptotic values. However, for any small and fixed $d$, the two algorithms are distinct, and our algorithm produces better results without increasing the computational complexity.
4 Conclusion
In this manuscript, we have presented a new local prioritized algorithm for finding a large independent set in a random regular graph at fixed connectivity $d$. We obtained new lower bounds for this problem, which improve upon the best previous bounds given by greedy algorithms. All of them have been obtained by a finite-size analysis on samples of random regular graphs of increasing order. For $3$-regular random graphs, our algorithm is able to reach, when $N \to \infty$, the asymptotic value presented in [16]. For $4$-regular graphs, instead, we are not able to improve the existing lower bound. This discrepancy could be explained by the fact that our algorithm is general and is not implemented only for a single value of $d$ with ad hoc strategies.
The improvements upon the best bounds are due to a breaking of the symmetry of the graph, obtained through covered sites and ComPaCt structures during the graph building process. The creation of ComPaCt structures allows grouping together nodes of the graph with the aim of labelling them at a different instant with respect to their creation. Those blobs of nodes transform the random regular graph into a sparse graph, where searching for a large independent set is simpler.
We believe that a mathematical analysis of the performance of this algorithm should be carried out, in order to give rigorous support to the empirical evidence found in this work.
Acknowledgments.
R. M. would like to thank Nicolas Macris for a first reading of the manuscript, and Endre Csóka for useful discussions. S. K. is supported by the Federman Cyber Security Center at the Hebrew University of Jerusalem. R. M. started this project when he was supported by the Federman Cyber Security Center at the Hebrew University of Jerusalem, and finished it with the support of Swiss National Science Foundation grant No. 200021E 17554.
References
 [1] Mingyu Xiao and Hiroshi Nagamochi. Exact algorithms for maximum independent set. Information and Computation, 255:126–146, 2017.
 [2] Immanuel M. Bomze, Marco Budinich, Panos M. Pardalos, and Marcello Pelillo. The maximum clique problem. In Handbook of Combinatorial Optimization, pages 1–74. Springer, 1999.
 [3] Richard M. Karp. Reducibility among combinatorial problems. In Complexity of Computer Computations, pages 85–103. Springer, 1972.
 [4] Raffaele Marino and Scott Kirkpatrick. Revisiting the challenges of max-clique. arXiv preprint arXiv:1807.09091, 2018.
 [5] Amin Coja-Oghlan and Charilaos Efthymiou. On independent sets in random graphs. Random Structures & Algorithms, 47(3):436–486, 2015.
 [6] David Gamarnik and Madhu Sudan. Limits of local algorithms over sparse random graphs. In Proceedings of the 5th conference on Innovations in theoretical computer science, pages 369–376, 2014.
 [7] Mustazee Rahman and Bálint Virág. Local algorithms for independent sets are half-optimal. The Annals of Probability, 45(3):1543–1577, 2017.
 [8] Béla Bollobás. The independence ratio of regular graphs. Proceedings of the American Mathematical Society, pages 433–436, 1981.
 [9] B. D. McKay. Independent sets in regular graphs of high girth. Ars Combinatoria, 23:179–185, 1987.
 [10] József Balogh, Alexandr Kostochka, and Xujun Liu. Cubic graphs with small independence ratio. arXiv preprint arXiv:1708.03996, 2017.
 [11] Nicholas C. Wormald. Differential equations for random processes and random graphs. The Annals of Applied Probability, 5(4):1217–1235, 1995.
 [12] James B Shearer. A note on the independence number of trianglefree graphs. Discrete Mathematics, 46(1):83–87, 1983.
 [13] Nicholas C. Wormald. Analysis of greedy algorithms on graphs with bounded degrees. Discrete Mathematics, 273(1-3):235–260, 2003.
 [14] William Duckworth and Michele Zito. Large independent sets in random regular graphs. Theoretical Computer Science, 410(50):5236–5243, 2009.
 [15] Endre Csóka, Balázs Gerencsér, Viktor Harangi, and Bálint Virág. Invariant Gaussian processes and independent sets on regular graphs of large girth. Random Structures & Algorithms, 47(2):284–303, 2015.
 [16] Endre Csóka. Independent sets and cuts in largegirth regular graphs. arXiv preprint arXiv:1602.02747, 2016.
 [17] Carlos Hoppen and Nicholas Wormald. Properties of regular graphs with large girth via local algorithms. Journal of Combinatorial Theory, Series B, 121:367–397, 2016.
 [18] Maria Chiara Angelini and Federico Ricci-Tersenghi. Monte Carlo algorithms are very effective in finding the largest independent set in sparse random graphs. Physical Review E, 100(1):013302, 2019.
 [19] Carlos Hoppen and Nicholas Wormald. Local algorithms, regular graphs of large girth, and random regular graphs. Combinatorica, 38(3):619–664, 2018.
 [20] https://github.com/raffaelemarino/largeindependentsetonrandomdregulargraphswithsmallandfixedconnectivityd.