1. Introduction
For $k \geq 1$, the $k$-dimensional Weisfeiler-Leman algorithm (henceforth referred to simply as the $k$-WL algorithm or the WL algorithm) takes as input an undirected graph, colors all $k$-tuples of its vertices, and then iteratively refines the color classes based on a generalized notion of “colored neighbors”. We can use this as an isomorphism test, by applying the $k$-dimensional WL algorithm to the disjoint union of graphs $G$ and $H$. Assume for simplicity that $G$ and $H$ are connected graphs. If for some $k$, the set of stable colors of $k$-tuples of vertices from $G$ is disjoint from the set of stable colors of $k$-tuples of vertices from $H$, then we say that the $k$-WL algorithm distinguishes $G$ and $H$. It is well known that if $k$ is the smallest integer such that the $k$-WL algorithm distinguishes graphs $G$ and $H$, then $k+1$ is the smallest number of variables in first-order logic with counting that distinguishes $G$ and $H$ [IL90]. In particular, $\mathcal{C}^{k+1}$ equivalence corresponds to the $k$-WL algorithm. Furthermore, we know that if two graphs are distinguished by the $k$-WL algorithm for some $k$, then they are certainly not isomorphic; the converse, while often true [BK80], is not always true [CFI92].
On an $n$-vertex graph $G$, the $k$-WL algorithm terminates with its output a stable coloring after at most $n^k$ rounds, where the stable coloring corresponds to the coloring at the first round where the color classes remain unchanged. The color of a tuple after $r$ iterations of the $k$-WL algorithm is exactly the set of properties of that tuple expressible in $\mathcal{C}^{k+1}_r$, i.e., counting logic with $k+1$ variables and quantifier rank $r$:
Fact 1.1 ([IL90]).
For any graph $G$, all $r \geq 0$, and any two $k$-tuples of vertices of $G$, $\bar{u}$ and $\bar{v}$, the following conditions are equivalent:

1. $\bar{u}$ and $\bar{v}$ receive the same color after $r$ rounds of the $k$-WL algorithm;

2. $\bar{u}$ and $\bar{v}$ satisfy exactly the same formulas of $\mathcal{C}^{k+1}$ with quantifier rank at most $r$.
An optimization of the $k$-WL algorithm runs in time $O(k^2 n^{k+1} \log n)$, where $k$ is fixed (not varying with $n$). This therefore corresponds to the time required to check $\mathcal{C}^{k+1}$ equivalence [IL90]. The purpose of this note is to describe this algorithm and its analysis.
2. The One-Dimensional Algorithm
2.1. Description of the Algorithm
When $k = 1$, the WL algorithm is simply known as the color refinement algorithm. The input is an undirected, uncolored graph $G$, and the output is the coarsest stable coloring of $G$. We present the algorithm below.
To avoid clunkiness in the explanations that follow, let us state a quick definition.
Definition 2.1.
During any fixed round of Algorithm 1, define the $W$-vertices to be the set of all vertices in the color classes currently on the work list $W$; define the $W$-edges to be the set of all edges in $G$ with a $W$-vertex as an endpoint.
Note that it now makes sense to talk about the $W$-neighbors of a vertex $v$ during a fixed round of Algorithm 1: this is simply the set of all $u$ adjacent to $v$ such that $u$ is a $W$-vertex.
Remark 2.2.
It is important to understand how the work list $W$ is updated during each round of the algorithm. Each color class present after the $r$-th round is either preserved or split during the $(r+1)$-st round. If a color class is preserved, we do not include it in $W$; if it is split, we let the largest part retain its previous color, and include all the other parts in $W$. For instance, suppose the vertices of some class had color $c$ after the $r$-th round, and suppose the $(r+1)$-st iteration splits them into parts $S_1$, $S_2$, and $S_3$, with $S_1$ the largest. Being the largest part of the split, the vertices of $S_1$ retain their old color $c$, and we add the new colors of $S_2$ and $S_3$ to $W$. In particular, $W$ keeps track of the split color classes, and so $W$ being nonempty after some round corresponds to at least one color class being (strictly) refined during that round.
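This update rule for $W$ can be sketched in a few lines of Python; the function name, the representation of classes as vertex sets, and the integer color ids are illustrative assumptions of this sketch, not the paper's notation.

```python
def split_class(partition, cls, pieces, work_list, next_color):
    """Split color class `cls` into `pieces` (a list of vertex sets).

    The largest piece keeps the old color; every other piece receives a
    fresh color, which is pushed onto the work list.  Returns the next
    unused color id.  (Names and representation are illustrative.)
    """
    pieces = sorted(pieces, key=len, reverse=True)
    partition[cls] = pieces[0]           # largest part keeps its old color
    for part in pieces[1:]:
        partition[next_color] = part     # smaller parts get new colors...
        work_list.append(next_color)     # ...and are queued for the next round
        next_color += 1
    return next_color
```

Note that only the smaller parts are ever touched, which is exactly what makes the halving argument of Claim 2.5 go through.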
Remark 2.3.
There are two sorting steps in Algorithm 1, in lines 12 and 14. They have different roles. The sort in line 12 is indexed by the vertices, and so for each vertex $v$ it clumps together the tuples for all of $v$'s $W$-neighbors. This sort therefore labels a vertex $v$ with its number of $W$-neighbors of each color. Combined with the old color of $v$, this determines the new color of $v$.
The sort in line 14 is indexed by the old color classes, and so for each old color class it clumps together all $W$-neighbors that used to be in this old class. This now enables us to count the sizes of the new color classes, in order to determine which (if any) color classes have been split, so that we may update $W$. Note that radix sort of a sequence of strings over the alphabet $\{1, \dots, n\}$ takes time $O(n + \ell)$, where $\ell$ is the total length of the strings; see Theorem 3.2 of [AHU74]. Let $h$ be the number of $W$-vertices during a given round. Since there are fewer than $n$ $W$-edges from any $W$-vertex, the strings being sorted and processed in lines 12–15 have total length $O(nh)$. As we will see, it thus follows that the whole round including the two sorting steps takes time at most $O(n(h+1))$.
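For intuition before the analysis, here is a naive Python sketch of color refinement that recomputes every vertex's signature each round; it forgoes the work list and the radix sorts, so it does not achieve the bound proved in Section 2.2, but it produces the same coarsest stable coloring. All names are illustrative.

```python
from collections import Counter

def color_refinement(adj):
    """Naive color refinement.  `adj` maps each vertex to its neighbor list.
    Returns the coarsest stable coloring as a dict vertex -> color id."""
    color = {v: 0 for v in adj}                    # start monochromatic
    while True:
        # New signature: (old color, multiset of neighbors' old colors).
        sig = {v: (color[v],
                   tuple(sorted(Counter(color[u] for u in adj[v]).items())))
               for v in adj}
        # Canonically rename the signatures to small integers.
        names = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new_color = {v: names[sig[v]] for v in adj}
        if len(set(new_color.values())) == len(set(color.values())):
            return color                           # no class split: stable
        color = new_color
```

Since refinement never merges classes, an unchanged number of classes means an unchanged partition, which is the termination test above.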
2.2. Proof of Correctness and Runtime
Let us see now why this optimized version of the $1$-dimensional WL algorithm is correct and efficient. We prove a running time of $O(n^2 \log n)$, which is sufficient for our purposes. A different implementation with the tighter bound $O((n+m) \log n)$ (where $m$ is the number of edges) appears in [BBG15].
Claim 2.4.
Algorithm 1 terminates with a stable coloring of $G$.
Proof.
The work list $W$ keeps track of all color classes refined during the previous iteration. Every round where the algorithm does not terminate, therefore, corresponds to a strict refinement of some color class. Since a coloring of an $n$-vertex graph can be strictly refined fewer than $n$ times, the process does indeed terminate eventually.
The condition for termination is that the work list $W$ is empty. But observe that $W$ can only be empty if in the current round no color class splits. Thus, the output is the desired stable coloring. ∎
Claim 2.5.
The color class of any vertex can appear in $W$ at most $\log_2 n$ times.
Proof.
Suppose we are at the end of the $r$-th round of the algorithm. For any vertex $v$, its color class will appear in $W$ during the $(r+1)$-st round only if $v$'s color class was just split and $v$ is not in the largest piece of the ensuing partition. Thus, each time $v$'s color class appears on $W$, this class is at most half the size it was during the previous round; since a class has at most $n$ vertices, this can happen at most $\log_2 n$ times. ∎
Claim 2.6.
On input $G$ with $|V(G)| = n$, Algorithm 1 runs in time $O(n^2 \log n)$.
Proof.
We show that round $r$ of the main while-loop (lines 3–16) can be implemented to run in time $O(n(h_r + 1))$, where $h_r$ is the number of $W$-vertices in round $r$. The $r$-th round starts by cycling through all $W$-vertices $u$, and scanning their adjacency lists to update the multiset $D$ of tuples with their neighbors $v$. The size of $D$, therefore, is at most $n h_r$. The total length of the strings sorted in the two radix sorts is at most $O(n(h_r + 1))$, so the radix sorts take time at most $O(n(h_r + 1))$. The scanning and renaming step in line 13 is also linear in the size of $D$, since all the tuples starting with a particular $v$ will appear consecutively after the sort in line 12.
Line 15 describes the process of reassigning the colors. All the $W$-neighbors that had been color $c$ appear consecutively in this step. We maintain the size, $\mathrm{size}(c)$, of each color class $c$, and a doubly-linked list, $\mathrm{list}(c)$, of the vertices corresponding to color $c$. We also maintain an array of pointers, one for each vertex $v$, to $v$'s entry on its color list.
If there are $t$ rows in $D$ starting with color $c$, and all of these are identical – except for the rows' last coordinates, which are the vertices being colored – then color class $c$ is unchanged. Otherwise, it is broken into multiple pieces, of sizes, say, $t_1 \geq t_2 \geq \cdots \geq t_p$. Note that if not all elements of color class $c$ were $W$-neighbors, then one of these pieces corresponds to the vertices of color $c$ that were not $W$-neighbors. If this set of non-neighbors is the largest subpiece, then we do not have to visit its members. We simply update $\mathrm{size}(c)$ and $\mathrm{list}(c)$ by decrementing the former and deleting $v$'s entry from the latter for each $v$ in any of the smaller pieces. Since $\mathrm{list}(c)$ is a doubly-linked list and we maintain the pointer to $v$'s entry, this takes time $O(1)$ for each such vertex. Thus, the time for processing color class $c$ is linear in the number of rows of $D$ that start with $c$.
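The constant-time class-edit bookkeeping just described can be sketched as follows; a Python dict of sets stands in for the doubly-linked lists plus pointer array, since deleting from a set is expected $O(1)$ and plays the role of the pointer-based splice. The names are illustrative.

```python
def move_to_new_class(members, size, v, old_c, new_c):
    """Move vertex v from color class old_c to class new_c in O(1)
    expected time.

    `members` maps a color to the set of its vertices (standing in for
    the doubly-linked list plus pointer array of the proof); `size`
    maps a color to the size of its class.
    """
    members[old_c].discard(v)                 # O(1) expected deletion
    size[old_c] -= 1
    members.setdefault(new_c, set()).add(v)   # O(1) expected insertion
    size[new_c] = size.get(new_c, 0) + 1
```

As in the proof, only the vertices of the smaller pieces are moved; the largest piece, including a possible piece of non-$W$-neighbors, is never visited.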
Using Claim 2.5, each vertex appears as a $W$-vertex at most $\log_2 n$ times. Since the time it contributes to that round is at most $O(n)$, it follows that the total time for the algorithm is at most $O(n^2 \log n)$. ∎
2.3. Example Run
For instance, consider the algorithm run on the following graph.
Initially the graph is monochromatic, so we can take the initial color to be the same for all vertices (and so, $W$ is initialized to contain this single color class). Each vertex has either one or two uncolored neighbors, so in the second iteration, there will only be two new colors, corresponding to the vertices with one uncolored neighbor and those with two. Representing them by green and yellow, we obtain the following colored graph after the first iteration (observe that all vertices had to be updated, since each vertex had at least one uncolored neighbor).
Renaming these colors, observe that $W$ now contains only the yellow color class (since the largest part of the partition corresponded to the green vertices), and is in particular nonempty. So we keep going. In the next iteration, we can ignore the vertex on the bottom right, since it is not adjacent to any yellow vertex. Updating the other vertices, note that there are three new colors. Denoting the updated color classes by green, blue, and pink, our graph in the next iteration looks as follows.
Now, consider how to update $W$. The old color classes were green and yellow. The yellow vertices have not been refined, so we do not need to include them in $W$. The green vertices have now been partitioned into three parts, colored green, blue and pink. Of these, the largest one remains green, so we can ignore it, and include the two others, so that now $W$ consists of the blue and pink color classes, and is still nonempty.
Consider the next update. The only vertices that need to be updated are the yellow ones (for being adjacent to the lone blue vertex), which both keep the same color, and the green ones (for being adjacent to the lone pink vertex), which also both keep the same color.
Thus, there is no change in this round, so $W = \emptyset$ and the algorithm is complete.
3. The Higher-Dimensional Algorithm
3.1. Description of the Algorithm
When $k > 1$, the algorithm and its analysis are essentially the same, with a few added subtleties. Once again, we start with any undirected, uncolored graph $G$. We are now concerned with $k$-tuples, i.e. members of $V(G)^k$. Let us define the neighbors of such a tuple.
Definition 3.1.
Let $\bar{u} = (u_1, \dots, u_k) \in V(G)^k$, $i \in \{1, \dots, k\}$, and $x \in V(G)$. Then, let $\bar{u}[x/i]$ denote the tuple obtained from $\bar{u}$ by replacing $u_i$ by $x$. The tuples $\bar{u}$ and $\bar{u}[x/i]$ are said to be neighbors for any such $x$ and $i$. We also say $\bar{u}[x/i]$ is the $i$-th neighbor of $\bar{u}$ corresponding to $x$.
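Concretely, the substitution operation of this definition might look as follows in Python; the function names are illustrative, and positions are 1-indexed as in the definition.

```python
def substitute(u, i, x):
    """Return u[x/i]: the tuple u with its i-th coordinate (1-indexed,
    as in Definition 3.1) replaced by x."""
    return u[:i - 1] + (x,) + u[i:]

def all_neighbors(u, vertices):
    """All neighbors of the k-tuple u: one substitution for every
    position i and vertex x (this includes u itself, when x = u[i-1])."""
    return [substitute(u, i, x)
            for i in range(1, len(u) + 1) for x in vertices]
```

Thus a $k$-tuple over an $n$-vertex graph has $kn$ (not necessarily distinct) neighbors, which is the count used in the runtime analysis below.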
We define the initial coloring of all $k$-tuples to correspond to encodings of their isomorphism types. Precisely speaking, we define the initial color of $\bar{u}$ to be the (ordered) isomorphism class of $\bar{u}$; that is, $\bar{u}$ and $\bar{v}$ receive the same initial color if and only if the map $u_i \mapsto v_i$ is an isomorphism between the ordered subgraphs induced by $\bar{u}$ and $\bar{v}$. As before, we maintain the work list $W$ that stores all the color classes updated during the previous iteration. We now present the complete algorithm below.
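A minimal sketch of this initial coloring, assuming the graph is given as a dict mapping each vertex to its set of neighbors; encoding the equality and adjacency patterns of the tuple is one standard way to realize the ordered isomorphism type, and the names here are illustrative.

```python
def initial_color(adj, u):
    """Ordered isomorphism type of the k-tuple u, for a graph given as a
    dict mapping each vertex to its neighbor set.  Two tuples receive the
    same initial color iff the map u_i -> v_i is an isomorphism between
    the ordered subgraphs they induce."""
    eq = tuple(a == b for a in u for b in u)       # equality pattern
    ad = tuple(b in adj[a] for a in u for b in u)  # adjacency pattern
    return (eq, ad)
```

These type encodings can then be renamed canonically to small integers before the refinement rounds begin.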
Once again, we can define a $W$-tuple as a tuple whose color class is in $W$. We can also talk about a $W$-neighbor of a tuple $\bar{u}$, which is simply a $W$-tuple that is a neighbor $\bar{u}[x/i]$ of $\bar{u}$ for some $x$ and $i$.
Remark 3.2.
It is worth pointing out the similarity between this algorithm and the one-dimensional version, particularly in the two sorting steps, in lines 15 and 17. Once again, the sort in line 15 is indexed by the tuples themselves, and so for each tuple $\bar{u}$ it clumps together the color classes of its $W$-neighbors with multiplicity, with the purpose once again being to determine a canonical, well-defined label for the new color classes of the tuples. The sort in line 17 is indexed by the old color classes of the tuples, and so for each old color class it clumps together all tuples that used to be in it. This now enables us to count the sizes of the new color classes to determine which (if any) has been split, in order to update $W$. Again, bounding this sorting time is crucial to the eventual analysis.
3.2. Proof of Correctness and Runtime
The analysis for the higher-dimensional version of the algorithm is similar to the one-dimensional one.
Claim 3.3.
Algorithm 2 terminates with a stable coloring of the $k$-tuples of $G$.
Proof.
This proof is by and large the same as before. ∎
Claim 3.4.
The color class of any $k$-tuple can appear in $W$ at most $k \log_2 n$ times.
Proof.
This is also similar to the one-dimensional case. A tuple $\bar{u}$ will have its color appear in $W$ during the $(r+1)$-st round of the algorithm only if its color class was just split, and $\bar{u}$ was not in the largest piece of the ensuing partition. So each time $\bar{u}$'s color class appears on $W$, this class is at most half the size it was during the previous round. There are $n^k$ tuples in all, and so any particular tuple can have its color class treated at most $\log_2(n^k) = k \log_2 n$ times. ∎
Claim 3.5.
On input $G$ with $|V(G)| = n$, Algorithm 2 runs in time $O(k^2 n^{k+1} \log n)$.
Proof.
Consider the main while-loop (lines 3–19). The innermost for-loop (lines 8–11) takes time $O(k)$ to iterate over the $k$ positions and update $D$. The for-loop in lines 7–12 therefore requires time $O(kn)$. This is done for each $W$-tuple in that round, accounting for the for-loop in lines 4–14. The radix sorts on lines 15 and 17, as well as the scanning and updating steps on line 16, as before, are all linear in the size of $D$. Line 18 is also implemented exactly as before, with the aid of the array of color class sizes and the doubly-linked lists of elements within each color class, together with the pointers to each tuple's entry. The updating process is precisely as before, and so the total processing time is still linear in the size of $D$. But note that $D$ has one entry for each neighbor of a $W$-tuple, and its size, therefore, is also bounded by $kn$ times the number of $W$-tuples. It remains now to verify the number of times a given tuple can appear in $W$, which we know is $k \log_2 n$ from Claim 3.4.
There are $n^k$ tuples in total, and each of them appears as a $W$-tuple (and therefore gets its color class treated) at most $k \log_2 n$ times, with each such treatment taking $O(kn)$ time, so that the total complexity is $O(k^2 n^{k+1} \log n)$, as desired. ∎
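For reference, a naive, self-contained Python sketch of the whole algorithm; it skips the work list and radix sorts, so its running time is far from the bound just proved, but it computes the same stable coloring of the $k$-tuples. It uses the aggregation in which, for each replacement vertex $x$, the $k$-tuple of neighbor colors is recorded jointly; all names are illustrative.

```python
from itertools import product

def k_wl(adj, k):
    """Naive k-dimensional WL.  `adj` maps each vertex to its neighbor set.
    Returns the stable coloring as a dict from k-tuples to color ids."""
    V = sorted(adj)
    tuples = list(product(V, repeat=k))

    def atomic_type(u):
        # Ordered isomorphism type of u: equality and adjacency patterns.
        return (tuple(a == b for a in u for b in u),
                tuple(b in adj[a] for a in u for b in u))

    def canon(c):
        # Rename colors canonically to small integers.
        names = {s: i for i, s in enumerate(sorted(set(c.values())))}
        return {u: names[c[u]] for u in c}

    color = canon({u: atomic_type(u) for u in tuples})
    while True:
        sig = {}
        for u in tuples:
            # For each x, record jointly the colors of the k neighbors
            # u[x/1], ..., u[x/k]; aggregate over x as a sorted multiset.
            sig[u] = (color[u], tuple(sorted(
                tuple(color[u[:i] + (x,) + u[i + 1:]] for i in range(k))
                for x in V)))
        new_color = canon(sig)
        if len(set(new_color.values())) == len(set(color.values())):
            return color    # no class split this round: coloring is stable
        color = new_color
```

Since refinement never merges classes, an unchanged number of classes means the partition is stable, which is the termination test used above.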
References
 [AHU74] Alfred V. Aho, John E. Hopcroft, and Jeffrey D. Ullman, The Design and Analysis of Computer Algorithms, Addison-Wesley, Boston, MA (1974).
 [BK80] László Babai and Luděk Kučera, “Canonical Labelling of Graphs in Linear Average Time,” 20th IEEE Symp. on Foundations of Computer Science (1980), 39–46.
 [BBG15] Christoph Berkholz, Paul Bonsma, and Martin Grohe, “Tight Lower and Upper Bounds for the Complexity of Canonical Colour Refinement,” arXiv:1509.08251v1 [cs] (2015).
 [CFI92] Jin-Yi Cai, Martin Fürer, and Neil Immerman, “An Optimal Lower Bound on the Number of Variables for Graph Identification,” Combinatorica 12 (4) (1992), 389–410.
 [IL90] Neil Immerman and Eric S. Lander, “Describing Graphs: A First-Order Approach to Graph Canonization,” in Complexity Theory Retrospective, Alan Selman, ed., Springer-Verlag (1990), 59–81.