1 Introduction
According to Google Scholar, there are many articles on fuzzy automata, fuzzy description logics (DLs), weighted social networks and fuzzy transition systems. All of these subjects concern structures that are graph-based and fuzzy/weighted. In such structures, both the labels of vertices (states, nodes or individuals) and the labels of edges (transitions, connections or roles) can be fuzzified.
For structures like labeled transition systems or Kripke models, bisimulation [vBenthem76, vBenthem83, vBenthem84, Park81, HennessyM85] is a natural notion for characterizing indiscernibility between states. For social networks or interpretations in DLs, that notion characterizes indiscernibility between individuals [KurtoninaR99, Piro2012, BSDLINS]. When fuzzy structures are considered instead of crisp ones, there are two kinds of bisimulations: crisp bisimulations and fuzzy bisimulations. While the latter fits the fuzzy paradigm, the former has also attracted attention due to the usefulness of crisp equivalence relations, for example, in minimizing fuzzy structures.
In [CaoCK11] Cao et al. introduced and studied crisp bisimulations for fuzzy transition systems (FTSs). They provided results on composition operations, subsystems, quotients and homomorphisms of FTSs, which are related to bisimulations. In [DBLP:journals/fss/WuD16] Wu and Deng provided logical characterizations of crisp simulations/bisimulations over FTSs via a Hennessy-Milner logic. FTSs are already nondeterministic in a certain sense, as they are nondeterministic transition systems when only the truth values 0 and 1 are used. In [CaoSWC13] Cao et al. introduced a behavioral distance to measure the behavioral similarity of states in a nondeterministic FTS (NFTS). Such NFTSs are of a higher order than FTSs with respect to nondeterminism, because in an NFTS, for each state and action , there may be a number of transitions , where each is a fuzzy set of states. The work [CaoSWC13] studies properties of the introduced behavioral distance, one of which is the connection to crisp bisimulations between NFTSs, which are also introduced in the same article. In [CiricIDB12] M. Ćirić et al. introduced two kinds of fuzzy simulation and four kinds of fuzzy bisimulation for fuzzy automata. Their work studies the invariance of languages under fuzzy bisimulations and characterizes fuzzy bisimulations via factor fuzzy automata. Ignjatović et al. [IgnjatovicCS15] introduced and studied fuzzy simulations and bisimulations between fuzzy social networks in a way similar to [CiricIDB12]. In [Fan15] Fan introduced fuzzy bisimulations and crisp bisimulations for some fuzzy modal logics under the Gödel semantics. She provided results on the invariance of formulas under fuzzy/crisp bisimulations and the Hennessy-Milner property of such bisimulations. In [jBSfDL2] Nguyen et al. defined and studied fuzzy bisimulations and crisp bisimulations for a large class of DLs under the Gödel semantics.
Apart from typical topics like invariance (of concepts, TBoxes and ABoxes) and the Hennessy-Milner property, the other topics studied in [jBSfDL2] are the separation of the expressive powers of fuzzy DLs and the minimization of fuzzy interpretations. The latter topic was also studied in [minimizationbyfBS]. As shown in [Fan15] and [jBSfDL2], the difference between crisp bisimulation and fuzzy bisimulation with respect to logical characterizations (under the Gödel semantics) lies in the fact that the involutive negation or the Baaz projection operator is used for the former but not for the latter.
This work concerns the problem of computing crisp bisimulations for fuzzy structures. The related work is as follows.
In [CiricIJD12] M. Ćirić et al. gave an algorithm for computing the greatest fuzzy simulation/bisimulation (of any kind defined in [CiricIDB12]) between two finite fuzzy automata. They did not provide a detailed complexity analysis. Following [CiricIJD12], Ignjatović et al. [IgnjatovicCS15] gave an algorithm with the complexity for computing the greatest fuzzy bisimulation between two fuzzy social networks, where is the number of nodes in the networks and is the number of different fuzzy values appearing during the computation. Later I. Ćirić et al. [MicicJS18] provided algorithms with the complexity for computing the greatest right/left invariant fuzzy quasi-order/equivalence of a finite fuzzy automaton, where is the number of states of the considered automaton and is the number of different fuzzy values appearing during the computation. These relations are closely related to the fuzzy simulations/bisimulations studied in [CiricIDB12, CiricIJD12]. Note that, when the Gödel semantics is used, the mentioned complexity order can be rewritten to , where is the number of (nonzero) transitions/connections in the considered fuzzy automata/networks. In [TFS2020] we provided an algorithm with the complexity for computing the greatest fuzzy bisimulation between two finite fuzzy interpretations in the fuzzy DL under the Gödel semantics, where is the number of individuals and is the number of nonzero instances of roles in the given fuzzy interpretations. We also adapted that algorithm for computing fuzzy simulations/bisimulations between fuzzy finite automata and obtained algorithms with the same complexity order.
In [DBLP:journals/fss/WuCBD18] Wu et al. studied algorithmic and logical characterizations of crisp bisimulations for NFTSs [CaoSWC13]. The logical characterizations are formulated as the Hennessy-Milner property with respect to some logics. They gave an algorithm with the complexity for testing crisp bisimilarity (i.e., for checking whether two given states are bisimilar), where is the number of states and is the number of transitions in the underlying nondeterministic FTS.
In [StanimirovicSC2019] Stanimirović et al. provided algorithms with the complexity for computing the greatest right/left invariant Boolean (crisp) quasi-order matrix of a weighted automaton over an additively idempotent semiring; such matrices are closely related to crisp simulations. They also provided algorithms with the complexity for computing the greatest right/left invariant Boolean (crisp) equivalence matrix of such an automaton; these matrices are closely related to crisp simulations/bisimulations.
As far as we know, there have been no algorithms directly formulated for computing crisp bisimulations for fuzzy structures like FTSs or fuzzy interpretations in DLs. One can use the algorithm given by Wu et al. [DBLP:journals/fss/WuCBD18] for testing crisp bisimilarity for a given FTS (as a special case of NFTS), but the complexity is too high (and computing the largest bisimulation is more costly than testing bisimilarity). One can also try to adapt the algorithms with the complexity given by Stanimirović et al. [StanimirovicSC2019] to compute the largest crisp bisimulation of a given finite fuzzy automaton.
As background, also recall that Hopcroft [Hopcroft71] gave an efficient algorithm with the complexity O(n log n) for minimizing states in a deterministic (crisp) finite automaton, and Paige and Tarjan [PaigeT87] gave efficient algorithms with the complexity O(m log n) for computing the coarsest partition of a (crisp) finite graph, for both the setting with stability and the setting with size-stability. As mentioned in [PaigeT87], an algorithm with the same complexity order for the second setting was given earlier by Cardon and Crochemore [DBLP:journals/tcs/CardonC82].
Bisimulations can be formulated for fuzzy labeled graphs and then adapted to other fuzzy structures. In this article, applying the ideas of the Hopcroft algorithm [Hopcroft71] and the Paige and Tarjan algorithm [PaigeT87], we develop an efficient algorithm for computing the partition corresponding to the largest crisp bisimulation of a given finite fuzzy labeled graph. Its complexity is of order , where , and are the number of vertices, the number of nonzero edges and the number of different fuzzy degrees of edges of the input graph, respectively. If is bounded by a constant, for example, when is a crisp graph, then this complexity is of order . If then, taking for the worst case, the complexity is of order .
We also study a similar problem for the setting with counting successors, which corresponds to the case of size-stable partitions for graphs [PaigeT87], qualified number restrictions in DLs [jBSfDL2], and graded modalities in modal logics [DBLP:journals/sLogica/Rijke00]. In particular, we provide an efficient algorithm with the complexity for computing the partition corresponding to the largest crisp bisimulation of a given finite fuzzy labeled graph in the setting with counting successors. When , this order can be simplified to .
The rest of this article is structured as follows. In Section 2, we provide preliminaries on fuzzy labeled graphs, partitions and crisp bisimulations for such graphs. In Section 3, we present the skeleton of our algorithm for the main setting (without counting successors) and prove its correctness. In Section 4, we give details on how to implement that algorithm and analyze its complexity. Section 5 concerns the setting with counting successors. Section 6 contains our conclusions.
2 Preliminaries
A fuzzy labeled graph, hereafter called a fuzzy graph for short, is a structure , where is a set of vertices, (respectively, ) is a set of vertex labels (respectively, edge labels), is called the fuzzy set of labeled edges, and is called the labeling function of vertices. It is finite if all the sets , and are finite. Given vertices , a vertex label and an edge label , means the degree to which is a member of the label of , and the degree to which there is an edge from to labeled by .
Recall that a partition of is a set of pairwise disjoint nonempty subsets of whose union is equal to . Given an equivalence relation on , the partition corresponding to is , where is the equivalence class of with respect to (i.e., ).
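As an illustration, the passage from an equivalence relation to its corresponding partition can be sketched as follows (a minimal Python sketch; the function name and the encoding of the relation as a set of pairs are our own assumptions, not taken from this article):

```python
def partition_of(vertices, equiv):
    """Return the partition {[x] : x in vertices} for an equivalence
    relation given as a set of pairs (assumed reflexive, symmetric
    and transitive)."""
    blocks = []
    seen = set()
    for x in vertices:
        if x in seen:
            continue
        # the equivalence class of x with respect to equiv
        block = frozenset(y for y in vertices if (x, y) in equiv)
        seen |= block
        blocks.append(block)
    return set(blocks)

# Example: the equivalence relation on {a, b, c} identifying a with b.
equiv = {(x, y) for x in "abc" for y in "abc"
         if x == y or {x, y} == {"a", "b"}}
```

For instance, `partition_of("abc", equiv)` yields the two blocks {a, b} and {c}.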
Let and be partitions of . We say that is a refinement of if, for every , there exists such that . In that case we also say that is coarser than . By this definition, every partition is a refinement of itself. Given a refinement of a partition , a block is compound with respect to if there exists such that .
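The notions of refinement and compound block can likewise be sketched (again a Python sketch with names of our own choosing; partitions are encoded as sets of frozensets):

```python
def is_refinement(Q, P):
    """Q is a refinement of P if every block of Q lies inside some block of P."""
    return all(any(X <= Y for Y in P) for X in Q)

def compound_blocks(P, Q):
    """Blocks of P that properly contain some block of the refinement Q,
    i.e., the blocks of P that are compound with respect to Q."""
    return {Y for Y in P if any(X < Y for X in Q)}

P = {frozenset({1, 2, 3}), frozenset({4})}
Q = {frozenset({1, 2}), frozenset({3}), frozenset({4})}
```

Here Q is a refinement of P (and every partition is a refinement of itself), while the block {1, 2, 3} of P is compound with respect to Q.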
Given a fuzzy graph , a nonempty binary relation is called a crisp autobisimulation of , or a bisimulation of for short, if the following conditions hold (for all possible values of the free variables):
(1)  
(2)  
(3) 
where and denote the usual crisp logical connectives.
The above definition coincides with that of [jBSfDL2, Section 4.1] for the case when and .
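Since the formulas of Conditions (1)–(3) are not reproduced above, the following Python sketch encodes one standard formulation from the literature on crisp bisimulations for fuzzy structures: vertices related by the bisimulation carry equal vertex labels, and every edge leaving one of them is matched by an edge of at least the same degree leaving the other, ending in a related vertex (and symmetrically). The encoding of graphs and relations below is our own assumption:

```python
from itertools import product

def is_bisimulation(vertices, labels, roles, V, E, Z):
    """Check a standard formulation of Conditions (1)-(3) for a crisp
    relation Z (a set of pairs) on a fuzzy graph, where V(x) is a dict
    from vertex labels to degrees and E maps triples (x, r, y) to degrees."""
    def matches(x, xp):
        # Every r-edge x -> y must be matched by some r-edge x' -> y'
        # with E(x', r, y') >= E(x, r, y) and (y, y') in Z.
        return all(
            E.get((x, r, y), 0) == 0 or
            any((y, yp) in Z and E.get((xp, r, yp), 0) >= E.get((x, r, y), 0)
                for yp in vertices)
            for r, y in product(roles, vertices))
    return all(
        all(V(x).get(p, 0) == V(xp).get(p, 0) for p in labels)  # Condition (1)
        and matches(x, xp)   # Condition (2)
        and matches(xp, x)   # Condition (3)
        for (x, xp) in Z)

# A small graph in which a and b have identical outgoing behavior.
vertices, labels, roles = {"a", "b", "c"}, {"p"}, {"r"}
V = lambda x: {}
E = {("a", "r", "c"): 0.5, ("b", "r", "c"): 0.5}
Z = {("a", "b"), ("b", "a"), ("a", "a"), ("b", "b"), ("c", "c")}
```

On this example, Z is a bisimulation, whereas any relation pairing a with the sink c is not.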
Proposition 2.1
Let be a fuzzy graph. Then, the following assertions hold.

The relation is a bisimulation of .

If is a bisimulation of , then so is .

If and are bisimulations of , then so is .

If is a nonempty set of bisimulations of , then so is .
The proof of this proposition is straightforward, and the following corollary is an immediate consequence.
Corollary 2.2
The largest bisimulation of a fuzzy graph exists and is an equivalence relation.
Given a fuzzy graph , by the partition corresponding to the largest bisimulation of we mean the partition of that corresponds to the equivalence relation being the largest bisimulation of .
3 The Skeleton of the Algorithm
In this section, we present an algorithm for computing the partition corresponding to the largest bisimulation of a given finite fuzzy graph. It is formulated on an abstract level, without implementation details. The aim is to facilitate understanding the algorithm and proving its correctness. Other aspects of the algorithm are presented in the next section.
In the following, let be a finite fuzzy graph. We will use , and to denote partitions of , and to denote nonempty subsets of , and to denote an edge label from .
For , we denote . We say that is stable with respect to (and ) if for all , where the suprema are taken in the complete lattice [0,1].
We say that a partition is stable with respect to (and ) if every is stable with respect to . Next, is stable (with respect to ) if it is stable with respect to for all and .
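Under the Gödel semantics, stability can be tested directly from this definition (a Python sketch with our own encoding: edges as a dictionary mapping triples to degrees, partitions as sets of frozensets):

```python
def sup_degree(E, x, r, Y):
    """sup of E(x, r, y) over y in Y (0 for the empty supremum)."""
    return max((E.get((x, r, y), 0) for y in Y), default=0)

def is_stable(P, E, roles):
    """P is stable if, for every role r and all blocks X, Y of P, the value
    sup_{y in Y} E(x, r, y) is the same for every x in X."""
    return all(
        len({sup_degree(E, x, r, Y) for x in X}) == 1
        for r in roles for X in P for Y in P)

E = {(1, "r", 3): 0.7, (2, "r", 3): 0.7, (1, "r", 1): 0.2}
P_stable = {frozenset({1}), frozenset({2}), frozenset({3})}
P_unstable = {frozenset({1, 2}), frozenset({3})}
```

In the example, grouping 1 and 2 is unstable because only vertex 1 has a nonzero supremum with respect to the block {1, 2} itself.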
Lemma 3.1
Let be a nonempty family of nonempty subsets of . If a partition is stable with respect to for all , then it is also stable with respect to .
The proof of this lemma is trivial.
By we denote the partition of that corresponds to the equivalence relation
and for all . 
The following lemma provides another look at the considered problem, leading to a constructive computation.
Lemma 3.2
is the partition corresponding to the largest bisimulation of iff it is the coarsest stable refinement of . (This lemma can be generalized by allowing to be image-finite or witnessed: is image-finite if, for every and , the set is finite; is witnessed if, for every , and , the set has a greatest element.)
Proof. It is sufficient to prove the following assertions.

If a partition is a stable refinement of , then its corresponding equivalence relation is a bisimulation of .

If is the partition corresponding to the largest bisimulation of , then it is a stable refinement of .
Consider the first assertion. Let be a stable refinement of and the equivalence relation corresponding to . We need to show that satisfies Conditions (1)–(3). Condition (1) holds since is a refinement of . Consider Condition (2) and assume that and hold for some and . Since is stable, is stable with respect to . Hence, , and therefore, . Since is finite, it follows that there exists such that holds and . This completes the proof for Condition (2). Since is symmetric, Condition (3) is equivalent to Condition (2) and also holds.
Consider the second assertion. Let be the partition corresponding to the largest bisimulation of . Due to Conditions (1)–(3), is a refinement of . It remains to show that is stable. Let , and . We need to show that . Let . If , then clearly . Suppose that . By Condition (2), there exists such that holds and . Thus, and . We have proved that . Analogously, it can be proved that . Hence, , which completes the proof.
In the following, let . By we denote the coarsest partition of such that each of its blocks is stable with respect to both and . Clearly, that partition exists and is computable. How to implement the function is left for later. If contains more than one block, then we say that is split into by (or by with respect to as the context). We also define
Clearly, is the coarsest refinement of that is stable with respect to both and .
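The effect of the splitting operation can be sketched as follows: a block is partitioned by grouping its vertices according to the pair of suprema they take with respect to the two given sets (a Python sketch under the Gödel semantics; the function name and encoding are our own assumptions):

```python
from collections import defaultdict

def split(X, r, Y, B_minus_Y, E):
    """Coarsest partition of the block X in which every resulting block is
    stable with respect to both (r, Y) and (r, B minus Y): group the
    vertices of X by their pair of suprema."""
    def sup(x, Z):
        return max((E.get((x, r, y), 0) for y in Z), default=0)
    groups = defaultdict(set)
    for x in X:
        groups[(sup(x, Y), sup(x, B_minus_Y))].add(x)
    return {frozenset(g) for g in groups.values()}

E = {(1, "r", 5): 0.4, (2, "r", 5): 0.4, (3, "r", 6): 0.4}
X = {1, 2, 3}
```

For instance, splitting X by Y = {5} and B minus Y = {6} separates vertex 3 from vertices 1 and 2.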
Lemma 3.3
Let a stable partition be a refinement of , which in turn is a refinement of . Let and be blocks such that . Then, is a refinement of for any .
Proof. Since is a refinement of and is a refinement of , both and are unions of a number of blocks of . Since is stable, it is stable with respect to for all blocks . By Lemma 3.1, it follows that is stable with respect to both and . Since is the coarsest refinement of that is stable with respect to both and , it follows that is a refinement of .
We provide Algorithm 1 (on page 1) for computing the partition corresponding to the largest bisimulation of . It starts by initializing to . If is a singleton, then is stable and the algorithm returns it as the result. Otherwise, the algorithm repeatedly refines to make it stable, as follows. The algorithm maintains a partition of , for each , such that is a refinement of and is stable with respect to for all . If at some stage for all , then is stable and the algorithm terminates with that . The variables are initialized to for all . In each iteration of the main loop, the algorithm chooses , and such that and , then it replaces with and replaces in with and . In this way, the chosen is refined (and may also be refined), so the loop terminates after finitely many iterations. The condition reflects the idea “process the smaller half (or a smaller component)” from Hopcroft’s algorithm [Hopcroft71] and Paige and Tarjan’s algorithm [PaigeT87]. Without this condition the algorithm would still terminate with a correct result, but the condition is essential for reducing the complexity order of the algorithm.
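Ignoring the “smaller half” optimization, the abstract refinement process can be sketched as a naive fixpoint computation (a Python sketch under our own encoding; it is correct by the lemmas above but does not attain the complexity of Algorithm 1):

```python
def coarsest_stable_refinement(P0, roles, E):
    """Naive computation of the coarsest stable refinement of the initial
    partition P0: repeatedly split every block by the vector of suprema
    its vertices take with respect to each (role, block) pair, until no
    block changes."""
    def sup(x, r, Y):
        return max((E.get((x, r, y), 0) for y in Y), default=0)
    P = {frozenset(X) for X in P0}
    while True:
        blocks = sorted(P, key=sorted)  # fixed order for the signature vector
        def signature(x):
            return tuple(sup(x, r, Y) for r in sorted(roles) for Y in blocks)
        refined = set()
        for X in P:
            groups = {}
            for x in X:
                groups.setdefault(signature(x), set()).add(x)
            refined |= {frozenset(g) for g in groups.values()}
        if refined == P:
            return P
        P = refined

# Example: vertices 1 and 3 each have one outgoing edge of degree 0.5,
# while 2 and 4 are sinks, so {1, 3} and {2, 4} form the final partition.
E = {(1, "r", 2): 0.5, (3, "r", 4): 0.5}
```

Each pass costs time proportional to the product of the numbers of vertices and blocks per role, which is exactly what the “process the smaller half” bookkeeping of Algorithm 1 avoids.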
Example 3.4
Consider the fuzzy graph illustrated below and specified as , where , , , is the empty labeling function (which labels each vertex with the empty fuzzy set), and is specified by the edges and their fuzzy degrees displayed in the picture.
Consider the run of Algorithm 1 on this fuzzy graph.

At the beginning, we have that , and .

During the first iteration of the main loop, and . As the effects of this iteration, is refined to , , , and is changed to .

During the second iteration of the main loop, we have that and . is not further refined, but is changed to .

The algorithm terminates with the result , , .
Some properties of Algorithm 1 are stated below.
Lemma 3.5
Let be the coarsest stable refinement of . Consider Algorithm 1 and suppose that . The following assertions are invariants of the main loop of the algorithm:

is a refinement of and for all ;

is a refinement of ;

is stable with respect to for all and .
By Corollary 2.2 and Lemma 3.2, the coarsest stable refinement of exists. The first and third invariants clearly hold. The second invariant follows from Lemma 3.3 and the first invariant.
Theorem 3.6
Algorithm 1 always terminates with a correct result.
Proof. It is easy to see that, if , then is the partition corresponding to the largest bisimulation of and the assertion of the theorem holds. Assume that .
By the first assertion of Lemma 3.5, it is an invariant of the algorithm that is a refinement of for all . The loop of the algorithm must terminate because the partitions (for ) cannot be refined forever.
Let be the coarsest stable refinement of . By the second assertion of Lemma 3.5, is a refinement of the final .
By the first assertion of Lemma 3.5, the final is a refinement of . At the end of the algorithm, for all . Hence, by the third assertion of Lemma 3.5, the final is stable. Thus, the final is a stable refinement of . Therefore, it is a refinement of . Together with the assertion in the above paragraph, this implies that the final is equal to . By Lemma 3.2, it follows that is the partition corresponding to the largest bisimulation of . This completes the proof.
4 Implementation Details and Complexity Analysis
In this section, we show how to implement Algorithm 1 so that its complexity is of order , where , and are the number of vertices, the number of nonzero edges and the number of different fuzzy degrees of edges of the input graph , respectively. Apart from the mentioned idea “process a smaller component”, another key to achieving that complexity order is to process the operation at the statement 1 of Algorithm 1 efficiently. Like the Hopcroft algorithm [Hopcroft71] and the Paige and Tarjan algorithm [PaigeT87], for that operation we also start from the vertices of and look backward through the edges coming to them, without scanning . The processing is, however, quite sophisticated. To enable a full understanding of the implementation and its complexity analysis, we adopt the object-oriented approach and describe the data structures in detail.
4.1 Data Structures
Algorithm 1 was formulated on an abstract level. Using the object-oriented approach, we describe how to obtain an efficient implementation of this algorithm by using a number of classes. In the description given below, we refer to the input graph and the variables and () used in the algorithm, which represent partitions of . The classes are listed below:

: the type for the vertices of ;

: the type for the edges of ;

: the type for the blocks of ;

: the type for the blocks of ();

, , , : the types for doubly linked lists of elements of type , , or , respectively;

: the type for , defined as ;

: the type for ();

: the type for objects specifying information about edges connecting a vertex to a block of .
We call objects of the types , and superblocks, superpartitions and blockedges, respectively. We give below details for the nontrivial classes in the above list. As in the Java language, attributes of objects are primitive values or references.
Vertex. This class has the following object attributes.

is the ID of the vertex (a natural number or a string).

is the block of that contains the vertex.

is the list of edges coming to the vertex.

is a flag for internal processing.
The constructor sets to , to , to a newly created empty list, and to . The class also has a static method that returns the vertex with the given ID. It uses a class attribute to store the collection of the vertices that have been created.
Edge. This class has the following object attributes.




is the value of , where , and are the , and of the edge, respectively.

specifies information about the set of edges labeled by from to the vertices of , where and are the and of the edge, respectively, and is the block of that contains the of the edge (via a block of ).
The constructor sets the above listed attributes to the parameters, respectively, and then adds the current edge to the list .
BlockEdge. As mentioned above, an object of this class gives information about the set of edges with a label from a vertex to a block . It is defined as an extended map of type , whose keys are the values of for . The value of a key in the map is the number of vertices such that . Apart from the map, the class has two object attributes of type , with names and descriptions given below.

: When the block of is going to be replaced by and , the current blockedge (i.e., the object this) changes to a blockedge with the destination , a new blockedge with the destination is created, and the attribute of the current blockedge is set to that new blockedge.

: This attribute is a converse of . That is, the current blockedge is equal to the attribute of the object if they are set.
Apart from the get/set methods for the above attributes, the class also has the following methods.

: This method increases the value of the key in the map by 1. If the key is absent, it is added to the map and its value is set to 1.

: This method decreases the value of the key in the map by 1, under the assumption that the key is present. If the value becomes 0, then the key is deleted from the map.

: This method returns the biggest key of the map if the map is not empty, and 0 otherwise.
The default constructor creates an empty map and sets the additional attributes to . The constructor differs from the default in that it also sets to .
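The counting behavior of the map underlying a blockedge can be sketched as follows (a plain-dictionary Python sketch with names of our own choosing; an actual implementation would use a sorted map so that the biggest key can be maintained in logarithmic time):

```python
class BlockEdgeSketch:
    """Counting map from edge degrees to multiplicities, mimicking the
    increase/decrease/biggest-key behavior described for BlockEdge."""

    def __init__(self):
        self.count = {}

    def increase(self, d):
        # Add the key d with value 1, or increase its value by 1.
        self.count[d] = self.count.get(d, 0) + 1

    def decrease(self, d):
        # Decrease the value of the (present) key d; delete it at 0.
        self.count[d] -= 1
        if self.count[d] == 0:
            del self.count[d]

    def max_key(self):
        # Biggest key of the map, or 0 when the map is empty.
        return max(self.count, default=0)

be = BlockEdgeSketch()
be.increase(0.3); be.increase(0.3); be.increase(0.7)
```

After the three insertions the biggest key is 0.7; removing it once brings the biggest key down to 0.3, and removing 0.3 twice empties the map, so the biggest key becomes 0.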
Block. Objects of this class are the blocks of (the current partition of used in the algorithm). The class has the following object attributes.

is the list of vertices of the block.

is a reference to .

is a map that associates each with the block of that contains the current block.

and are maps whose keys’ values specify the splitting to be done for the current block. They are described in more detail below.
Let denote the of the current block. Consider the statement 1 of Algorithm 1 and let . For each , let and for any (the choice of does not matter, since is stable with respect to both and ). As an invariant of the algorithm, is stable with respect to , and hence does not depend on the choice of from . The maps and are set so that, if , then is a list representing the set , else is a list representing the set . The computation of these maps will be specified later.
The class has the method defined as usual. Let it also have the following constructor.
Let the class have the following static method, whose parameter is a key’s value of one of the maps and of a block of .
The statement 4.1 in the above pseudocode means that, if is the last portion of the block , then we do not create a new block (but just use as the block for ).
SuperBlock. Each object of this class represents a block of for some ( is used in the algorithm as a partition of ). It consists of a number of blocks of . The class has the following members.

is a list of the blocks of that compose the current superblock.

is a reference to .

is the constructor that initializes to a newly created empty list, sets to and adds the current superblock to by calling .

is the method that returns (the number of blocks of that compose the current superblock).

is the method that returns the truth of . That is, this Boolean method returns iff the current superblock is compound with respect to .

is a method that can be called only when the current superblock is compound. It compares the first two blocks of the current superblock and returns the smaller one (or either one when their sizes are equal).

is the method that adds the block to the list . If the addition causes that , then the method also moves the current superblock from to .

is the method that removes the block from the list . If the removal causes that , then the method also moves the current superblock from to .

is a static method that creates a new superblock for the superpartition , adds the block to and makes the superblock of in . It consists of the statements , , and .
SuperPartition. An object of this class represents for some ( is used in the algorithm as a partition of ). It consists of a number of superblocks (i.e., objects of type ). The class has the following object attributes.

is a list consisting of all the compound superblocks (each of which consists of more than one block).

is a list consisting of all the simple superblocks (each of which contains at most one block).
The constructor initializes the above mentioned attributes to newly created empty lists. The class has the method , which adds the superblock to the list or depending on whether is compound or not.
4.2 Initialization
Our revision of Algorithm 1 uses Procedure Initialize, which sets up the global variables and , where is a map of type and means .
Let us analyze the complexity of Procedure Initialize. Recall that the sizes of and are assumed to be bounded by a constant. The running time of each of its steps can be bounded straightforwardly, and summing up these bounds shows that the time complexity of the procedure is of order .
4.3 The Revised Algorithm
We revise Algorithm 1 to obtain Algorithm 2 (on page 2), which uses the classes specified in Section 4.1 and the procedure given in Section 4.2. The new algorithm uses the global variables and for the subroutines, where is a map of type . and (for ) correspond to the variables and used in Algorithm 1, respectively. The call of in the statement 2 of Algorithm 2 is aimed at achieving the effects of the statements 1 and 1 of Algorithm 1. The definition of this procedure, given under Algorithm 2, calls four subroutines in subsequent steps, which are discussed and defined below.