
Experimental Evaluation of a Method to Simplify Expressions

03/13/2020
by   Baudouin Le Charlier, et al.

We present a method to simplify expressions in the context of an equational theory. The basic ideas and concepts of the method have been presented previously elsewhere, but here we tackle the difficult task of making it efficient in practice, in spite of its great generality. We first recall the notion of a collection of structures, which allows us to manipulate very large (possibly infinite) sets of terms as a whole, i.e., without enumerating their elements. Then we use this tool to construct algorithms to simplify expressions. We give various reasons why it is difficult to make these algorithms precise and efficient. We then propose a number of approaches to solve the issues raised. Finally, and importantly, we provide a detailed experimental evaluation of the method and a comparison of several variants of it. Although the method is completely generic, we use (arbitrary, not only two-level) boolean expressions as the application field for these experiments because impressive simplifications can be obtained in spite of the hardness of the problem.


1 Introduction

Many people (engineers, logicians, mathematicians, students and experienced practitioners) are faced, in their everyday practice, with the task of simplifying expressions. This is useful or even necessary to understand the meaning of a formula resulting from a long computational effort, or to optimize the implementation of a compiler or the design of a logic circuit, for example. In this paper, we propose a new and automatable method to perform such simplifications based on a finite set of equations (or axioms) formalizing the meaning of the expressions to be simplified. Since the method is general, namely parameterized by the chosen set of equations, it is applicable to many mathematical and logical domains. (Because of that, in this paper, we consider the words term, expression, and formula as synonyms: We prefer not to choose a single name and stick to it, because we want to be free to use the most accepted word in any application context.) The novelty and power of the presented method stem from the use of a powerful data structure, called collection of structures, introduced in [5, 6] and thoroughly studied in [1]. Collections of structures allow us to represent very large sets of equivalent terms compactly. Axioms can be used to add new equivalent terms to the collection of structures without enumerating the terms. Simplification takes place when new, simpler terms appear in the collection of structures. Early attempts at using collections of structures to build simplification algorithms have been proposed in [1, 5]. The contributions of this paper are the following: (a) We give precise guidelines for constructing simplification algorithms that are more accurately defined and much more efficient than our previous proposals, (b) we explain in detail how and why simplification takes place, (c) we provide a thorough experimental evaluation of a large number of variants of a reference algorithm, using boolean expressions as our application field.

2 Previous Work: Collections of Structures

In this preliminary section, we summarize the main definitions, properties, and results about collections of structures. More explicit information and examples can be found in [1, 6].

We assume given an (implicit) set of all terms. Conceptually, we only use binary terms, of the form f(t1, t2), and a unique, ‘dummy’, constant null. Regular constants such as a and non-binary terms such as g(t1, t2, t3) can be simulated by binary terms and null, as done e.g., in [9] and [20]. In practice, however, we stick to standard notation for writing examples. Let E be a set of terms. We say that E is sub-term complete if every sub-term of a term of E belongs to E. Assuming that E is sub-term complete, we say that a relation ≡, over E, is a congruence if it is an equivalence relation such that f(t1, t2) ≡ f(t1', t2') whenever t1 ≡ t1', t2 ≡ t2', and both f(t1, t2) and f(t1', t2') belong to E.

Note that ≡ is not a relation over all terms, in general, so that our definition is not standard. To represent sets of terms, we use structures. A structure is of the form f(i1, i2) -> i, where f is a function symbol, and i, i1, and i2 are identifiers of sets of structures, or simply identifiers, for short. We use natural numbers as identifiers and we choose a special identifier to represent the term null. We call f(i1, i2) the key of the structure. A collection of structures is a finite set of structures. Let C be a collection of structures. Let I be the set of identifiers used by the structures of C. We write C_i to denote the set of structures of C that are of the form f(i1, i2) -> i. Therefore, we have C = ∪_{i ∈ I} C_i. By definition, the denotation of C is the (unique) family of sets of terms (S_i)_{i ∈ I} such that a term f(t1, t2) belongs to S_i whenever C contains the structure f(i1, i2) -> i and t1, t2 belong to S_{i1} and S_{i2}, respectively. We only use well-formed collections of structures (see [6]), which fulfill a simple condition that ensures that no set S_i is empty. A well-formed collection of structures is normalized if it does not contain two different structures f(i1, i2) -> i and f(i1, i2) -> i' (with the same key but different set identifiers). It can be proved [6] that C is normalized if and only if the family (S_i)_{i ∈ I} is a partition of a sub-term complete set of terms E. In that case, the sets S_i are the equivalence classes of a congruence ≡ over E, as defined above, and we say that ≡ is the abstract denotation of C. In the following, unless stated otherwise, we assume that C is normalized.
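To make the representation concrete, here is a minimal Java sketch (our own illustration, not the implementation of [1, 6]) of how structures can be stored so that two structures with the same key are detected immediately, which is exactly the condition checked by normalization. All class and field names, and the choice of 0 as the identifier of null, are ours.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Objects;

    // A structure f(i1, i2) -> i, indexed by its key (f, i1, i2).
    final class Key {
        final String f; final int i1, i2;           // the key of the structure
        Key(String f, int i1, int i2) { this.f = f; this.i1 = i1; this.i2 = i2; }
        @Override public boolean equals(Object o) {
            if (!(o instanceof Key)) return false;
            Key k = (Key) o;
            return f.equals(k.f) && i1 == k.i1 && i2 == k.i2;
        }
        @Override public int hashCode() { return Objects.hash(f, i1, i2); }
    }

    class StructureCollection {
        final Map<Key, Integer> byKey = new HashMap<>(); // key -> set identifier
        int nextId = 1;                                  // 0 is reserved for null

        // Returns the identifier attached to the key f(i1, i2),
        // creating a fresh structure if none exists yet.
        int structure(String f, int i1, int i2) {
            return byKey.computeIfAbsent(new Key(f, i1, i2), k -> nextId++);
        }
    }

Indexing structures by their key is what makes the normalization test cheap: a collection is normalized precisely when the map never has to associate a second identifier with an existing key.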

There are four main operations to handle structures: toSet, substitute, normalize, and unify. The operation toSet adds a term t = f(t1, t2) to a collection of structures C. It returns the identifier i of the set of structures to which the term belongs. (More exactly, the term belongs to the set S_i denoted by i. For simplicity, we often use this shortcut.) It first recursively computes the identifiers i1 and i2 corresponding to t1 and t2. Then, if no structure f(i1, i2) -> i already exists for some i, such a structure is created with a novel identifier i. In any case, the identifier i (of the new or old structure f(i1, i2) -> i) is returned. We also use the following notation: Assume that a term t is represented in a collection of structures. Then, we use id(t) to denote the identifier returned by toSet applied to t. The value of id(t) is not defined, otherwise. The operation substitute takes as input two identifiers i and i', and a collection of structures C that uses i and i'. It is not assumed that C is normalized. It removes from C every structure that involves i (i.e., a structure of one of the three forms f(i, i2) -> i3 or f(i1, i) -> i3 or f(i1, i2) -> i, for some i1, i2, i3) and it adds to C every structure obtained by replacing i by i' in the previous one, if it is not already in C. The operation normalize normalizes a collection of structures by repeatedly applying the operation substitute until the collection is normalized. Assuming that it is not, it contains at least two structures f(i1, i2) -> i and f(i1, i2) -> i', with the same key; the operation substitute is thus applied to such i, i'. Then, the operation normalize is applied to the modified collection. For a precise semantic characterization of the effect of substitute and normalize, see [1, 6]. The last operation, called unify, takes as input two identifiers i and i', and a collection of structures C. It first applies the operation substitute to i, i', and C; afterwards, it applies the operation normalize to the resulting collection of structures. Semantically, an equivalence constraint is added between the terms of S_i and S_i', and the logical consequences of this constraint are propagated in the whole collection. More precisely, let ≡ be the abstract denotation of C before applying the operation. After applying it, the collection denotes the smallest congruence ≡' such that ≡ is included in ≡' and t ≡' t' (for some (and, in fact, any) t in S_i and t' in S_i'). See [1] for a detailed proof.
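As an illustration of toSet only (substitute, normalize, and unify are not shown), here is a short self-contained Java sketch; the Term and ToSetSketch classes, and the string encoding of keys, are ours and merely mimic the recursion described above.

    import java.util.HashMap;
    import java.util.Map;

    class ToSetSketch {
        // A binary term: either a constant (left == right == null) or f(left, right).
        static final class Term {
            final String f; final Term left, right;
            Term(String f, Term left, Term right) { this.f = f; this.left = left; this.right = right; }
            static Term constant(String name) { return new Term(name, null, null); }
        }

        final Map<String, Integer> byKey = new HashMap<>(); // "f(i1,i2)" -> identifier
        int nextId = 1;                                     // 0 stands for null

        // toSet returns the identifier of the set of terms to which t belongs.
        int toSet(Term t) {
            int i1 = (t.left  == null) ? 0 : toSet(t.left);
            int i2 = (t.right == null) ? 0 : toSet(t.right);
            String key = t.f + "(" + i1 + "," + i2 + ")";
            return byKey.computeIfAbsent(key, k -> nextId++); // reuse or create the structure
        }

        public static void main(String[] args) {
            ToSetSketch c = new ToSetSketch();
            Term a = Term.constant("a");
            Term aPlusA = new Term("+", a, a);
            // Prints the identifier of a + a; both occurrences of a get the same identifier.
            System.out.println(c.toSet(aPlusA));
        }
    }

Note how equal sub-terms are shared automatically: the recursion never creates two structures with the same key.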

The usefulness of collections of structures is greatly due to the fact that they are efficiently implementable (see [1, 6]). The implementation notably maintains a list idList of all set identifiers and three sets of lists, allowing us to go through all structures f(i1, i2) -> i for i, i1, or i2 fixed, respectively. Elements can be added or removed to/from those lists in constant time. The implementation also includes an incremental algorithm that maintains the size of minimal terms in each S_i (denoted by size(i)).
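The following Java sketch (our own simplification, which ignores the renaming of identifiers performed by unify) shows the kind of incremental bookkeeping involved: when a structure is added, the minimal size of its identifier can only decrease, and a decrease must be propagated to the structures that use that identifier.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class SizeSketch {
        static final class Structure {
            final int i1, i2, id;    // a structure f(i1, i2) -> id
            Structure(int i1, int i2, int id) { this.i1 = i1; this.i2 = i2; this.id = id; }
        }

        final Map<Integer, Integer> size = new HashMap<>();            // identifier -> minimal size
        final Map<Integer, List<Structure>> usedBy = new HashMap<>();  // identifier -> structures using it

        int sizeOf(int id) {
            if (id == 0) return 0;                   // 0 stands for null
            return size.getOrDefault(id, 1);         // not yet sized: treated as a constant
        }

        void addStructure(Structure s) {
            usedBy.computeIfAbsent(s.i1, k -> new ArrayList<>()).add(s);
            usedBy.computeIfAbsent(s.i2, k -> new ArrayList<>()).add(s);
            propagate(s);
        }

        // size(id) can only decrease; a decrease is pushed to the users of id.
        private void propagate(Structure s) {
            int candidate = 1 + sizeOf(s.i1) + sizeOf(s.i2);
            if (candidate < size.getOrDefault(s.id, Integer.MAX_VALUE)) {
                size.put(s.id, candidate);
                for (Structure user : usedBy.getOrDefault(s.id, Collections.emptyList())) {
                    propagate(user);
                }
            }
        }
    }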

Previous applications of collections of structures have been described in [1, 5, 6]. For instance, it is shown in [6] that they can be used to elegantly and efficiently solve the word problem [27] in a theory defined by a finite set of ground equations (see [9, 15, 19, 24]). This amounts to solving all equations t1 = t2 in turn, by applying the operation toSet to t1 and t2, and applying the operation unify to the returned identifiers i1 and i2. Then, to check whether two terms are equal, we only have to apply the operation toSet to them and to check whether the returned identifiers are equal. The solution to another, more difficult, problem has first been described in [5], and proven correct in [1]: Collections of structures can be used to compute the (representation of a) congruence defined by a finite set of non-ground equations (also called axioms) and a finite set of constants, conditionally to the fact that the number of equivalence classes of the congruence is finite (and also, in practice, not too big). The algorithm that solves this problem uses valuations, which are functions from a finite set of variables to the set of all identifiers. Note that we use the letters x, y, z for variables. Other letters such as a, b, … are used for constants. We generalize the operation toSet to non-ground terms with an additional argument, namely a valuation whose domain contains the variables of the term. The operation works recursively as before except for variables, in which case the value of the valuation for the variable is returned. Similarly, we extend the solving of equations as follows: We apply the operation toSet to the left and right sides of the axiom and to some valuation, which returns two identifiers i1 and i2. Then, we apply the operation unify to i1 and i2. In the following, we say that we apply the valuation to the axiom. The effect is strictly equivalent to choosing two terms t1 and t2, represented by i1 and i2, and solving the ground equation t1 = t2. The algorithm that computes the congruence starts from a collection of structures representing only the constants. Then it fairly generates all valuations that use the variables of the equations and the identifiers of the collection of structures, and applies them to the relevant axioms. New identifiers introduced by axiom applications are used in turn to generate new valuations. The algorithm stops when no new valuation can be generated. We call this algorithm the bottom-up algorithm. The work presented in the rest of this paper uses the same concepts and operations.
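For the word problem, the procedure reduces to a few calls of toSet and unify. The sketch below codes it in Java against a hypothetical interface whose name and signatures are ours, not the actual API of the implementation.

    // Hypothetical interface mirroring the two operations used below.
    interface CollectionOfStructures {
        int toSet(String term);      // identifier of the class of the term
        void unify(int i1, int i2);  // merge two classes and propagate
    }

    final class WordProblemSketch {
        // Decides t1 = t2 in the theory defined by the ground equations lhs[k] = rhs[k].
        static boolean equal(CollectionOfStructures c,
                             String[] lhs, String[] rhs, String t1, String t2) {
            for (int k = 0; k < lhs.length; k++) {
                c.unify(c.toSet(lhs[k]), c.toSet(rhs[k]));  // solve each equation in turn
            }
            return c.toSet(t1) == c.toSet(t2);              // equal iff same identifier
        }
    }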

3 Related Work

Collections of structures are strongly related to algorithms to compute the congruence closure of a relation over terms (see, e.g., [2, 3, 9, 11, 19, 20, 24]). A main difference is that these previous methods actually work on terms, not on structures as we do, and they use a Union-Find data structure [10, 13] to record equivalence between terms. Terms are often implemented as DAGs [3]. Collections of structures are, in some sense, a generalization of DAGs: When a collection of structures represents a single term, it is represented as a DAG. But, in general, an identifier represents several, possibly infinitely many, terms. In fact, most congruence closure methods use so-called signatures to help detect equivalence of terms. Our structures are in fact equivalent to signatures: We work with the signatures only and get rid of individual terms. See [1, 6] for a much more complete comparison.

Computing a congruence closure is often used to build a set of (ground) rewrite rules that can be used to simplify terms in theorem provers [11, 18, 20, 25]. Collections of structures can also be viewed as a confluent set of rewrite rules (when the collection is normalized). Our identifiers are similar to the new constants used in [11, 20] and our operation toSet simply applies the rewrite rules to a term. However, our “set of rewrite rules” is not exactly the same as in [11, 20] because we do not use the trick of considering identifiers as new constants.

Collections of structures can also be viewed as a simple form of tree automata [7]. The main difference lies in the operations defined on them. The implementation of collections of structures is also specific.

Our approach to expression simplification can be viewed as an instance of term indexing [22, 23]: in our case, the index is the collection of structures and the set of indexed terms is its denotation. The relation between an indexed term t and a query q is that t is a simplification of q. This relation is very different in nature from relations such as generalization or instantiation that are usually considered in classical term indexing, but some methods from this area also apply to equational theories such as associative-commutative theories (see [23]).

In the rest of this paper, we illustrate our method by applying it to the simplification of boolean expressions. A lot of work has been done in that area (see e.g., [8, 12, 17, 21]) but it mainly concentrates on two-level formulas for logic circuit design. BDDs [4] can also be used to represent large boolean formulas compactly, but such a representation is not simple in our sense, i.e., clear and readable. Our goal is not to compete with such methods. We use the boolean calculus to explain our method mainly because it allows us to do so very clearly and nicely. Simplifying boolean formulas is also related to SAT solving [14]. Again, our goal is different since SAT solving is a decision problem. However, it could possibly be interesting to simplify input formulas of SAT solvers before they are transformed into equisatisfiable formulas [3]. Moreover, application of our method to the simplification of boolean expressions naturally induces effects similar to SAT solving techniques, such as unit clause elimination.

4 A Method to Simplify Expressions

4.1 Principle of the method

The starting idea for our method is as follows: Given an expression to simplify, we could apply all axioms to all its sub-expressions (including itself) in order to build a (possibly large) set of expressions that are equivalent to the original expression. Then, we could either choose a minimal expression within that set, or iterate the process of applying axioms to the newly created sub-expressions. Moreover, having chosen a minimal expression, we can restart the whole process with this expression. We could continue this way as long as we want, and stop when the current minimal expression is found satisfactory.

However, in general, the above suggested method is not actually applicable, for time and space reasons: Generating all equivalent sub-expressions and keeping them in memory is too inefficient. This is precisely the reason why the notion of a collection of structures was proposed in [1, 5, 6]: Collections of structures allow us to manipulate sets of equivalent terms globally, without enumerating them. For boolean expressions, using suitable axioms, the bottom-up algorithm explained at the end of Section 2 computes a representation of all possible expressions (with constants) and, in a sense, it solves all instances of the simplification problem, since any expression can be instantly simplified by applying the toSet operation to it (in the context of the collection of structures representing the congruence). Unfortunately, the number of structures needed for the collection grows doubly exponentially with the number n of constants. Thus, the method works only for very small values of n. For simplifying expressions using more constants, our idea is to adapt the bottom-up algorithm by focusing on the expression to be simplified. We limit the generation of valuations and the application of axioms to cases where they are relevant for the task of simplifying this particular expression.

Remember that, in the case of the bottom-up algorithm, we apply a valuation to an axiom as follows: We apply the operation toSet to the left and right sides of the axiom and to some valuation, which returns two identifiers i1 and i2. Then, we apply the operation unify to i1 and i2. To focus on a particular expression, we apply the operation unify only if at least one of the two identifiers is in the collection of structures beforehand. This ensures that the collection of structures only represents expressions equivalent to the expression to be simplified (and, of course, sub-expressions of these expressions). For valuations, we would ideally only generate useful valuations, i.e., valuations that add terms or equality constraints to the collection of structures when applied to some axioms. We first attempted to compute such valuations by implementing a form of pattern matching between the terms in the axioms and the structures of the collection. It happens that this method is impracticable (i.e., both complicated and inefficient) because identifiers and structures of a collection potentially represent many terms, not a single one, and because there can be a lot of redundancy induced by the matching operation: many returned valuations can be the same. Fortunately, we have found a way to generate interesting valuations in a much simpler way: We consider all identifiers i of the collection, in turn, in order of appearance in the collection. We execute all axioms involving a single variable x with respect to the valuation {x -> i}. We run through all structures f(i1, i2) -> i (for this particular i) and we execute all axioms involving exactly two variables x, y with respect to the valuation {x -> i1, y -> i2}. For the axioms involving three variables x, y, z, we additionally run through the structures defining i1 or i2, say g(i3, i4) -> i1 (resp. g(i3, i4) -> i2), and execute the axioms with respect to the valuation {x -> i3, y -> i4, z -> i2} (resp. {x -> i1, y -> i3, z -> i4}). Note that we assume that axioms use at most three variables. More elaborate valuations should be used for more complex axioms (see Section 4.3).
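The following Java sketch shows how such valuations (of type 0, in the terminology of Section 4.3) can be enumerated for an identifier i. All names are ours, valuations are encoded as arrays assigning identifiers to the variables x, y, z in that order, and the two three-variable forms reflect our reading of the rule stated above.

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class ValuationSketch {
        // A structure f(i1, i2) -> i; the function symbol is irrelevant here.
        static final class Structure {
            final int i1, i2;
            Structure(int i1, int i2) { this.i1 = i1; this.i2 = i2; }
        }

        // structuresOf.get(i) lists the structures whose identifier is i.
        final Map<Integer, List<Structure>> structuresOf = new HashMap<>();

        // Valuations are arrays assigning identifiers to x, y, z in that order.
        List<int[]> valuationsFor(int i) {
            List<int[]> vals = new ArrayList<>();
            vals.add(new int[] { i });                                        // {x -> i}
            for (Structure s : structuresOf.getOrDefault(i, Collections.emptyList())) {
                vals.add(new int[] { s.i1, s.i2 });                           // {x -> i1, y -> i2}
                for (Structure t : structuresOf.getOrDefault(s.i1, Collections.emptyList()))
                    vals.add(new int[] { t.i1, t.i2, s.i2 });                 // descend into i1
                for (Structure t : structuresOf.getOrDefault(s.i2, Collections.emptyList()))
                    vals.add(new int[] { s.i1, t.i1, t.i2 });                 // descend into i2
            }
            return vals;
        }
    }

Each generated valuation is then applied to every axiom of matching arity, and the resulting unify call is performed only under the condition stated at the beginning of this paragraph.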

The reader may wonder in what sense generating valuations and applying them to axioms, as explained above, can actually simplify an expression. In fact, it just effectuates, on collections of structures, the kind of treatment we suggested, on sets of terms, at the beginning of this section. We can now make it clearer with an example.

Figure 1: Axioms for Simplifying Boolean Expressions
Example 1

Let us apply our method to simplify the boolean expression a + b + !b + a using the axioms of Figure 1. Note that we write !b for the negation of b, instead of the notation used in Figure 1. In order to show very concretely what happens, we use the actual numerical values of the identifiers. So, after creating the representation of a + b + !b + a in the collection of structures (i.e., after applying the operation toSet to it), we have the following values for the identifiers:

In other words, the following structures exist:

Now, we apply the method: we go through the list of identifiers, compute the valuations related to each identifier, and apply the valuations to the axioms. Consider the moment when identifier is taken into account. The valuation is generated because the structures and exist. Thus, the operation toSet is applied to the term , which returns the identifier , thanks to the very same structures. So the operation toSet is also applied to the term . We first get: , because the axiom has been applied to the valuation , previously; so, the structure exists. Then, we get , because the structure exists, due to a previous application of the valuation to the axiom . To conclude the axiom application, the two identifiers are unified: First, one is replaced by the other in the collection of structures, changing the corresponding structures. But, at this point, the collection of structures also contains the structure obtained by a previous application of the axiom . Therefore, the operation normalize unifies the two remaining identifiers. Since one of them was the identifier of expression a + b + !b + a, this identifier is now set to the identifier of 1. In other words, the expression a + b + !b + a has been simplified to 1. And this has been achieved by a single axiom application (which takes advantage of a lot of work done before, however).

Observe that the axiom application has used the constant 1 (in fact, its identifier) three times, even though the identifier of 1 is not used by the valuation val. Moreover, the result of unifying the two identifiers is immediately propagated in the collection of structures, so that no additional computation remains needed. We say that this axiom application uses 1.

More generally, we say that an axiom application uses 0 or 1 if one or both of their identifiers appear at some point of the computation of this axiom application or in the valuation applied to the axiom. This kind of axiom application plays a major role in the simplification process (see Section 5). (Another example, where 1 is used by the valuation, is given in the optional appendix.)

 Create an empty collection of structures; set mainId to toSet(expr);
 set previousSize to size(mainId); set count to 0;

 while count < maxCount and size(mainId) > expectedSize
     set reCount to 0;
     while reCount < maxReCount and the collection of structures is not full
         reset idList; set timeLimit to current time; set subCount to 0;
         while idList is not fully traveled and the collection of structures is not full and subCount < maxSubCount and size(mainId) > expectedSize
             let i be the next identifier in idList;
             compute the valuations related to i;
             while the set of valuations is not empty
                 pick a valuation val in the set;
                 apply val to all axioms that have the same arity as val;
             if time(i) > timeLimit
                 add 1 to subCount;
                 set timeLimit to current time;
         add 1 to reCount;
     call garbage collector;
     if size(mainId) = previousSize
         add 1 to count;
     if size(mainId) < previousSize
         reset count to 0;
         set previousSize to size(mainId);
Figure 2: The algorithm in principle

We are now almost in position to present a first simple algorithm that implements our method. The only remaining issues are to decide what to do when the list of identifiers initially present in the collection of structures is completely visited, and to decide what to do when the memory assigned to the collection of structures is full. For the first issue, we may choose to continue by visiting the identifiers newly created up to now, or by restarting an iteration from the beginning. The first strategy, in some sense, amounts to performing a depth-first search in the set of newly created terms, while the second strategy resembles a breadth-first search. No strategy is a priori ideal. Therefore, the algorithm uses two counters reCount and subCount to allow a compromise between the two: the first strategy is used until subCount reaches a maximum value maxSubCount; then, the computation is restarted at the beginning of the identifier list (idList) at most maxReCount times. As for the second issue, the collection of structures may become full at any moment when processing the list of identifiers. At that point, a form of garbage collection is applied. The main contract that garbage collection must respect is to keep enough structures for representing at least one of the minimal expressions represented at call time. The counter reCount is also needed to ensure termination when a fixpoint (stable) collection of structures is obtained before a garbage collection call is needed. A simple version of the algorithm is depicted in Figure 2. The expression time(i) returns the value of the clock maintained by the collection of structures at the moment when the first structure with identifier i was created. When two identifiers are unified, we always replace the younger by the older. The variable idList maintains the current list of identifiers in the collection of structures, sorted in chronological order. (Renamed identifiers are automatically removed from the list by the unify operation. Similarly, garbage collection cleans up the list, keeping only identifiers remaining in the collection of structures.) Finally, termination is ensured by counting how many times an iteration has been performed without making the size of the smallest expression decrease. One can also specify an expectedSize for the minimal expression to possibly avoid useless iterations. The algorithm starts by creating an empty collection of structures, applying the operation toSet to the expression expr to be simplified, and setting the variable mainId to the value of its identifier.

4.2 Difficult issues, solutions and workarounds

Experiments with the algorithm of Figure 2 have revealed several weaknesses. A major problem arises when an iteration, i.e., any execution of the body of the main loop of the algorithm, up to the garbage collector call, is unable to consider each and all of the identifiers existing at the beginning of the iteration. This leads to what we call an early-fixpoint: the algorithm stops after simplifying some sub-expressions well but ignoring completely parts of the whole collection of structures. The major cause of the problem is a kind of unfairness: new structures are generated by applications of the axioms, and they can possibly be taken into account immediately for generating new valuations. We solve this issue by only considering structures that have been created before the current sub-iteration has started, i.e., before the last time that the variable timeLimit has been changed. It is also useful to limit the size of the identifiers in the valuations, as well as the size of the structures created by axiom applications. The general rule is that a structure f(i1, i2) -> i is acceptable for generating a valuation only if the sizes size(i1) and size(i2) remain below a given bound. Similarly, such a structure can be created only if its own size remains comparable to size(i'), where i' is the existing identifier which is about to be unified with it. But there are still situations where the proposed improvements are not powerful enough. Therefore, we have introduced additional workarounds that allow the algorithm to consider more identifiers, and to stop within predictable time. The ultimate and most drastic of these consists of adding a time-out to each iteration.

4.3 Variants of the method

There are still some decisions underlying our algorithm that are not completely made explicit. Moreover, some variants are interesting to investigate.


  • The kind of valuations we have proposed may not be powerful enough to ensure that all interesting axiom applications are performed. So, we have tried three other kinds of valuations that we identify by a type number between 0 and 3. The ones we have used up to now are given type 0. Valuations of type 3 include all combinations of existing identifiers. They are generated as follows: each time an identifier i is taken into account by the algorithm (see Figure 2), all valuations {x -> i}, {x -> i, y -> j}, and {x -> i, y -> j, z -> k}, with j and k existing identifiers, are considered for axiom application. This type of valuations is more complete, even exhaustive when multiple axiom application is used (see below), but it more often provokes the early-fixpoint phenomenon. Valuations of types 1 and 2 are somehow intermediate between 0 and 3. They allow more combinations of identifiers, but these must stay “close” to i in the collection of structures.

  • When a valuation is applied to an axiom, we allow either a strict (unique) application of this particular valuation or multiple application by all valuations obtained by exchanging identifiers between variables in the given valuation. In case of multiple application, we normalize the valuations to avoid redundancy (a sketch of this permutation scheme is given after this list).

  • As we have seen, we normally use conditional axiom application. Alternatively, we may choose to apply the axioms freely, without any precondition. Structures may be created that are not related to the expression to simplify, at this time. But further axiom applications may later unify their identifiers with an identifier that is actually used to represent the current minimal expression. Thus, this gives us a chance to create more interesting structures, in the long run. We call this the bottom-up application of axioms. This method requires us to apply another kind of garbage collection when the memory is full: We only remove structures that are not reachable from the mainId identifier.

  • Many logically equivalent axiom sets can be chosen for any equational theory. When used by a version of our simplification algorithm, their practical value can be very different. Adding more axioms can make the simplification process faster sometimes, but executing more axioms takes more time and it can create more structures, leading to an earlier garbage collector call.

  • We have seen in Example 1 that axioms involving a single variable are especially useful because they create structures containing the identifiers id(0) or id(1) that are later exploited by other axioms. This suggests that, at each iteration, or sub-iteration, we first apply all axioms involving a single variable to all identifiers to which they have not already been applied. We call this the early-application of these axioms. Let us slightly change Example 1 by considering the expression a + b + !b + c. Normal application of the axioms simplifies it to 1 + c, not to 1, because the identifier id(c) has not yet been processed by the algorithm. If we use early-application, the structure recording that 1 + c is equal to 1 already exists, so that the expression is simplified to 1. In fact, this is a general rule: If early-application is used, then a current minimal expression never contains one of the two symbols 0 or 1, except if it is equal to one of them.
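As announced in the second item above, here is a Java sketch of multiple application: starting from one valuation, all valuations obtained by permuting the identifiers between the variables are generated, and a set is used to discard duplicates (our stand-in for the normalization of valuations mentioned there; all names are ours).

    import java.util.ArrayList;
    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    class MultipleApplicationSketch {
        // Returns every distinct assignment of the given identifiers to the
        // axiom's variables (x, y, z, ...) obtained by permuting them.
        static Set<List<Integer>> permutedValuations(int[] ids) {
            Set<List<Integer>> result = new LinkedHashSet<>();  // the set removes duplicates
            permute(ids, 0, result);
            return result;
        }

        private static void permute(int[] ids, int k, Set<List<Integer>> out) {
            if (k == ids.length) {
                List<Integer> v = new ArrayList<>();
                for (int id : ids) v.add(id);
                out.add(v);
                return;
            }
            for (int j = k; j < ids.length; j++) {
                swap(ids, k, j);
                permute(ids, k + 1, out);
                swap(ids, k, j);
            }
        }

        private static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }
    }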


5 Experimental Evaluation

We now give an experimental evaluation of our simplification method, applied to boolean expressions. First, we provide statistics about its application to a set of randomly generated expressions. Second, we analyze the results of 13 variants of the algorithm for 7 particular expressions. Third, we delve deeper into the execution of the algorithm on specific expressions.

Table 1: Statistics on simplifying expressions

In the first part of this evaluation, we use the default version of our algorithm (see Table 2). We have randomly generated five sets of 100 boolean expressions, built of the symbols ’.’, ’+’, and ’!’, and of 3, 5, 7, 9, or 16 different letters (i.e., constants). Each expression has a size equal to 800. The size of an expression is the number of symbols needed to write it in Polish notation. We have applied the algorithm to every such expression, and collected the size of its simplified version as well as the time needed to compute it. Statistics on this experiment are given in Table 1. In the first column, we give some possible sizes for the simplified expressions. Corresponding to each such size, we provide in each group of two columns the number of simplified expressions that have at most this size and the average time needed to compute them (in seconds). The last line depicts the average size of a simplified expression and the average computation time. Among other things, we see that many expressions are simplified to a single symbol (most often 0 or 1), but their number decreases with the number of letters in the original expression. Also, the simplification time increases with the size of the simplified expression. Note that the reported times are not the actual times spent by the algorithm until termination (except in the case where the minimal size is 1): When the final size is reached, the algorithm still performs twenty iterations – hoping for further simplifications – before it stops.

We now turn to a comparison of different variants of our simplification algorithm. For this comparison, we have selected five typical expressions of size 800, using 3, 5, 7, 9, and 16 letters, as well as two “very big” expressions of size 100,000, using 3 and 9 letters. (For the reviewers: See the optional appendix for more information.) The variants of our algorithm are described in Table 2.

Table 2: Description of 13 variants of the algorithm

The first variant, called “default”, is the version used for the first experiment. It proceeds exactly as explained in Section 4.1 and Section 4.2: It uses the standard way to generate valuations (type 0), and multiple application of the axioms to valuations; it limits the number of sub-iterations to 6, uses conditional axiom application, uses the standard set of axioms, and applies axioms to valuations in the normal way. The other variants use one or more of the ideas explained in Section 4.3: The second column indicates the kind of valuations that are used. A ‘U’ in the third column means that application of valuations to axioms is strict (unique: no permutation is made inside valuations); otherwise, application is multiple. A ‘1’ in the fourth column says that an iteration is limited to a single sub-iteration, but it can be repeated up to maxReCount times (maxReCount being set accordingly). ‘BU’ in the fifth column means that bottom-up application of the axioms is used. When ‘A+’ is put in the sixth column, the set of axioms is extended with the four non-standard axioms at the bottom right of Figure 1. Finally, mentioning ‘1F’ in the seventh column tells us that, in every sub-iteration, axioms using only one variable are all first executed for all identifiers (to which they have not been previously applied) before the axioms using more than one variable are taken into account. Every selected variant differs from the default algorithm by only one feature, except the three variants that have been chosen among the best variants using valuations of type 1, 2, or 3, respectively. One additional variant is particularly fast in a single test case. Results are given in Table 3. (Best results are in bold.) We can make the following comments.

Table 3: Comparison of 13 variants of the algorithm

  • Except for three test cases, all variants give precise results (size of minimal expressions) but possibly very different execution times, which shows that it is difficult to find a unique best parametrization for the algorithm. The default algorithm always gives the most precise results and it is also the fastest in four cases. However, it is far outperformed by one of the variants on one test case, and largely outperformed by another variant on a second test case, although it gives a slightly more precise result there.

  • When the number of letters is small, it is sometimes better to limit the number of sub-iterations, or to use unique instead of multiple axiom application. Moreover, in that case, all variants give the same minimal size, suggesting that minimizing expressions remains easy. For 9 letters and beyond, it may be very difficult to get an expression of minimal size from an “almost minimal” one. For instance, the expression b + (g + a)d + i + !(hfe(d + ag!c)) is returned by one of the variants for the 9-letter test case. It is quite difficult to transform it into an expression of minimal size using the axioms of Figure 1. (The reader should try it.)

  • The variant using bottom-up application (BU) suggests that applying axioms in the bottom-up way (i.e., freely) is not useful in general, since it increases the execution time by up to an order of magnitude.

  • The variant using the extended set of axioms (A+) shows that adding axioms does not always decrease the execution time; it does so only in two cases. Using the standard axioms is reasonable, in general.

  • The time efficiency of the variants using valuations of type 1 and 2 is rather disappointing. Their results are precise, however, suggesting that such valuations could be useful for some applications using other equational axioms.

  • The results show that valuations of type 3 (generated in a pure bottom-up way) are unable to simplify one of the very big expressions: many identifiers of the initial expression are never taken into account and an early-fixpoint collection of structures is obtained. The algorithm terminates in reasonable time because of the 60-second time-out applied at each iteration. Otherwise, termination would take a time that we cannot even estimate. (The first iteration takes more than one day.) Nevertheless, the method works reasonably well for smaller expressions. Since this method attempts to generate all possible valuations, these results suggest that it could be used in contexts where valuations of type 0 are not effective. Note that it is always better, for efficiency, to use the algorithm 1F with valuations of type 3. This is not the case for valuations of type 0, as shown by the results for var/5.

  • Finally, we observe that the size of the expression is not the major impediment for the simplification task: the ratios of the execution times for the two very big expressions to those for the corresponding expressions of size 800 are much smaller than the ratio of their sizes (125). Actually, for valuations of type 0, 1, and 2, large expressions provide many more valuations than small ones, which makes more simplifications possible.

Table 4: Execution of the default algorithm on expression E9

We now delve deeper into the execution of the algorithms. We give statistics on the execution of the default algorithm for expression E9 in Table 4. Each line of the table gives information about an iteration of the algorithm. For instance, the first column of the first line gives the execution time of the first iteration (in seconds). In the second column, we see that the size of a smallest expression at the end of the first iteration is 143. The column nval gives the total number of valuations generated during an iteration, while the column napl gives the number of useful axiom applications. We say that an axiom application is useful when it modifies the collection of structures. The next column gives the ratio of napl to nval. The next four columns help us understand how the simplification takes place. The values in columns ds01 and nd01 respectively indicate the total reduction in size achieved during the iteration by axiom applications that use 0 or 1 (remember Example 1), and the number of times such an application has made the size of the minimal expression decrease. Columns ods and nods give the corresponding figures for the other axiom applications. Column nid provides the number of times that an identifier is selected by the algorithm (at the ninth line of Figure 2). Two further columns respectively give the number of identifiers and the number of structures in the collection at the beginning of an iteration; the last two columns give the corresponding numbers at the end of the iteration, just before the garbage collector is called.

We observe that the first iteration provides most of the simplification. Nevertheless, a lot of work is still to be done, since an expression of 143 symbols certainly is much less simple than the final simplified expression. We also see how important the axiom applications that use 0 or 1 are. At the first iteration, they provide most of the size decrease. Moreover, the average reduction in size for these axiom applications is much larger than for the other (size reducing) axiom applications. During the next iterations, the decrease slows down and a plateau appears, but during the next four iterations the decrease becomes substantial anew, mostly due again to axiom applications using 0 or 1. Afterwards, we have drawn a horizontal line in the table to stress the fact that these axiom applications are no longer useful, i.e., do not happen anymore. Simplification now becomes a purely combinatorial issue. A long plateau is traversed before the last two iterations finish the simplification. Some iterations in the plateau take significantly more time than the “productive” iterations: the algorithm is working very hard for apparently nothing. Nevertheless, the way it “shakes” the collection of structures leads to two final simplifications.

6 Conclusion and Future Work

We have presented a method to construct algorithms to simplify expressions based on a set of equations (or axioms). We have shown that many parameters can change the efficiency and precision of such algorithms, and we have proposed a detailed experimental evaluation to show how they work. Our main conclusion is that the method can be useful in practice, but it must be carefully applied.

Many directions for future research are worth considering. We plan to apply our method to simplify various kinds of expressions, such as regular expressions (see [26], for a traditional approach) or usual algebraic expressions, for instance. To reach such goals, it will be useful to extend the method to more general sets of axioms such as Horn clauses [16, 27]. Our current method naturally extends to such axiomatizations, since we already use the conditional application of axioms. The same kind of machinery can be used for triggering Horn clauses or, more generally, conditional axioms (i.e, inference rules). We may also investigate using our method to simplify formulas in theorem provers.

Acknowledgements

The author warmly thanks Pierre Flener, Gauthier van den Hove, and José Vander Meulen for their useful comments about drafts of this paper.

References

  • [1] Mêton Mêton Atindehou. Une structure de données pour représenter de grands ensembles de termes égaux : application à une méthode générale de simplification d’expressions. PhD thesis, UCL – SST/ICTM/INGI – Pôle en ingénierie informatique, 2018.
  • [2] Haniel Barbosa, Pascal Fontaine, and Andrew Reynolds. Congruence closure with free variables. In Axel Legay and Tiziana Margaria, editors, Tools and Algorithms for the Construction and Analysis of Systems - 23rd International Conference, TACAS 2017, Held as Part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2017, Uppsala, Sweden, April 22-29, 2017, Proceedings, Part II, volume 10206 of Lecture Notes in Computer Science, pages 214–230, 2017.
  • [3] Aaron R. Bradley and Zohar Manna. The calculus of computation - decision procedures with applications to verification. Springer, 2007.
  • [4] Randal E. Bryant. Symbolic boolean manipulation with ordered binary-decision diagrams. ACM Comput. Surv., 24(3):293–318, 1992.
  • [5] Baudouin Le Charlier and Mêton Mêton Atindehou. A method to simplify expressions: Intuition and preliminary experimental results. In Boris Konev, Stephan Schulz, and Laurent Simon, editors, IWIL@LPAR 2015, 11th International Workshop on the Implementation of Logics, Suva, Fiji, November 23, 2015, volume 40 of EPiC Series in Computing, pages 37–51. EasyChair, 2015.
  • [6] Baudouin Le Charlier and Mêton Mêton Atindehou. A data structure to handle large sets of equal terms. In 7th International Symposium on Symbolic Computation in Software Science, SCSS 2016, Tokyo, Japan, March 28-31, 2016, pages 81–94, 2016.
  • [7] H. Comon, M. Dauchet, R. Gilleron, C. Löding, F. Jacquemard, D. Lugiez, S. Tison, and M. Tommasi. Tree automata techniques and applications. Available on: http://www.grappa.univ-lille3.fr/tata, 2007. release October, 12th 2007.
  • [8] Olivier Coudert. Two-level logic minimization: an overview. Integration, 17(2):97–140, 1994.
  • [9] Peter J. Downey, Ravi Sethi, and Robert Endre Tarjan. Variations on the common subexpression problem. J. ACM, 27(4):758–771, October 1980.
  • [10] Bernard A. Galler and Michael J. Fischer. An improved equivalence algorithm. Commun. ACM, 7(5):301–303, 1964.
  • [11] Deepak Kapur. Shostak’s congruence closure as completion. In Rewriting Techniques and Applications, pages 23–37. Springer, 1997.
  • [12] Maurice Karnaugh. The map method for synthesis of combinational logic circuits. American Institute of Electrical Engineers, Part I: Communication and Electronics, Transactions of the, 72(5):593–599, 1953.
  • [13] Donald E. Knuth. The Art of Computer Programming, Volume 1 (3rd Ed.): Fundamental Algorithms. Addison Wesley Longman Publishing Co., Inc., Redwood City, CA, USA, 1997.
  • [14] Donald E. Knuth. Satisfiability and the art of computer programming. In Theory and Applications of Satisfiability Testing - SAT 2012 - 15th International Conference, Trento, Italy, June 17-20, 2012. Proceedings, page 15, 2012.
  • [15] Dexter Kozen. Complexity of finitely presented algebras. In Proceedings of the 9th Annual ACM Symposium on Theory of Computing, May 4-6, 1977, Boulder, Colorado, USA, pages 164–177, 1977.
  • [16] Dexter Kozen. A completeness theorem for kleene algebras and the algebra of regular events. In Proceedings of the Sixth Annual Symposium on Logic in Computer Science (LICS ’91), Amsterdam, The Netherlands, July 15-18, 1991, pages 214–225, 1991.
  • [17] E. J. McCluskey. Minimization of boolean functions. Bell System Technical Journal, 35(6):1417–1444, 1956.
  • [18] Greg Nelson and Derek C. Oppen. Simplification by cooperating decision procedures. ACM Trans. Program. Lang. Syst., 1(2):245–257, 1979.
  • [19] Greg Nelson and Derek C. Oppen. Fast decision procedures based on congruence closure. J. ACM, 27(2):356–364, 1980.
  • [20] Robert Nieuwenhuis and Albert Oliveras. Proof-Producing Congruence Closure. In Giesl J, editor, Proc. 16th International Conference on Rewriting Techniques and Applications (RTA-2004), Nara, Japan, number 3467 in LNCS. Springer, 2005.
  • [21] Willard V Quine. A way to simplify truth functions. American mathematical monthly, pages 627–631, 1955.
  • [22] Stephan Schulz. System Description: E 1.8. In Proc. of the 19th LPAR, Stellenbosch, volume 8312 of LNCS, pages 735–743. Springer, 2013.
  • [23] R. Sekar, I. V. Ramakrishnan, and Andrei Voronkov. Term indexing. In Handbook of Automated Reasoning, pages 1853–1964, Amsterdam, The Netherlands, 2001.
  • [24] Robert E. Shostak. An algorithm for reasoning about equality. Commun. ACM, 21(7):583–585, 1978.
  • [25] Wayne Snyder. A fast algorithm for generating reduced ground rewriting systems from a set of ground equations. J. Symb. Comput., 15(4):415–450, 1993.
  • [26] Alley Stoughton. Formal Language Theory: Integrating Experimentation and Proof. Cambridge University Press, 2016.
  • [27] W. Wechler. Universal algebra for computer scientists. EATCS monographs on theoretical computer science. Springer-Verlag, 1992.

Appendix 0.A Appendix: optional material for the reviewers

0.a.1 Goal of this appendix

This appendix is provided to help reviewers assess the contribution of this paper. More examples are given as well as more information about the expressions used in the experimental evaluation (Section 5). More information can be found at
https://www.dropbox.com/sh/infjx6a9x7qc7qe/AABFygzGzWcTSIsSQNd0TdAOa?dl=0. At this web address, the interested reviewers can also find the source code of the Java implementation of our algorithms as well as more test data. All program runs presented in this paper are executed on a MacBook Pro 2.7GHz (Intel Core i5, 8Gb RAM) using Mac OS X 10.12.6. The programs are written in Java, and compiled and executed using Java version 1.7.055. Timings are measured using the method System.nanoTime().

0.a.2 Example expressions

We unveil the seven expressions used in Section 5. Figure 3 shows the most simplified versions of the expressions, while Figure 4 provides the first five original expressions to be simplified; for the two very big expressions, two files are available at the web address given in Section 0.A.1.


Figure 3: Seven simplified expressions

Figure 4: Five big expressions

0.a.3 Another example of how simplification takes place

Example 2

Let us simplify the boolean expression a + ab, using the axioms of Figure 1. By hand, we can simplify it as follows:

a + ab = a1 + ab
= a(1 + b)
= a1
= a

With our method, using valuations, the simplification works as follows: Remember that we generate valuations by enumerating identifiers in chronological order. We consider, in turn: id(a), id(b), id(ab), id(a + ab). The first identifier gives rise, among others, to the valuation {x -> id(a)}, which is applied, among others, to the axiom x1 = x. This application creates the new structure .(id(a), id(1)) -> id(a). Later on, the identifier id(b) is considered, the valuation {x -> id(b)} is generated, and it is applied to the axiom 1 + x = 1, which creates the new structure +(id(1), id(b)) -> id(1). Still later, the identifier id(ab) is taken into account. At this step, the valuation val = {x -> id(a), y -> id(1), z -> id(b)} is generated because the two structures .(id(a), id(b)) -> id(ab) and .(id(a), id(1)) -> id(a) currently exist. This valuation is thus applied, among others, to the axiom xy + xz = x(y + z). Therefore, the operation toSet is applied to xy + xz and val, which returns id(a + ab) because the structures .(id(a), id(1)) -> id(a) and +(id(a), id(ab)) -> id(a + ab) exist. Symmetrically, the operation toSet is applied to x(y + z) and to val, which returns id(a) because the structures +(id(1), id(b)) -> id(1) and .(id(a), id(1)) -> id(a) exist. Finally, the application of the axiom unifies the identifiers id(a + ab) and id(a), which renames id(a + ab) into id(a), everywhere in the collection of structures. This is the way a + ab is simplified into a, according to our method. But note also that the collection of structures now contains the structure +(id(a), id(ab)) -> id(a). So the information that a + ab = a is memorized.

Note 1

In practice, the algorithm implementing our method uses a global variable mainId that remains equal to the identifier of the set of structures that contains the current minimal expression. The value of mainId is changed each time it is renamed. In the example above, we have, at the beginning, mainId = id(a + ab), and after application of the axiom, we have mainId = id(a).

In fact, after application of the axiom, we have id(a + ab) = id(a) (for the current collection of structures). The algorithm may replace (the old value of) id(a + ab) by (the old value of) id(a), or conversely. But, in practice, we always rename the younger identifier into the older one. This is most efficient since it prevents the algorithm from computing valuations and applying axioms to a younger identifier that in fact is a renaming of an older one that has been previously processed. In the example above, the (old value of) identifier id(a + ab) is removed from the collection of structures. So it is not taken into account at all for building valuations and applying them to axioms. The algorithm may then terminate almost immediately.

The last axiom application presented in Example 2 uses a valuation in which the variable y is mapped to the identifier id(1). Thus, this axiom application uses 1 although the expression ab, thanks to which the valuation is generated, does not contain 1. This shows that our method of computing valuations is more powerful than one might think at first glance. Note also that consideration of the expression a + ab would not have generated a valuation useful to simplify that expression itself.

0.a.4 Difficult issues, solutions and workarounds

This section is an expanded version of Section 4.2.

The algorithm of Figure 2 is simple. However, experiments with this first version have revealed several weaknesses. Below, we address these problematic issues.

In the following, we call an iteration any execution of the body of the main loop of the algorithm, up to the garbage collector call. A major problem arises when an iteration is unable to consider each and all of the identifiers existing at the beginning of the iteration. (Looking at Figure 2, this means that the condition time(i) > timeLimit is never evaluated to true during a first execution of the body of the innermost loop of the algorithm.) In that case, some valuations are not generated; parts of the collection of structures remain unexplored. This leads to what we call an early-fixpoint: the algorithm stops after simplifying some sub-expressions well but ignoring completely parts of the whole collection of structures.

The above situation arises because too many valuations have been generated for the identifiers actually taken into account. So, we limit the number of generated valuations as follows. The major cause of the problem is a kind of unfairness: new structures are generated by applications of the axioms, and they can possibly be taken into account immediately for generating new valuations. Therefore, the algorithm has to consider many more valuations than those that would be generated at the start of the iteration. We solve this issue by considering only, for generating valuations, structures that were created before the current sub-iteration has started. (We call a sub-iteration any segment of the execution of the body of the innermost loop of the algorithm between two successive evaluations to true of the condition time(i) > timeLimit (see Figure 2).) Experiments have shown that this change is a big improvement in most cases. However, after many other experiments, we have found it useful to add another limitation to valuation generation, as well as a similar limitation to axiom application: It may happen that an axiom application creates new structures that are unlikely to help in the simplification process because they are too large. So we limit the size of the identifiers in the valuations, and we limit the size of the structures created by axiom applications. The general rule is that a structure f(i1, i2) -> i is acceptable for generating a valuation only if the sizes size(i1) and size(i2) remain below a given bound. Similarly, such a structure can be created only if its own size remains comparable to size(i'), where i' is the existing identifier which is about to be unified with it.

In many cases, the above changes actually solve the early-fixpoint problem and make the algorithm more efficient without losing precision. But there are still situations where the proposed improvements are not powerful enough. Therefore, we have introduced additional workarounds that allow the algorithm to consider more identifiers, and to stop within predictable time. The first workaround consists of calling the garbage collector early, inside an iteration, whenever the memory assigned to the collection of structures becomes full before the condition time(i) > timeLimit is first evaluated to true; we then continue to iterate normally. The drawback of this expedient is that it can remove promising structures too early. Moreover, it is possible that, after some effective calls, garbage collection recovers almost no memory or even no memory at all. In such cases, completing the iteration can take an enormous time, without computing valuations for many identifiers. Thus, as a very last expedient, we add a time-out to each iteration.

There is another early-fixpoint issue related to garbage collection: the garbage collector that is used initially by our algorithm keeps all minimal structures in the collection, i.e., all structures representing sub-expressions of one of the current minimal expressions. Let us call a plateau a sequence of iterations during which the size of minimal expressions does not decrease. Using the standard garbage collector explained above, it is clear that minimal structures kept after some iterations of a plateau are necessarily also present after all subsequent iterations of the plateau. So, there is a big risk of getting a fixpoint collection with a possibly large number of structures. To deal with this problem, we use other forms of garbage collectors, the most restrictive of which only keeps structures representing a single currently minimal expression. The most recent structures are kept. Then, each time we enter a plateau, we switch to another garbage collector, but we stick to the same when the current size of minimal expressions decreases. The intuition is that, when we are stuck in a plateau, we must “shake” the collection of structures to open a new avenue where simplification is possible anew.
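A possible way to organize this rotation of garbage collectors is sketched below in Java. The class and its names are ours, and it switches collectors when a plateau is entered, which is one plausible reading of the policy described above.

    class GcPolicySketch {
        interface GarbageCollector { void collect(); }

        private final GarbageCollector[] collectors;  // from least to most restrictive
        private int current = 0;
        private int lastSize = Integer.MAX_VALUE;
        private boolean onPlateau = false;

        GcPolicySketch(GarbageCollector[] collectors) { this.collectors = collectors; }

        // Called at the end of every iteration with the current minimal size.
        void endOfIteration(int minimalSize) {
            boolean stuck = (minimalSize >= lastSize);
            if (stuck && !onPlateau) {
                current = (current + 1) % collectors.length;  // entering a new plateau: switch
            }
            onPlateau = stuck;
            lastSize = minimalSize;
            collectors[current].collect();                    // clean up before the next iteration
        }
    }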

0.a.5 Execution of other variants of the algorithm

Table 5: Execution of algorithm var/12 on expression E9

Let us examine the execution of a version of the algorithm very different from the default, namely var/12. Table 5 provides the relevant information. Note that this variant is the most precise and fastest of the versions that use valuations of type 3. This version uses early-application of axioms involving a single variable (and, in fact, the fastest versions using this kind of valuations also use early-application of these axioms). We see that the first four iterations do most of the job but they take much more time than the iterations of the default version (roughly more than ten times). Many more valuations are generated but they are less productive (which was expected). In this table, an additional column gives the number of times all axioms involving a single variable are applied to an identifier, while nid is the number of times other axioms are applied to the valuations generated for a single identifier. We see that the values in the former column are much greater than the corresponding values in nid, which means that a big part of the work done by early-application is not exploited by the other axioms. The rightmost column of the table shows that almost all iterations stop because no room is left for creating new structures. This version of the algorithm really uses a kind of breadth-first strategy while the default version uses a more balanced one. In fact, with this version of the algorithm, there is a big risk of getting an early-fixpoint, although it is not actually the case for expression E9. But the problem arises for one of the very big expressions, as shown in Table 3.