egg: Fast and Extensible E-graphs

04/07/2020
by Max Willsey, et al.

An e-graph efficiently represents a congruence relation over many expressions. Although they were originally developed in the late 1970s for use in automated theorem provers, a more recent technique known as equality saturation repurposes e-graphs to implement state-of-the-art, rewrite-driven compiler optimizations and program synthesizers. However, e-graphs remain unspecialized for this newer use case. Equality saturation workloads exhibit distinct characteristics and often require ad hoc e-graph extensions to incorporate transformations beyond purely syntactic rewrites. This work contributes two techniques that make e-graphs fast and extensible, specializing them to equality saturation. A new amortized congruence closure algorithm called rebuilding takes advantage of equality saturation's distinct workload, providing asymptotic speedups over current techniques in practice. A general mechanism called e-class analyses integrates domain-specific analyses into the e-graph, reducing the need for ad hoc manipulation. We implemented these techniques in a new open-source library called egg. Our case studies on three previously published applications of equality saturation highlight how the flexibility of e-class analyses supports diverse domains and how egg can provide up to 3000x speedups.


1. Introduction

Equality graphs (e-graphs) were originally developed to efficiently represent congruence relations in automated theorem provers (ATPs). At a high level, e-graphs (Nelson, 1980; Nieuwenhuis and Oliveras, 2005) extend union-find (Tarjan, 1975) to compactly represent equivalence classes of expressions while maintaining a key invariant: the equivalence relation is closed under congruence. (Intuitively, congruence simply means that a ≡ b implies f(a) ≡ f(b).)

Over the past decade, several projects have repurposed e-graphs to implement state-of-the-art, rewrite-driven compiler optimizations and program synthesizers using a technique known as equality saturation (Joshi et al., 2002; Tate et al., 2009; Stepp et al., 2011; Nandi et al., 2020; Premtoon et al., 2020; Wang et al., 2020; Panchekha et al., 2015). Given an input program p, equality saturation constructs an e-graph E that represents a large set of programs equivalent to p, and then extracts the "best" program from E. The e-graph is grown by repeatedly applying pattern-based rewrites. Critically, these rewrites only add information to the e-graph, eliminating the need for careful ordering. Upon reaching a fixed point (saturation), E will represent all equivalent ways to express p with respect to the given rewrites. After saturation (or timeout), a final extraction procedure analyzes E and selects the optimal program according to a user-provided cost function.

Ideally, a user could simply provide a language grammar and rewrites, and equality saturation would produce an effective optimizer. Two challenges block this ideal. First, maintaining congruence can become expensive as the e-graph grows. In part, this is because e-graphs from the conventional ATP setting remain unspecialized to the distinct equality saturation workload. Second, many applications critically depend on domain-specific analyses, but integrating them requires ad hoc extensions to the e-graph. The lack of a general extension mechanism has forced researchers to re-implement equality saturation from scratch several times. These challenges limit equality saturation's practicality.

Equality Saturation Workload. ATPs frequently query and modify e-graphs and additionally require backtracking to undo modifications (e.g., in DPLL(T) (Davis and Putnam, 1960)). These requirements force conventional e-graph designs to maintain the congruence invariant after every operation. In contrast, the equality saturation workload does not require backtracking and can be factored into distinct phases of (1) querying the e-graph to simultaneously find all rewrite matches and (2) modifying the e-graph to merge in equivalences for all matched terms.

We present a new amortized algorithm called rebuilding that defers e-graph invariant maintenance to equality saturation phase boundaries without compromising soundness. Empirically, rebuilding provides asymptotic speedups over conventional approaches.

Domain-specific Analyses. Equality saturation is primarily driven by syntactic rewriting, but many applications require additional interpreted reasoning to bring domain knowledge into the e-graph. Past implementations have resorted to ad hoc e-graph manipulations to integrate what would otherwise be simple program analyses like constant folding.

To flexibly incorporate such reasoning, we introduce a new, general mechanism called e-class analyses. An e-class analysis annotates each e-class (an equivalence class of terms) with facts drawn from a semilattice domain. As the e-graph grows, facts are introduced, propagated, and joined to satisfy the e-class analysis invariant, which relates analysis facts to the terms represented in the e-graph. Rewrites cooperate with e-class analyses by depending on analysis facts and adding equivalences that in turn establish additional facts. Our case studies and examples (Sections 5 and 6) demonstrate e-class analyses like constant folding and free variable analysis which required bespoke customization in previous equality saturation implementations.

egg. We implement rebuilding and e-class analyses in an open-source library called egg (e-graphs good), available at https://github.com/mwillsey/egg. egg specifically targets equality saturation, taking advantage of its workload characteristics and supporting easy extension mechanisms to provide e-graphs specialized for program synthesis and optimization. egg also addresses more prosaic challenges, e.g., parameterizing over user-defined languages, rewrites, and cost functions while still providing an optimized implementation. Our case studies demonstrate how egg's features constitute a general, reusable e-graph library that can support equality saturation across diverse domains.

In summary, the contributions of this paper include:

  • Rebuilding (Section 3), a technique that restores key correctness and performance invariants only at select points in the equality saturation algorithm. Our evaluation demonstrates that rebuilding provides an asymptotic speedup over existing techniques in practice.

  • E-class analysis (Section 4), a technique for integrating domain-specific analyses that cannot be expressed as purely syntactic rewrites. The e-class analysis invariant provides the guarantees that enable cooperation between rewrites and analyses.

  • A fast, extensible implementation of e-graphs in a library dubbed egg (Section 5).

  • Case studies of real-world, published tools that use egg for deductive synthesis and program optimization across domains such as floating point accuracy, linear algebra optimization, and CAD program synthesis, while achieving up to 3000x speedups (Section 6).

2. Background

egg builds on e-graphs and equality saturation. This section describes those techniques and presents the challenges that egg addresses.

2.1. E-graphs

An e-graph is a data structure that stores a set of terms and a congruence relation over those terms. Originally developed for and still used in the heart of theorem provers (Nelson, 1980; Detlefs et al., 2005; De Moura and Bjørner, 2008), e-graphs have also been used to power a program optimization technique called equality saturation (Joshi et al., 2002; Tate et al., 2009; Stepp et al., 2011; Nandi et al., 2020; Premtoon et al., 2020; Wang et al., 2020; Panchekha et al., 2015).

2.1.1. Definition

An e-graph is a set of equivalence classes (e-classes), and each e-class is a set of equivalent e-nodes. An e-node is a function symbol paired with a list of children, each of which is a reference to an e-class. These references, typically implemented as pointers or integer ids, are stored in a union-find data structure (Tarjan, 1975). Two e-class references c1 and c2 may refer to the same e-class even though they are distinct at the implementation level, c1 ≠ c2. If c1 and c2 do refer to the same e-class, we say they are equivalent, written c1 ≡ c2. We say two e-nodes n1 and n2 are equivalent, n1 ≡ n2, if they are in the same e-class.

An e-graph, e-class, or e-node is said to represent a term if they can be “found” within it. Two terms are equivalent if they are represented in the same e-class. More precisely:

  • An e-graph represents a term if any of its e-classes do.

  • An e-class represents a term if any of its e-nodes does.

  • An e-node f(c1, ..., ck) represents a term f(t1, ..., tk) if they have the same function symbol f and each e-class ci represents the corresponding term ti.

When each e-class is a singleton (containing only one e-node), an e-graph is essentially a syntax tree with sharing (sometimes called a term graph). Figure 1(a) shows an e-graph that represents the expression (a × 2) / 2.

An e-graph's congruence invariant states that its equivalence relation over terms must also be a congruence relation. Two e-nodes n1 and n2 are congruent, written n1 ≅ n2, if they have the same function symbol and their child e-classes are equivalent; in other words, f(c1, ..., ck) ≅ f(d1, ..., dk) if ci ≡ di for each i. To maintain the congruence invariant, the e-graph must ensure that congruent e-nodes are in the same e-class, i.e., n1 ≅ n2 implies n1 ≡ n2.
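To make these definitions concrete, the following is a minimal sketch in Rust of the data structures just described. It is illustrative only, not egg's implementation; the names ENode, EClass, and EClassId are ours.

type EClassId = usize;

// An e-node: a function symbol plus a list of child e-class references.
#[derive(Clone, PartialEq, Eq, Hash)]
struct ENode {
    op: String,
    children: Vec<EClassId>,
}

// An e-class: a set of equivalent e-nodes.
struct EClass {
    nodes: Vec<ENode>,
}

// The e-graph stores e-classes and a union-find over e-class references.
// Two references a and b are equivalent (a ≡ b) when find(a) == find(b).
struct EGraph {
    unionfind: Vec<EClassId>, // parent pointers; a root points to itself
    classes: Vec<EClass>,
}

impl EGraph {
    fn find(&self, mut id: EClassId) -> EClassId {
        while self.unionfind[id] != id {
            id = self.unionfind[id];
        }
        id
    }

    // Two e-nodes are congruent (n1 ≅ n2) when they have the same function
    // symbol and their corresponding children are equivalent.
    fn congruent(&self, a: &ENode, b: &ENode) -> bool {
        a.op == b.op
            && a.children.len() == b.children.len()
            && a.children
                .iter()
                .zip(&b.children)
                .all(|(x, y)| self.find(*x) == self.find(*y))
    }
}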

(a) Initial e-graph contains (a × 2) / 2.
(b) After applying rewrite x × 2 → x << 1.
(c) After applying rewrite (x × y) / z → x × (y / z).
(d) After applying rewrites x / x → 1 and 1 × x → x.
Figure 1. An e-graph consists of e-classes (dashed boxes) containing equivalent e-nodes (solid boxes). Edges connect e-nodes to their child e-classes. Additions and modifications are emphasized in black. Applying rewrites to an e-graph adds new e-nodes and edges, but nothing is removed. Expressions added by rewrites are merged with the matched e-class. In (d), the rewrites do not add any new nodes, only merge e-classes. The resulting e-graph has a self-loop, representing infinitely many expressions: a, a × 1, a × 1 × 1, and so on.

2.1.2. Interface and Rewriting

E-graphs bear many similarities to the classic union-find data structure that they employ internally, and they inherit much of the terminology. E-graphs provide two main low-level mutating operations:

  • add takes an e-node n and:

    • if there exists an e-node n' in e-class c such that n ≅ n', returns c;

    • otherwise, creates a new singleton e-class containing n and returns a reference to it.

  • merge (sometimes called assert or union) takes two e-class references c1 and c2, unions them in the underlying union-find, and combines the actual e-classes if they were not already equivalent.

Both of these operations must take additional steps to maintain the congruence invariant. Invariant maintenance is discussed in Section 3.

E-graphs also offer operations for querying the data structure:

  • find takes an e-class reference c and canonicalizes it using the underlying union-find, such that find(c1) = find(c2) if and only if c1 ≡ c2.

  • ematch performs the e-matching (Detlefs et al., 2005; de Moura and Bjørner, 2007) procedure for finding patterns in the e-graph. ematch takes a pattern term p with variable placeholders and returns a list of tuples (σ, c), where σ is a substitution of variables to e-classes such that p[σ] is represented in e-class c.

These can be composed to perform rewriting over the e-graph. To apply a rewrite l → r to an e-graph, ematch finds tuples (σ, c) where e-class c represents l[σ]. Then, for each tuple, merge(c, add(r[σ])) adds r[σ] to the e-graph and unifies it with the matching e-class c.
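The sketch below shows how these operations compose into a single rewrite application. It is generic pseudo-Rust over a hypothetical EGraphOps interface (ematch, adding an instantiated pattern, and merge), mirroring the description above rather than any particular library's API.

type EClassId = usize;

struct Pattern; // placeholder: a term with ?variable placeholders
struct Subst;   // placeholder: a map from ?variables to e-classes

trait EGraphOps {
    // Find all (substitution, e-class) pairs where the pattern matches.
    fn ematch(&self, pattern: &Pattern) -> Vec<(Subst, EClassId)>;
    // Add the pattern instantiated by the substitution; return its e-class.
    fn add_instantiated(&mut self, pattern: &Pattern, subst: &Subst) -> EClassId;
    // Union two e-classes (and their underlying union-find entries).
    fn merge(&mut self, a: EClassId, b: EClassId) -> EClassId;
}

// Apply the rewrite lhs -> rhs everywhere it matches: for each match of the
// left-hand side, add the substituted right-hand side and unify it with the
// e-class that matched. Nothing is ever removed from the e-graph.
fn apply_rewrite<G: EGraphOps>(egraph: &mut G, lhs: &Pattern, rhs: &Pattern) {
    for (subst, eclass) in egraph.ematch(lhs) {
        let eclass2 = egraph.add_instantiated(rhs, &subst);
        egraph.merge(eclass, eclass2);
    }
}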

Figure 1 shows an e-graph undergoing a series of rewrites. Note how the process is only additive; the initial term (a × 2) / 2 is still represented in the e-graph. Rewriting in an e-graph can also saturate, meaning the e-graph has learned every possible equivalence derivable from the given rewrites. If the user applied the same rewrite to an e-graph twice, the second application would add no additional e-nodes and perform no new merges; the e-graph can detect this and stop applying that rule.

2.2. Equality Saturation

Term rewriting (Dershowitz, 1993) is a time-tested approach for equational reasoning in program optimization (Tate et al., 2009; Joshi et al., 2002), theorem proving (Detlefs et al., 2005; De Moura and Bjørner, 2008), and program transformation (Andries et al., 1999). In this setting, a tool repeatedly chooses one of a set of axiomatic rewrites, searches for matches of the left-hand pattern in the given expression, and replaces matching instances with the substituted right-hand side.

Term rewriting is typically destructive and "forgets" the matched left-hand side. Consider applying a simple strength reduction rewrite: (a × 2) / 2 → (a << 1) / 2. The new term carries no information about the initial term. Applying strength reduction at this point prevents us from canceling out 2/2. In the compilers community, this classically tricky question of when to apply which rewrite is called the phase ordering problem.

One solution to the phase ordering problem would simply apply all rewrites simultaneously, keeping track of every expression seen. This eliminates the problem of choosing the right rule, but a naive implementation would require space exponential in the number of given rewrites. Equality saturation (Tate et al., 2009; Stepp et al., 2011) is a technique to do this rewriting efficiently using an e-graph.

1  def equality_saturation(expr, rewrites):
2    egraph = initial_egraph(expr)
3
4    while not egraph.is_saturated_or_timeout():
5
6      for rw in rewrites:
7        for (subst, eclass) in egraph.ematch(rw.lhs):
8          eclass2 = egraph.add(rw.rhs.subst(subst))
9          egraph.merge(eclass, eclass2)
10
11   return egraph.extract_best()
Figure 2. Box diagram and pseudocode for equality saturation. Traditionally, equality saturation maintains the e-graph data structure invariants throughout the algorithm.

Figure 2 shows the equality saturation workflow. First, an initial e-graph is created from the input term. The core of the algorithm runs a set of rewrite rules until the e-graph is saturated (or a timeout is reached). Finally, a procedure called extraction selects the optimal represented term according to some cost function. For simple cost functions, a bottom-up, greedy traversal of the e-graph suffices to find the best term. Other extraction procedures have been explored for more complex cost functions (Wang et al., 2020; Wu et al., 2019).

Equality saturation eliminates the tedious and often error-prone task of choosing when to apply which rewrites, promising an appealingly simple workflow: state the relevant rewrites for the language, create an initial e-graph from a given expression, fire the rules until saturation, and finally extract the cheapest equivalent expression. Unfortunately, the technique remains ad hoc; prospective equality saturation users must implement their own e-graphs customized to their language, avoid performance pitfalls, and hack in the ability to do interpreted reasoning that is not supported by purely syntactic rewrites. egg aims to address each aspect of these difficulties.

2.3. Equality Saturation and Theorem Proving

An equality saturation engine and a theorem prover each have capabilities that would be impractical to replicate in the other. Automated theorem provers like satisfiability modulo theory (SMT) solvers are general tools that, in addition to supporting satisfiability queries, incorporate sophisticated, domain-specific solvers to allow interpreted reasoning within the supported theories. On the other hand, equality saturation is specialized for optimization, and its extraction procedure directly produces an optimal term with respect to a given cost function.

While SMT solvers are indeed the more general tool, equality saturation is not superseded by SMT; the specialized approach can be much faster when the full generality of SMT is not needed. To demonstrate this, we replicated a portion of the recent TASO paper (Jia et al., 2019), which optimizes deep learning models. As part of the work, they must verify a set of synthesized equalities with respect to a trusted set of universally quantified axioms. TASO uses Z3 (De Moura and Bjørner, 2008) to perform the verification even though most of Z3's features (disjunctions, backtracking, theories, etc.) were not required. An equality saturation engine can also be used for verifying these equalities by adding the left and right sides of each equality to an e-graph, running the axioms as rewrites, and then checking if both sides end up in the same e-class. Z3 takes 24.65 seconds to perform the verification; egg performs the same task in 1.56 seconds (about 15.8x faster), or only 0.52 seconds (about 47.4x faster) when using egg's batched evaluation (Section 5.3).
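As a sketch of this verification recipe, the code below checks one equality by adding both sides to a single e-graph, running the axioms as rewrites, and testing whether the two roots land in the same e-class. It follows egg's documented API (Runner, SymbolLang, the rewrite macro), but the axioms shown are placeholders rather than TASO's actual rules, and exact signatures may differ across egg versions.

use egg::{rewrite as rw, *};

fn proves_equal(lhs: &str, rhs: &str) -> bool {
    // Placeholder axioms written as rewrites over a generic symbol language.
    let axioms: Vec<Rewrite<SymbolLang, ()>> = vec![
        rw!("assoc";     "(matmul (matmul ?a ?b) ?c)" => "(matmul ?a (matmul ?b ?c))"),
        rw!("assoc-rev"; "(matmul ?a (matmul ?b ?c))" => "(matmul (matmul ?a ?b) ?c)"),
    ];

    let lhs: RecExpr<SymbolLang> = lhs.parse().unwrap();
    let rhs: RecExpr<SymbolLang> = rhs.parse().unwrap();

    // Add both sides to the same e-graph and run the axioms until saturation
    // (or until the Runner's default limits are hit).
    let runner = Runner::default()
        .with_expr(&lhs)
        .with_expr(&rhs)
        .run(&axioms);

    // The equality is verified if the two root terms ended up equivalent.
    runner.egraph.find(runner.roots[0]) == runner.egraph.find(runner.roots[1])
}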

3. Rebuilding: A New Take on E-graph Invariant Maintenance

Traditionally (Nelson, 1980; Detlefs et al., 2005), e-graphs maintain their data structure invariants after each operation. We separate this invariant restoration into a procedure called rebuilding. This separation allows the client to choose when to enforce the e-graph invariants. Performing a rebuild immediately after every operation replicates the traditional approach to invariant maintenance. In contrast, rebuilding less frequently can amortize the cost of invariant maintenance, significantly improving performance.

In this section, we first give a detailed description of the e-graph invariants and how they are traditionally maintained (Section 3.1). We then describe the rebuilding framework and how it captures a spectrum of invariant maintenance approaches, including the traditional one (Section 3.2). Using this flexibility, we then give a modified algorithm for equality saturation that enforces the e-graph invariants at only select points (Section 3.3). We finally demonstrate that this new approach offers an asymptotic speedup over traditional equality saturation (Section 3.4).

3.1. Hashconsing and Upward Merging

Both mutating operations on the e-graph, add and merge, can break the congruence invariant if not done carefully. E-graphs have traditionally used hashconsing and upward merging to maintain the congruence invariant.

3.1.1. Hashconsing

Adding two congruent e-nodes should always return equivalent e-classes: add(n1) ≡ add(n2) if n1 ≅ n2. To efficiently check for congruent e-nodes, e-graphs use a technique called hashconsing. The hashcons data structure maps an e-node n to the e-class in which n resides.

The hashcons is typically implemented with a hash map or similar data structure. But for add to be correct, it must be able to perform lookups up to congruence, not just structural equality. The hashcons therefore canonicalizes e-nodes before querying (Figure 3 line 3):

  • An e-node is canonical when its child e-class references are canonical. Note that childless e-nodes are always canonical.

  • An e-class reference c is canonical when find(c) = c in the underlying union-find.

In addition to canonicalizing the queried e-node, the hashcons must maintain the hashcons invariant: e-node keys in the map must be canonical. This allows the add procedure to quickly detect whether the given e-node is congruent to one already in the e-graph. Upward merging maintains this invariant, updating the hashcons when an e-node’s canonical identity changes.
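The sketch below illustrates lookup "up to congruence": the queried e-node is canonicalized with find before consulting the hashcons, whose keys are kept canonical by the hashcons invariant. This is a standalone illustration in Rust, not egg's code.

use std::collections::HashMap;

type EClassId = usize;

#[derive(Clone, PartialEq, Eq, Hash)]
struct ENode {
    op: String,
    children: Vec<EClassId>,
}

struct EGraph {
    unionfind: Vec<EClassId>,
    hashcons: HashMap<ENode, EClassId>, // invariant: keys are canonical e-nodes
}

impl EGraph {
    fn find(&self, mut id: EClassId) -> EClassId {
        while self.unionfind[id] != id {
            id = self.unionfind[id];
        }
        id
    }

    // Canonicalize an e-node by canonicalizing each of its children.
    fn canonicalize(&self, enode: &ENode) -> ENode {
        ENode {
            op: enode.op.clone(),
            children: enode.children.iter().map(|&c| self.find(c)).collect(),
        }
    }

    // If some e-node congruent to `enode` is already in the e-graph,
    // return its (canonical) e-class.
    fn lookup(&self, enode: &ENode) -> Option<EClassId> {
        let canonical = self.canonicalize(enode);
        self.hashcons.get(&canonical).map(|&c| self.find(c))
    }
}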

3.1.2. Upward Merging

Merging e-classes risks wider-reaching invariant violations. If e-nodes f(a) and f(b) reside in two different e-classes x and y, then merging a and b should also merge x and y to maintain congruence. This can propagate further, requiring additional merges.

E-graphs maintain a parent list for each e-class to facilitate this upward merging. The parent list for e-class c holds all e-nodes that reference c as a child. When merging two e-classes, e-graphs inspect these parent lists to find parents that are now congruent, recursively merging them if necessary.

The merge routine also performs bookkeeping to preserve the hashcons invariant. In particular, merging two e-classes may change how parent e-nodes of those e-classes are canonicalized. The merge operation must therefore remove, re-canonicalize, and replace those e-nodes in the hashcons. In existing e-graph implementations used for equality saturation (Panchekha et al., 2015), maintaining the invariants while merging can take the vast majority of run time.

3.2. Rebuilding in Detail

1  def add(enode):
2    enode = self.canonicalize(enode)
3    if enode in self.hashcons:
4      return self.hashcons[enode]
5    else:
6      eclass = self.new_singleton_eclass(enode)
7      for child_eclass in enode.children:
8        child_eclass.parents.add(enode, eclass)
9      self.hashcons[enode] = eclass
10     return eclass
11
12 def merge(eclass1, eclass2):
13   union = self.union_find.union(eclass1, eclass2)
14   if not union.was_already_unioned:
15     # traditional egraph merge can be
16     # emulated by calling rebuild right after
17     # adding the eclass to the worklist
18     self.worklist.add(union.eclass)
19   return union.eclass
20
21 def canonicalize(enode):
22   new_ch = [self.find(e) for e in enode.children]
23   return mk_enode(enode.op, new_ch)
24
25 def find(eclass):
26   return self.union_find.find(eclass)
27 def rebuild():
28   while self.worklist.len() > 0:
29     # empty the worklist into a local variable
30     todo = take(self.worklist)
31     # canonicalize and deduplicate the eclass refs
32     # to save calls to repair
33     todo = { self.find(eclass) for eclass in todo }
34     for eclass in todo:
35       self.repair(eclass)
36
37 def repair(eclass):
38   # update the hashcons so it always points
39   # canonical enodes to canonical eclasses
40   for (p_node, p_eclass) in eclass.parents:
41     self.hashcons.remove(p_node)
42     p_node = self.canonicalize(p_node)
43     self.hashcons[p_node] = self.find(p_eclass)
44
45   # deduplicate the parents, noting that equal
46   # parents get merged and put on the worklist
47   new_parents = {}
48   for (p_node, p_eclass) in eclass.parents:
49     p_node = self.canonicalize(p_node)
50     if p_node in new_parents:
51       self.merge(p_eclass, new_parents[p_node])
52     new_parents[p_node] = self.find(p_eclass)
53   eclass.parents = new_parents
Figure 3. Pseudocode for the add, merge, rebuild, and supporting methods. In each method, self refers to the e-graph being modified.

Traditionally, invariant restoration is part of the merge operation itself. Rebuilding separates these concerns, reducing merge’s obligations and allowing for amortized invariant maintenance. In the rebuilding paradigm, merge maintains a worklist of e-classes that need to be “upward merged”, i.e., e-classes whose parents are possibly congruent but not yet in the same e-class. The rebuild operation processes this worklist, restoring the invariants of deduplication and congruence.

Figure 3 shows pseudocode for the main e-graph operations and rebuilding. Note that add and canonicalize are given for completeness, but they are unchanged from the traditional e-graph implementation. The merge operation is similar, but it only adds the new e-class to the worklist instead of immediately starting upward merging. Adding a call to rebuild right after the addition to the worklist (Figure 3, line 18) would yield the traditional behavior of restoring the invariants immediately.

The rebuild method essentially calls repair on the e-classes from the worklist until the worklist is empty. Instead of directly manipulating the worklist, egg's rebuild method first moves it into a local variable and deduplicates e-classes up to equivalence. Processing the worklist may merge e-classes, so breaking the worklist into chunks ensures that e-class references made equivalent in the previous chunk are deduplicated in the subsequent chunk.

The actual work of rebuild occurs in the repair method. repair examines an e-class c and first canonicalizes e-nodes in the hashcons that have c as a child. Then it performs what is essentially one "layer" of upward merging: if any of the parent e-nodes have become congruent, their e-classes are merged and the result is added to the worklist.

Deduplicating the worklist, and thus reducing calls to repair, is at the heart of why deferring rebuilding improves performance. Intuitively, the upward merging process of rebuilding traces out a “path” of congruence through the e-graph. When rebuilding happens immediately after merge (and therefore frequently), these paths can substantially overlap. By deferring rebuilding, the chunk-and-deduplicate approach can coalesce the overlapping parts of these paths, saving what would have been redundant work. In our modified equality saturation algorithm (Section 3.3), deferred rebuilding is responsible for a significant, asymptotic speedup (Section 3.4).

3.2.1. Examples of Rebuilding

Consider the following terms in an e-graph, each nested under n function symbols:

  f_n(... f_2(f_1(x_1)) ...),   f_n(... f_2(f_1(x_2)) ...),   ...,   f_n(... f_2(f_1(x_m)) ...)

Note that m corresponds to the width of this group of terms, and n to the depth. Let the workload be m − 1 merges that merge all the x_i together: merge(x_1, x_i) for i ∈ [2, m].

In the traditional upward merging paradigm where rebuild is called after every merge, each call to merge(x_1, x_i) requires O(n) calls to repair to maintain congruence, one for each layer of f's. Over the whole workload, this requires O(mn) calls to repair.

With deferred rebuilding, however, the merges can all take place before congruence must be restored. Suppose the x_i are all merged into an e-class c_x. When rebuild finally is called, the only element in the deduplicated worklist is c_x. Calling repair on c_x will merge the e-classes of the now-congruent f_1 e-nodes into an e-class c_f1, adding the e-classes that contained those e-nodes back to the worklist. When the worklist is again deduplicated, c_f1 will be the only element, and the process repeats. Thus, the whole workload only incurs O(n) calls to repair, eliminating the factor of m corresponding to the width of this group of terms. Figure 7 shows that the number of calls to repair is correlated with time spent doing congruence maintenance.

Deferred rebuilding also speeds up congruence maintenance by amortizing the work of maintaining the hashcons invariant. Consider an e-graph containing the terms f(x_1), ..., f(x_m), and let the workload be m − 1 merges that unify all the x_i. Each merge may change the canonical representation of the f(x_i) e-nodes, so the traditional invariant maintenance strategy may have to update the hashcons entries of those parent e-nodes after every merge. With deferred rebuilding, all of the merges happen before the hashcons invariant is restored, so each f(x_i) need only be re-canonicalized in the hashcons once.

3.2.2. Proof of Congruence

Intuitively, rebuilding is a delayed version of the upward merging process, allowing the user to choose when to restore the e-graph invariants. The two are substantially similar in structure, with a critical difference in when the code is run. Below we offer a proof demonstrating that rebuilding restores the e-graph congruence invariant.

Theorem 3.1.

Rebuilding restores congruence and terminates.

Proof.

Let ≡ be the equivalence relation over e-nodes in the e-graph, so n1 ≡ n2 iff n1 and n2 are in the same e-class. Let ≡* be the congruence closure of ≡, i.e., the smallest superset of ≡ that is closed under congruence: f(a1, ..., ak) ≡* f(b1, ..., bk) whenever ai ≡* bi for all i.

Since rebuilding only merges congruent nodes, ≡* is fixed even though ≡ changes. When ≡ = ≡*, congruence is restored. Note that both ≡ and ≡* are finite relations over the e-nodes. We therefore show that rebuilding causes ≡ to approach ≡*. We define the set of incongruent e-node pairs as I = (≡*) \ (≡); in other words, (n1, n2) ∈ I if n1 ≡* n2 holds but n1 ≡ n2 does not yet hold.

Due to the additive nature of equality saturation, ≡ only grows, and therefore |I| is non-increasing. However, a call to repair inside the loop of rebuild does not necessarily shrink |I|. Some calls instead remove an element from the worklist but do not modify the e-graph at all.

Let the set W be the worklist of e-classes to be processed by repair; in Figure 3, W corresponds to self.worklist plus the unprocessed portion of the todo local variable. We show that each call to repair decreases the tuple (|I|, |W|) lexicographically until W is empty, and thus rebuilding terminates.

Given an e-class c from W, repair examines c's parents for congruent e-nodes that are not yet in the same e-class:

  • If at least one pair of c's parents are congruent, rebuilding merges each such congruent pair (p1, p2), which may add to W but makes |I| smaller by definition.

  • If no such congruent pairs are found, repair does nothing to the e-graph. Then, |W| is decreased by 1, since c came from the worklist and repair did not add anything back.

Since (|I|, |W|) decreases lexicographically, W eventually becomes empty, so rebuild terminates. Note that W contains precisely those e-classes that need to be "upward merged" to check for congruent parents. So, when W is empty, rebuild has effectively performed upward merging. By Nelson (1980, Chapter 7), ≡ = ≡*. Therefore, when rebuilding terminates, congruence is restored.

3.3. Rebuilding and Equality Saturation

Rebuilding offers the choice of when to enforce the e-graph invariants, potentially saving work if deferred thanks to the deduplication of the worklist. The client is responsible for rebuilding at a time that maximizes performance without limiting the application.

1      def equality_saturation(expr, rewrites):
2  egraph = initial_egraph(expr)
3
4  while not egraph.is_saturated_or_timeout():
5
6
7    # reading and writing is mixed
8    for rw in rewrites:
9      for (subst, eclass) in egraph.ematch(rw.lhs):
10
11        # in traditional equality saturation,
12        # matches can be applied right away
13        # because invariants are always maintained
14        eclass2 = egraph.add(rw.rhs.subst(subst))
15        egraph.merge(eclass, eclass2)
16
17        # restore the invariants after each merge
18        egraph.rebuild()
19
20  return egraph.extract_best()
(a) Traditional equality saturation alternates between searching and applying rules, and the e-graph maintains its invariants throughout.
1      def equality_saturation(expr, rewrites):
2  egraph = initial_egraph(expr)
3
4  while not egraph.is_saturated_or_timeout():
5    matches = []
6
7    # read-only phase, invariants are preserved
8    for rw in rewrites:
9      for (subst, eclass) in egraph.ematch(rw.lhs):
10        matches.append((rw, subst, eclass))
11
12    # write-only phase, temporarily break invariants
13    for (rw, subst, eclass) in matches:
14      eclass2 = egraph.add(rw.rhs.subst(subst))
15      egraph.merge(eclass, eclass2)
16
17    # restore the invariants once per iteration
18    egraph.rebuild()
19
20  return egraph.extract_best()
(b) egg splits equality saturation iterations into read and write phases. The e-graph invariants are not constantly maintained, but restored only at the end of each iteration by the rebuild method (Section 3).
Figure 4. Pseudocode for traditional equality saturation and egg's version of the algorithm.

egg provides a modified equality saturation algorithm to take advantage of rebuilding. Figure 4 shows pseudocode for both traditional equality saturation and egg's variant, which exhibits two key differences:

  1. Each iteration is split into a read phase, which searches for all the rewrite matches, and a write phase that applies those matches. (Although the original equality saturation paper (Tate et al., 2009) does not have separate reading and writing phases, some e-graph implementations, like the one inside Z3 (De Moura and Bjørner, 2008), do separate these phases as an implementation detail. Ours is the first algorithm to take advantage of this by deferring invariant maintenance.)

  2. Rebuilding occurs only once per iteration, at the end.

egg's separation of the read and write phases means that rewrites are truly unordered. In traditional equality saturation, later rewrites in the given rewrite list are favored in the sense that they can "see" the results of earlier rewrites in the same iteration. Therefore, the results depend on the order of the rewrite list if saturation is not reached (which is common on large rewrite lists or input expressions). egg's equality saturation algorithm is invariant to the order of the rewrite list.

Separating the read and write phases also allows egg to safely defer rebuilding. If rebuilding were deferred in the traditional equality saturation algorithm, rules later in the rewrite list would be searched against an e-graph with broken invariants. Since congruence may not hold, there may be missing equivalences, resulting in missing matches. These matches would be seen after the rebuild during the next iteration (if another iteration occurs), but the false reporting could impact metrics collection, rule scheduling (an optimization introduced in Section 5.2 that relies on an accurate count of how many times a rewrite was matched), or saturation detection.

3.4. Evaluating Rebuilding

To demonstrate that deferred rebuilding provides faster congruence closure than traditional upward merging, we modified egg to call rebuild immediately after every merge. This provides a one-to-one comparison of deferred rebuilding against the traditional approach, isolated from the many other factors that make egg efficient: overall design and algorithmic differences, programming language performance, and other orthogonal performance improvements.

Figure 5. Rebuilding once per iteration, as opposed to after every merge, significantly speeds up congruence maintenance. Both plots show the same data: one point for each of the 32 tests. The diagonal line is y = x; points below the line mean deferring rebuilding is faster. In aggregate over all tests (using geometric mean), deferred rebuilding makes both congruence maintenance and overall equality saturation faster. The linear scale plot shows that deferred rebuilding is significantly faster. The log scale plot suggests the speedup is greater than some constant multiple; Figure 6 demonstrates this in greater detail.
Figure 6. As more rewrites are applied, deferring rebuilding gives greater speedup. Each line represents a single test: each equality saturation iteration plots the cumulative rewrites applied so far against the multiplicative speedup of deferring rebuilding; the dot represents the end of that test. Both the test suite as a whole (the dots) and individual tests (the lines) demonstrate an asymptotic speedup that increases with the problem size.
Figure 7. The time spent in congruence maintenance correlates with the number of calls to the repair method. A Spearman correlation test yields a p-value of 3.6e-47, indicating that the two quantities are indeed positively correlated.

We ran egg's test suite using both rebuild strategies, measuring the time spent on congruence maintenance. Each test consists of one run of egg's equality saturation algorithm to optimize a given expression. Of the 32 total tests, 8 hit the iteration limit of 100 and the remainder saturated. Note that both rebuilding strategies use egg's phase-split equality saturation algorithm, and the resulting e-graphs are identical in all cases. These experiments were performed on a 2020 MacBook Pro with a 2 GHz quad-core Intel Core i5 processor and 16GB of memory.

Figure 5 shows how deferred rebuilding speeds up congruence maintenance. Overall, our experiments show an aggregate speedup on congruence closure and on the entire equality saturation algorithm. Figure 6 shows this speedup is asymptotic; the multiplicative speedup increases as the problem gets larger.

egg's test suite consists of two main applications: math, a small computer algebra system capable of symbolic differentiation and integration; and lambda, a partial evaluator for the untyped lambda calculus using explicit substitution to handle variable binding (shown in Section 5). Both are typical egg applications, primarily driven by syntactic rewrites, with a few key uses of egg's more complex features like e-class analyses and dynamic/conditional rewrites.

egg can be configured to capture various metrics about equality saturation as it runs, including the time spent in the read phase (searching for matches), the write phase (applying matches), and rebuilding. In Figure 7, congruence time is measured as the time spent applying matches plus rebuilding. Other parts of the equality saturation algorithm (creating the initial e-graph, extracting the final term) take negligible time compared to the equality saturation iterations.

Deferred rebuilding amortizes the examination of e-classes for congruence maintenance; deduplicating the worklist reduces the number of calls to repair. Figure 7 shows that time spent in congruence is correlated with the number of calls to the repair method.

The case study in Section 6.1 provides a further evaluation of rebuilding. Rebuilding (and other egg features) has also been implemented in a Racket-based e-graph, demonstrating that rebuilding is a conceptual advance that need not be tied to the egg implementation.

4. Extending E-graphs with E-class Analyses

As discussed so far, e-graphs and equality saturation provide an efficient way to implement a term rewriting system. Rebuilding enhances that efficiency, but the approach remains designed for purely syntactic rewrites. However, program analysis and optimization typically require more than just syntactic information. Instead, transformations are computed based not only on the input terms but also on semantic facts about them, e.g., constant value, free variables, nullability, numerical sign, size in memory, and so on. The "purely syntactic" restriction has forced existing equality saturation applications (Tate et al., 2009; Stepp et al., 2011; Panchekha et al., 2015) to resort to ad hoc passes over the e-graph to implement analyses like constant folding. These ad hoc passes require manually manipulating the e-graph, the complexity of which could prevent the implementation of more sophisticated analyses.

We present a new technique called e-class analysis, which allows the concise expression of a program analysis over the e-graph. An e-class analysis resembles abstract interpretation lifted to the e-graph level, attaching analysis data from a semilattice to each e-class. The e-graph maintains and propagates this data as e-classes get merged and new e-nodes are added. Analysis data can be used directly to modify the e-graph, to inform how or if rewrites apply their right-hand sides, or to determine the cost of terms during the extraction process.

E-class analyses provide a general mechanism to replace what previously required ad hoc extensions that manually manipulate the e-graph. E-class analyses also fit within the equality saturation workflow, so they can naturally cooperate with the equational reasoning provided by rewrites. Moreover, an analysis lifted to the e-graph level automatically benefits from a sort of “partial-order reduction” for free: large numbers of similar programs may be analyzed for little additional cost thanks to the e-graph’s compact representation.

This section provides a conceptual explanation of e-class analyses as well as dynamic and conditional rewrites that can use the analysis data. The following sections will provide concrete examples: Section 5 discusses the implementation and a complete example of a partial evaluator for the lambda calculus; Section 6 discusses how three published projects have used and its unique features (like e-class analyses).

4.1. E-class Analyses

An e-class analysis defines a domain D and associates a value d_c ∈ D to each e-class c. The e-class c stores the associated data d_c; i.e., given an e-class c, one can get d_c easily, but not vice versa.

The interface of an e-class analysis is as follows, where G refers to the e-graph, and n and c refer to e-nodes and e-classes within G:

  • make(n) -> d_c: when a new e-node n is added to G into a new, singleton e-class c, construct a new value d_c ∈ D to be associated with n's new e-class, typically by accessing the associated data of n's children.

  • join(d_c1, d_c2) -> d_c: when e-classes c1 and c2 are being merged into c, join d_c1 and d_c2 into a new value d_c to be associated with the new e-class c.

  • modify(c) -> c: optionally modify the e-class c based on d_c, typically by adding an e-node to c. modify should be idempotent if no other changes occur to the e-class, i.e., modify(modify(c)) = modify(c).

The domain D together with the join operation should form a join-semilattice. The semilattice perspective is useful for defining the analysis invariant, which must hold for every e-class c = {n1, ..., nk} in the e-graph (where ⊔ is the join operation):

  d_c = make(n1) ⊔ make(n2) ⊔ ... ⊔ make(nk)    and    modify(c) = c

The first part of the analysis invariant states that the data associated with each e-class must be the join of make applied to every e-node in that e-class. Since D is a join-semilattice, this implies d_c ⊒ make(n) for every e-node n ∈ c. The motivation for the second part is more subtle. Since the analysis can modify an e-class through the modify method, the analysis invariant asserts that these modifications are driven to a fixed point. When the analysis invariant holds, a client looking at the analysis data can be assured that the analysis is "stable" in the sense that recomputing make, join, and modify will not modify the e-graph or any analysis data.
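A Rust rendering of this interface is sketched below. It abstracts the e-graph behind type parameters so the sketch stands alone; it is meant to mirror the make/join/modify description above, not to reproduce egg's actual Analysis trait.

/// The e-class analysis interface from Section 4.1 (illustrative sketch).
/// `L` is the language of e-nodes, `G` an abstract handle on the e-graph,
/// and `Id` an e-class reference type supplied by the e-graph.
trait EClassAnalysis<L, G, Id> {
    /// The analysis domain D; `join` must make it a join-semilattice.
    type Data: PartialEq + Clone;

    /// make(n): data for the new singleton e-class created for `enode`,
    /// typically computed from the data of the e-node's children.
    fn make(egraph: &G, enode: &L) -> Self::Data;

    /// join(d1, d2): combine the data of two e-classes being merged.
    fn join(d1: Self::Data, d2: Self::Data) -> Self::Data;

    /// modify(c): optionally update the e-class `id` based on its data,
    /// e.g., by adding an e-node to it. Must be idempotent.
    fn modify(_egraph: &mut G, _id: Id) {}
}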

4.1.1. Maintaining the Analysis Invariant

1  def add(enode):
2    enode = self.canonicalize(enode)
3    if enode in self.hashcons:
4      return self.hashcons[enode]
5    else:
6      eclass = self.new_singleton_eclass(enode)
7      for child_eclass in enode.children:
8        child_eclass.parents.add(enode, eclass)
9      self.hashcons[enode] = eclass
10     eclass.data = analysis.make(enode)
11     analysis.modify(eclass)
12     return eclass
13
14 def merge(eclass1, eclass2):
15   union = self.union_find.union(eclass1, eclass2)
16   if not union.was_already_unioned:
17     d1, d2 = eclass1.data, eclass2.data
18     union.eclass.data = analysis.join(d1, d2)
19     self.worklist.add(union.eclass)
20   return union.eclass
21 def repair(eclass):
22   for (p_node, p_eclass) in eclass.parents:
23     self.hashcons.remove(p_node)
24     p_node = self.canonicalize(p_node)
25     self.hashcons[p_node] = self.find(p_eclass)
26
27   new_parents = {}
28   for (p_node, p_eclass) in eclass.parents:
29     p_node = self.canonicalize(p_node)
30     if p_node in new_parents:
31       self.union(p_eclass, new_parents[p_node])
32     new_parents[p_node] = self.find(p_eclass)
33   eclass.parents = new_parents
34
35   # any mutations modify makes to eclass
36   # will add to the worklist
37   analysis.modify(eclass)
38   for (p_node, p_eclass) in eclass.parents:
39     new_data = analysis.join(
40       p_eclass.data,
41       analysis.make(p_node))
42     if new_data != p_eclass.data:
43       p_eclass.data = new_data
44       self.worklist.add(p_eclass)
Figure 8. The pseudocode for maintaining the e-class analysis invariant is largely similar to how rebuilding maintains congruence closure (Section 3). Only lines 10-11, 17-18, and 35-44 are added; the remaining code is unchanged from Figure 3.

We extend the rebuilding procedure from Section 3 to restore the analysis invariant as well as the congruence invariant. Figure 8 shows the necessary modifications to the rebuilding code from Figure 3.

Adding e-nodes and merging e-classes risk breaking the analysis invariant in different ways. Adding e-nodes is the simpler case; lines 10-11 restore the invariant for the newly created, singleton e-class that holds the new e-node. When merging e-classes, the first concern is maintaining the semilattice portion of the analysis invariant. Since join forms a semilattice over the domain of the analysis data, the order in which the joins occur does not matter. Therefore, line 18 suffices to update the analysis data of the merged e-class.

Since make(n) creates analysis data by looking at the data of n's children, merging e-classes can violate the analysis invariant in the same way it can violate the congruence invariant. The solution is to use the same worklist mechanism introduced in Section 3. Lines 38-44 of the repair method (which rebuild calls on each element of the worklist) re-make and join the analysis data of the parents of any recently merged e-classes. The new repair method also calls modify once, which suffices due to its idempotence. In the pseudocode, modify is reframed as a mutating method for clarity.

egg's implementation of e-class analyses assumes that the analysis domain is indeed a semilattice and that modify is idempotent. Without these properties, egg may fail to restore the analysis invariant on rebuild, or it may not terminate.

4.1.2. Example: Constant Folding

The data produced by e-class analyses can be usefully consumed by other components of an equality saturation system (see Section 4.2), but e-class analyses can be useful on their own thanks to the modify hook. Typical modify hooks will either do nothing, check some invariant about the e-classes being merged, or add an e-node to that e-class (using the regular add and merge methods of the e-graph).

As mentioned above, other equality saturation implementations have implemented constant folding as custom, ad hoc passes over the e-graph. We can instead formulate constant folding as an e-class analysis that highlights the parallels with abstract interpretation. Let the domain be D = Option<Constant>, and let the join operation be the "or" operation of the Option type:
match (a, b) {
    (None,    None   ) => None,
    (Some(x), None   ) => Some(x),
    (None,    Some(y)) => Some(y),
    (Some(x), Some(y)) => { assert!(x == y); Some(x) }
}
Note how join can also aid in debugging by checking properties about values that are unified in the e-graph; in this case we assert that all terms represented in an e-class should have the same constant value. The make operation serves as the abstraction function, returning the constant value of an e-node if it can be computed from the constant values associated with its children e-classes. The modify operation serves as a concretization function in this setting: if d_c is a constant value, then modify(c) adds the corresponding childless e-node to c, concretizing the constant back into syntax.
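Putting the pieces together, the following self-contained sketch spells out make, join, and modify for constant folding over a toy arithmetic language. The e-graph is abstracted as a function giving the data of a child e-class; this is an illustration of the analysis, not egg's ConstantFold implementation.

// Toy language: numeric literals plus addition and multiplication,
// where children are e-class ids.
#[derive(Clone, PartialEq)]
enum Expr {
    Num(i32),
    Add(usize, usize),
    Mul(usize, usize),
}

// The analysis domain D = Option<i32>: Some(k) means every term in the
// e-class evaluates to k; None means no constant is known yet.
type Data = Option<i32>;

// make: the abstraction function, folding an e-node to a constant when all
// of its children already have known constants.
fn make(child_data: impl Fn(usize) -> Data, enode: &Expr) -> Data {
    match enode {
        Expr::Num(k) => Some(*k),
        Expr::Add(a, b) => Some(child_data(*a)? + child_data(*b)?),
        Expr::Mul(a, b) => Some(child_data(*a)? * child_data(*b)?),
    }
}

// join: the semilattice join shown above; equal constants are required.
fn join(a: Data, b: Data) -> Data {
    match (a, b) {
        (None, None) => None,
        (Some(x), None) => Some(x),
        (None, Some(y)) => Some(y),
        (Some(x), Some(y)) => {
            assert!(x == y);
            Some(x)
        }
    }
}

// modify: the concretization step; if the class has a known constant,
// this yields the childless e-node that should be added to that class.
fn modify(data: &Data) -> Option<Expr> {
    match data {
        Some(k) => Some(Expr::Num(*k)),
        None => None,
    }
}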

Constant folding is an admittedly simple analysis, but one that did not formerly fit within the equality saturation framework. E-class analyses support more complicated analyses in a general way, as discussed in later sections on the implementation and case studies (Sections 5 and 6).

4.2. Conditional and Dynamic Rewrites

In equality saturation applications, most of the rewrites are purely syntactic. In some cases, additional data may be needed to determine if or how to perform the rewrite. For example, the rewrite x/x → 1 is only valid if x ≠ 0. A more complex rewrite may need to compute the right-hand side dynamically based on an analysis fact from the left-hand side.

The right-hand side of a rewrite can be generalized to a function apply that takes a substitution and an e-class generated from e-matching the left-hand side, and produces a term to be added to the e-graph and unified with the matched e-class. For a purely syntactic rewrite, the apply function need not inspect the matched e-class in any way; it would simply apply the substitution to the right-hand pattern to produce a new term.

E-class analyses greatly increase the utility of this generalized form of rewriting. The apply function can look at the analysis data for the matched e-class or any of the e-classes in the substitution to determine if or how to construct the right-hand side term. These kinds of rewrites can be broken down further into two categories:

  • Conditional rewrites like x/x → 1 that are purely syntactic, but whose validity depends on checking some analysis data;

  • Dynamic rewrites that compute the right-hand side based on analysis data.

Conditional rewrites are a subset of the more general dynamic rewrites. Our implementation supports both. The example in Section 5 and case studies in Section 6 heavily use generalized rewrites, as they are typically the most convenient way to incorporate domain knowledge into the equality saturation framework.
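For instance, a conditional rewrite in egg's rewrite syntax (the same "left => right if cond" form used in Figure 9) might guard x/x → 1 on a constant-folding analysis. The sketch below assumes an analysis whose data is Option<i32>, as in Section 4.1.2; the condition-function signature follows egg's documented pattern but may vary between versions.

use egg::{rewrite as rw, *};

// x/x => 1, but only when the analysis data proves ?x is a non-zero constant.
fn div_cancel<N>() -> Rewrite<SymbolLang, N>
where
    N: Analysis<SymbolLang, Data = Option<i32>>,
{
    rw!("div-cancel"; "(/ ?x ?x)" => "1" if is_non_zero("?x"))
}

// The condition consults the e-class analysis data of whatever ?x matched.
fn is_non_zero<N>(var: &str) -> impl Fn(&mut EGraph<SymbolLang, N>, Id, &Subst) -> bool
where
    N: Analysis<SymbolLang, Data = Option<i32>>,
{
    let var: Var = var.parse().unwrap();
    move |egraph, _id, subst| matches!(egraph[subst[var]].data, Some(c) if c != 0)
}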

4.3. Extraction

Equality saturation typically ends with an extraction phase that selects an optimal represented term from an e-class according to some cost function. In many domains (Panchekha et al., 2015; Nandi et al., 2020), AST size (sometimes weighted differently for different operators) suffices as a simple, local cost function. We say a cost function is local when the cost of a term can be computed from the function symbol and the costs of the children. With such cost functions, extracting an optimal term can be efficiently done with a fixed-point traversal over the e-graph that selects the minimum cost e-node from each e-class (Panchekha et al., 2015).
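The following standalone sketch shows that fixed-point, bottom-up computation for a simple AST-size cost function over a deliberately simplified e-graph representation (canonical e-class ids only). It computes the best cost per e-class; egg's Extractor works along the same lines but is not reproduced here.

type EClassId = usize;

#[derive(Clone)]
struct ENode {
    op: String,
    children: Vec<EClassId>,
}

/// classes[c] is the set of e-nodes in e-class c. Returns, for each class,
/// the minimum cost of any represented term, where the (local) cost of a
/// term is 1 + the costs of its children (i.e., AST size).
fn best_costs(classes: &[Vec<ENode>]) -> Vec<Option<usize>> {
    let mut best: Vec<Option<usize>> = vec![None; classes.len()];
    // Iterate to a fixed point: keep passing over the e-graph until no
    // e-class's best cost improves.
    loop {
        let mut changed = false;
        for (c, nodes) in classes.iter().enumerate() {
            for node in nodes {
                // Cost of this e-node = 1 + sum of best child costs,
                // defined only if every child already has a cost.
                let child_costs: Option<usize> =
                    node.children.iter().map(|&ch| best[ch]).sum();
                if let Some(cost) = child_costs.map(|s| s + 1) {
                    if best[c].map_or(true, |old| cost < old) {
                        best[c] = Some(cost);
                        changed = true;
                    }
                }
            }
        }
        if !changed {
            return best;
        }
    }
}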

Extraction can be formulated as an e-class analysis when the cost function is local. The analysis data is a tuple (n, k), where n is the cheapest e-node in that e-class and k its cost. The make operation calculates the cost of the e-node based on the analysis data (which contains the minimum costs) of its children. The join operation simply takes the tuple with the lower cost. The semilattice portion of the analysis invariant then guarantees that the analysis data will contain the lowest-cost e-node in each class. Extraction can then proceed recursively: if the analysis data for e-class c gives f(c1, ..., ck) as the optimal e-node, the optimal term represented in c is f(extract(c1), ..., extract(ck)). This not only further demonstrates the generality of e-class analyses, but also provides the ability to do extraction "on the fly"; conditional and dynamic rewrites can determine their behavior based on the cheapest term in an e-class.

Extraction (whether done as a separate pass or as an e-class analysis) can also benefit from the analysis data. Typically, a local cost function can only look at the function symbol of an e-node n and the costs of n's children. When an e-class analysis is attached to the e-graph, however, a cost function may also observe the data associated with n's e-class, as well as the data associated with n's children. This allows a cost function to depend on computed facts rather than just purely syntactic information. In other words, the cost of an operator may differ based on its inputs. Section 6.2 provides a motivating case study wherein an e-class analysis computes the size and shape of tensors, and this size information informs the cost function.

5. egg: Easy, Extensible, and Efficient

We implemented the techniques of rebuilding and e-class analysis in egg, an easy-to-use, extensible, and efficient e-graph library. To the best of our knowledge, egg is the first general-purpose, reusable e-graph implementation. This has allowed focused effort on ease of use and optimization, knowing that any benefits will be seen across use cases as opposed to a single, ad hoc instance.

This section details egg's implementation and some of the various optimizations and tools it provides to the user. We use an extended example of a partial evaluator for the lambda calculus, for which we provide the complete source code (with few changes for readability) in Figure 9 and Figure 10. While contrived, this example is compact and familiar, and it highlights (1) how egg is used and (2) some of its novel features like e-class analyses and dynamic rewrites. It demonstrates how egg can tackle binding, a perennially tough problem for e-graphs, with a simple explicit substitution approach powered by egg's extensibility. Section 6 goes further, providing real-world case studies of published projects that have depended on egg.

egg is implemented in ~5000 lines of Rust, including code, tests, and documentation. (Rust is a high-level systems programming language; egg has been integrated into applications written in other programming languages using both C FFI and serialization approaches.) egg is open-source, well-documented, and distributed via Rust's package management system (source: https://github.com/mwillsey/egg; documentation: https://docs.rs/egg; package: https://crates.io/crates/egg). All of egg's components are generic over the user-provided language, analysis, and cost functions.

5.1. Ease of Use

1define_language! {
2  enum Lambda {
3    // enum variants have data or children (eclass Ids)
4    // [Id; N] is an array of N ‘Id‘s
5
6    // base type operators
7    "+" = Add([Id; 2]), "=" = Eq([Id; 2]),
8    "if" = If([Id; 3]),
9
10    // functions and binding
11    "app" = App([Id; 2]), "lam" = Lambda([Id; 2]),
12    "let" = Let([Id; 3]), "fix" = Fix([Id; 2]),
13
14    // (var x) is a use of ‘x‘ as an expression
15    "var" = Use(Id),
16    // (subst a x b) substitutes a for (var x) in b
17    "subst" = Subst([Id; 3]),
18
19    // base types have no children, only data
20    Bool(bool), Num(i32), Symbol(String),
21  }
22}
23
24// example terms and what they simplify to
25// pulled directly from the test suite
26
27test_fn! { lambda_under, rules(),
28  "(lam x (+ 4 (app (lam y (var y)) 4)))"
29  => "(lam x 8))",
30}
31
32test_fn! { lambda_compose_many, rules(),
33  "(let compose (lam f (lam g (lam x
34                (app (var f)
35                     (app (var g) (var x))))))
36   (let add1 (lam y (+ (var y) 1))
37   (app (app (var compose) (var add1))
38        (app (app (var compose) (var add1))
39             (app (app (var compose) (var add1))
40                  (app (app (var compose) (var add1))
41                       (var add1)))))))"
42  => "(lam ?x (+ (var ?x) 5))"
43}
44
45test_fn! { lambda_if_elim, rules(),
46  "(if (= (var a) (var b))
47       (+ (var a) (var a))
48       (+ (var a) (var b)))"
49  => "(+ (var a) (var b))"
50 }
51 // Returns a list of rewrite rules
52 fn rules() -> Vec<Rewrite<Lambda, LambdaAnalysis>> { vec![
53
54  // open term rules
55  rw!("if-true";  "(if  true ?then ?else)" => "?then"),
56  rw!("if-false"; "(if false ?then ?else)" => "?else"),
57  rw!("if-elim";  "(if (= (var ?x) ?e) ?then ?else)" => "?else"
58      if ConditionEqual::parse("(let ?x ?e ?then)",
59                               "(let ?x ?e ?else)")),
60  rw!("add-comm";  "(+ ?a ?b)"        => "(+ ?b ?a)"),
61  rw!("add-assoc"; "(+ (+ ?a ?b) ?c)" => "(+ ?a (+ ?b ?c))"),
62  rw!("eq-comm";   "(= ?a ?b)"        => "(= ?b ?a)"),
63
64  // substitution introduction
65  rw!("fix";     "(fix ?v ?e)" =>
66                 "(let ?v (fix ?v ?e) ?e)"),
67  rw!("beta";    "(app (lam ?v ?body) ?e)" =>
68                 "(let ?v ?e ?body)"),
69
70  // substitution propagation
71  rw!("let-app"; "(let ?v ?e (app ?a ?b))" =>
72                 "(app (let ?v ?e ?a) (let ?v ?e ?b))"),
73  rw!("let-add"; "(let ?v ?e (+   ?a ?b))" =>
74                 "(+   (let ?v ?e ?a) (let ?v ?e ?b))"),
75  rw!("let-eq";  "(let ?v ?e (=   ?a ?b))" =>
76                 "(=   (let ?v ?e ?a) (let ?v ?e ?b))"),
77  rw!("let-if";  "(let ?v ?e (if ?cond ?then ?else))" =>
78                 "(if (let ?v ?e ?cond)
79                      (let ?v ?e ?then)
80                      (let ?v ?e ?else))"),
81
82  // substitution elimination
83  rw!("let-const";    "(let ?v ?e ?c)" => "?c"
84      if is_const(var("?c"))),
85  rw!("let-var-same"; "(let ?v1 ?e (var ?v1))" => "?e"),
86  rw!("let-var-diff"; "(let ?v1 ?e (var ?v2))" => "(var ?v2)"
87      if is_not_same_var(var("?v1"), var("?v2"))),
88  rw!("let-lam-same"; "(let ?v1 ?e (lam ?v1 ?body))" =>
89                      "(lam ?v1 ?body)"),
90  rw!("let-lam-diff"; "(let ?v1 ?e (lam ?v2 ?body))" =>
91      ( CaptureAvoid {
92         fresh: var("?fresh"), v2: var("?v2"), e: var("?e"),
93         if_not_free: "(lam ?v2 (let ?v1 ?e ?body))"
94                      .parse().unwrap(),
95         if_free: "(lam ?fresh (let ?v1 ?e
96                               (let ?v2 (var ?fresh) ?body)))"
97                  .parse().unwrap(),
98      })
99      if is_not_same_var(var("?v1"), var("?v2"))),
100 ]}
105\end{subfigure}
106\caption[Language and rewrites for the lambda calculus in \egg]{
107\egg is generic over user-defined languages;
108  here we define a language and rewrite rules for a lambda calculus partial evaluator.
109The provided \texttt{define\_language!} macro (lines 1-22) allows the simple definition
110  of a language as a Rust \texttt{enum}, automatically deriving parsing and
111  pretty printing.
112A value of type \texttt{Lambda} is an \enode that holds either data that the
113  user can inspect or some number of \eclass children (\eclass \texttt{Id}s).
114
115Rewrite rules can also be defined succinctly (lines 51-100).
116Patterns are parsed as s-expressions:
117  strings from the \texttt{define\_language!} invocation (ex: \texttt{fix}, \texttt{=}, \texttt{+}) and
118  data from the variants (ex: \texttt{false}, \texttt{1}) parse as operators or terms;
119  names prefixed by ‘‘\texttt{?}’’ parse as pattern variables.
120
Some of the rewrites are conditional, using the
  ‘‘\texttt{left => right if cond}’’
  syntax.
124The \texttt{if-elim} rewrite on line 57 uses \egg’s provided
125  \texttt{ConditionEqual} as a condition, only applying the right-hand side
126  if the \egraph can prove the two argument patterns equivalent.
127The final rewrite, \texttt{let-lam-diff}, is dynamic to support capture avoidance;
128  the right-hand side is a Rust value that
129  implements the \texttt{Applier} trait instead of a pattern.
130\autoref{fig:lambda-analysis} contains the supporting code for these rewrites.
131
132We also show some of the tests (lines 27-50)
133  from \egg’s \texttt{lambda} test suite.
134The tests proceed by inserting the term on the left-hand side, running
135  \egg’s equality saturation, and then checking to make sure the right-hand
136  pattern can be found in the same \eclass as the initial term.
137}
138\label{fig:lambda-rules}
139\label{fig:lambda-lang}
140\label{fig:lambda-examples}
141\end{figure}
142

egg's ease of use comes primarily from its design as a library. By defining only a language and some rewrite rules, a user can quickly start developing a synthesis or optimization tool. Using egg as a Rust library, the user defines the language with the define_language! macro shown in the figure above (lines 1-22). Childless variants in the language may contain data of user-defined types, and e-class analyses or dynamic rewrites may inspect this data.

The user provides rewrites as shown in the figure above (lines 51-100). Each rewrite has a name, a left-hand side, and a right-hand side. For purely syntactic rewrites, the right-hand side is simply a pattern. More complex rewrites can incorporate conditions or even dynamic right-hand sides, both explained in Section 5.2 and its accompanying figure.

Equality saturation workflows, regardless of the application domain, typically have a similar structure: add expressions to an empty e-graph, run rewrites until saturation or timeout, and extract the best equivalent expressions according to some cost function. This “outer loop” of equality saturation involves a significant amount of error-prone boilerplate:

  • Checking for saturation, timeouts, and e-graph size limits.

  • Orchestrating the read-phase, write-phase, rebuild system (Figure 3) that makes egg fast.

  • Recording performance data at each iteration.

  • Potentially coordinating rule execution so that expansive rules like associativity do not dominate the e-graph.

  • Finally, extracting the best expression(s) according to a user-defined cost function.

egg provides these functionalities through its Runner and Extractor interfaces. Runners automatically detect saturation, and can be configured to stop after hitting a time, e-graph size, or iteration limit. The equality saturation loop provided by egg calls rebuild, so users need not even know about egg's deferred invariant maintenance. Runners record various metrics about each iteration automatically, and the user can hook into this to report relevant data. Extractors select the optimal term from an e-graph given a user-defined, local cost function. (As mentioned in Section 4.3, extraction can also be implemented as part of an e-class analysis; the separate Extractor feature is still useful for ergonomic and performance reasons.) The two can be combined as well; users commonly record the “best so far” expression by extracting in each iteration.
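To make the outer loop concrete, here is a minimal sketch of driving egg's Runner and Extractor. It is not code from the paper: the tiny SimpleMath language and its three rules are invented for illustration, and method names may differ slightly across egg versions.

use egg::{rewrite as rw, *};
use std::time::Duration;

define_language! {
    // A tiny illustrative language, not the paper's lambda calculus.
    enum SimpleMath {
        "+" = Add([Id; 2]),
        "*" = Mul([Id; 2]),
        Num(i32),
        Symbol(Symbol),
    }
}

fn rules() -> Vec<Rewrite<SimpleMath, ()>> {
    vec![
        rw!("comm-add"; "(+ ?a ?b)" => "(+ ?b ?a)"),
        rw!("add-0";    "(+ ?a 0)"  => "?a"),
        rw!("mul-1";    "(* ?a 1)"  => "?a"),
    ]
}

fn main() {
    let expr: RecExpr<SimpleMath> = "(* (+ x 0) 1)".parse().unwrap();

    // The Runner packages the outer loop: it adds the expression, applies
    // the rules until saturation or a limit, and records per-iteration data.
    let runner = Runner::default()
        .with_expr(&expr)
        .with_iter_limit(10)
        .with_node_limit(10_000)
        .with_time_limit(Duration::from_secs(5))
        .run(&rules());

    // An Extractor picks the best term by a local cost function (AstSize here).
    let (best_cost, best) =
        Extractor::new(&runner.egraph, AstSize).find_best(runner.roots[0]);
    println!("simplified to {} (cost {})", best, best_cost); // prints: simplified to x (cost 1)
    println!("stop reason: {:?}", runner.stop_reason);
}

The Runner handles limits and invariant maintenance; the Extractor applies the cost function after the run.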

The figure above also shows egg's test_fn! macro for easily creating tests (lines 27-50). These tests create an e-graph with the given expression, run equality saturation using a Runner, and check that the right-hand pattern can be found in the same e-class as the initial expression.

5.2. Extensibility

For simple domains, defining a language and purely syntactic rewrites will suffice. However, our partial evaluator requires interpreted reasoning, so we use some of egg's more advanced features like e-class analyses and dynamic rewrites. Importantly, egg supports these extensibility features as a library: the user need not modify the e-graph implementation or egg's internals.

1type EGraph = egg::EGraph<Lambda, LambdaAnalysis>;
2struct LambdaAnalysis;
3struct FC {
4  free: HashSet<Id>,    // our analysis data stores free vars
5  constant: Option<Lambda>, // and the constant value, if any
6}
7
8// helper function to make pattern meta-variables
9fn var(s: &str) -> Var { s.parse().unwrap() }
10
11impl Analysis<Lambda> for LambdaAnalysis {
12  type Data = FC; // attach an FC to each eclass
13  // merge implements semilattice join by joining into ‘to‘
14  // returning true if the ‘to‘ data was modified
15  fn merge(&self, to: &mut FC, from: FC) -> bool {
16    let before_len = to.free.len();
17    // union the free variables
18    to.free.extend(from.free.iter().copied());
19    if to.constant.is_none() && from.constant.is_some() {
20      to.constant = from.constant;
21      true
22    } else {
23      before_len != to.free.len()
24    }
25  }
26
27  fn make(egraph: &EGraph, enode: &Lambda) -> FC {
28    let f = |i: &Id| egraph[*i].data.free.iter().copied();
29    let mut free = HashSet::default();
30    match enode {
31      Use(v) => { free.insert(*v); }
32      Let([v, a, b]) => {
33        free.extend(f(b)); free.remove(v); free.extend(f(a));
34      }
35      Lambda([v, b]) | Fix([v, b]) => {
36        free.extend(f(b)); free.remove(v);
37      }
38      _ => enode.for_each_child(
39             |c| free.extend(&egraph[c].data.free)),
40    }
41    FC { free: free, constant: eval(egraph, enode) }
42  }
43
44  fn modify(egraph: &mut EGraph, id: Id) {
45    if let Some(c) = egraph[id].data.constant.clone() {
46      let const_id = egraph.add(c);
47      egraph.union(id, const_id);
48    }
49  }
50}\end{lstlisting}
51\end{minipage}
52\hfill
53\begin{minipage}[t]{0.46\linewidth}
54  \begin{lstlisting}[language=Rust, basicstyle=\tiny\ttfamily, escapechar=@, numbers=left, firstnumber=51]
55// evaluate an enode if the children have constants
56// Rust’s ‘?‘ extracts an Option, early returning if None
57fn eval(eg: &EGraph, enode: &Lambda) -> Option<Lambda> {
58  let c = |i: &Id| eg[*i].data.constant.clone();
59  match enode {
60    Num(_) | Bool(_) => Some(enode.clone()),
61    Add([x, y]) => Some(Num(c(x)? + c(y)?)),
62    Eq([x, y]) => Some(Bool(c(x)? == c(y)?)),
63    _ => None,
64  }
65}
66
67// Functions of this type can be conditions for rewrites
68trait ConditionFn = Fn(&mut EGraph, Id, &Subst) -> bool;
69
70// The following two functions return closures of the
71// correct signature to be used as conditions in @\autoref{fig:lambda-rules}@.
72fn is_not_same_var(v1: Var, v2: Var) -> impl ConditionFn {
73    |eg, _, subst| eg.find(subst[v1]) != eg.find(subst[v2])
74}
75fn is_const(v: Var) -> impl ConditionFn {
76    // check the LambdaAnalysis data
77    |eg, _, subst| eg[subst[v]].data.constant.is_some()
78}
79
80struct CaptureAvoid {
81  fresh: Var, v2: Var, e: Var,
82  if_not_free: Pattern<Lambda>, if_free: Pattern<Lambda>,
83}
84
85impl Applier<Lambda, LambdaAnalysis> for CaptureAvoid {
86  // Given the egraph, the matching eclass id, and the
87  // substitution generated by the match, apply the rewrite
88  fn apply_one(&self, egraph: &mut EGraph,
89               id: Id, subst: &Subst) -> Vec<Id>
90  {
91    let (v2, e) = (subst[self.v2], subst[self.e]);
92    let v2_free_in_e = egraph[e].data.free.contains(&v2);
93    if v2_free_in_e {
94      let mut subst = subst.clone();
95      // make a fresh symbol using the eclass id
96      let sym = Lambda::Symbol(format!("_{}", id).into());
97      subst.insert(self.fresh, egraph.add(sym));
98      // apply the given pattern with the modified subst
99      self.if_free.apply_one(egraph, id, &subst)
100    } else {
101      self.if_not_free.apply_one(egraph, id, &subst)
102    }
103  }
104}\end{lstlisting}
105  % \caption{
106  %   Some of the rewrites in \autoref{fig:lambda-rules} are conditional,
107  %     requiring conditions like \texttt{is\_not\_same\_var} or \texttt{is\_const}.
108  %   Others are fully dynamic, using a custom applier like \texttt{CaptureAvoid}
109  %     instead of a syntactic right-hand side.
110  %   Both conditions and custom appliers can use the computed data from the
111  %     \eclass analysis; for example, \texttt{CaptureAvoid} only $\alpha$-renames if
112  %     there might be a name collision.
113  % }
114\end{minipage}
115\caption[\Eclass analysis and conditional/dynamic rewrites for the lambda calculus]{
116Our partial evaluator example highlights three important features \egg provides
117  for extensibility: \eclass analyses, conditional rewrites, and dynamic
118  rewrites.
119
120The \texttt{LambdaAnalysis} type, which implements the \texttt{Analysis} trait,
121  represents the \eclass analysis.
122Its associated data (\texttt{FC}) stores
123  the constant term from that \eclass (if any) and
124  an over-approximation of the free variables used by terms in that \eclass.
125The constant term is used to perform constant folding.
126The \texttt{merge} operation implements the semilattice join, combining the free
127  variable sets and taking a constant if one exists.
128In \texttt{make}, the analysis computes the free variable sets based on the
129  \enode and the free variables of its children;
130  the \texttt{eval} generates the new constants if possible.
131The \texttt{modify} hook of \texttt{Analysis} adds the constant to the \egraph.
132
133Some of the conditional rewrites in \autoref{fig:lambda-rules} depend on
134  conditions defined here.
135Any function with the correct signature may serve as a condition.
136
137The \texttt{CaptureAvoid} type implements the \texttt{Applier} trait, allowing
138  it to serve as the right-hand side of a rewrite.
139\texttt{CaptureAvoid} takes two patterns and some pattern variables.
140It checks the free variable set to determine if a capture-avoiding substitution
141  is required, applying the \texttt{if\_free} pattern if so and the
142  \texttt{if\_not\_free} pattern otherwise.
143}
144\label{fig:lambda-applier}
145\label{fig:lambda-analysis}
146\end{figure}

The figure above shows the remainder of the code for our lambda calculus partial evaluator. It uses an e-class analysis (LambdaAnalysis) to track the free variables and constants associated with each e-class. The implementation of the e-class analysis is in Lines 11-50. The e-class analysis invariant guarantees that the analysis data contains an over-approximation of the free variables of the terms represented in that e-class. The analysis also does constant folding (see the make and modify methods). The let-lam-diff rewrite (Line 90 of the rules figure) uses the CaptureAvoid dynamic right-hand side (Lines 81-100 above) to perform capture-avoiding substitution only when necessary, based on the free variable information. The conditional rewrites from the rules figure depend on the conditions is_not_same_var and is_const (Lines 68-74 above) to ensure correct substitution.

egg is extensible in other ways as well. As mentioned above, Extractors are parameterized by a user-provided cost function. Runners are also extensible with user-provided rule schedulers that can control the behavior of potentially troublesome rewrites. In typical equality saturation, every rewrite is searched for and applied in every iteration. This can cause certain rewrites, commonly associativity or distributivity, to dominate others and make the search space less productive. Applied in moderation, these rewrites can trigger other rewrites and find greatly improved expressions, but they can also slow the search by exploding the e-graph exponentially in size. By default, egg uses its built-in backoff scheduler, which identifies rewrites that are matching in exponentially growing numbers of locations and temporarily bans them. We have observed that this greatly reduces run time (while producing the same results) in many settings. egg can also use a conventional every-rule-every-time scheduler, or the user can supply their own.
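As a sketch of how a scheduler is selected, the following assumes another toy language with two expansive rules; BackoffScheduler and SimpleScheduler are the two built-in schedulers named above, while the expression, rules, and limits here are illustrative only and not from the paper.

use egg::{rewrite as rw, *};
use std::time::Duration;

define_language! {
    // Toy language; associativity and commutativity stand in for the kind
    // of "expansive" rules the backoff scheduler is meant to throttle.
    enum Math {
        "+" = Add([Id; 2]),
        Num(i32),
        Symbol(Symbol),
    }
}

fn main() {
    let rules: Vec<Rewrite<Math, ()>> = vec![
        rw!("assoc-add"; "(+ (+ ?a ?b) ?c)" => "(+ ?a (+ ?b ?c))"),
        rw!("comm-add";  "(+ ?a ?b)"        => "(+ ?b ?a)"),
    ];
    let expr: RecExpr<Math> = "(+ a (+ b (+ c d)))".parse().unwrap();

    let runner = Runner::default()
        .with_expr(&expr)
        .with_time_limit(Duration::from_secs(1))
        // BackoffScheduler is egg's default; it temporarily bans rules whose
        // match counts grow too quickly. SimpleScheduler instead applies
        // every rule in every iteration.
        .with_scheduler(BackoffScheduler::default())
        .run(&rules);

    // Runners record per-iteration metrics that users can inspect or report.
    println!("iterations: {}", runner.iterations.len());
    println!("stop reason: {:?}", runner.stop_reason);
}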

5.3. Efficiency

egg's novel rebuilding algorithm (Section 3), combined with systems programming best practices, makes e-graphs (and the equality saturation use case in particular) more efficient than prior tools. egg is implemented in Rust, giving the compiler freedom to specialize and inline user-written code. This is especially important because egg's generic nature leads to tight interaction between library code (e.g., searching for rewrites) and user code (e.g., comparing operators). egg is designed from the ground up to use cache-friendly, flat buffers with minimal indirection for most internal data structures. This is in sharp contrast to traditional e-graph representations (Nelson, 1980; Detlefs et al., 2005), which contain many tree- and linked-list-like data structures. egg additionally compiles patterns to be executed by a small virtual machine (de Moura and Bjørner, 2007), as opposed to recursively walking the tree-like representation of patterns.

Aside from deferred rebuilding, egg's equality saturation algorithm enables other implementation-level performance enhancements. Searching for rewrite matches, which is the bulk of running time, can be parallelized thanks to the phase separation: either the rules or the e-classes could be searched in parallel. Furthermore, the once-per-iteration frequency of rebuilding allows egg to establish other performance-enhancing invariants that hold during the read-only search phase. For example, egg sorts e-nodes within each e-class to enable binary search, and it also maintains a cache mapping function symbols to the e-classes that contain e-nodes with that function symbol.

Many of egg's extensibility features can also be used to improve performance. As mentioned above, rule scheduling can lead to large performance improvements in the face of "expansive" rules that would otherwise dominate the search space. The Runner interface also supports user-provided hooks that can stop equality saturation once some arbitrary condition is met. This can be useful when using equality saturation to prove terms equal; once they are unified, there is no point in continuing. egg's Runners also support batch simplification, where multiple terms can be added to the initial e-graph before running equality saturation. If the terms are substantially similar, both rewriting and any e-class analyses benefit from the e-graph's inherent structural deduplication. The case study in Section 6.1 uses batch simplification to achieve a large speedup when simplifying many similar expressions.
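For example, a hook that stops saturation as soon as two terms are proved equal might look like the sketch below; the toy Arith language and its rules are assumptions, and the sketch relies on the Runner hook convention of returning an error to stop the run (exact API details may vary between egg versions).

use egg::{rewrite as rw, *};

define_language! {
    enum Arith {
        "+" = Add([Id; 2]),
        Num(i32),
        Symbol(Symbol),
    }
}

fn main() {
    let rules: Vec<Rewrite<Arith, ()>> = vec![
        rw!("comm-add";  "(+ ?a ?b)"        => "(+ ?b ?a)"),
        rw!("assoc-add"; "(+ (+ ?a ?b) ?c)" => "(+ ?a (+ ?b ?c))"),
    ];

    // Prove two terms equal; the hook stops saturation as soon as the two
    // roots land in the same e-class, instead of running to a limit.
    let lhs: RecExpr<Arith> = "(+ a (+ b c))".parse().unwrap();
    let rhs: RecExpr<Arith> = "(+ (+ c b) a)".parse().unwrap();

    let runner = Runner::default()
        .with_expr(&lhs)
        .with_expr(&rhs)
        .with_hook(|runner| {
            // Returning Err tells the Runner to stop before the next iteration.
            if runner.egraph.find(runner.roots[0]) == runner.egraph.find(runner.roots[1]) {
                Err("the two terms are already equal".to_string())
            } else {
                Ok(())
            }
        })
        .run(&rules);

    println!("stop reason: {:?}", runner.stop_reason);
}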

6. Case Studies

This section relates three independently developed, published projects from diverse domains that incorporated egg as an easy-to-use, high-performance e-graph implementation. In all three cases, the developers had first rolled their own e-graph implementations; egg allowed them to delete code, gain performance, and in some cases dramatically broaden the project's scope thanks to egg's speed and flexibility. In addition to gaining performance, all three projects use egg's novel extensibility features like e-class analyses and dynamic/conditional rewrites.

6.1. Herbie: Improving Floating Point Accuracy

Herbie automatically improves accuracy for floating-point expressions, using random sampling to measure error, a set of rewrite rules for generating program variants, and algorithms that prune and combine program variants to achieve minimal error. Herbie received PLDI 2015's Distinguished Paper award (Panchekha et al., 2015) and has been continuously developed since then, sporting hundreds of Github stars, hundreds of downloads, and thousands of users on its online version.

Herbie uses e-graphs for algebraic simplification of mathematical expressions, which is especially important for avoiding floating-point errors introduced by cancellation, function inverses, and redundant computation. Until our case study, Herbie used a custom e-graph implementation written in Racket (Herbie's implementation language) that closely followed traditional e-graph implementations. With timeouts disabled, e-graph-based simplification consumed the vast majority of Herbie's run time. As a fix, Herbie sharply limits the simplification process, placing a size limit on the e-graph itself and a time limit on the whole procedure. When the timeout is exceeded, simplification fails altogether. Furthermore, the Herbie authors knew of several features that they believed would improve Herbie's output but could not be implemented because they required more calls to simplification and would thus introduce unacceptable slowdowns. Taken together, slow simplification reduced Herbie's performance, completeness, and efficacy.

We implemented an egg-based simplification backend for Herbie. The backend is dramatically faster than Herbie's initial simplifier and is now used by default as of Herbie 1.4. Herbie has also backported some of egg's features, like batch simplification and rebuilding, to its own e-graph implementation (which is still usable, just not the default), demonstrating the portability of egg's conceptual improvements.

6.1.1. Implementation

Herbie is implemented in Racket, while egg is in Rust; the simplification backend is thus implemented as a Rust library that provides a C-level API for Herbie to access via a foreign function interface (FFI). The Rust library defines the Herbie expression grammar (with named constants, numeric constants, variables, and operations) as well as the e-class analysis necessary to do constant folding. The library is implemented in under 500 lines of Rust.

Herbie's set of rewrite rules is not fixed; users can select which rewrites to use via command-line flags. Herbie serializes the rewrites to strings, and the backend parses and instantiates them on the Rust side.

Herbie separates exact and inexact program constants: exact operations on exact constants (such as the addition of two rational numbers) are evaluated and added to the e-graph, while operations on inexact constants, or operations that yield inexact outputs, are not. We thus split numeric constants in the Rust-side grammar between exact rational numbers and inexact constants, which are described by an opaque identifier, and transformed Racket-side expressions into this form before serializing them and passing them to the Rust driver. To evaluate operations on exact constants, we used the constant-folding e-class analysis to track the “exact value” of each e-class. (Herbie's rewrite rules guarantee that different exact values can never become equal; the semilattice join checks this invariant on the Rust side.) Every time an operation e-node is added to the e-graph, we check whether all arguments to that operation have an exact value (using the analysis data), and if so we perform rational-number arithmetic to evaluate it. The e-class analysis is cleaner than the corresponding code in Herbie's initial implementation, which is a built-in pass over the entire e-graph.
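A minimal sketch of such an analysis is shown below, using exact machine integers in place of Herbie's rational numbers and mirroring the Analysis signatures from the lambda-calculus figure (newer egg releases changed these signatures slightly); the MathLang grammar and the single rewrite are illustrative, not the real backend's.

use egg::{rewrite as rw, *};

define_language! {
    // Illustrative stand-in for the backend's real grammar.
    enum MathLang {
        "+" = Add([Id; 2]),
        "*" = Mul([Id; 2]),
        Num(i64), // the real backend tracks exact rationals here
        Symbol(Symbol),
    }
}

#[derive(Default)]
struct ExactFold;

impl Analysis<MathLang> for ExactFold {
    // The analysis data is the exact value of an e-class, if it has one.
    type Data = Option<i64>;

    fn make(egraph: &EGraph<MathLang, Self>, enode: &MathLang) -> Self::Data {
        let v = |i: &Id| egraph[*i].data;
        match enode {
            MathLang::Num(n) => Some(*n),
            MathLang::Add([a, b]) => Some(v(a)? + v(b)?),
            MathLang::Mul([a, b]) => Some(v(a)? * v(b)?),
            MathLang::Symbol(_) => None,
        }
    }

    fn merge(&self, to: &mut Self::Data, from: Self::Data) -> bool {
        if to.is_none() && from.is_some() {
            *to = from;
            true
        } else {
            // The rewrites should never equate two different exact values;
            // the semilattice join checks that invariant, as the text describes.
            if let (Some(a), Some(b)) = (to.as_ref(), from.as_ref()) {
                assert_eq!(a, b);
            }
            false
        }
    }

    // Add the folded constant back into the e-graph.
    fn modify(egraph: &mut EGraph<MathLang, Self>, id: Id) {
        if let Some(n) = egraph[id].data {
            let const_id = egraph.add(MathLang::Num(n));
            egraph.union(id, const_id);
        }
    }
}

fn main() {
    let rules: Vec<Rewrite<MathLang, ExactFold>> =
        vec![rw!("comm-add"; "(+ ?a ?b)" => "(+ ?b ?a)")];
    let expr: RecExpr<MathLang> = "(+ x (* 2 3))".parse().unwrap();
    let runner = Runner::default().with_expr(&expr).run(&rules);
    let (_, best) = Extractor::new(&runner.egraph, AstSize).find_best(runner.roots[0]);
    println!("{}", best); // (* 2 3) folds to 6, so this prints (+ x 6) or (+ 6 x)
}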

6.1.2. Results

Figure 9. Herbie sped up its expression simplification phase by adopting egg-inspired features like batch simplification and rebuilding into its Racket-based e-graph implementation. Herbie also supports using egg itself for additional speedup. Note that the y-axis is log-scale.

Our egg simplification backend is a drop-in replacement for the existing Herbie simplifier, making it easy to compare speed and results. We compare using Herbie's standard test suite of roughly 500 benchmarks, with timeouts disabled. Figure 9 shows the results. The egg simplification backend is dramatically faster than Herbie's initial simplifier. This speedup eliminated Herbie's largest bottleneck: the initial implementation dominated Herbie's total run time, backporting egg-inspired improvements into Herbie cut that to about half the total run time, and with the egg backend simplification takes only a small fraction of the total run time. In practice, the measured run time of Herbie's initial implementation was even smaller than it would otherwise have been, since timeouts cause test failures when simplification takes too long. The speedup therefore also improved Herbie's completeness, as simplification now never times out.

Since incorporating egg into Herbie, the Herbie developers have backported some of egg's key performance improvements into the Racket e-graph implementation. First, batch simplification gives a large speedup because Herbie simplifies many similar expressions. When they are simplified simultaneously in one equality saturation, the e-graph's structural sharing can massively deduplicate work. Second, deferring rebuilding (as discussed in Section 3) gives a further speedup. As demonstrated in Figure 7, rebuilding offers an asymptotic speedup, so Herbie's improved implementation (and the egg backend as well) will scale better as the search size grows.

6.2. Spores: Optimizing Linear Algebra

Spores (Wang et al., 2020) is an optimizer for machine learning programs. It translates linear algebra (LA) expressions to relational algebra (RA), performs rewrites, and finally translates the result back to linear algebra. Each rewrite is built up from simple identities in relational algebra like the associativity of join. These relational identities express more fine-grained equality than textbook linear algebra identities, allowing Spores to discover novel optimizations not found by traditional optimizers based on LA identities. Spores performs holistic optimization, taking into account the complex interactions among factors like sparsity, common subexpressions, and fusible operators and their impact on execution time.

6.2.1. Implementation

Spores is implemented entirely in Rust using egg. egg empowers Spores to orchestrate the complex interactions described above elegantly and effortlessly. Spores works in three steps: first, it translates the input LA expression to RA; second, it optimizes the RA expression by equality saturation; finally, it translates the optimized RA expression back to LA. Since the translation between LA and RA is straightforward, we focus the discussion on the equality saturation step in RA.

Spores represents a relation as a function from tuples to real numbers. This is similar to index notation in linear algebra, where a matrix A can be viewed as a function from index pairs to entries. A tuple is identified with a named record, so that the order of fields in a tuple does not matter. There are just three operations on relations: join, union, and aggregate. Join takes two relations and returns their natural join, multiplying the associated real numbers for joined tuples; the set of field names for a relation's records is, in RA terminology, its schema. Union is a join in disguise: it also performs a natural join on its two arguments, but adds the associated real numbers instead of multiplying them. Finally, aggregate sums its argument along a given dimension; it coincides precisely with the “sigma notation” in mathematics.
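One plausible way to write these three operations down is sketched below, where S_A denotes the schema of A and t|_{S_A} restricts a record t to the fields in S_A; this formalization is reconstructed from the prose above and is not necessarily Spores' exact notation.

\begin{align*}
(A \bowtie B)(t) &= A(t|_{S_A}) \cdot B(t|_{S_B}) && \text{join: multiply the reals of joined tuples} \\
(A \uplus B)(t)  &= A(t|_{S_A}) + B(t|_{S_B})      && \text{union: add instead of multiply} \\
(\Sigma_x\, A)(t) &= \sum_{v \in \mathrm{Dom}(x)} A(t \cup \{x \mapsto v\}) && \text{aggregate: sum out the dimension } x
\end{align*}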
Figure 10. RA equality rules: rules (1) and (2) state that the two binary operators are each associative and commutative, rule (3) states that one distributes over the other, and rules (4)-(7) are further identities, with (6) and (7) carrying side conditions on the schema.
The RA identities, presented in Figure 10, are also simple and intuitive. The side conditions refer to the schema: they state that a variable is not in the schema of a subexpression, and they may refer to the size of a dimension (e.g., the length of the rows in a matrix). In Equation 6, when the side condition does not hold, the aggregated variable is first renamed to a fresh variable so that the rule applies. In addition to these equalities, Spores also supports replacing expressions with fused operators; for example, certain operator combinations can be replaced by a single fused operator that streams values from its inputs and computes the result without creating intermediate matrices. Each of these fused operators is encoded with a simple identity in egg.

Note that Equation 6 requires a way to store the schema of every expression during optimization. Spores uses an e-class analysis to annotate e-classes with the appropriate schema. It also leverages the e-class analysis for cost estimation, using a conservative cost model that over-approximates. As a result, equivalent expressions may have different cost estimates. The merge operation on the analysis data takes the lower cost, incrementally improving the cost estimate. Finally, Spores' e-class analysis also performs constant folding. As a whole, the e-class analysis is a composition of three smaller analyses, in a style similar to the composition of lattices in abstract interpretation.
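A sketch of what such a composed semilattice join could look like, written as plain Rust over hypothetical data (not Spores' actual types), is:

use std::collections::HashSet;

// The per-e-class data: a schema, a conservative cost bound, and an
// optional constant. These types are hypothetical stand-ins.
#[derive(Debug, Clone)]
struct RelData {
    schema: HashSet<String>, // field names of the relation
    cost: f64,               // over-approximate cost estimate
    constant: Option<f64>,   // constant value, if the e-class is constant
}

// Semilattice join in the style egg's merge expects: mutate `to` and
// report whether it changed.
fn join(to: &mut RelData, from: RelData) -> bool {
    let mut changed = false;

    // Equivalent expressions denote the same relation, so their schemas
    // should agree; take the union defensively.
    let before = to.schema.len();
    to.schema.extend(from.schema);
    changed |= to.schema.len() != before;

    // Costs over-approximate, so the join keeps the lower estimate.
    if from.cost < to.cost {
        to.cost = from.cost;
        changed = true;
    }

    // Constant folding joins like an Option: adopt a constant if we had none.
    if to.constant.is_none() && from.constant.is_some() {
        to.constant = from.constant;
        changed = true;
    }

    changed
}

fn main() {
    let mut a = RelData {
        schema: ["i", "j"].iter().map(|s| s.to_string()).collect(),
        cost: 100.0,
        constant: None,
    };
    let b = RelData {
        schema: ["i", "j"].iter().map(|s| s.to_string()).collect(),
        cost: 40.0,
        constant: None,
    };
    assert!(join(&mut a, b)); // the cheaper estimate wins
    assert_eq!(a.cost, 40.0);
}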

6.2.2. Results

Spores is integrated into Apache SystemML (Boehm, 2019) as a prototype, where it is able to derive all of the 84 hand-written rules and heuristics for sum-product optimization. It also discovered novel rewrites that contribute additional speedup in end-to-end experiments. With greedy extraction, all compilations completed within a second.

6.3. Szalinski: Decompiling CAD into Structured Programs

Figure 11. (Figure from Nandi et al. (Nandi et al., 2020).) Existing mesh decompilers turn triangle meshes into flat computational solid geometry (CSG) expressions. Szalinski (Nandi et al., 2020) takes in these CSG expressions in a format called Core Caddy, and it synthesizes smaller, structured programs in a language called Caddy that is enriched with functional-style features. This can ease customization by simplifying edits: small, mostly local changes yield usefully different models. The photo shows the 3D-printed hex wrench holder after customizing hole sizes. Szalinski is powered by egg's extensible equality saturation, relying on its high performance, e-class analyses, and dynamic rewrites.

Several tools have emerged that reverse engineer high-level Computer Aided Design (CAD) models from polygon meshes and voxels (Nandi et al., 2018; Du et al., 2018; Tian et al., 2019; Sharma et al., 2017; Ellis et al., 2018). The outputs of these tools are constructive solid geometry (CSG) programs. A CSG program comprises 3D solids like cubes, spheres, and cylinders; affine transformations like scale, translate, and rotate (which take a 3D vector and a CSG expression as arguments); and binary operators like union, intersection, and difference that combine CSG expressions. For repetitive models like a gear, CSG programs can be too long and therefore difficult to comprehend. A recent tool, Szalinski (Nandi et al., 2020), extracts the inherent structure in the CSG outputs of mesh decompilation tools by automatically inferring maps and folds (Figure 11). Szalinski accomplishes this using egg's extensible equality saturation system, allowing it to:
  • Discover structure using loop rerolling rules. This allows Szalinski to infer functional patterns like Fold, Map2, Repeat, and Tabulate from flat CSG inputs.

  • Identify equivalence among CAD terms that are expressed as different expressions by mesh decompilers. Szalinski accomplishes this by using CAD identities. One such CAD identity in Szalinski states that any CAD expression e is equivalent to the CAD expression that applies a rotation by zero degrees about the x, y, and z axes to e.

  • Use external solvers to speculatively add potentially profitable expressions to the e-graph. Mesh decompilers often generate CSG expressions that order and/or group list elements in non-intuitive ways. To recover structure from such expressions, a tool like Szalinski must be able to reorder and regroup lists in ways that expose any latent structure.

6.3.1. Implementation

Even though CAD is different from the traditional languages targeted by programming language techniques, egg supports Szalinski's CAD language in a straightforward manner. Szalinski uses purely syntactic rewrites to express CAD identities and some loop rerolling rules (like inferring a Fold from a list of CAD expressions). Critically, however, Szalinski relies on egg's dynamic rewrites and e-class analysis to infer functions for lists. Consider the flat CSG program in Figure 12(b). A structure-finding rewrite first rewrites the flat list of Unions to:
(Fold Union (Map2 Translate [(0 0 0) (2 0 0) ...] (Repeat Cube 5)))
The list of vectors is stored as Cons elements (sugared above for brevity). Szalinski uses an e-class analysis to track the accumulated lists, in a style similar to constant folding. Then, a dynamic rewrite uses an arithmetic solver to rewrite the concrete list of 3D vectors in the analysis data to (Tabulate (i 5) (* 2 i)). A final set of syntactic rewrites can hoist the Tabulate, yielding the result in Figure 12(c). Thanks to the set of syntactic CAD rewrites, this structure finding even works in the face of CAD identities. For example, the original program may omit the no-op Translate (0 0 0), even though it is necessary to see the repetitive structure.
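The arithmetic reasoning behind such a solver-backed rewrite can be sketched as a small standalone function; the helper below is hypothetical (it is not Szalinski's solver) and only checks for an arithmetic progression that a Tabulate could express.

// Given concrete 3D vectors (like those accumulated by the e-class
// analysis), look for a closed form "v(i) = base + i * stride".
// This helper is a hypothetical stand-in, not Szalinski's actual solver.
fn closed_form(vectors: &[[i32; 3]]) -> Option<([i32; 3], [i32; 3])> {
    let first = *vectors.first()?;
    let second = *vectors.get(1)?;
    let stride = [
        second[0] - first[0],
        second[1] - first[1],
        second[2] - first[2],
    ];
    for (i, v) in vectors.iter().enumerate() {
        for d in 0..3 {
            if v[d] != first[d] + (i as i32) * stride[d] {
                return None; // not an arithmetic progression: no Tabulate
            }
        }
    }
    Some((first, stride))
}

fn main() {
    // The vectors from the flat CSG program above.
    let vs = [[0, 0, 0], [2, 0, 0], [4, 0, 0], [6, 0, 0], [8, 0, 0]];
    // base (0 0 0) and stride (2 0 0), i.e. (Tabulate (i 5) ((* 2 i) 0 0)).
    assert_eq!(closed_form(&vs), Some(([0, 0, 0], [2, 0, 0])));
}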
(a) Five cubes in a line.

(b) Flat CSG input to Szalinski:
    (Union
      (Translate (0 0 0) Cube)
      (Translate (2 0 0) Cube)
      (Translate (4 0 0) Cube)
      (Translate (6 0 0) Cube)
      (Translate (8 0 0) Cube))

(c) Output captures the repetition:
    (Fold Union
      (Tabulate (i 5)
        (Translate
          ((* 2 i) 0 0)
          Cube)))
Figure 12. Szalinski integrates solvers into egg's equality saturation as a dynamic rewrite. The solver-backed rewrites can transform repetitive lists into Tabulate expressions that capture the repetitive structure.
In many cases, the repetitive structure of the input CSG expression is further obfuscated because subexpressions may appear in arbitrary order. For these inputs, the arithmetic solvers must first reorder the expressions to find a closed form like a Tabulate, as shown in Figure 12. However, reordering a list does not preserve equivalence, so adding the reordered list to the e-class of the concrete list would be unsound. Szalinski therefore introduces inverse transformations, a novel technique that allows solvers to speculatively reorder and regroup list elements to find a closed form. The solvers annotate the potentially profitable expression with the permutation or grouping that led to the successful discovery of the closed form. Later in the rewriting process, syntactic rewrites eliminate the inverse transformations where possible (e.g., reordering lists under a Fold Union can be eliminated). egg supported this novel technique without modification.

6.3.2. Results

Szalinski's initial prototype used a custom e-graph written in OCaml. Anecdotally, switching to egg removed most of that code, eliminated bugs, facilitated the key contributions of solver-backed rewrites and inverse transformations, and made the tool substantially faster. egg's performance allowed a shift from running on small, hand-picked examples to a comprehensive evaluation on over 2,000 real-world models from a 3D model sharing forum (Nandi et al., 2020).

7. Related Work

Term Rewriting
Term rewriting (Dershowitz and Jouannaud, 1990) has been used widely to facilitate equational reasoning for program optimizations (Boyle et al., 1996; van den Brand et al., 2002; Visser et al., 1998). A term rewriting system applies a database of semantics-preserving rewrites, or axioms, to an input expression to obtain a new expression which may, according to some cost function, be more profitable than the input. Rewrites are typically symbolic and have a left-hand side and a right-hand side. To apply a rewrite to an expression, a rewrite system implements pattern matching: if the left-hand side of a rewrite rule matches the input expression, the system computes a substitution, which is then applied to the right-hand side of the rewrite rule. Upon applying a rewrite rule, a rewrite system typically replaces the old expression with the new expression. This destructive replacement can lead to the phase ordering problem: it becomes impossible to later apply a rewrite to the old expression that could have led to a more optimal result.
E-graphs and E-matching
The e-graph data structure was first introduced by Greg Nelson (Nelson, 1980). Nelson used e-graphs as an efficient data structure for maintaining congruence closure in the context of combining satisfiability theories by sharing equality information. E-graphs have continued to be a critical component in successful SMT solvers (De Moura and Bjørner, 2008). A key difference between past implementations of e-graphs and egg's e-graph is our novel rebuilding algorithm, which maintains invariants only at certain critical points (Section 3). As a result, egg is more efficient for the purpose of equality saturation. egg implements the pattern compilation strategy introduced by de Moura et al. (de Moura and Bjørner, 2007) that is used in state-of-the-art theorem provers (De Moura and Bjørner, 2008). Some provers (De Moura and Bjørner, 2008; Detlefs