
A general class of combinatorial filters that can be minimized efficiently

State minimization of combinatorial filters is a fundamental problem that arises, for example, in building cheap, resource-efficient robots. But exact minimization is known to be NP-hard. This paper conducts a more nuanced analysis of this hardness than has appeared previously, and uncovers two factors which contribute to this complexity. We show each factor is a distinct source of the problem's hardness and are able, thereby, to shed some light on the role played by (1) the structure of the graph that encodes compatibility relationships, and (2) determinism-enforcing constraints. Just as a line of prior work has sought to introduce additional assumptions and identify sub-classes that lead to practical state reduction, we next use this new, sharper understanding to explore special cases for which exact minimization is efficient. We introduce a new algorithm for constraint repair that applies to a large sub-class of filters, subsuming three distinct special cases for which the possibility of optimal minimization in polynomial time was known earlier. While the efficiency in each of these three cases appeared, previously, to stem from seemingly dissimilar properties, when seen through the lens of the present work, their commonality becomes clear. We also provide entirely new families of filters that are efficiently reducible.





I Introduction

Combinatorial filters are discrete transition systems that process streams of observations to produce outputs sequentially. They have found practical application as estimators in multi-agent tracking problems (e.g., [18]) and as representations of feedback plans/policies for robots (e.g., [10, 21]). Unlike traditional recursive Bayesian filters (the class of estimator most familiar to roboticists, see [17]), combinatorial filters allow one to ask questions regarding minimality. By reducing their size, one can design resource-efficient robots—a consideration of practical importance. More fundamentally, through filter minimization, one may discover what information is necessary to compute a particular estimate, or what needs to be tracked in order to have sufficient knowledge for a given task. Determining a task’s information requirements and limits is a basic problem with a long history in robotics [2, 8], and has begun gaining interest again (e.g., [9]). Unfortunately, given some combinatorial filter, computing the smallest equivalent filter—its minimizer—is an NP-hard problem.

This paper uncovers and examines two different factors which contribute to this complexity: the first has to do with the structure of the compatibility graph induced by the filter; the second involves the additional auxiliary (or zipper) constraints needed during minimization to ensure the result will be deterministic. As we show, both are distinct dimensions and form independent sources of the problem’s hardness. This is the first contribution of the paper (and constitutes the subject of Section III).

As with most hard problems of practical significance, a line of research has sought specially structured sub-classes of filters that allow efficiency to be salvaged [14]. Another line has examined relaxed [16] or other restricted forms of reduction [15, 11]. In the prior work, three particular sub-classes of filter have been identified for which optimal filter minimization is known to be possible in polynomial time, or for which efficient algorithms have been provided, namely: (i) no-missing-edge filters [14], (ii) once-appearing-observation filters [14], and (iii) unitary filters [21].

The second portion of the paper builds upon the facts uncovered in the first to establish a new sub-class of filters for which exact minimization is achievable in polynomial time. This sub-class strictly subsumes those of (i), (ii), and (iii), and also provides some understanding of why both factors —the compatibility graph and auxiliary/zipper constraints— are tame for these filters. Part of the answer is that it is possible to ignore the constraints because they can be repaired afterwards: Section IV introduces an algorithm for constraint repair that applies broadly, including for the new sub-class we study and, hence, the three prior ones as well. Another part of the answer, the requirement to quickly generate minimal clique covers, is feasible for the three prior sub-classes, (i)–(iii), because their compatibility graphs all turn out to be chordal. Thus, their apparent distinctiveness happens to be superficial and, in reality, their efficiency stems from some underlying properties they have in common.

I-A Context, general related work, and roadmap of the paper

As the contributions of this work are of a theoretical nature, we leave the customary motivating settings and example application domains to the work we cite next; each of the following includes specific problem instances, so we trust the reader will glance at those papers to allay any doubts as to practical utility. The term combinatorial filter was coined by Tovar et al. [18], and the minimization problem was formulated and its hardness established in [10]. The current state-of-the-art algorithm for combinatorial filter minimization was presented at ICRA’21 in [19]. The starting point for our current treatment is the authors’ paper [20], which showed that filter minimization is equivalent to the classic graph problem of determining a minimal clique cover, when augmented with additional constraints.

The next section will provide necessary definitions and theoretical background. Section III first delineates important sub-families of graphs and uses them to establish our key hardness results. Section IV turns to constraints and provides an algorithm to repair constraint-violating solutions when specific conditions are met. Thereafter, the results are consolidated into a new, efficiently minimizable sub-class of filters and this is connected with prior special sub-classes in Section V. A short summary and conclusion forms the paper’s last section.

II Preliminaries

II-A Basic definitions

Definition 1 (filter [13]).

A deterministic filter, or just filter, is a 6-tuple F = (V, v₀, Y, τ, C, c), with V a non-empty finite set of states, v₀ ∈ V an initial state, Y the set of observations, τ: V × Y ⇀ V the partial function describing transitions, C the set of outputs, and c: V → C being the output function.

Filters process finite sequences of elements from Y in order to produce a corresponding sequence of outputs (elements of C). Any filter does this by tracing from v₀ along edges (defined by the transition function) and producing outputs (via the function c) as it visits states. States w and w′ are understood to be connected by a directed edge bearing label y, if and only if τ(w, y) = w′. We will assume Y to be non-empty, and that no state is unreachable from v₀.

This paper’s central concern is the following problem:

Problem: Filter Minimization (FM)
Input: A deterministic filter F.
Output: A deterministic filter F* with fewest states, such that:
  1. any sequence which can be traced on F can also be traced on F*;
  2. the outputs they produce on any of those sequences are identical.

Solving this problem requires some minimally-sized filter F* that is functionally equivalent to F, where the notion of equivalence —called output simulation— needs only criteria 1. and 2. to be met. For a formal definition of output simulation, see [20, Definition 5, pg. 93]. (After examining filters like those here, the later sections of that paper go further by studying a generalization in which the output function c may be a relation. Complications arising from that generalization will not be discussed herein.)

Lemma 2 ([10]).

The problem FM is NP-hard.

II-B Constrained clique covers

Recently, in giving a minimization algorithm, FM was connected to an equivalent graph covering problem [20]:

Problem: Minimum Zipped Clique Cover (MZCC)
Input: A graph G = (V, E) and zipper constraints Z = {(U₁, W₁), …, (Uₘ, Wₘ)}, with Uₖ, Wₖ ⊆ V.
Output: Minimum cardinality clique cover K = {K₁, K₂, …} such that:
  1. the union of the Kᵢ is V, with each Kᵢ forming a clique on G;
  2. for each (Uₖ, Wₖ) ∈ Z, if there is some Kᵢ such that Uₖ ⊆ Kᵢ, then some Kⱼ must have Wₖ ⊆ Kⱼ.
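Criteria 1. and 2. are mechanical enough to check directly. The sketch below (an illustrative encoding of our own, not from the paper) validates a candidate MZCC solution, with a graph given as a vertex-to-neighbors dict and each zipper constraint as a pair (U, W) of vertex sets:

```python
# Illustrative checker for candidate MZCC solutions (our own encoding,
# not the paper's): graph maps each vertex to the set of its neighbors;
# a zipper constraint is a pair (U, W) of vertex sets.
def is_valid_zipped_cover(graph, zippers, cover):
    vertices = set(graph)
    # Criterion 1: every vertex is covered, and every part is a clique.
    if set().union(*cover) != vertices:
        return False
    for K in cover:
        if any(v not in graph[u] for u in K for v in K if u != v):
            return False
    # Criterion 2: whenever some part contains U, some part contains W.
    for U, W in zippers:
        if any(U <= K for K in cover) and not any(W <= K for K in cover):
            return False
    return True
```

For instance, on a triangle with the single zipper constraint ({0, 1}, {1, 2}), the one-clique cover {{0, 1, 2}} is valid, while {{0, 1}, {2}} triggers the constraint without satisfying it.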

A minimizer of F can be obtained from the solution to the minimum zipped clique cover problem MZCC(G_F, Z_F), where the graph G_F encodes state compatibility in F and the zipper constraints Z_F are selected in a specific way that enforces determinism. In particular:

Definition 3 (extensions and compatibility).

For a state v of filter F, we will use L(v) to denote the set of observation sequences, or extensions, that can be traced starting from v. Two states v and w are compatible with each other if their outputs agree on L(v) ∩ L(w), their common extensions. And, in such cases, we will write v ∼ w.

By creating a vertex for each state in F, and adding edges connecting pairs v and w whenever v ∼ w, one obtains the compatibility graph of F, which we denote by G_F henceforth.

For filter F, we form the collection of zipper constraints Z_F as follows. Each individual zipper constraint is an ordered pair of vertex sets, but here each set will contain exactly two vertices. For v ∼ w, a pair of compatible states in F, use τ to trace forward under observation y; if the y-children thus obtained are distinct, we form zipper constraint ({v, w}, {τ(v, y), τ(w, y)}) to ensure that if v and w are merged (by occupying some cover together), their y-children will be as well. Construct Z_F by collecting all such constraints over all compatible pairs, using every observation.

Clearly, both G_F and Z_F are polynomial in the size of F.
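The construction just described can be sketched as a fixed-point computation over pairs of states. The dict-based filter encoding below is a hypothetical one, chosen for illustration: trans maps (state, observation) to the successor state and out maps each state to its output. Pairs are first marked incompatible when their outputs differ, and the marking is then propagated backwards until no change occurs:

```python
# Sketch (illustrative encoding, not the paper's code) of building the
# compatibility relation and zipper constraints of a filter.  Two states
# are incompatible iff their outputs differ or some shared observation
# leads to an incompatible pair of children.
from itertools import combinations

def compatibility_and_zippers(states, trans, out):
    incompat = {frozenset(p) for p in combinations(states, 2)
                if out[p[0]] != out[p[1]]}
    obs = {y for (_, y) in trans}
    changed = True
    while changed:  # propagate incompatibility to a fixed point
        changed = False
        for u, v in combinations(states, 2):
            if frozenset((u, v)) in incompat:
                continue
            for y in obs:
                cu, cv = trans.get((u, y)), trans.get((v, y))
                if None not in (cu, cv) and cu != cv \
                        and frozenset((cu, cv)) in incompat:
                    incompat.add(frozenset((u, v)))
                    changed = True
                    break
    compat = [frozenset(p) for p in combinations(states, 2)
              if frozenset(p) not in incompat]
    # one zipper constraint per compatible pair and observation whose
    # traced children are distinct
    zippers = []
    for pair in compat:
        u, v = tuple(pair)
        for y in obs:
            cu, cv = trans.get((u, y)), trans.get((v, y))
            if None not in (cu, cv) and cu != cv:
                zippers.append(({u, v}, {cu, cv}))
    return compat, zippers
```

Two compatible parents sharing an observation that leads to distinct (compatible) children yield exactly one zipper constraint.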

Lemma 4 ([20]).

Any instance of FM can be converted into an equivalent instance of MZCC(G_F, Z_F); hence MZCC is also NP-hard.

Although we skip the details, the proof in [20] of the preceding also gives the means by which a deterministic filter can be constructed from the minimum cardinality clique cover in polynomial time.

III Hardness: Reexamined and Refined

The recasting of FM as MZCC leads one naturally to wonder: what precise role do the compatibility graph and zipper constraints play with regards to hardness?

III-A Revisiting the original result

Firstly, examining the proof of Lemma 2, the argument in [10] proceeds by reducing the graph k-coloring problem to filter minimization. Looking at that construction carefully, one observes that the FM instance resulting from any k-coloring problem does not have any zipper constraints. Hence, by writing MZCC with compatibility graph G_F and no zipper constraints as MZCC(G_F, ∅), we get the following:

Lemma 5.


MZCC(G_F, ∅) is NP-hard.


The original construction in [10] is sufficient. ∎

A superficial glance might cause one to think of MZCC with an empty collection of zipper constraints as the standard minimum clique cover problem, viz., one of Karp’s original NP-complete problems [7]. Actually, Lemma 5 states that the clique cover instances arising in the minimization of filters are NP-hard; note that this is neither a direct restatement of Karp’s original fact nor merely entailed by it.

III-B Special graphs: efficiently coverable cases

To begin to investigate problems with special structure, our starting point is to recognize that several specific sub-families of undirected graphs (some widely known, others more obscure) allow a minimal clique cover to be obtained efficiently. We formalize such cases with the following.

Definition 6 (efficiently coverable).

A sub-family of graphs 𝒢 is termed efficiently coverable if there is some algorithm A such that, for every G ∈ 𝒢, A(G) gives a minimal clique cover of G and does so in polynomial time.

In filters with efficiently coverable compatibility graphs, when also Z_F = ∅, criterion 2. of MZCC will hold vacuously and FM will be efficient. The contrast of this statement with Lemma 5 shows that the efficiently coverable sub-families carve out subsets of easy problems.

Lemmas 8, 10 and 12, and Theorem 13, which will follow, review some instances of efficiently coverable graphs:

Definition 7 (chordal graph [4]).

A graph is chordal if all cycles of four or more vertices have a chord, which is an edge not part of the cycle but which connects two vertices in the cycle.

Lemma 8 ([3]).

Chordal graphs are efficiently coverable.


A classic algorithm for computing a minimum covering by cliques appears in [3]; it uses the existence of a perfect elimination ordering, it being well known (but first established in [12]) that a graph is chordal if and only if it admits such an ordering. ∎
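As a concrete sketch of this approach (our own illustrative code, not the algorithm of [3] verbatim): maximum-cardinality search yields an ordering whose reverse is a perfect elimination ordering exactly when the graph is chordal, and a greedy scan over that ordering produces a minimum clique cover:

```python
# Illustrative sketch: minimum clique cover of a chordal graph via a
# perfect elimination ordering (PEO); graphs are vertex->neighbors dicts.
def mcs_order(graph):
    # Maximum-cardinality search; the REVERSE of this ordering is a PEO
    # whenever the graph is chordal.
    weights = {v: 0 for v in graph}
    order = []
    while weights:
        v = max(weights, key=weights.get)  # any max-weight vertex
        order.append(v)
        del weights[v]
        for u in graph[v]:
            if u in weights:
                weights[u] += 1
    return order

def chordal_clique_cover(graph):
    peo = list(reversed(mcs_order(graph)))
    pos = {v: i for i, v in enumerate(peo)}
    covered, cover = set(), []
    for v in peo:
        if v in covered:
            continue
        # in a PEO, v together with its later neighbors forms a clique
        K = {v} | {u for u in graph[v] if pos[u] > pos[v]}
        cover.append(K)
        covered |= K
    return cover
```

On the 4-vertex path 0–1–2–3, for example, this returns a two-clique cover; on a triangle, a single clique.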

A strictly larger class of graphs is that of the perfect graphs.

Definition 9 (perfect graph [4]).

A perfect graph is a graph where the chromatic number of every induced subgraph equals the order of the largest clique of that subgraph.

Lemma 10 ([5]).

Perfect graphs are efficiently coverable.


The minimum clique cover for a perfect graph can be found in polynomial time  [5, Theorem 9.3.30, pg. 294]. ∎

As all chordal graphs are also perfect, Lemma 8 follows from Lemma 10, and the reader may wonder why chordal graphs are then worthy of explicit mention. Three reasons: (1) checking the requirements of Definition 7 tends to be much less demanding than those of Definition 9, which have a level of indirectness to them; (2) the polynomial-time algorithm of [5] (referenced in the proof of Lemma 10) is not a direct combinatorial method and, in fact, researchers continue to contribute practical methods tailored to specific sub-classes of perfect graphs (e.g., [1]); (3) chordal graphs will show up in the proofs, including in the next section.

But there are graphs, beyond only those which are perfect, that still give efficiently coverable problems:

Definition 11 (triangle-free graph [4]).

A triangle-free graph is an undirected graph where no three vertices have incident edges forming a triangle.

A specific triangle-free graph that is not perfect is the Grötzsch graph.

Lemma 12.

Triangle-free graphs are efficiently coverable.


A folklore algorithm for computing the minimal clique cover for triangle-free graphs is to compute a maximum matching [4], and then treat the unmatched singleton vertices as cliques themselves. ∎
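The idea can be sketched as follows (our own illustrative code, not the paper's): in a triangle-free graph every clique is a single edge or a single vertex, so an optimal cover pairs up as many vertices as a matching allows and covers the rest as singletons. A brute-force matching keeps the sketch self-contained; Edmonds' blossom algorithm would supply the polynomial-time bound.

```python
# Illustrative sketch: minimum clique cover of a triangle-free graph.
from itertools import combinations

def max_matching(edges):
    # Brute force, fine for tiny examples; a blossom-based maximum
    # matching would make this polynomial time.
    edges = list(edges)
    for r in range(len(edges), 0, -1):  # try larger edge sets first
        for combo in combinations(edges, r):
            verts = [v for e in combo for v in e]
            if len(verts) == len(set(verts)):  # pairwise disjoint edges
                return list(combo)
    return []

def clique_cover_triangle_free(vertices, edges):
    matching = max_matching(edges)
    matched = {v for e in matching for v in e}
    return ([set(e) for e in matching]
            + [{v} for v in vertices if v not in matched])
```

On the 5-cycle, the cover has three parts: two matched edges and one leftover singleton vertex.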

Finally, a type of compositionality allows treatment of graphs with mixed properties. For example, we might have interest in filters with compatibility graphs where some components are perfect, and others are triangle-free. The following fact can be useful in such cases.

Theorem 13 (mix-and-match).

Suppose a graph G = (V, E) is made up of components G₁, …, Gₘ, where V is partitioned into mutually disjoint sets of vertices V₁, …, Vₘ, and each Gᵢ is the subgraph induced on Vᵢ. Then, if each of the Gᵢ’s belongs to an efficiently coverable sub-family, G is efficiently coverable.


One applies the algorithm associated with each component to that component. The union of their results is a minimum clique cover for G, and this only requires polynomial time. ∎

Filters easily yield compatibility graphs comprising separate components because compatibility graphs never possess edges between any vertices v and w with c(v) ≠ c(w); that is, the output values directly partition the vertices.

III-C Special compatibility graphs and non-empty zippers

In light of Lemma 5 showing that zippers are not needed to have hard problems, and the fact that there are sub-families of graphs for which minimal covers may be obtained efficiently, we next ask: do the zipper constraints themselves contribute enough complexity so that even with an efficiently coverable instance, we can get a hard problem?

The answer is in the affirmative and we use the sub-family of chordal graphs to establish this. We begin with a triangulation procedure that, given a general graph, constructs one which is chordal. The approach to the proof is to think about solving MZCC on the chordal version and then relate the solution back to the original problem.

A graph is non-chordal if and only if it contains a chordless cycle of four or more vertices. We can break such a cycle into smaller ones by adding edges. Repeating this process will triangulate such cycles, and the procedure must eventually terminate, as the complete graph is chordal. We call the newly introduced edges dashed, as this is how we shall depict them visually.

Having introduced extra edges, the idea is to discourage clique covers from ever choosing to cover any of these new dashed edges, via penalization. A penalty is incurred by being compelled to choose additional cliques—zipper constraints are rich enough to force such choices. This requires the introduction of a gadget we term a ‘necklace’. Suppose the original non-chordal graph had been triangulated by the addition of some number of dashed edges. Then, as illustrated in Figure 1, we first make a row of copies of a small connected graph, each copy being dubbed a ‘pendant’. These are laid in a line, and between any pair of adjacent pendants, we place a single black vertex that we call a ‘bead’. The pendants and beads are strung together via edges, each bead being connected to the two adjacent pendants. We’ll call these connecting edges ‘strings’. To each dashed edge we add zipper constraints, connecting the dashed edge to the full length of strings. This construction means that when a dashed edge is covered, its zipper constraints become active, and then each bead will have to appear in two separate covers, one for each neighboring pendant.

Given graph G and zipper constraints Z, we will denote the result of the construction just described with (G′, Z′), the first element being the chordal graph along with the necklace, and the second element being the augmented collection of constraints. Then: (1) G′ is chordal (the dashed edges make the original graph chordal, and the necklace itself is chordal); (2) G is a subgraph of G′, as vertices and edges were added, never removed; (3) Z ⊆ Z′, as constraints were added, not removed. Further, notice that G′ and Z′ are no larger than some polynomial factor of the sizes of G and Z, and the construction takes polynomial time.

The whole point of this construction is the following:

Lemma 14.

Given any non-empty graph G and zipper constraints Z, a solution to MZCC(G, Z) can be obtained from any solution to MZCC(G′, Z′), by restricting to only those covers on the vertices of G.


To cover the necklace (top half of Figure 1) without any zipper constraints being active, a certain baseline number of cliques is required: every bead must be covered, and the lower vertex of each pendant must be covered; these lower vertices cannot occupy the same cover as the beads, so together they yield a lower bound. In contrast, if a zipper constraint is triggered, then to cover the necklace, extra cliques must cover the edges comprising the string, in addition to those required to cover the remaining (lower) vertices in the pendants. The difference between a zipper constraint being triggered versus not is thus a penalty in cover size.

Given a minimal cover K′ of G′, now suppose that K′ groups the vertices connected by some dashed edge into the same clique, and thereby triggers a zipper constraint; then K′ pays the penalty just described. But consider the trivial cover: for each pair connected by an edge of the original graph, add the pair as a clique; cover each pendant as a clique; and then cover the beads with singleton cliques. The trivial cover avoids the penalty and is strictly smaller, contradicting the minimality of K′. Thus K′ never chooses to cover any dashed edges, and the restriction—ignoring covers for the top half—gives a cover for G. Denote this restriction K. Suppose some K* were a solution to MZCC(G, Z) with fewer cliques than K; but then replacing K with K* (covering the necklace without triggering any zipper constraints) would yield a smaller solution to MZCC(G′, Z′) than K′, a contradiction. Hence, K is minimal, as required. ∎

Fig. 1: A clique cover problem on a general graph is reduced to a clique cover problem on a chordal graph with extra zipper constraints. Two dashed edges are added to the pentagon in order to triangulate it, resulting in a 7-edge chordal graph. Dashed edges are made undesirable through the addition of zipper constraints that trigger the necklace string (at top). Zipper constraints are represented as arrows, shown in red and blue to associate them visually with their dashed edge. (Note: parts of some blue arrows have been elided to reduce visual clutter).
Theorem 15.


MZCC(G, Z), where G is drawn from an efficiently coverable sub-family, is NP-hard.


One needs to reduce from a known NP-hard problem to an instance of MZCC whose graph is efficiently coverable. Leveraging Lemma 5, consider any instance of MZCC(G_F, ∅). Then use Lemma 14 (taking G_F for G and ∅ for Z) to obtain G′ and Z′, where G′ is chordal and MZCC(G′, Z′) is an equivalent problem. But G′ is efficiently coverable (Lemma 8); hence the necessary reduction is complete. ∎

The preceding leads to the following interpretation. If a filter induces an efficiently coverable G_F then:

  • when Z_F = ∅, since criterion 2. of MZCC holds vacuously, FM is in P;

  • when Z_F ≠ ∅, despite G_F being efficiently coverable, FM should be suspected of being intractable because it is NP-hard in the worst case.

Additional support for this link between the filter structure, FM, MZCC, and worst-case intractability is the following:

Theorem 16.

A graph G can be realized as the compatibility graph of some filter if and only if either: (1) G has at least two connected components, or (2) G is a complete graph.


We form a filter F with a state for each vertex in G. First, identify the set of connected components in G; suppose there are k of them. Taking the set of outputs C = {1, …, k}, have the output function c give each state the number of the component to which it belongs. Next, for any vertices u and v in G that are not directly connected by an edge, suppose their corresponding states in F are w_u and w_v, respectively. Then add edges from w_u and w_v to states w_x and w_z, chosen arbitrarily but with c(w_x) ≠ c(w_z); label those two edges with the same observation. When k > 1, the preceding will have made all such w_u and w_v disagree on a common extension, thus ensuring w_u ≁ w_v. When k = 1 and G is complete, no fitting u and v exist, so no edges will have been added. Finally, we pick an arbitrary state as an initial state, and adjoin an edge from the initial state to each of the other states in the filter, labeling each edge with an observation unique to it. In the resulting F, if two states do not share the same common outgoing observation, then they are compatible. Vacuously, when k = 1 with complete G, all states must be mutually compatible. Therefore, F must have G as its compatibility graph.

It remains to show that any single-component graph that is not also a complete graph can never arise as the compatibility graph of any filter. Any edges connecting two vertices in a compatibility graph must involve states that have the same output value. Thus, to generate a graph with a single component, all states must have identical outputs. But then any pair of states must be compatible because their common extensions can only produce identical sequences—just repetitions of the same output. Any graph that is not complete has some pair of vertices not connected by an edge, which would have to come from incompatible states, a possibility that has just been precluded. ∎

Corollary 17.

Chordal graphs with zippered necklace structures, as in the construction involved in Lemma 14, are realizable from filters.


The triangulation of the original graph has at least one connected component, and the necklace forms another; hence condition (1) in Theorem 16 applies. ∎

IV Repairable Zipper Constraints

If general zipper constraints introduce enough complexity that the problem is hard even when the graph is efficiently coverable, and yet the absence of zipper constraints gives an easy problem, how do we obtain a more discriminating conception of zipper constraints and their structure? And, specifically, are there special cases of filters which give ‘nice’ zipper constraints? In this section, we first formalize sufficient conditions under which zipper constraints may be ignored; in these cases, once a clique cover has been obtained, we can modify it to make the zipper constraints hold. This modification step can repair, in polynomial time, any zipper-constraint-violating clique cover for which the sufficient conditions are met and, crucially, can do this without causing any increase in size.

In any graph G, we refer to the neighbors of a vertex v by the set N(v). Note that we explicitly include v in its own neighborhood.

Through neighborhoods, the following condition now describes a type of harmony between zipper constraints and the compatibility relation.

Definition 18 (comparable zip candidates).

Given a graph G, it has the comparable zip candidates property with respect to an associated collection of zipper constraints Z = {(U₁, W₁), …, (Uₘ, Wₘ)}, if and only if every pair of vertices u, w ∈ Wₖ satisfies either N(u) ⊆ N(w) or N(w) ⊆ N(u), for every k ∈ {1, …, m}.

The preceding definition is a sufficient condition to yield MZCC problems whose zipper constraints may be repaired:

Lemma 19 (repairablity).

Let filter F’s compatibility graph G_F possess the comparable zip candidates property with respect to Z_F. Suppose cover K of G_F violates Z_F. Then there is a cover K′ that will satisfy Z_F, with |K′| ≤ |K|.


The proof constructs K′ by repairing K. Suppose zipper constraints are violated. Specifically, for each violated constraint (Uₖ, Wₖ), some clique Kᵢ is such that Uₖ ⊆ Kᵢ but none of the Kⱼ has Wₖ ⊆ Kⱼ. That is, there must be vertices wₛ, w_d ∈ Wₖ for which no clique contains both wₛ and w_d. Notice that wₛ ∼ w_d because they are downstream vertices in a zipper constraint derived from a filter, hence both are y-children of compatible parents. Given comparable zip candidates, without loss of generality, assume that the labeling was chosen so that N(w_d) ⊆ N(wₛ). Their names, with mnemonic source and destination, are chosen because we copy the source wₛ to the clique containing the destination w_d.

Each Kᵢ is grown by including every source vertex associated with some destination vertex in this clique:

  Kᵢ′ = Kᵢ ∪ { wₛ : wₛ is the source for some destination w_d ∈ Kᵢ }.   (1)

After doing this for all i, we obtain the collection K′ with |K′| ≤ |K| (and |K′| < |K| only if some sets grew to become identical). For this K′, all zipper constraints will be satisfied because the set operations in (1) copy each source vertex into the cliques containing its destination vertex. Then, for all pairs of source and destination vertices, some cover will now contain them both. As a consequence, all zipper constraints are satisfied.

Clearly, K′ still covers all vertices. Next, we will show that each Kᵢ′ is still a clique. For Kᵢ′ ≠ Kᵢ, let D = Kᵢ′ \ Kᵢ. Since the vertices in Kᵢ form a clique, the vertices in Kᵢ are compatible with each other. For every newly added vertex wₛ ∈ D, let w_d ∈ Kᵢ denote its corresponding destination vertex. Then, because w_d is compatible with all vertices in Kᵢ, we know Kᵢ ⊆ N(w_d). But, as N(w_d) ⊆ N(wₛ), Kᵢ ⊆ N(wₛ) and, hence, wₛ is compatible with all states in Kᵢ.

It remains to show that all states in D are mutually compatible. Previously, we showed that every new state (any state in D) is compatible with every existing state (those in Kᵢ). Symmetrically, every existing state v ∈ Kᵢ is compatible with all new states in D. Hence, D is included in the neighborhood of v, i.e., D ⊆ N(v). For each pair of vertices wₛ, wₛ′ ∈ D, where w_d′ ∈ Kᵢ serves as the destination node for wₛ′, the source has the larger neighborhood, i.e., N(w_d′) ⊆ N(wₛ′). As D ⊆ N(w_d′), we can take wₛ ∈ D and thus have wₛ ∈ N(wₛ′). But since wₛ is an arbitrary source in D, this means that every source in D is compatible with all elements of D. But no element in D is not a source, so Kᵢ′ is a clique. ∎
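The repair step is straightforward to realize in code. The sketch below (an illustrative encoding of ours, not the paper's implementation) assumes the comparable zip candidates property; zip_pairs holds the two downstream vertices of each zipper constraint, and nbhd maps every vertex to its closed neighborhood:

```python
# Sketch of the repair of Lemma 19 (illustrative encoding).  Assumes
# the comparable zip candidates property: the neighborhoods of each
# downstream pair are nested, so a 'source' can always be chosen.
def repair_cover(cover, zip_pairs, nbhd):
    repaired = []
    for K in cover:
        grown = set(K)
        for u, w in zip_pairs:
            # orient so the source has the (weakly) larger neighborhood
            src, dst = (u, w) if nbhd[w] <= nbhd[u] else (w, u)
            if dst in K:
                grown.add(src)  # copy the source beside its destination
        repaired.append(grown)
    # parts that grew to become identical are kept only once
    unique = []
    for K in repaired:
        if K not in unique:
            unique.append(K)
    return unique
```

Since vertices are only ever copied into existing parts, the repaired cover never has more parts than the original, matching the size guarantee of the lemma.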

Notice that the scope of Definition 18 includes graphs and zipper constraints generally, while Lemma 19 concerns compatibility graphs and zipper constraints obtained specifically from filters, and the additional structure inherited from the filter shows up in the proof itself.

V Special cases: efficiently reducible filters

Up to this point, sufficient conditions have been presented for favorable MZCC problem instances. Each condition concerns a separate factor: the first, in Section III, deals with structural properties of graphs; while in Section IV, the second involves the zipper constraints being accordant with neighborhoods of potentially zipped vertices. The two reflect different dimensions and, as argued above, form distinct sources of the problem’s hardness. To consolidate—

A sub-class of filters that can be minimized efficiently: Any filter with efficiently coverable compatibility graph and comparable zip candidates can be minimized efficiently.

One first constructs the compatibility graph, finds any minimum clique cover, and then repairs it using Lemma 19.

The requirements for this sub-class, however, pertain to products derived from filters. To avoid properties that must be verified indirectly, as they involve intermediate products, we will now give closer scrutiny to filters themselves.

V-A Handy properties that yield efficiently reducible filters

We seek properties that are recognizable and verifiable on filters directly. To start with, here is a sufficient condition:

Definition 20 (globally language comparable).

A filter is globally language comparable if, for every pair of compatible states v ∼ w, either L(v) ⊆ L(w) or L(w) ⊆ L(v).

A lemma will make use of the next property:

Property 21 (condition for transitivity).

Given three states u, v, w in F such that u ∼ v and v ∼ w, we have u ∼ w if they satisfy L(u) ⊆ L(v) and L(w) ⊆ L(v).


Since u ∼ v, both u and v agree on the extensions L(u) ∩ L(v); similarly, v ∼ w means v and w agree on the extensions L(v) ∩ L(w). To establish the desired fact, state u must agree with w on all their common extensions, but L(u) ∩ L(w) ⊆ L(v), so each of them agreeing with v on its common extensions gives enough coverage for u and w to agree mutually. ∎

The first use of this property is in the next lemma, showing that filters that are globally language comparable have compatibility graphs that are efficiently coverable.

Lemma 22.

If F is any globally language comparable filter, then its compatibility graph G_F is chordal.


Chordal graphs have no chordless cycle of size 4 or larger; for any two edges that share a vertex —say, between u and v, and v and w— it is enough to show there must be a chord connecting u and w. But, for any globally language comparable filter, the condition in Property 21 holds, so we know u ∼ w. ∎

In fact, globally language comparable filters also have benign zipper constraints.

Lemma 23.

If F is a globally language comparable filter, then compatibility graph G_F possesses the comparable zip candidates property with respect to zipper constraints Z_F.


For any pair of states u, w appearing together in a downstream set Wₖ of a zipper constraint: if u = w, then no such constraint exists, owing to the construction of Z_F. If u ≠ w, then u ∼ w and, since F is globally language comparable, assume without loss of generality that L(u) ⊆ L(w). For any state x with x ∼ w: x and w agree on their common extensions; and since L(u) ⊆ L(w), u and w agree on all of L(u); hence x and u agree on L(x) ∩ L(u), i.e., x ∼ u. Hence, N(w) ⊆ N(u). And Definition 18, thus, holds for G_F with respect to Z_F. ∎

Theorem 24.

Globally language comparable filters can be minimized efficiently.


One puts Lemmas 22 and 23 together: the former tells us, following Lemma 8, that a minimum clique cover can be obtained quickly; the latter, via Lemma 19, says that the cover can then be repaired to satisfy Z_F. ∎

A different but also potentially useful way to identify special cases is to use the compatibility relation: for a particular problem instance, one might determine whether specific algebraic properties hold.

As Definition 3 involves common extensions, all compatibility relations on states will be reflexive (v ∼ v, any state being compatible with itself) and symmetric (u ∼ v implies v ∼ u). But some relations may have additional properties:

Definition 25 (equivalence).

If filter F induces a compatibility relation ∼ on states that is transitive, it is an equivalence relation, and termed a compatibility equivalence.

Next is a property involving neighborhoods.

Definition 26 (neighborhood comparable).

A filter is neighborhood comparable if and only if every pair of vertices u and v in its compatibility graph whose neighborhoods intersect, N(u) ∩ N(v) ≠ ∅, satisfies either N(u) ⊆ N(v) or N(v) ⊆ N(u). (Here a vertex's neighborhood includes the vertex itself, the relation being reflexive.)

Though somewhat disguised, the two preceding definitions express the same concept.

Property 27.

A filter is neighborhood comparable if and only if it induces a compatibility equivalence.


(via the contrapositive)  Suppose u ~ v and v ~ w, but u ≁ w. Then w ∈ N(v) and w ∉ N(u), and also u ∈ N(v) and u ∉ N(w). But the filter is not neighborhood comparable as v ∈ N(u) ∩ N(w) and also, simultaneously, N(u) ⊄ N(w) and N(w) ⊄ N(u).

(also via the contrapositive)  Suppose u and v are vertices where N(u) ∩ N(v) ≠ ∅ and N(u) ⊄ N(v), along with N(v) ⊄ N(u). But then the relation fails to be transitive: pick w ∈ N(u) ∩ N(v) and x ∈ N(u) ∖ N(v); then x ~ u and u ~ w, so transitivity would require x ~ w and, in turn, x ~ w and w ~ v would require x ~ v, contradicting x ∉ N(v). ∎
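Property 27 can be observed computationally: a reflexive, symmetric relation is transitive precisely when intersecting neighborhoods are nested. A small sketch, with our own (hypothetical) helper names, where N maps each state to its set of compatible states:

```python
def is_transitive(N):
    """N maps each state to the set of states compatible with it; the
    relation is reflexive, so each N[v] contains v."""
    return all(w in N[u] for u in N for v in N[u] for w in N[v])

def is_neighborhood_comparable(N):
    """Any two states whose compatibility neighborhoods intersect must
    have nested neighborhoods."""
    states = list(N)
    return all(N[u] <= N[v] or N[v] <= N[u]
               for i, u in enumerate(states)
               for v in states[i + 1:]
               if N[u] & N[v])
```

A two-clique ("cluster") relation satisfies both checks; a three-state chain relation (1 ~ 2 ~ 3 but 1 ≁ 3) fails both, illustrating the equivalence.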

Further, the neighborhood comparable property induces an efficiently coverable compatibility graph:

Lemma 28.

Any neighborhood comparable filter has a compatibility graph that is chordal.


In the compatibility graph, each equivalence class of states forms a disjoint component, every component being itself a complete graph. Such graphs are called cluster graphs, and are a special sub-class of chordal graphs [4]. ∎
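Recognizing a cluster graph is likewise simple: every connected component must be a clique. A minimal sketch (the function name and adjacency-dict representation are our own), where adjacency sets exclude the vertex itself:

```python
def is_cluster_graph(adj):
    """True iff every connected component of the graph is a clique."""
    seen = set()
    for start in adj:
        if start in seen:
            continue
        comp, stack = {start}, [start]   # depth-first component collection
        while stack:
            v = stack.pop()
            for u in adj[v]:
                if u not in comp:
                    comp.add(u)
                    stack.append(u)
        seen |= comp
        # a k-vertex component is a clique iff every one of its vertices
        # has exactly k-1 neighbors inside it
        if any(len(adj[v] & comp) != len(comp) - 1 for v in comp):
            return False
    return True
```

Two disjoint cliques pass the test; a path on three vertices does not.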

We can also show that a neighborhood comparable filter has zipper constraints that are repairable.

Lemma 29.

Any neighborhood comparable filter has a compatibility graph that possesses the comparable zip candidates property with respect to its zipper constraints.


The construction process that gives the zipper constraints only places pairs of compatible states in the constraint sets, as both vertices are downstream children, under a common observation, of compatible parents. Suppose then we have u ~ v for some pair u and v appearing in a constraint. The filter being neighborhood comparable implies either N(u) ⊆ N(v) or N(v) ⊆ N(u), which satisfies the requirements for Definition 18. ∎

Theorem 30.

Neighborhood comparable filters can be minimized efficiently.


Lemma 28 (along with Lemma 8) says that a minimum clique cover can be obtained efficiently, and Lemmas 29 and 19 mean it can be repaired to satisfy the zipper constraints. ∎

The properties described in this section, one on extensions and another on neighborhoods, are useful because they are not difficult requirements to verify and they imply facts about both the compatibility graph and the zipper constraints. Still, they are fairly abstract. One might wonder, for instance, whether Definitions 20 and 26 really differ essentially. (They are distinct, as we will see shortly.)

V-B Prior cases in the literature

We now use the conditions just introduced to re-examine three sub-classes of filter for which polynomial-time minimization has been reported in the literature. This treatment provides a new understanding of the relationships between these special cases. To start, each sub-class must be defined.

When discussing the fact that FM is NP-hard, the authors in [10] point out that this may, at first, seem unexpected since minimization of deterministic finite automata (DFA) is efficient (e.g., via the theorem of Myhill–Nerode [6]). As an intuition for this difference, they offer the following perspective: when a sequence crashes on a DFA, that string is outside of the automaton's language, whereas, when a sequence crashes on a filter, FM allows the minimizer to select any output for it, and some choices will likely give more compression than others. These degrees of freedom represent a combinatorial choice within the FM problem.

One way to curtail the explosion of such choices, then, is to ensure that no strings can ever crash:

Definition 31 (no-missing-edge filters [14]).

A no-missing-edge filter is a filter with the property that every state has an outgoing edge for every observation.

For any no-missing-edge filter, the filter's language is the Kleene star of the set of observations.
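Membership in this sub-class is a direct structural check. A sketch, assuming a deterministic filter encoded as a dict from (state, observation) pairs to successor states (this encoding and the function name are ours, not the paper's):

```python
def has_no_missing_edges(transitions, observations):
    """`transitions` maps (state, observation) -> successor state for a
    deterministic filter; no edges are missing iff every state handles
    every observation."""
    states = {s for s, _ in transitions} | set(transitions.values())
    return all((s, y) in transitions for s in states for y in observations)
```

A two-state filter handling both observations at both states passes; dropping any (state, observation) pair makes the check fail.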

The authors of [14] also identify a sort of obverse to the foregoing sub-class. If no-missing-edge filters fully re-use observations, every state having all of them, next consider the sub-class where observations occur at precisely one state:

Definition 32 (once-appearing-observation filters [14]).

In a once-appearing-observation filter, each observation appears at most once.

In a quite different context, the cardinality of the observation set was shown to affect complexity; constraining this gives another sub-class of filters:

Definition 33 (unitary filters [21]).

A unitary filter is a filter whose set of observations is a singleton, i.e., there is exactly one observation.

Despite the fact that the preceding three sub-classes impose a diverse assortment of constraints, their common trait is efficiency: any filter belonging to those sub-classes can be minimized in time that is polynomial in the input size [14, Thms. 3 and 4], [21, Thm. 3].

The sub-class of once-appearing-observation filters is disjoint from the no-missing-edge ones, with the sole exception of filters with only a single state. Any unitary filter is (exclusively) either a linear chain, or includes a cycle. In the latter case, every state will have an outgoing edge, and that filter therefore has no edges missing. A unitary filter with multiple states can only have observations that appear once when it is a chain with a single edge; longer chains are neither no-missing-edge nor once-appearing-observation filters.

Fig. 2: The relationships between the three sub-classes of filter in terms of the globally language comparable and neighborhood comparable properties identified. (Note: Trivial filters, such as those with empty language, no vertices, and empty observation set have been omitted.)
Lemma 34.

A no-missing-edge filter is both globally language comparable and neighborhood comparable.


The first is straightforward: all pairs of states have comparable sets of extensions, as every state has the full set, the Kleene star of the observations. For the second: suppose u, v, and w are states with u ~ v and v ~ w. Then u ~ v means u and v both give the same output for every observation sequence. And v ~ w means the same for v and w. But then, for every sequence, u and w must give outputs that are identical, thence u ~ w. Transitivity having been established, the compatibility relation is an equivalence for any no-missing-edge filter. ∎

The remaining two sub-classes will aid in distinguishing the globally language comparable property from the neighborhood comparable property.

Lemma 35.

A once-appearing-observation filter is neighborhood comparable; furthermore, non-trivial instances of such filters are not globally language comparable.


Any two states in a once-appearing-observation filter can only have the empty sequence as a common extension. Thus, two states are compatible if and only if they have the same output, clearly defining equivalence classes of states, and thus the neighborhood comparable property holds. Any once-appearing-observation filter with a pair of vertices u and v, each bearing at least one outgoing edge, will have extension sets neither of which contains the other. ∎
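Both halves of this argument are easy to mechanize: checking that each observation labels at most one edge, and grouping states by output to obtain the compatibility classes. A sketch under the same (state, observation) → successor encoding assumed earlier; the helper names are illustrative:

```python
from collections import Counter, defaultdict

def is_once_appearing(transitions):
    """Each observation may label at most one edge of the filter."""
    counts = Counter(y for (_, y) in transitions)
    return all(c == 1 for c in counts.values())

def compatibility_classes(outputs):
    """With only the empty common extension available, states are
    compatible exactly when their outputs (colors) agree, so the
    compatibility classes simply group states of equal output."""
    groups = defaultdict(set)
    for state, color in outputs.items():
        groups[color].add(state)
    return list(groups.values())
```

For outputs {'a': 'red', 'b': 'blue', 'c': 'red'}, the classes are {a, c} and {b}.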

Lemma 36.

Any unitary filter is globally language comparable, but even simple instances of such filters may fail to be neighborhood comparable.


If the filter includes a cycle, then every state has the same set of extensions. Otherwise, the language is finite, and ordering the states along the chain gives that, for any pair of states, the extensions of the later are contained in those of the earlier. A small unitary chain filter, where colors represent outputs, suffices as an example violating the requirements for neighborhood comparability: two of its states can each be compatible with a third while being incompatible with one another. ∎

By way of summary, Figure 2 illustrates the relationships between these filters and the properties of Section V-A. None of the sub-classes strictly subsumes the others, yet common underlying properties explain their efficiency: (i) their compatibility graphs will be chordal and (ii) their zipper constraints are tame. The route by which these facts are established depends upon the sub-class: using the globally language comparable property, (i) comes from Lemma 22 and (ii) from Lemma 23; using the neighborhood comparable property, they come from Lemmas 28 and 29, respectively.

V-C A new instance now recognizable as efficiently solvable

Thus, it has turned out that the compatibility graphs of the previously known special cases are all chordal. This gives a ready means by which extra FM instances can be identified as having efficient solutions, instances never previously recognized as such. One basic example is shown in Figure 3. This filter has a compatibility graph that fails to be chordal, but is a perfect graph. Additionally, it possesses the comparable zip candidates property because the states appearing in its zipper constraints are all mutually compatible. More generally, the perfect graphs include many non-chordal members, including bipartite graphs, Turán graphs, etc. Any filters with these as compatibility graphs (and Theorem 16 says there are such filters) are candidates as additional special cases over and above what was previously known; one then needs to ensure their zipper constraints are benign: say, already satisfied or repairable.

(a) A simple filter, with outputs visualized as colors.
(b) Its compatibility graph and zipper constraint set.
Fig. 3: A filter now recognizable as efficiently minimizable.

VI Conclusion

This paper has identified basic properties underlying instances of filter minimization that are easy to solve. There are two aspects: the structure of the compatibility graph and the determinism-enforcing zipper constraints; they are distinct and both are shown to matter, as NP-hard problems arise if a single aspect is constrained but the other is given free rein. Uncovering these facts, the key contribution of the first part of the paper, involves distinguishing and formalizing several subtle properties, proving new hardness results, and devising an efficient algorithm for repair of violated zipper constraints.

The paper then turns to more pragmatic ways the preceding insights can be leveraged: as is usual with NP-hard problems, researchers have sought conditions that identify sub-classes for which minimizers can be found efficiently. The sub-class we identify subsumes the previously known special cases. The paper further improves understanding of those previously known sub-classes, detailing their differences and also drawing out commonalities. Finally, the paper gives an example filter which, before this work, would not have been recognized as possessing an efficient solution.


  • [1] F. Bonomo, G. Oriolo, C. Snels, and G. Stauffer (2013) Minimum Clique Cover in Claw-Free Perfect Graphs and the Weak Edmonds–Johnson Property. In International Conference on Integer Programming and Combinatorial Optimization (IPCO), pp. 86–97.
  • [2] B. R. Donald (2012) The Compass That Steered Robotics. In Logic and Program Semantics: Essays Dedicated to Dexter Kozen on the Occasion of His 60th Birthday, R. L. Constable and A. Silva (Eds.), pp. 50–65.
  • [3] F. Gavril (1972) Algorithms for Minimum Coloring, Maximum Clique, Minimum Covering by Cliques, and Maximum Independent Set of a Chordal Graph. SIAM Journal on Computing 1 (2), pp. 180–187.
  • [4] J. L. Gross, J. Yellen, and P. Zhang (Eds.) (2013) Handbook of Graph Theory. Second edition, Taylor & Francis Group.
  • [5] M. Grötschel, L. Lovász, and A. Schrijver (1988) Geometric Algorithms and Combinatorial Optimization. Springer, Berlin.
  • [6] J. E. Hopcroft and J. D. Ullman (1979) Introduction to Automata Theory, Languages, and Computation. Second edition, Addison-Wesley, Reading, MA.
  • [7] R. M. Karp (1972) Reducibility among combinatorial problems. In Complexity of Computer Computations, pp. 85–103.
  • [8] S. M. LaValle (2010) Sensing and filtering: a fresh perspective based on preimages and information spaces. Foundations and Trends in Robotics 1 (4), pp. 253–372.
  • [9] A. Majumdar and V. Pacelli (2022) Fundamental Performance Limits for Sensor-Based Robot Control and Policy Learning. In Robotics: Science and Systems, New York City, NY, USA.
  • [10] J. M. O’Kane and D. A. Shell (2017) Concise planning and filtering: hardness and algorithms. IEEE Transactions on Automation Science and Engineering 14 (4), pp. 1666–1681.
  • [11] H. Rahmani and J. M. O’Kane (2021) Equivalence notions for state-space minimization of combinatorial filters. IEEE Transactions on Robotics 37 (6), pp. 2117–2136.
  • [12] D. J. Rose (1970) Triangulated Graphs and the Elimination Process. Journal of Mathematical Analysis and Applications 32 (3), pp. 597–609.
  • [13] F. Z. Saberifar, S. Ghasemlou, J. M. O’Kane, and D. A. Shell (2016) Set-labelled filters and sensor transformations. In Robotics: Science and Systems, Ann Arbor, Michigan.
  • [14] F. Z. Saberifar, A. Mohades, M. Razzazi, and J. M. O’Kane (2017) Combinatorial Filter Reduction: Special Cases, Approximation, and Fixed-Parameter Tractability. Journal of Computer and System Sciences 85, pp. 74–92.
  • [15] F. Z. Saberifar, A. Mohades, M. Razzazi, and J. M. O’Kane (2018) Improper Filter Reduction. Journal of Algorithms and Computation 50 (1), pp. 69–99.
  • [16] F. Z. Saberifar, J. M. O’Kane, and D. A. Shell (2017) Inconsequential Improprieties: Filter Reduction in Probabilistic Worlds. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems.
  • [17] S. Thrun, W. Burgard, and D. Fox (2005) Probabilistic Robotics. MIT Press, Cambridge, MA, USA.
  • [18] B. Tovar, F. Cohen, L. Bobadilla, J. Czarnowski, and S. M. LaValle (2014) Combinatorial filters: sensor beams, obstacles, and possible paths. ACM Transactions on Sensor Networks 10 (3), pp. 1–32.
  • [19] Y. Zhang, H. Rahmani, D. A. Shell, and J. M. O’Kane (2021) Accelerating combinatorial filter reduction through constraints. In Proceedings of the IEEE International Conference on Robotics and Automation, pp. 9703–9709.
  • [20] Y. Zhang and D. A. Shell (2021) Cover combinatorial filters and their minimization problem. In Algorithmic Foundations of Robotics XIV, pp. 90–106.
  • [21] Y. Zhang and D. A. Shell (2022) Nondeterminism subject to output commitment in combinatorial filters. In Algorithmic Foundations of Robotics XV.