# Non-Count Symmetries in Boolean & Multi-Valued Prob. Graphical Models

Lifted inference algorithms commonly exploit symmetries in a probabilistic graphical model (PGM) for efficient inference. However, existing algorithms for Boolean-valued domains can identify as symmetric only those pairs of states in which the numbers of ones and zeros match exactly (count symmetries). Moreover, algorithms for lifted inference in multi-valued domains also compute only a multi-valued extension of count symmetries. These algorithms miss many symmetries in a domain. In this paper, we present the first algorithms to compute non-count symmetries in both Boolean-valued and multi-valued domains. Our methods can also find symmetries between multi-valued variables that have different domain cardinalities. The key insight in the algorithms is that they change the unit of symmetry computation from a variable to a variable-value (VV) pair. Our experiments find that exploiting these symmetries in MCMC can obtain substantial computational gains over existing algorithms.


## 1 Introduction

A popular approach for efficient inference in probabilistic graphical models (PGMs) is lifted inference (see [11]), which identifies repeated sub-structures (symmetries), and exploits them for computational gains. Lifted inference algorithms typically cluster symmetric states (variables) together and use these clusters to reduce computation, for example, by avoiding repeated computation for all members of a cluster via a single representative. Lifted versions of several inference algorithms have been developed, such as variable elimination [21, 6], weighted model counting [8], knowledge compilation [26], belief propagation [23, 10, 24], variational inference [2] [19, 16], and Markov Chain Monte Carlo (MCMC) [28, 9, 17, 25, 1].

Unfortunately, to the best of our knowledge, all existing algorithms compute a limited notion of symmetries, which we call count symmetries. A count symmetry in a Boolean-valued domain is a symmetry between two states in which the total numbers of zeros and ones exactly match. An illustrative algorithm for Boolean-valued PGMs (which we build upon) is Orbital MCMC [17]. It first uses graph isomorphism to compute symmetries and later uses these symmetries in an MCMC algorithm. Symmetries are represented via permutation groups in which variables interchange values to create other symmetric states. Notice that if a state has $k$ ones then any permutation of that state will also have $k$ ones; hence this algorithm can only compute count symmetries.

Similarly, lifted inference algorithms for multi-valued PGMs (e.g., [21, 2]) only compute a weak extension of count symmetries for multi-valued domains – they allow symmetries only between those sets of variables that have the same domain. And the count, i.e. the number of occurrences, of any value (from the domain) within this set of variables remains the same between two symmetric states.

In response, we develop extensions to existing frameworks to enable computation of non-count symmetries in which the count of a value between symmetric states can change. We can also compute a special form of non-count symmetries, non-equicardinal symmetries in multi-valued domains, in which two variables that have different domain sizes may be symmetric. Our key insight is the framework of symmetry groups over variable-value (VV) pairs, instead of just variables. It allows interchanging a specific value of a variable with a different value of a different variable.

Orbital MCMC suffices for downstream inference over most kinds of symmetries except non-equicardinal ones, for which a Metropolis Hastings extension is needed. Our new symmetries lead to substantial computational gains over Orbital MCMC and vanilla Gibbs Sampling, which doesn’t exploit any symmetries. We make the following contributions:

1. We develop a novel framework for symmetries between variable-value (VV) pairs, which generalizes existing notions of variable symmetries (Section 3).

2. We develop an extension of this framework, which can also identify Non-Equicardinal (NEC) symmetries, i.e., among variables of different cardinalities (Section 4).

3. We design a Metropolis Hastings version of Orbital MCMC called NEC-Orbital MCMC to exploit NEC symmetries (Section 5).

4. We experimentally show that our proposed algorithms significantly outperform strong baseline algorithms (Section 6). We also release the code for wider use.

## 2 Background

Let $\mathcal{X} = \{X_1, \ldots, X_n\}$ denote a set of Boolean-valued variables. A state $s$ is a complete assignment to the variables in $\mathcal{X}$, with values from $\{0, 1\}$. We will use the symbol $\mathcal{S}$ to denote the entire state space.

A permutation $\theta$ of $\mathcal{X}$ is a bijection of the set onto itself. $\theta(X_i)$ denotes the application of $\theta$ on the variable $X_i$. We will refer to $\theta$ as a variable permutation. A permutation $\theta$ applies on a state $s$ to produce $\theta(s)$, the state obtained by permuting the value of each variable $X_i$ in $s$ to that of $\theta(X_i)$. A set of permutations is called a permutation group if it is closed under composition, contains the identity permutation, and each permutation has its inverse in the set.
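As a small illustration of how a variable permutation acts on a state, here is a minimal Python sketch; the function name and the dictionary encoding of a permutation are ours, not from the paper:

```python
def apply_perm(perm, state):
    """Apply a variable permutation to a state.

    perm:  dict mapping variable index i -> permuted index theta(i).
    state: tuple of values; the value of variable i moves to position theta(i).
    """
    result = [None] * len(state)
    for i, v in enumerate(state):
        result[perm[i]] = v
    return tuple(result)

# Swapping variables 0 and 1 permutes the values but keeps their counts:
perm = {0: 1, 1: 0, 2: 2}
print(apply_perm(perm, (1, 0, 1)))  # -> (0, 1, 1)
```

Note that the output always has the same number of ones as the input, which is exactly the count-symmetry limitation discussed in the introduction.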

A graphical model $G$ over the set of variables $\mathcal{X}$ is defined as a set of pairs $\{(f_i, w_i)\}$, where $f_i$ is a feature function over a subset of variables in $\mathcal{X}$ and $w_i$ is the corresponding weight [12]. Drawing parallels from automorphism of a graph, where a variable permutation maps the graph back to itself, we define the notion of automorphism (referred to as symmetry, henceforth) of a graphical model as follows [18].

###### Definition 2.1.

A permutation $\theta$ of $\mathcal{X}$ is a variable symmetry of $G$ if application of $\theta$ on $G$ results back in $G$ itself, i.e., the same set of weighted features as in $G$. We also refer to such permutations as variable symmetries.

Correspondingly, we define the automorphism group of a graphical model.

###### Definition 2.2.

An automorphism group of a graphical model $G$ is a permutation group $\Theta$ such that every $\theta \in \Theta$ is a variable symmetry of $G$.

Another important concept is the notion of an orbit of a state resulting from the application of a permutation group.

###### Definition 2.3.

The orbit of a state $s$ under the permutation group $\Theta$, denoted $\Gamma_{\Theta}(s)$, is the set of states resulting from the application of the permutations in $\Theta$ on $s$, i.e., $\Gamma_{\Theta}(s) = \{\theta(s) \mid \theta \in \Theta\}$.

Note that orbits form an equivalence partition of the entire state space. In this work, we are interested in orbits obtained by application of an automorphism group, because all states in such an orbit have the same joint probability. Let $P_G(s)$ denote the joint probability of a state $s$ under $G$.
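An orbit can be enumerated by closing a state under a set of generator permutations. The sketch below is illustrative (permutations are encoded as index maps, and all names are our own):

```python
def apply_perm(perm, state):
    """perm: dict i -> theta(i); the value of variable i moves to theta(i)."""
    result = [None] * len(state)
    for i, v in enumerate(state):
        result[perm[i]] = v
    return tuple(result)

def orbit(state, generators):
    """Close `state` under the generator permutations (breadth-first)."""
    seen, frontier = {state}, [state]
    while frontier:
        s = frontier.pop()
        for g in generators:
            t = apply_perm(g, s)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

gens = [{0: 1, 1: 0, 2: 2}]  # a single generator: swap the first two variables
print(sorted(orbit((1, 0, 0), gens)))  # -> [(0, 1, 0), (1, 0, 0)]
```

All states in such an orbit share the same joint probability when the generators are symmetries of the model.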

###### Theorem 2.1.

Let $\Theta$ be an automorphism group of $G$. Then for all states $s \in \mathcal{S}$ and permutations $\theta \in \Theta$: $P_G(s) = P_G(\theta(s))$.

### 2.1 Graph Isomorphism for Computing Symmetries

The procedure for computing an automorphism group [17] first constructs a colored graph $G^c$ from the graphical model $G$, in which all features are clausal or all features are conjunctive (each model can be pre-converted to a new model in which all features are clausal). In this graph there are two nodes for each variable, one for each literal, and a node for each feature in $G$. There is an edge between the two literal nodes of a variable, and between a literal node and a feature node if that literal appears in that feature in the graphical model. Each node is assigned a color such that all 1-value nodes get the same color, all 0-value nodes get the same color (different from the color of the 1-value nodes), and feature nodes are colored based on their weight: two feature nodes have the same color if and only if their weights in $G$ are the same.

A graph isomorphism solver (e.g., Saucy [5]) run over $G^c$ outputs the automorphism group of this graph as a set of permutations. These permutations can be easily converted to variable permutations of $\mathcal{X}$, because any output permutation always maps a variable's 0 and 1 nodes to another variable's 0 and 1 nodes, respectively. These permutations collectively represent an automorphism group of $G$.

### 2.2 Orbital Markov Chain Monte Carlo

Markov Chain Monte Carlo (MCMC) methods are among the most popular methods for inference when exact inference is hard. In these methods, a Markov chain is set up over the state space and samples are generated from it. After running the chain for a sufficiently long time, it starts generating samples from the true distribution. Gibbs sampling is one of the simplest MCMC methods.

Orbital MCMC [17] adapts MCMC to use the given variable symmetries $\Theta$ of the graphical model $G$. Given a Markov chain $M$ and starting from state $s_t$, Orbital MCMC generates the next sample $s_{t+1}$ in two steps:

• It first generates an intermediate state $s'_t$ by sampling from the transition distribution of $M$ starting from $s_t$.

• It then samples state $s_{t+1}$ uniformly from $\Gamma_{\Theta}(s'_t)$, the orbit of $s'_t$.

The Orbital MCMC chain so constructed converges to the same stationary distribution as the original chain and is proven to mix faster, because of the orbital moves.
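The two-step scheme above can be sketched as follows; `base_step` and `orbit_fn` are caller-supplied stand-ins for the base chain's transition and the orbit computation (a structural sketch, not the authors' implementation):

```python
import random

def orbital_mcmc_step(state, base_step, orbit_fn):
    """One Orbital MCMC transition: a base-chain move (e.g. a Gibbs
    update) followed by a uniform sample from the orbit of the
    intermediate state."""
    intermediate = base_step(state)
    return random.choice(sorted(orbit_fn(intermediate)))

# Toy usage: identity base chain, orbit induced by swapping two variables.
orbit = lambda s: {s, (s[1], s[0])}
next_state = orbital_mcmc_step((1, 0), lambda s: s, orbit)
print(next_state in {(1, 0), (0, 1)})  # -> True
```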

## 3 Variable-Value (VV) Symmetries

Existing work has defined symmetries in terms of variable permutations. We observe that these can only represent orbits in which all states have exactly the same count of 0s and 1s. The simple reason is that any variable permutation only permutes the values in a state, and hence the total count of each value remains the same. We call such symmetries count symmetries.

We now give a formal definition of count symmetries for a general multi-valued graphical model, since our work applies equally to Boolean-valued as well as any other discrete-valued domains. Let $\mathcal{X}$ denote a set of variables where each $X_i$ takes values from a discrete domain $D_i$. A permutation $\theta$ of $\mathcal{X}$ is a valid variable permutation if it defines a mapping only between variables having the same domain. Analogously, we define a valid variable symmetry. We will say that two domains $D_i$ and $D_j$ are equicardinal if $|D_i| = |D_j|$; we call the corresponding variables equicardinal variables.

###### Definition 3.1.

Given a set of variables $\mathcal{X}_D$ sharing the same domain $D$ and a value $v \in D$, $\#(s, \mathcal{X}_D, v)$ computes the number of variables in $\mathcal{X}_D$ taking the value $v$ in state $s$.

###### Definition 3.2.

Given a domain $D$, let $\mathcal{X}_D$ denote the subset of all the variables whose domain is $D$. A (valid) variable symmetry $\theta$ is a count symmetry if, for each such subset $\mathcal{X}_D$ and each $v \in D$, $\#(s, \mathcal{X}_D, v) = \#(\theta(s), \mathcal{X}_D, v)$ for every state $s$.

###### Theorem 3.1.

For a graphical model $G$, every (valid) variable symmetry is a count symmetry.

We argue here that count symmetries are restrictive; a lot more symmetry can be exploited if we simultaneously look at the values taken by the variables in a state. To illustrate this, consider a very simple graphical model with two formulas, (a) and (b), over the same pair of variables but with different weights. It is easy to see that there need be no non-trivial variable symmetry here: permuting the variables results in a different graphical model, since the two formulas have different weights. On the other hand, if we could somehow permute one variable-value assignment with another, we would get back the same model. In this section, we formalize this extended notion of symmetry, which we refer to as variable-value symmetry (VV symmetry in short).

###### Definition 3.3.

Given a set of variables $\mathcal{X}$ where each $X_i$ takes values from a domain $D_i$, the variable-value (VV) set is the set of pairs $(X_i, v)$ such that each variable $X_i$ appears exactly once with each value $v \in D_i$ in this set. We will use $\mathcal{V}$ to denote the VV set corresponding to $\mathcal{X}$.

For example, given a set of Boolean variables $\{a, b\}$, the VV set is given by $\{(a,0), (a,1), (b,0), (b,1)\}$.
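Constructing the VV set is a one-line comprehension; the helper name below is ours:

```python
def vv_set(domains):
    """domains: dict mapping variable name -> iterable of its values.
    Returns the set of (variable, value) pairs."""
    return {(x, v) for x, vals in domains.items() for v in vals}

print(sorted(vv_set({'a': (0, 1), 'b': (0, 1)})))
# -> [('a', 0), ('a', 1), ('b', 0), ('b', 1)]
```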

###### Definition 3.4.

A Variable-Value (VV) permutation $\phi$ over the VV set $\mathcal{V}$ is a bijection from $\mathcal{V}$ onto itself.

Recall that a variable permutation applied to a state in a Boolean domain always results in a valid state. However, that may not be true in multi-valued domains, since if two variables that have different domains are permuted, the result may not be a valid state. It is also not true for all VV permutations. For example, given the state $[(a,0), (b,1)]$, a VV permutation with $\phi((a,0)) = (b,1)$ and $\phi((b,1)) = (b,0)$ results in the state $[(b,1), (b,0)]$, which is inconsistent. Therefore, we need to impose a restriction on the set of allowed VV permutations so that they result in only valid states.

###### Definition 3.5.

We say that a VV permutation $\phi$ is a valid VV permutation if each variable maps to a unique variable under $\phi$. In other words, $\phi$ is valid if, whenever $\phi((X_i, v)) = (X_k, v')$ and $\phi((X_i, u)) = (X_l, u')$, then $X_k = X_l$. In such a scenario, we say that $\phi$ maps variable $X_i$ to $X_k$.

It is easy to see that for any valid VV permutation $\phi$, applying $\phi$ on a state $s$ always results in a valid state $\phi(s)$. It also follows that if such a $\phi$ maps a variable $X_i$ to $X_k$, then $D_i$ and $D_k$ must be equicardinal.
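The validity condition is easy to check mechanically: every value of a variable must map to a value of one and the same target variable. A sketch (encoding and names are ours):

```python
def is_valid_vv_perm(phi):
    """phi: dict mapping (var, val) -> (var, val). Valid iff, for each
    source variable, all of its values map to values of a single
    target variable."""
    target = {}
    for (x, _), (y, _) in phi.items():
        if target.setdefault(x, y) != y:
            return False
    return True

# Consistently exchanging a's values with b's values is valid; splitting
# a's values between two target variables is not.
good = {('a', 0): ('b', 1), ('a', 1): ('b', 0),
        ('b', 0): ('a', 1), ('b', 1): ('a', 0)}
bad = {('a', 0): ('b', 1), ('a', 1): ('a', 0),
       ('b', 1): ('b', 0), ('b', 0): ('a', 1)}
print(is_valid_vv_perm(good), is_valid_vv_perm(bad))  # -> True False
```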

###### Theorem 3.2.

The set of all valid VV permutations over $\mathcal{V}$ forms a group.

Consider a graphical model $G$ specified as a set of pairs $\{(f_i, w_i)\}$. Each feature $f_i$ can be thought of as a Boolean function over variable assignments of the form $X_j = v$. Hence, the action of a VV permutation $\phi$ on a feature $f_i$ results in a new feature (with weight $w_i$) obtained by replacing each assignment $X_j = v$ by $X_k = v'$ in the underlying functional form of $f_i$, where $\phi((X_j, v)) = (X_k, v')$. Hence, application of $\phi$ on a graphical model $G$ results in a new graphical model $\phi(G)$ where each feature is transformed through the application of $\phi$. We are now ready to define the symmetry of a graphical model under the application of VV permutations.

###### Definition 3.6.

We say that a (valid) VV permutation $\phi$ is a VV symmetry of a graphical model $G$ if application of $\phi$ on $G$ results back in $G$ itself.

All other definitions of the previous section follow analogously. We can define an automorphism group over VV permutations, and also define an orbit of a state under this permutation group. VV symmetries strictly generalize the notion of variable symmetries.

###### Theorem 3.3.

Each (valid) variable symmetry can be represented as a VV symmetry. There exist valid VV symmetries that cannot be represented as a variable symmetry.

Recall that a variable permutation is valid if it always maps between variables that have exactly the same domain. Say $\theta(X_i) = X_k$, with both variables having domain $D$. It is easy to see that the VV permutation $\phi$ defined such that $\phi((X_i, v)) = (X_k, v)$ for all $v \in D$ will result in the same sets of symmetric states.

To prove the second part, consider a PGM with two Boolean variables $X_1$ and $X_2$. Let there be four features $f_{00}, f_{01}, f_{10}, f_{11}$, one corresponding to each of the four states, with weights $w_1, w_2, w_2, w_1$, respectively, where $w_1 \neq w_2$. Then we have a VV symmetry $\phi_1$ that flips the values of both variables: $\phi_1((X_1,0)) = (X_1,1)$, $\phi_1((X_1,1)) = (X_1,0)$, $\phi_1((X_2,0)) = (X_2,1)$, and $\phi_1((X_2,1)) = (X_2,0)$. Note that $\phi_1$ maps the state $(0,0)$ to $(1,1)$ and back, and similarly there is a symmetry $\phi_2$ (swapping $X_1$ and $X_2$) which maps $(0,1)$ to $(1,0)$ and back. There is no variable symmetry which can capture the symmetries induced by $\phi_1$, since counts are not preserved. This proves the theorem. But let us for a moment define a renaming of the form $X'_2 = \neg X_2$. Variable symmetries will now be able to capture the symmetries due to $\phi_1$, but will miss out on the ones due to $\phi_2$. This is illustrative because there is no single problem formulation that can capture both of the state symmetries above using the notion of variable symmetries alone.
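The two-variable example can be checked numerically. The concrete weights below are our own choice (any two distinct weights with the stated sharing pattern work); we verify that the states exchanged by the value-flipping VV symmetry have equal probability even though their counts of ones differ:

```python
import math

# Hypothetical weights: one feature per state; (0,0) and (1,1) share one
# weight, (0,1) and (1,0) share another.
w = {(0, 0): 2.0, (1, 1): 2.0, (0, 1): 0.5, (1, 0): 0.5}
Z = sum(math.exp(wt) for wt in w.values())
p = {s: math.exp(wt) / Z for s, wt in w.items()}

# (0,0) <-> (1,1): the counts of ones differ (0 vs 2), yet the
# probabilities match, so no variable permutation can relate them.
print(abs(p[(0, 0)] - p[(1, 1)]) < 1e-12)  # -> True
print(abs(p[(0, 1)] - p[(1, 0)]) < 1e-12)  # -> True
```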

###### Theorem 3.4.

VV symmetries preserve joint probabilities, i.e., for any VV symmetry $\phi$ of $G$ and state $s$: $P_G(s) = P_G(\phi(s))$.

### 3.1 Computing Variable-Value Symmetries

We now adapt the procedure in Section 2.1 to compute VV symmetries in multi-valued domains. For a PGM $G$ with a clausal or conjunctive theory (as before), we construct a colored graph $G^{vc}$ with a node for each variable-value pair. We also have a node for each feature, which is connected to the specific VV nodes it contains. We additionally need to impose a mutual exclusivity constraint to assert that a variable takes exactly one of its many values. This is accomplished by adding exactly-one features with infinite weight between all values of each variable. When assigning colors, we assign all values of any variable the same color, as opposed to different values getting different colors. This allows the isomorphism solver to attempt discovering symmetries between different value nodes. As before, all features with the same weight get the same color. Figure 1 illustrates this on our example: variable symmetry assigns different colors to the 0 and 1 nodes, while VV symmetry assigns a single color (green) to both the 0 and 1 assignments of all variables.

We run Saucy [5] over $G^{vc}$ to compute its automorphism group via a set of permutations. These permutations are valid VV permutations (by construction of $G^{vc}$), and collectively represent a VV automorphism group of $G$.

###### Theorem 3.5.

Any permutation that preserves graph isomorphism in $G^{vc}$ is a valid VV permutation for $G$.

###### Theorem 3.6.

The automorphism group of the colored graph $G^{vc}$ constructed above is a VV automorphism group of the graphical model $G$.

## 4 Non-Equicardinal (NEC) Symmetries

While VV symmetries can capture non-count symmetries, they only consider mappings between equicardinal variables. In this section, we deal with symmetries which can be present across variables having different domain sizes. Consider the following example graphical model with two features: (1) $w : X_1 = 1$; (2) $w : X_2 = 1 \vee X_2 = 2$. Let $X_1$ and $X_2$ have the domains $D_1$ and $D_2$, respectively, specified as $\{0, 1\}$ and $\{0, 1, 2\}$. Clearly, there is no VV symmetry between $X_1$ and $X_2$ since they have different domain sizes. But intuitively, the two states $(X_1{=}1, X_2{=}0)$ and $(X_1{=}0, X_2{=}1)$ are symmetric to each other, since in each case exactly one of the two features having the same weight is satisfied. Similarly for $(X_1{=}1, X_2{=}0)$ and $(X_1{=}0, X_2{=}2)$. Further, it is easy to see that the two values 1 and 2 of $X_2$ are symmetric to each other, in the sense that states of the form $(x_1, 1)$ have the same probability as the states $(x_1, 2)$.

We will combine the above two ideas to exploit symmetries using domain reduction. We first identify all the equivalent values of each variable and replace them by a single representative value. In this reduced graphical model, we then identify VV symmetries and finally translate them back to the original graphical model. In the following, we will assume that we are given a graphical model $G$ defined over a set of variables $\mathcal{X}$ where each $X_i$ takes values from a domain $D_i$. Further, we will use the symbol $\mathcal{D}$ to denote the cross product of the domains.

###### Definition 4.1.

Consider a variable $X_i$ and let $v, v' \in D_i$. Let $\phi^i_{v \leftrightarrow v'}$ denote a VV permutation which maps the VV pair $(X_i, v)$ to $(X_i, v')$ and back. For all the remaining VV pairs, $\phi^i_{v \leftrightarrow v'}$ maps the pair back to itself. We refer to $\phi^i_{v \leftrightarrow v'}$ as a value swap permutation for variable $X_i$.

In the example above, $\phi^2_{1 \leftrightarrow 2}$ is a value swap permutation for $X_2$ which permutes the variable assignments $X_2 = 1$ and $X_2 = 2$, and keeps the remaining variable assignments, i.e., $X_1 = 0$, $X_1 = 1$, and $X_2 = 0$, fixed.

###### Definition 4.2.

A value swap permutation $\phi^i_{v \leftrightarrow v'}$ is a value swap symmetry of $G$ if it maps $G$ back to itself.

In our running example, $\phi^2_{1 \leftrightarrow 2}$ is a value swap symmetry of $G$. Next, we show that the set of all value swap symmetries corresponding to a variable divides its domain into equivalence classes.

###### Definition 4.3.

Given a graphical model $G$, we define a relation $\sim_i$ (swap symmetry) over the domain $D_i$ of each variable $X_i$ as follows. Given $v, v' \in D_i$, $v \sim_i v'$ if $\phi^i_{v \leftrightarrow v'}$ is a value swap symmetry of $G$.

It is easy to see that the relation $\sim_i$ is an equivalence relation and hence partitions the domain $D_i$ into a set of equivalence classes. Given a value $v \in D_i$, we choose a representative value from its equivalence class based on some canonical ordering. We denote this value by $r_i(v)$.

Next, we will define a reduced domain obtained by considering one value from each equivalence set.

###### Definition 4.4.

Let $\sim_i$ divide the domain $D_i$ into $m_i$ equivalence classes. We define the reduced domain $D^r_i$ as the $m_i$-sized set containing the representative value of each equivalence class. We will use $\mathcal{D}^r$ to denote the cross product of the reduced domains.
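Given the value-swap symmetric pairs of a variable, the reduced domain can be computed with a small union-find, keeping the smallest member of each class as its representative (a sketch under that canonical ordering; the helper name is ours):

```python
def reduce_domain(domain, swap_pairs):
    """domain: list of values; swap_pairs: pairs (v, v') known to be
    value-swap symmetric. Returns the sorted representatives, one per
    equivalence class (smallest member of each class)."""
    parent = {v: v for v in domain}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v
    for a, b in swap_pairs:
        parent[find(a)] = find(b)
    classes = {}
    for v in domain:
        classes.setdefault(find(v), []).append(v)
    return sorted(min(c) for c in classes.values())

# Domain {0, 1, 2} where values 1 and 2 are swap-symmetric:
print(reduce_domain([0, 1, 2], [(1, 2)]))  # -> [0, 1]
```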

Revisiting our example, the reduced domain for $X_2$ is given as $D^r_2 = \{0, 1\}$. Next we define a reduced graphical model over the reduced set of domains $\mathcal{D}^r$.

###### Definition 4.5.

Let $G$ be a graphical model with the set of weighted features $\{(f_i, w_i)\}$. Let $X_j = v$ be a variable assignment appearing in the Boolean expression for $f_i$. We construct a new feature $f^r_i$ by replacing every such expression by $X_j = r_j(v)$ (and further simplifying the expression) whenever $v \neq r_j(v)$. If $v = r_j(v)$, then we leave the assignment in $f_i$ as is. The reduced graphical model $G^r$ is the graphical model having the set of features $\{(f^r_i, w_i)\}$ defined over the set of variables $\mathcal{X}$, with $X_i$ having the domain $D^r_i$.

Intuitively, in $G^r$, we restrict each variable to take only the representative value from each of its equivalence classes. In our running example, the reduced graphical model is given as $\{w : X_1 = 1;\ w : X_2 = 1 \vee X_2 = 1\}$, which is the same as $\{w : X_1 = 1;\ w : X_2 = 1\}$. Since the domains have been reduced in $G^r$, we may now be able to discover mappings which were not possible earlier. For instance, in our running example, we now have a VV symmetry which maps $(X_1, 1)$ to $(X_2, 1)$ and back.

Let the joint distributions specified by $G$ and $G^r$ be given by $P_G$ and $P_{G^r}$, respectively. The next theorem describes the relationship between these two distributions.

###### Theorem 4.1.

Let $G$ be a graphical model and let $G^r$ be the corresponding reduced graphical model. Consider a state $\bar{s}$ specified as $(\bar{v}_1, \ldots, \bar{v}_n)$ where each $\bar{v}_i \in D^r_i$. By definition, $\bar{s}$ is a valid state in both $G$ and $G^r$. We claim that $P_{G^r}(\bar{s}) = c \cdot P_G(\bar{s})$, where $c$ is some constant independent of the specific state $\bar{s}$.

###### Proof.

Note that the reduced graphical model $G^r$ is emulating the distribution specified by $G$ where the space of possible variable assignments is restricted to those belonging to the representative set, i.e., for each variable $X_i$ the allowed set of values is now $D^r_i$. Therefore, $P_{G^r}$ can be thought of as enforcing a conditional distribution over the underlying space given the fact that assignments can now only come from the cross product set $\mathcal{D}^r$. Recall that state $\bar{s}$ is a valid assignment in the original as well as the reduced graphical model. Therefore, we have $P_{G^r}(\bar{s}) = P_G(\bar{s} \mid s \in \mathcal{D}^r) = P_G(\bar{s}) / P_G(\mathcal{D}^r)$. Here, the denominator term is simply the probability that a randomly chosen state in the original distribution belongs to the restricted domain set. Clearly, this is independent of the state $\bar{s}$; let it be given as $P_G(\mathcal{D}^r) = 1/c$, where $c$ is a constant independent of $\bar{s}$. Then $P_{G^r}(\bar{s}) = c \cdot P_G(\bar{s})$. ∎

The above theorem gives us a recipe to discover additional symmetries across variables having different domain sizes. Let $s = (v_1, \ldots, v_n)$ be a state in $G$. Let $\bar{s}$ denote the representative state for $s$, given as $\bar{s} = (r_1(v_1), \ldots, r_n(v_n))$. The following steps describe a procedure to get a new state $s''$ symmetric to $s$ using the idea of domain reduction.

Procedure NonEquiCardinalSym:

• Let $\bar{s}$ denote the representative state for $s$.

• Apply a VV symmetry $\phi$ over $\bar{s}$ in the reduced graphical model $G^r$. The resulting state $\bar{s}' = \phi(\bar{s})$ is symmetric to $\bar{s}$ in $G^r$.

• Apply a series of value swap symmetries over the state $\bar{s}'$, one for each variable $X_i$, each replacing the representative value of $X_i$ in $\bar{s}'$ by a value from its equivalence class. The resulting state $s''$ is symmetric to $s$ in $G$.
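The three steps above compose as follows; `rep`, `vv_sym`, and `swap_back` stand for the representative mapping, a VV symmetry of the reduced model, and the value-swap restoration, all supplied by the caller (a structural sketch only, not the authors' code):

```python
def non_equicardinal_sym(state, rep, vv_sym, swap_back):
    """Procedure NonEquiCardinalSym: map a state to its representative,
    apply a VV symmetry in the reduced model, then restore values via
    value-swap symmetries."""
    u = rep(state)        # step 1: representative state
    u2 = vv_sym(u)        # step 2: VV symmetry in the reduced model
    return swap_back(u2)  # step 3: value swaps, one per variable

# Toy run in the flavor of the running example: X2's values 1 and 2
# share the representative 1; the reduced model swaps X1 and X2.
rep = lambda s: (s[0], min(s[1], 1))
vv_sym = lambda u: (u[1], u[0])
print(non_equicardinal_sym((0, 2), rep, vv_sym, lambda u: u))  # -> (1, 0)
```

In the toy run, the state $(0,2)$ satisfies exactly one feature of the example model, and so does its image $(1,0)$, as the symmetry requires.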

###### Definition 4.6.

Let $\psi$ be a permutation over the state space of $G$ defined using the Procedure NonEquiCardinalSym, i.e., the composition of the representative mapping, a VV symmetry $\phi$ of $G^r$, and a series of value swap symmetries, one for each variable $X_i$ in $G$. We refer to $\psi$ as a non-equicardinal symmetry of $G$.

Unlike VV symmetries whose action is defined over a VV pair, non-equicardinal symmetries directly operate over the state space. Their transformation of the underlying graphical model is implicit in the symmetries that compose them.

###### Theorem 4.2.

The set of all non-equicardinal symmetries of $G$ forms a permutation group.

Finally, we need to show that the action of non-equicardinal symmetries indeed results in states which have the same probability.

###### Theorem 4.3.

Let $\psi$ be a non-equicardinal symmetry of a graphical model $G$. Then $P_G(s) = P_G(\psi(s))$ for every state $s$.

###### Proof.

Let $s'' = \psi(s)$, and let $\bar{s}$ and $\bar{s}'$ be as in Procedure NonEquiCardinalSym. Since $\bar{s}'$ is obtained by application of a VV symmetry on $\bar{s}$ in $G^r$, we have $P_{G^r}(\bar{s}) = P_{G^r}(\bar{s}')$. Using Theorem 4.1, this implies that $c \cdot P_G(\bar{s}) = c \cdot P_G(\bar{s}')$ for some constant $c$. Hence, $\bar{s}$ and $\bar{s}'$ have the same probability under $G$.

Since $s''$ can be obtained by application of value swap symmetries over $\bar{s}'$ (one for each variable), $P_G(s'') = P_G(\bar{s}')$. Similarly, since $\bar{s}$ is obtained by an application of value swap symmetries over $s$, we have $P_G(s) = P_G(\bar{s})$. Combining this with the fact that $P_G(\bar{s}) = P_G(\bar{s}')$, we get $P_G(s) = P_G(s'')$. ∎

### 4.1 Computing Non-Equicardinal Symmetries

We adapt the procedure in Section 3.1 by running graph isomorphism over a series of two colored graphs. Our first colored graph is constructed as in Section 3.1, except that all features are given different colors. This disallows any mapping between $(X_i, v)$ and $(X_j, v')$ for $i \neq j$, and only allows mappings between different values of a single variable. In the running example, this step would determine that $(X_2, 1)$ and $(X_2, 2)$ are symmetric. We then retain only the representative value for each equivalence class of VV pairs, and remove the nodes and edges for the other values.

We take this reduced colored graph and recolor all mutual exclusivity features with a single color. We run graph isomorphism again to obtain the VV symmetries of the reduced model. These permutations, together with the single-variable value-swap permutations from the previous step, give the non-equicardinal symmetries of the original model.

## 5 MCMC with VV & NEC Symmetries

Recall from Section 2.2 that variable symmetries are used in approximate inference via the Orbital MCMC algorithm. It alternates an original MCMC move with an orbital move, which uniformly samples from the orbit of the current state. We first observe that the same algorithm works for the VV symmetries computed in Section 3.1, except that the orbital move now samples from the orbit induced by VV permutations – we call this algorithm VV-Orbital MCMC.

We now consider the case of non-equicardinal symmetries in multi-valued PGMs. The main idea of Orbital MCMC remains valid – we need to alternate between moves of the original chain and orbital moves. However, sampling a random state from an orbit is tricky now, because a non-equicardinal orbit may have a two-level hierarchical structure – it is an orbit over suborbits. The top-level orbit is in the reduced model and is an orbit over representative states. At the bottom level, each representative state may represent multiple states via application of a variable number of value-swap symmetries.

As an example, consider the state partition in our running example, as illustrated in Figure 3. Each orbit is shown by a unique color, and suborbits by large ovals. The green orbit (top level) has two representative states, (0,0) and (1,1), in the reduced model. If we make an orbital move in the reduced model, we can easily pick a representative state uniformly at random. However, the state (1,1) has a suborbit – it further represents two states in the original model, (1,1) and (1,2), via value-swap symmetries on variable $X_2$. Our sampling goal is to pick uniformly at random from an orbit in the original model, which means we need to pick a representative state in the reduced model with probability proportional to the size of the suborbit it represents. Once a suborbit is picked, we can easily pick a state uniformly at random from within it. To pick a representative proportional to the size of its suborbit, we use Metropolis Hastings in the reduced model – we name the resulting algorithm NEC-Orbital MCMC.

Let $c(u)$ represent the cardinality of the suborbit of state $u$, i.e., the number of states whose representative state is the same as that of $u$: $c(u) = |\{s : \bar{s} = \bar{u}\}|$. Let $c_i(u)$ represent the number of states in the suborbit of $u$ which differ from $u$ at most on the value of $X_i$, i.e., $c_i(u) = |\{s : \bar{s} = \bar{u} \text{ and } s_j = u_j\ \forall j \neq i\}|$, where $u_j$ represents the value of $X_j$ in $u$.
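Because a suborbit is the set of states sharing a representative, its cardinality factorizes over variables as the product of the sizes of the equivalence classes of each variable's value. A sketch (the class tables and names are illustrative):

```python
def suborbit_size(state, classes):
    """c(s): the number of states sharing s's representative, computed
    as the product over variables of the equivalence-class size of
    each value. classes[i][v] is the class of value v of variable i."""
    size = 1
    for var, val in enumerate(state):
        size *= len(classes[var][val])
    return size

# X1 has only trivial classes; X2's values 1 and 2 are equivalent:
classes = [{0: [0], 1: [1]}, {0: [0], 1: [1, 2], 2: [1, 2]}]
print(suborbit_size((1, 1), classes))  # -> 2
```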

Given a Markov chain $M$ over a graphical model $G$, a sample from $s_t$ to $s_{t+1}$ in NEC-Orbital MCMC is generated as follows:

• Generate $s'_t$ by sampling from the transition distribution of $M$ starting from $s_t$.

• Let $u'_t$ be the representative state of $s'_t$. Sample $u''_t$ (in $G^r$) from the orbit of $u'_t$ via a Metropolis Hastings step using the uniform proposal distribution $q$ and desired distribution $p(u) \propto c(u)$.

• Apply a series of value swap symmetries over the state $u''_t$, one for each variable $X_i$, where the value of $X_i$ is chosen uniformly at random from the set of values equivalent with its value in $u''_t$, i.e., with probability $1/c_i(u''_t)$. This is equivalent to sampling $s_{t+1}$ uniformly from the suborbit of $u''_t$.

Notice that sampling from the (uniform) proposal distribution over an orbit is easily accomplished by the Product Replacement Algorithm [20]. MH accepts or rejects the sample with an acceptance probability $A$, which can be computed by MH's detailed balance equation:

$$A(u'_t \rightarrow u''_t) = \min\left(1,\ \frac{p(u''_t)\, q(u'_t \mid u''_t)}{p(u'_t)\, q(u''_t \mid u'_t)}\right) = \min\left(1,\ \frac{p(u''_t)}{p(u'_t)}\right) = \min\left(1,\ \frac{c(u''_t)}{c(u'_t)}\right)$$

The second equality above follows from the fact that $q$ is a uniform proposal.
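The Metropolis Hastings step over representative states can be sketched as follows; with a uniform proposal, the acceptance ratio reduces to the ratio of suborbit sizes, so representatives are visited proportionally to the number of original states they stand for (all names are ours):

```python
import random

def mh_orbit_move(u, orbit_states, suborbit_size):
    """Propose uniformly from the orbit of representative states and
    accept with probability min(1, c(proposal)/c(current))."""
    proposal = random.choice(sorted(orbit_states))
    accept = min(1.0, suborbit_size(proposal) / suborbit_size(u))
    return proposal if random.random() < accept else u

# When the proposal's suborbit is at least as large, it is always accepted:
c = {'u1': 1, 'u2': 2}
print(mh_orbit_move('u1', ['u2'], c.get))  # -> 'u2'
```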

###### Theorem 5.1.

The Markov chain constructed by NEC-Orbital MCMC converges to the unique stationary distribution of the original Markov chain $M$.

## 6 Experiments

We empirically evaluate our extensions of Orbital MCMC for both Boolean and multi-valued PGMs. In both settings, we compare against the baselines of vanilla MCMC and Orbital MCMC [17]. In all orbital algorithms including ours, the base Markov chain is set to Gibbs. We build our source code on the existing code of Orbital MCMC. It uses the group theory package GAP [7] for implementing the group-theoretic operations in the algorithms. We release our implementations for further research; they are available at https://github.com/dair-iitd/nc-mcmc. All our experiments are performed on an Intel Core i7 machine. All reported times include the time taken for computing symmetries.

Our experiments aim to assess the comparative value of our algorithms against the baselines in domains where a large number of symmetries (beyond count symmetries) are present. To this end, we construct two such domains. The first is a simple Boolean domain that shows how simple value renaming can affect the baseline algorithms. The second is a multi-valued domain showcasing the potential benefits of non-equicardinal symmetries. The domains are:

Value-Renamed Ring Message Passing Domain: In this simple domain, people with an equal number of males and females are placed alternately in a ring structure, with every male followed by a female, and they pass a bit of message to their immediate neighbor over a noisy channel. If $b_i$ denotes the bit received by the $i$-th person, then we would have a formula in the PGM relating $b_i$ and $b_{i+1}$, with one weight if person $i$ is a male and another if female. As a small modification to this domain, we randomly rename some $b_i$s to their complements (so the renamed variable means the bit was not received by that agent), and change all formulas analogously. All the symmetries in the original ring should remain after this renaming. Our experiments test the degree to which the various algorithms are able to identify these.

Student-Curriculum Domain: In this multi-valued domain, there are students taking courses from several areas (e.g., theory, architecture, etc.). Each area $a$ has a variable number of courses, numbered 1 to $n_a$. Each student has to fulfill their breadth requirements by passing one course each from any two areas. A student has no specific preference as to which of the courses they take in an area. However, each student has a prior seriousness level, which determines whether they will pass any course. This scenario is modeled by defining a random variable $C_{sa}$ for each student $s$ and area $a$: a multi-valued variable where the value 0 denotes that student $s$ failed in area $a$, and a non-zero value denotes which course they passed. The weight for failing depends on the student but not on the area. Finally, a variable denotes that $s$ completed their requirements by passing courses from two areas.

The Curriculum domain is interesting because, for a given student, the various values other than 0 (i.e., the courses passed) are all symmetric, for all areas. And once all non-zero values are converted to a representative value in the reduced model, all areas become symmetric for a student.

We compare the different algorithms by plotting the KL divergence between the true marginals and an algorithm's marginals over time. True marginals are calculated by running Gibbs sampling for a sufficiently long duration. Figure 4 compares VV-Orbital MCMC with the baselines on the message passing domain. The dramatic speedups obtained by VV-Orbital MCMC underscore Orbital MCMC's inability to identify the huge number of value-renamed symmetries present in this domain, from which VV-Orbital MCMC benefits tremendously.

Before describing results on the Curriculum domain, we first highlight that, out of the box, Orbital MCMC cannot run on this domain, because both its theory and implementation have been developed only for Boolean-valued PGMs. To meaningfully compare against Orbital MCMC, we first binarize the domain by converting each multi-valued random variable into many Boolean variables, one for each value in its domain. We also add an infinite-weight exactly-one constraint over these Boolean variables for each original variable before giving the model to Orbital MCMC. A careful reader may observe that this binarization is already very similar to the VV construction of Section 3, but without non-equicardinal symmetries. Thus, this is already a much stronger baseline than those currently found in the literature.
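A minimal sketch of this one-hot binarization, assuming variables are given simply as a name-to-cardinality map (the representation and names are ours, not the paper's implementation):

```python
def binarize(domains):
    """Convert each multi-valued variable X with |dom(X)| = k into k
    Boolean variables (X, v), plus an exactly-one constraint group over
    them (enforced with infinite weight in the resulting Boolean PGM).
    Returns (boolean_vars, constraint_groups); a sketch, not the
    paper's baseline code."""
    bool_vars, constraint_groups = [], []
    for x, k in domains.items():
        group = [(x, v) for v in range(k)]
        bool_vars.extend(group)
        constraint_groups.append(group)  # exactly one of these is true
    return bool_vars, constraint_groups
```

Because the Boolean variables here are exactly VV pairs, Boolean Orbital MCMC on the binarized model can find symmetries among values of equal-cardinality variables, but it has no mechanism for relating variables of different cardinalities.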

Figure 5 shows the results on this domain. NEC-Orbital MCMC outperforms both baselines by wide margins. Orbital MCMC does improve upon vanilla Gibbs sampling, since it is able to find that the binarized variables for the different course values within an area are equivalent; however, it is unable to combine them across areas.

In domains where symmetries beyond count symmetries are not found, the overhead of our algorithms is insignificant, and they perform almost as well as (binarized) Orbital MCMC (e.g., see Figure 4). This is corroborated by the fact that the time for finding symmetries is small relative to the time taken for actual inference in both domains: 0.250 sec and 0.009 sec for the Curriculum and Ring domains, respectively.

In summary, both VV-Orbital MCMC and NEC-Orbital MCMC are useful advances over Orbital MCMC.

## 7 Conclusion and Future Directions

Existing lifted inference algorithms capture only a restricted set of symmetries, which we define as count symmetries. To the best of our knowledge, this is the first work to compute symmetries beyond count symmetries. To compute these non-count symmetries, we introduce the idea of computation over variable-value (VV) pairs. We develop a theory of VV automorphism groups and provide an algorithm to compute them. This algorithm finds equicardinal non-count symmetries, i.e., symmetries between variables that have the same domain cardinality; an extension allows us to also compute non-equicardinal symmetries. Finally, we provide MCMC procedures that use these computed symmetries for approximate inference. In particular, the algorithm for non-equicardinal symmetries requires a novel Metropolis-Hastings extension of existing Orbital MCMC. Experiments on two domains illustrate that exploiting these additional symmetries can provide a huge boost to the convergence of MCMC algorithms.

We believe that many real-world settings exhibit VV symmetries. For example, in the standard Potts model used in computer vision [12], the energy function depends on whether two neighboring particles take the same value or not, and not on the specific values themselves (hence, 00 would be symmetric to 11). Exploring VV symmetries in the context of specific applications is an important direction for future research.
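The Potts example can be checked directly: a Potts edge factor is invariant under any permutation of the label set, which is a VV symmetry that no count-based method over the labels themselves would report. The function below is an illustrative sketch (names and the inverse-temperature parameter `beta` are our assumptions):

```python
import math

def potts_factor(beta, k):
    """Potts edge potential over k labels: the weight depends only on
    whether the two neighbours agree, never on which label they share."""
    return {(a, b): math.exp(beta if a == b else 0.0)
            for a in range(k) for b in range(k)}

# Any relabeling of the value set leaves the factor table unchanged;
# e.g. swapping labels 0 and 1 maps (0,0) to (1,1) with equal weight.
```

This is exactly the "00 symmetric to 11" situation: the VV pairs (X, 0) and (X, 1) can be exchanged jointly across neighboring variables without changing the distribution.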

We will also work on extending the theoretical guarantees of variable symmetries [17] to VV symmetries. Several notions of symmetries already exist in the Constraint Satisfaction literature [3]. It will be interesting to see how our approach can be incorporated into the existing framework of symmetries in CSPs.

## Acknowledgements

We thank Mathias Niepert for his help with the orbital-MCMC code. Ankit Anand is being supported by the TCS Research Scholars Program. Mausam is being supported by grants from Google and Bloomberg. Both Mausam and Parag Singla are being supported by the Visvesvaraya Young Faculty Fellowships by Govt. of India.

## References

• [1] Ankit Anand, Aditya Grover, Mausam, and Parag Singla. Contextual Symmetries in Probabilistic Graphical Models. In IJCAI, 2016.
• [2] H. Bui, T. Huynh, and S. Riedel. Automorphism Groups of Graphical Models and Lifted Variational Inference. In UAI, 2013.
• [3] David Cohen, Peter Jeavons, Christopher Jefferson, Karen E. Petrie, and Barbara M. Smith. Symmetry Definitions for Constraint Satisfaction Problems. Constraints, 11(2):115–137, 2006.
• [4] James Crawford, Matthew Ginsberg, Eugene Luks, and Amitabha Roy. Symmetry-breaking predicates for search problems. KR, 96:148–159, 1996.
• [5] Paul T Darga, Karem A Sakallah, and Igor L Markov. Faster symmetry discovery using sparsity of symmetries. In Design Automation Conference, 2008.
• [6] R. de Salvo Braz, E. Amir, and D. Roth. Lifted First-Order Probabilistic Inference. In IJCAI, 2005.
• [7] The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.7.9, 2015.
• [8] V. Gogate and P. Domingos. Probabilistic Theorem Proving. In UAI, 2011.
• [9] V. Gogate, A. Jha, and D. Venugopal. Advances in Lifted Importance Sampling. In AAAI, 2012.
• [10] K. Kersting, B. Ahmadi, and S. Natarajan. Counting Belief Propagation. In UAI, 2009.
• [11] A. Kimmig, L. Mihalkova, and L. Getoor. Lifted Graphical Models: A Survey. Machine Learning, 99(1):1–45, 2015.
• [12] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
• [13] Timothy Kopp, Parag Singla, and Henry Kautz. Lifted Symmetry Detection and Breaking for MAP Inference. In NIPS, 2015.
• [14] H. Mittal, P. Goyal, V. Gogate, and P. Singla. New Rules for Domain Independent Lifted MAP Inference. In NIPS, 2014.
• [15] M. Mladenov, B. Ahmadi, and K. Kersting. Lifted Linear Programming. In AISTATS, 2012.
• [16] M. Mladenov, K. Kersting, and A. Globerson. Efficient Lifting of MAP LP Relaxations Using k-Locality. In AISTATS, 2014.
• [17] Mathias Niepert. Markov Chains on Orbits of Permutation Groups. In UAI, 2012.
• [18] Mathias Niepert. Symmetry-Aware Marginal Density Estimation. In AAAI, 2013.
• [19] J. Noessner, M. Niepert, and H. Stuckenschmidt. RockIt: Exploiting Parallelism and Symmetry for MAP Inference in Statistical Relational Models. In AAAI, 2013.
• [20] I. Pak. The Product Replacement Algorithm is Polynomial. In Foundations of Computer Science, 2000.
• [21] D. Poole. First-Order Probabilistic Inference. In IJCAI, 2003.
• [22] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. Lifted MAP inference for Markov Logic Networks. In AISTATS, 2014.
• [23] P. Singla and P. Domingos. Lifted First-Order Belief Propagation. In AAAI, 2008.
• [24] P. Singla, A. Nath, and P. Domingos. Approximate Lifting Techniques for Belief Propagation. In AAAI, 2014.
• [25] G. Van den Broeck and M. Niepert. Lifted probabilistic inference for asymmetric graphical models. In AAAI, 2015.
• [26] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted Probabilistic Inference by First-order Knowledge Compilation. In IJCAI, 2011.
• [27] Guy Van den Broeck and Adnan Darwiche. On the complexity and approximation of binary evidence in lifted inference. In NIPS, 2013.
• [28] D. Venugopal and V. Gogate. On Lifting the Gibbs Sampling Algorithm. In NIPS, 2012.