Contextual Symmetries in Probabilistic Graphical Models

06/30/2016 ∙ by Ankit Anand et al. ∙ Indian Institute of Technology Delhi and Stanford University

An important approach for efficient inference in probabilistic graphical models exploits symmetries among objects in the domain. Symmetric variables (states) are collapsed into meta-variables (meta-states) and inference algorithms are run over the lifted graphical model instead of the flat one. Our paper extends existing definitions of symmetry by introducing the novel notion of contextual symmetry. Two states that are not globally symmetric can be contextually symmetric under some specific assignment to a subset of variables, referred to as the context variables. Contextual symmetry subsumes previous symmetry definitions and can represent a large class of symmetries not representable earlier. We show how to compute contextual symmetries by reducing the problem to graph isomorphism. We extend previous work on exploiting symmetries in the MCMC framework to the case of contextual symmetries. Our experiments on several domains of interest demonstrate that exploiting contextual symmetries can result in significant computational gains.




1 Introduction

An important approach for efficient inference in probabilistic graphical models exploits symmetries in the underlying domain. It is especially useful for statistical relational learning models such as Markov logic networks [Richardson and Domingos2006], which exhibit repeated sub-structures – many objects are indistinguishable from each other and their associated relations have identical probability distributions.

Lifted inference algorithms (see [Kimmig et al.2015] for a survey) exploit this phenomenon by grouping symmetric states (variables) into meta-states (meta-variables) and performing inference in this reduced (lifted) graphical model.

Early approaches to lifted inference devised first-order extensions of propositional inference algorithms. These include approaches for lifting exact inference algorithms such as variable elimination [Poole2003, de Salvo Braz et al.2005], weighted model counting [Gogate and Domingos2011], knowledge compilation [Van den Broeck et al.2011], as well as lifting approximate algorithms such as belief propagation [Singla and Domingos2008, Kersting et al.2009, Singla et al.2014], Gibbs sampling [Venugopal and Gogate2012] and importance sampling [Gogate et al.2012]. In all these approaches, the lifting technique is tied to the specific algorithm being considered. More recently, another line of work [Jha et al.2010, Bui et al.2013, Niepert and Van den Broeck2014, Sarkhel et al.2014, Kopp et al.2015] has started looking at the notion of symmetry independent of the inference technique. In several cases, these symmetries are compactly represented using permutation groups. The computed symmetries have been used downstream for lifting existing algorithms such as variational inference [Bui et al.2013], (integer) linear programming [Noessner et al.2013, Mladenov et al.2014], and Markov chain Monte Carlo (MCMC) [Niepert2012, Van den Broeck and Niepert2015], which is our focus.

A key shortcoming of existing algorithms is that they only identify and exploit sets of variables (states) that are symmetric unconditionally. Our goal is to extend the notion of symmetries to contextual symmetries, sets of states that are symmetric under a given context (variable-value assignment). Our proposal is inspired by the extension of conditional independence to context-sensitive independence [Boutilier et al.1996], and analogously extends unconditional symmetries to contextual. As our first contribution, we develop a formal framework to define contextual symmetries. We also present an algorithm to compute contextual symmetries by reducing the problem to graph isomorphism.

Figure 1 illustrates an example of contextual symmetries. A couple, A and B, may like to go to a romantic movie. Each is somewhat less likely (but equally so) to go alone than to go together. However, if the movie is a thriller, A may be less interested in going by herself, but B may not change his behavior. Hence, A and B are symmetric to each other if the movie is romantic, but not symmetric if the movie is a thriller. We call A and B contextually symmetric conditioned on the movie being romantic.

Figure 1: Illustration of (a) Contextual Symmetry (with Genre=“Romantic”) (b) Orbital Symmetry in the Movie Network.

Finally, our paper extends the line of work on Orbital MCMC [Niepert2012] – a state-of-the-art approach to exploiting unconditional symmetries in a generic MCMC framework. Orbital MCMC achieves reduced mixing times compared to Gibbs sampling in domains where such symmetries exist. We design Con-MCMC, an algorithm that uses contextual symmetries within the MCMC framework. Our experiments demonstrate that on various interesting domains (relational and propositional) where contextual symmetries may be present, Con-MCMC can yield substantial gains compared to Orbital MCMC and Gibbs sampling. We also release a reference implementation of the Con-MCMC sampler for wider use.

2 Background


Let X = {X_1, X_2, ..., X_n} be a finite set of discrete random variables. For ease of exposition, we consider Boolean random variables, although our analysis extends more generally to n-ary random variables. We denote s = (x_1, x_2, ..., x_n) to represent a state. A graphical model G over X can be represented as a set of pairs {f_i, w_i}, where f_i is a formula (feature) over a subset of variables in X and w_i is the associated weight [Koller and Friedman2009]. This is the representation used in several existing models such as Markov logic networks [Domingos and Lowd2009].

2.1 Symmetries in Graphical Models

Some states may be symmetric; thus, they will have the same joint probability. This fact can be exploited in inference algorithms. To define symmetries, we make use of the formalism of automorphism groups, which are generic representations for symmetries between any set of objects. Automorphism groups over graphical models are defined using another algebraic structure called a permutation group.

A permutation θ is a bijection from the set X onto itself. We use θ(X_i) to denote the application of θ to the element X_i. We also overload notation by writing θ(s) for the permutation of a state s in which each component random variable is permuted using θ.

A permutation group Θ is a set of permutations which contains the identity element, has a unique inverse for every element in the set, and is closed under the composition operator.
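As a small illustrative sketch (not the paper's code), a permutation over Boolean variables can be represented as an index map; the helper names apply_perm and compose below are hypothetical:

```python
def apply_perm(perm, state):
    """Apply permutation `perm` (a dict i -> perm(i)) to a state tuple:
    the variable at position i moves to position perm(i)."""
    new_state = [None] * len(state)
    for i, j in perm.items():
        new_state[j] = state[i]
    return tuple(new_state)

def compose(p, q):
    """Composition (p after q) of two permutations over the same domain."""
    return {i: p[q[i]] for i in q}

# A transposition swapping variables 0 and 1, leaving 2 fixed.
swap01 = {0: 1, 1: 0, 2: 2}
identity = {0: 0, 1: 1, 2: 2}

state = (1, 0, 1)
assert apply_perm(swap01, state) == (0, 1, 1)
# Group properties for the two-element group {identity, swap01}:
assert compose(swap01, swap01) == identity      # self-inverse
assert apply_perm(identity, state) == state     # identity element
```

The set {identity, swap01} contains the identity, every element has an inverse, and it is closed under composition, so it forms a (tiny) permutation group.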

Following previous work [Niepert2012] we define symmetries and automorphism groups in graphical models as follows:

Definition 2.1.

A symmetry of a graphical model G over the set X is represented as a permutation θ of the variables in X that maps G back onto itself, i.e., θ(G) results in the same set of weighted formulas.

Definition 2.2.

An automorphism group of a graphical model G is defined as a permutation group Θ over X such that each θ ∈ Θ is a symmetry of G.

This definition of an automorphism group of a graphical model is analogous to that of an automorphism group of an edge-colored graph, where variables in X act as vertices in the graph, features act as edges (or hyperedges) and weights act as colors on the edges. We next define the orbit of a state.

Definition 2.3.

The orbit Γ_Θ(s) of a state s under the automorphism group Θ is defined as the set of all states that can be reached by applying a symmetry θ ∈ Θ on the variables of s, i.e., Γ_Θ(s) = {s' | ∃ θ ∈ Θ s.t. θ(s) = s'}.
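An orbit can be enumerated by breadth-first closure over a set of generator permutations. The following sketch (hypothetical helper names, not the paper's code) illustrates the definition:

```python
# Sketch: enumerate the orbit of a state under generator permutations.
from collections import deque

def apply_perm(perm, state):
    new_state = [None] * len(state)
    for i, j in perm.items():
        new_state[j] = state[i]
    return tuple(new_state)

def orbit(state, generators):
    """All states reachable from `state` by repeatedly applying generators."""
    seen = {state}
    frontier = deque([state])
    while frontier:
        s = frontier.popleft()
        for g in generators:
            t = apply_perm(g, s)
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return seen

# Generator swapping variables 0 and 1 (variable 2 fixed): the orbit of
# (1, 0, 0) under the generated group is {(1, 0, 0), (0, 1, 0)}.
gen = {0: 1, 1: 0, 2: 2}
assert orbit((1, 0, 0), [gen]) == {(1, 0, 0), (0, 1, 0)}
```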

Henceforth, we will refer to these unconditional symmetries of a graphical model as orbital symmetries. Let P be the probability distribution defined by the model G over the states.

Theorem 2.1.

If Θ is an automorphism group of G, then ∀ θ ∈ Θ and ∀ s, P(s) = P(θ(s)), i.e., orbital symmetries of a graphical model are probability-preserving transformations.

Therefore, the automorphism group for a graphical model is also referred to as an automorphism group for the underlying probability distribution. The symmetries of a graphical model as defined above can be obtained by solving a graph automorphism problem. Though that problem is not known to be in P or to be NP-complete (a quasipolynomial-time algorithm [Babai2015], which remains to be verified, has recently been proposed for the related graph isomorphism problem), efficient solutions can be obtained in practice using software such as Saucy and Nauty [Darga et al.2008, McKay and Piperno2014].

2.2 Markov chain Monte Carlo

Markov chain Monte Carlo (MCMC) is a popular approach for approximate inference in graphical models. A Markov chain is set up over the state space such that its stationary distribution is the same as the underlying probability distribution. An orbital Markov chain [Niepert2012] exploits the orbital symmetries of a model by setting up a Markov chain combining the original MCMC moves with orbital moves. Let M' denote an orbital Markov chain and M the corresponding original Markov chain. Then, given the current state s^(t), the next state in M' is sampled as follows:

  • original move: sample an intermediate state s' from s^(t) based on the transition probability in M.

  • orbital move: sample the next state s^(t+1) uniformly at random from the orbit Γ_Θ(s').

Orbital MCMC converges to the same stationary distribution as the original MCMC and has been shown to have significantly faster convergence properties.
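The two-move scheme above can be sketched as follows; `original_step` and `orbit_of` are hypothetical stand-ins for the base chain M and the orbit computation, not the paper's implementation:

```python
import random

def orbital_mcmc_step(state, original_step, orbit_of):
    """One transition of an orbital Markov chain M'."""
    # original move: one transition of the base chain M
    intermediate = original_step(state)
    # orbital move: uniform draw from the orbit of the intermediate state
    return random.choice(sorted(orbit_of(intermediate)))

# Toy instance: the base chain stays put and the two variables are
# symmetric, so the orbit of (1, 0) is {(1, 0), (0, 1)}.
random.seed(0)
orbit_of = lambda s: {s, (s[1], s[0])}
next_state = orbital_mcmc_step((1, 0), lambda s: s, orbit_of)
assert next_state in {(1, 0), (0, 1)}
```

Because the orbital move only shuffles states of equal probability, it leaves the stationary distribution untouched while letting the chain jump between high-probability regions.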

3 Contextual Symmetries

Our work proposes the novel notion of contextual symmetries – symmetries that only hold under a given context. We now extend the definitions of the previous section to their contextual counterparts. First we define a context.

Definition 3.1.

A context C is a partial assignment, i.e., a set of pairs (X_i, x_i), where X_i ∈ X and x_i ∈ {0, 1}, and no X_i is repeated in the set.

For example, in Figure 1, we can define a context (Genre, “Romantic”). We refer to a context as a single-variable context if it contains only one element.

We say that a variable X_i appears in a context C if there is a pair (X_i, x_i) ∈ C. Given a context C, we will use V(C) to denote the subset of variables of X which appear in C, and V̄(C) to denote the complement of this set. Given a state s and a subset of variables V, we will use s|V to denote the values of V in state s. We say that a state s is consistent with the context C iff s|V(C) agrees with the assignments in C.
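The consistency check is direct; a minimal sketch, assuming a context is represented as a dict from variable index to value:

```python
def consistent(state, context):
    """A state is consistent with a context iff it agrees with the
    context's value on every context variable."""
    return all(state[v] == x for v, x in context.items())

# State assigns (X0, X1, X2) = (1, 0, 1).
assert consistent((1, 0, 1), {0: 1, 2: 1})       # matches on X0 and X2
assert not consistent((1, 0, 1), {1: 1})         # X1 is 0, context wants 1
```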

In order to define contextual automorphism, we will need to define the notion of a reduced model.

Definition 3.2.

Given a graphical model G and a context C, the reduced model G|C is defined as the new graphical model obtained by substituting X_i = x_i in each formula f_j for all (X_i, x_i) ∈ C and keeping the original weights w_j.

Note that G|C is defined over the set V̄(C). As an example, if the model is represented by the formulas {(, ), (, )}, the reduced model under the single-variable context will be {(, ), (, )}. In the factored-form representation, reduction by a context corresponds to fixing the values of the context variables in the potential table. E.g., in Figure 1, given the context Genre = “Romantic”, we reduce the factor to the bottom four rows of the potential table, where Genre has value “Romantic”. We are now ready to define a contextual symmetry of a graphical model.
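As a rough illustration of reduction for the clausal case (the representation assumed in Section 3.2), the following sketch substitutes context values into clauses; the data layout and helper names are assumptions, not the paper's code. Clauses satisfied by the context reduce to the constant True and contribute only a constant weight factor, so they are dropped:

```python
def reduce_clause(clause, context):
    """clause: list of (variable, sign) literals. Returns None if the
    context already satisfies the clause, else the clause restricted to
    the remaining (non-context) variables."""
    remaining = []
    for var, sign in clause:
        if var in context:
            if context[var] == sign:   # literal satisfied -> clause is True
                return None
            # literal falsified by the context -> drop the literal
        else:
            remaining.append((var, sign))
    return remaining

def reduce_model(model, context):
    """model: list of (clause, weight). Weights are kept unchanged."""
    reduced = []
    for clause, w in model:
        r = reduce_clause(clause, context)
        if r is not None:
            reduced.append((r, w))
    return reduced

# Toy model: (X0 v X1, 1.5) and (~X0 v X2, 0.8); the context fixes X0 = True,
# so the first clause is satisfied and the second shrinks to (X2, 0.8).
model = [([(0, True), (1, True)], 1.5), ([(0, False), (2, True)], 0.8)]
assert reduce_model(model, {0: True}) == [([(2, True)], 0.8)]
```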

Definition 3.3.

A contextual symmetry of a graphical model G under context C is represented as a permutation θ of the variables in X s.t. (a) ∀ (X_i, x_i) ∈ C, θ(X_i) = X_i, i.e., variables in the context are mapped to themselves, and (b) the restriction of θ to V̄(C) is an orbital symmetry of the reduced model G|C, i.e., the mapping of the remaining variables defines an orbital symmetry of the reduced graphical model under context C.

For example, in Figure 1, let θ be the permutation that swaps the variables associated with A and B. θ is a contextual symmetry under the context (Genre, “Romantic”), but not under the context (Genre, “Thriller”).

Definition 3.4.

A contextual automorphism group of a graphical model G under context C is defined as a permutation group Θ_C over X such that each θ ∈ Θ_C is a contextual symmetry of G under context C.

Definition 3.5.

The contextual orbit Γ_{Θ_C}(s) of a state s under the contextual automorphism group Θ_C (given the context C) is the set of those states which are consistent with C and can be reached by applying some θ ∈ Θ_C to s, i.e., Γ_{Θ_C}(s) = {s' | s' is consistent with C and ∃ θ ∈ Θ_C s.t. θ(s) = s'}.

Note that s itself must be consistent with C for it to have a non-empty contextual orbit. Analogous to orbital symmetries, contextual symmetries are also probability preserving.

Theorem 3.1.

A contextual symmetry θ of G under context C is probability preserving, i.e., P(s) = P(θ(s)), as long as s is consistent with C.

3.1 Relationship with Related Concepts

The set of contextual symmetries subsumes that of orbital symmetries – any orbital symmetry is a contextual symmetry under the null context C = ∅. The two notions are even more closely related, as the following two lemmas show. Let F(θ) be the set of variables that θ maps onto themselves, i.e., F(θ) = {X_i ∈ X | θ(X_i) = X_i}.

Lemma 1.

An orbital symmetry θ is a contextual symmetry under a context C if V(C) ⊆ F(θ).

Lemma 2.

Let V ⊆ X. If a permutation θ is a contextual symmetry of G under all possible contexts C with V(C) = V, then θ is an orbital symmetry of G.

We now distinguish the notions of context and contextual symmetries from two other related concepts. First, a context is different from evidence. External information in the form of evidence modifies the underlying distribution represented by the graphical model. In contrast, a context has no effect on the underlying distribution.

Second, it might be tempting to confuse contextually symmetric states with contextually independent variables [Boutilier et al.1996]. In the example of Figure 1(a), given Genre=“Thriller”, A and B are contextually independent, i.e., the probability of A does not change depending on B; for this context, A and B are non-symmetric. For Genre=“Romantic”, A and B are symmetric but not independent.

Finally, in Section 7, we discuss the relationship between contextual symmetries and the recent notion of conditional decomposability [Niepert and Van den Broeck2014].

3.2 Computing contextual symmetries

Computing contextual symmetries for G under a context C is equivalent to computing orbital symmetries of the reduced model G|C. To compute orbital symmetries we adapt the procedure of Niepert [Niepert2012]. Following Niepert, we describe the construction for the case where each formula f_i is a clause, though it can be extended to the more general case.

Niepert’s procedure creates a colored graph with two nodes for every variable (one each for the positive and negative state) and one node for every formula f_i. Edges exist between the positive and negative nodes of every variable, and also between each formula node and the variable nodes (either positive or negative) appearing in that formula. Finally, colors are assigned to nodes based on the following criteria: (a) every positive variable node is assigned a common color, (b) every negative variable node is assigned a different common color, and (c) every unique formula weight is assigned a new color; the formula nodes inherit the color associated with their weight w_i.
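The node/edge/color bookkeeping of this construction can be sketched as below. The isomorphism solver itself (e.g., Saucy) is external and not shown; the node numbering and color labels are our own illustrative conventions:

```python
def build_colored_graph(model, n_vars):
    """model: list of (clause, weight); clause is a list of (var, sign).
    Nodes 2i / 2i+1 are the positive / negative literal of variable i;
    clause nodes are numbered after the literal nodes."""
    colors = {}
    edges = []
    for i in range(n_vars):
        colors[2 * i] = "pos"        # one common color for positive nodes
        colors[2 * i + 1] = "neg"    # another common color for negative nodes
        edges.append((2 * i, 2 * i + 1))
    weight_colors = {}
    next_node = 2 * n_vars
    for clause, w in model:
        # each distinct weight gets its own color, inherited by clause nodes
        colors[next_node] = weight_colors.setdefault(w, f"w{len(weight_colors)}")
        for var, sign in clause:
            edges.append((next_node, 2 * var if sign else 2 * var + 1))
        next_node += 1
    return colors, edges

# Two symmetric clauses with the same weight share a clause-node color.
model = [([(0, True), (1, True)], 1.5), ([(0, False), (1, False)], 1.5)]
colors, edges = build_colored_graph(model, 2)
assert colors[4] == colors[5]
```

Passing such a colored graph to an automorphism solver yields the variable permutations that preserve the (reduced) model.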

This colored graph is then passed to a graph isomorphism solver (e.g., Saucy), which computes the automorphism group for G|C. This is equivalent to computing a contextual automorphism group for G under C:

Theorem 3.2.

The automorphism group for the colored graph of the reduced graphical model G|C, along with an identity mapping of the context variables, gives a contextual automorphism group of G under C.

Note that in case we have any evidence available, the reduced model over which we induce a colored graph corresponds to the model reduced by both the context and the evidence. This is in contrast with Niepert’s original procedure, where evidence nodes are not removed from the colored graph and instead act as additional formulas of the original graphical model with infinite weights. Eliminating evidence nodes helps discover many more symmetries in the corresponding colored graph while still preserving correctness. For example, if the model is represented by the formulas {(, ), (, )} and the evidence fixes one of the variables, the remaining variables can become symmetric only if the evidence variable is eliminated from the colored graph, which does not happen in Niepert’s procedure.

4 Contextual MCMC

We now extend the Orbital MCMC algorithm from Section 2.2 so that it can exploit contextual symmetries; our algorithm is named Con-MCMC and is parameterized by α. Orbital MCMC reduces mixing times over the original MCMC because it can easily transition between high-probability states falling in the same orbit, which may otherwise be separated by low-probability regions. Unfortunately, as Figure 1 demonstrates, a domain may have little orbital symmetry but still have important contextual symmetry. Con-MCMC(α) exploits these symmetries for inference.

We are given a set V of context variables (more on this later). Let C_V denote the set of all possible contexts that assign all the variables in V. Overloading notation, we will use C(s) to denote the (unique) context in C_V consistent with state s. We compute contextual symmetries under each context in C_V using the algorithm from Section 3. We are also given an original regular Markov chain M that converges to the desired probability distribution π. Con-MCMC(α) runs a contextual Markov chain M_C that samples the next state from the current state s^(t) as follows:

  1. Gibbs-orig move: We sample an intermediate state s' from the current state s^(t) as follows:
      (a) with probability α (Gibbs): flip a random context variable in s^(t) using the Gibbs transition probability.
      (b) with probability 1 − α (original): make the move from s^(t) based on the transition probability in M.

  2. con-orbital move: Let C = C(s') be the context consistent with s'. Let Γ_{Θ_C}(s') denote the contextual orbit of s' under the context C. Sample the next state s^(t+1) uniformly at random from Γ_{Θ_C}(s').

When α = 0, our algorithm reduces to a direct extension of Orbital MCMC in which, in the second step, we sample uniformly from a contextual orbit instead of the original orbit. In the more interesting case of α > 0, we enable the Markov chain to move more freely between different contexts using a Gibbs flip over the context variables. This Gibbs transition helps us carry over the effect of symmetries exploited under one context (via the orbital moves in step 2) to others. This can be especially useful when symmetries are unevenly distributed across multiple contexts (as also confirmed by our experiments).
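A minimal sketch of one Con-MCMC(α) transition follows; `gibbs_flip`, `original_step`, and `contextual_orbit_of` are hypothetical helpers standing in for the components described above, not the released implementation:

```python
import random

def con_mcmc_step(state, alpha, context_vars, gibbs_flip, original_step,
                  contextual_orbit_of):
    # Step 1: Gibbs-orig move
    if random.random() < alpha:
        v = random.choice(sorted(context_vars))
        intermediate = gibbs_flip(state, v)     # Gibbs flip of a context var
    else:
        intermediate = original_step(state)     # one move of the base chain M
    # Step 2: con-orbital move; contextual permutations fix the context
    # variables, so the context of the intermediate state is preserved.
    return random.choice(sorted(contextual_orbit_of(intermediate)))

# Toy run: variable 0 is the context variable; variables 1 and 2 are
# symmetric in every context, so the contextual orbit swaps them.
random.seed(1)
orbit_of = lambda s: {s, (s[0], s[2], s[1])}
flip = lambda s, v: s[:v] + (1 - s[v],) + s[v + 1:]
s = con_mcmc_step((0, 1, 0), 0.5, {0}, flip, lambda s: s, orbit_of)
assert s[1:] in {(1, 0), (0, 1)}
```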

In order to sample a state uniformly at random from a contextual orbit, we use the product replacement algorithm [Pak2000] as described and used by Niepert [Niepert2012]. Recall that since we are working with contextual permutations, the context variables are mapped to themselves, so we are guaranteed not to change the context. Next, we show that Con-MCMC(α) converges to the desired stationary distribution π. We need the following lemma.

Lemma 3.

Let M1 and M2 be two Markov chains defined over a finite state space S with transition probability functions P1 and P2, respectively, such that π is a stationary distribution for both, i.e., πP1 = π and πP2 = π. Further, let M1 be regular. Then, the Markov chain M3 whose transition function composes the two (one move of M1 followed by one move of M2) is also regular and has π as its unique stationary distribution.

Let M_G refer to the family of Markov chains constructed using only step 1 of our algorithm, i.e., with no orbital moves. M is regular with stationary distribution π. Further, each individual Gibbs flip over a variable satisfies stationarity with respect to the underlying distribution [Koller and Friedman2009]. Hence, using Lemma 3, M_G is regular with π as a stationary distribution.
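A small numerical sanity check of Lemma 3 (illustrative only, not a proof): if π is stationary for two transition matrices P1 and P2, it is stationary for their composition "one P1 move, then one P2 move".

```python
def step(pi, P):
    """Distribution after one transition: (pi P)_j = sum_i pi_i P[i][j]."""
    n = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

P1 = [[0.5, 0.5], [0.5, 0.5]]   # regular chain with uniform stationary dist.
P2 = [[0.0, 1.0], [1.0, 0.0]]   # deterministic swap, also uniform-stationary
pi = [0.5, 0.5]

assert step(pi, P1) == pi and step(pi, P2) == pi
assert step(step(pi, P1), P2) == pi   # the composed chain preserves pi
```

Here P1 plays the role of the regular Gibbs-orig move and P2 the role of the (probability-preserving) orbital move.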

Theorem 4.

The family of contextual Markov chains M_C constructed using Con-MCMC(α) converges to the stationary distribution π of the original Markov chain M for any choice of context variables V and any 0 ≤ α < 1.


Let π_C be the stationary distribution of M_C. Since M_G is regular, it is easy to see that M_C is also regular (there is always a non-zero probability of coming back to the same state in an orbital move). Therefore, M_C converges to a unique stationary distribution, and we only need to show that π is a stationary distribution of M_C. Let S denote the set of all states. For s, s' ∈ S, let P_G and P_C represent the transition probability functions of M_G and M_C, respectively. In order to show that M_C converges to π, we need to show that for every s ∈ S:

π(s) = Σ_{s' ∈ S} π(s') P_C(s', s)

The RHS of the above equation can be expanded using the definition of the contextual orbital move: a transition of M_C first makes an M_G move to an intermediate state s'' and then picks the next state uniformly from Γ_{Θ_{C(s'')}}(s''), where Θ_{C(s'')} denotes the contextual automorphism group for the (unique) context consistent with s'' and Γ_{Θ_{C(s'')}}(s'') denotes the corresponding orbit. Rearranging the resulting sums, applying the stationarity of π with respect to P_G, and using the fact that all states in the same contextual orbit have the same probability (Theorem 3.1) reduces the RHS to π(s). ∎

5 Experimental Evaluation

Our experiments evaluate the use of contextual symmetries for faster inference in graphical models. We compare our approach against Orbital MCMC, which is the only available algorithm that exploits symmetries in a general MCMC framework. We also compare with vanilla Gibbs sampling, which does not exploit any symmetries. We implement Con-MCMC(α) as an extension of the original Orbital MCMC implementation, available in the GAP language [GAP2015]. The existing implementation uses Saucy [Darga et al.2008] for graph isomorphism and a Gibbs sampler as the base Markov chain. We experiment on two versions each of two different domains, with context variables pre-specified. We next describe our domains.

5.1 Domains and Methodology

Figure 2: (a) Con-MCMC effectiveness increases tremendously with increasing domain sizes. Note that y-axes are on different scales. (c) New orbital symmetries are created with increasing evidence, leading to improved performance of Orbital MCMC. (b, d) Curves for Sports Network (Single) and Y & O (Single) respectively – Con-MCMC(0.01) performs the best and vastly outperforms Con-MCMC(0).

Sports Network: This Markov network models a group of students who may enter a future sports league, which could be for one of two sports, badminton or tennis (modeled as a single sport variable). Each student belongs to one of the dorms on campus. The league accepts both singles and doubles entries. For each student, the domain has a variable for playing singles. For each pair of students coming from the same dorm, we have a variable indicating that they will play doubles together. Multiple students (in the same dorm) train together in training groups, which are different for the two sports. A student’s participation in the league for a given sport is (jointly) influenced by the participation of the other students in her training group for that sport. Moreover, if two students decide to play singles, it increases the probability that they will also team up to play doubles, independent of their training groups. In this domain, different subsets of students in a dorm (based on their training groups) are symmetric to each other depending upon the sport, which makes the sport variable a natural choice for the context. In our experiments, we use training groups of students and dorms with students each.

Young and Old: This domain is modeled as an MLN and is an extension of the Friends and Smokers (FS) network [Singla and Domingos2008]. Y&O has a propositional variable Young determining whether we are dealing with a population of youngsters or older folks. For every person p in the domain, we have predicates Smokes(p), Cancer(p) and EatsOut(p). We also have the predicate Friends(p1, p2) for every pair of persons. We have rules stating that young persons are more likely to smoke and older people are less likely to smoke. Similarly, we have rules stating that young people are more likely to eat out and old people are less likely to eat out. When the population is young, everyone has the same weight for the smoking rule and a slightly different weight (sampled from a Gaussian) for eating out. When the population is old, everyone has a slightly different weight (again sampled from a Gaussian) for smoking and the same weight for eating out. As in the original FS, we have rules stating that smoking causes cancer and that friends have similar smoking habits. We also have rules stating that the cancer and friends variables have low prior probabilities. In this domain, the smoking, cancer and friends variables are symmetric to each other when the population is young, whereas all eating-out variables are symmetric when the population is old. Clearly, Young is a natural choice of context in this domain.

An important property of both these domains is that different contextual symmetries exist for both assignments of the respective context variables. To test the robustness of Con-MCMC, we further modify these domains so that contextual symmetries exist only for one of the two assignments of the context variable. In Y&O (Single), we give (slightly) different weights to variables when Young is false, i.e., symmetries exist only when Young is true. In Sports Network (Single), variables involved in a training group are symmetric only for tennis; for badminton, each student in a group behaves (slightly) differently. We refer to these two variations as the Single side versions of the original domains.

For these four domains, we plot run time vs. the KL divergence between the approximate marginal probabilities computed by each algorithm and the true marginals (computed by running a Gibbs sampler for a sufficiently long time). For both Orbital MCMC and Con-MCMC, the time to compute symmetries is included in the run time. For each problem we run 20 iterations of each algorithm and take the mean of the marginals to reduce the variance of the measurements. We also plot 95% confidence intervals. We show Con-MCMC results for α = 0 and α = 0.01, the latter chosen based on performance on smaller problem sizes. We perform various control experiments by varying the size of domains, the amount of available evidence, the marginal posterior probability of the context variable, and the value of the α parameter. All experiments are run on a quad-core Intel i7 processor.

5.2 Results

Figures 2 and 3 show representative graphs across multiple domains and varying experimental conditions. We find that Con-MCMC(0.01) almost always performs best or at par with the best of the other three algorithms. Con-MCMC(0) usually performs better than Gibbs and Orbital MCMC, but its performance can be closer to Gibbs or Con-MCMC(0.01) depending upon the experimental setting. Orbital MCMC does not usually offer much advantage over Gibbs, primarily because these domains don’t have many orbital symmetries. For Sports Network, there are no orbital symmetries at all; Orbital MCMC avoids the overhead of the orbital move and performs at par with Gibbs. For Y&O, Orbital MCMC finds a few symmetries, which don’t particularly help in reducing mixing time; however, it still incurs the overhead of orbital moves, leading to significantly worse performance than Gibbs.

Variation with Domain Size: Figure 2(a) compares the algorithms as we increase the domain size for the Sports network from to students. The overall trends remain similar, i.e., Con-MCMC algorithms outperform Gibbs and Orbital MCMC by huge margins. A closer look reveals that the y-axes are at different scales for the three curves – the relative edge of Con-MCMC algorithms increases substantially with larger domain sizes.

Variation with Amount of Evidence: Figure 2(c) compares the performance of the algorithms as we vary the amount of (random) evidence available on predicates other than the context variable (domain size held fixed). As earlier, Con-MCMC algorithms outperform the others. We observe that the relative gain of Con-MCMC algorithms over Orbital MCMC decreases with increasing evidence (for 30% evidence Orbital MCMC overlaps with Gibbs; for 60% evidence, Orbital MCMC overlaps with Con-MCMC). We believe that this is due to the fact that more evidence tends to disconnect the network, introducing additional symmetries which can be exploited by Orbital MCMC. Nevertheless, Con-MCMC algorithms perform at least as well as Orbital MCMC for all values of evidence that we tested.

Figure 3: Con-MCMC effectiveness increases in Single Side Symmetry cases as we increase the marginal of context variable to the side having symmetry from 0.09 to 0.91. Con-MCMC(0.01) provides significant gains even at very low posterior values. Con-MCMC(0) performance improves with increase in the marginal.
Figure 4: α=0.01 and α=0.1 work best across both domains. Very high as well as very low values of α lead to poor performance.

Variation across Versions of a Domain: Figures 2(b) and 2(d) show the plots for the Single side versions of Sports Network and Y&O, respectively. We observe a significant difference in the performance of the two Con-MCMC algorithms. The reason is subtle. Since symmetries exist only on one side, that side mixes quickly for Con-MCMC(0); however, the other side does not mix as well, because of the lack of symmetries. Con-MCMC(α) for α > 0 mitigates this by upsampling flips of the context variable. This enables the rapid mixing on the symmetry side to regularly influence the non-symmetry side (via the Gibbs move), which leads to faster mixing on that side too. Nevertheless, Con-MCMC(0) is still able to outperform both Gibbs sampling and Orbital MCMC by exploiting the single-sided symmetry. The posterior of the symmetry side is 0.33 in Sports Network (Figure 2(b)) and 0.37 in Y&O (Figure 2(d)).

We also observe in the first graph of Figure 2(c) that Con-MCMC(0) performs somewhat worse than Con-MCMC(0.01). We believe the reason in this two-sided symmetry domain is similar to the single-sided case. In Y&O, when Young is true, substantial symmetries may exist due to the smoking, cancer and friends variables. On the other side, the symmetries are far fewer (only the eating-out variables). This implies that Con-MCMC(0) will mix much faster on one side, but not on the other. On the other hand, Con-MCMC(α) with α > 0 will upsample context-variable flips and allow the stronger symmetry side to influence the other. In general, the performance of Con-MCMC(α) is highly robust across varieties of symmetric and asymmetric domains.

Variation with Posterior of Context Variable: We investigate performance on the Single side domains further by varying the posterior marginal probability of the context variable. Figure 3 shows the results for Sports Network (Single) with the marginal probability of tennis varying from 0.09 to 0.91. Note that tennis is the side where symmetries exist.

The graphs show an interesting trend. Even for very low marginals, Con-MCMC(0.01) is able to benefit from one-sided symmetries. Since the marginal is low, we expect any MCMC algorithm to spend most of its time on the non-symmetry side. However, Con-MCMC(0.01) still goes back and forth several times between the two sides; each flip to the symmetry side and back potentially reaches a different region of the state space, leading to better mixing on the non-symmetry side.

Not surprisingly, Con-MCMC(0) does not perform as well for low marginals – it does not get to switch contexts as often, and ends up mixing slowly on the important, non-symmetry side. As the marginal of the context variable increases, the relative performance of Con-MCMC(0) improves substantially. When the marginal becomes high (0.91), both Con-MCMC samplers end up sampling mostly on the symmetry side and reap the benefits of symmetries similarly. We also conduct these experiments on the Y&O domain and observe very similar behavior.

Variation with α Parameter: Figure 4 shows the performance of Con-MCMC(α) for different values of α in the range 0.001 to 0.5 for both the Sports Network (Single) and Y&O (Single) domains. Our algorithm is fairly robust for values of α between 0.01 and 0.1. Its performance starts to degrade for very low as well as very high values of α. For very low values of α, the algorithm’s behavior approaches that of Con-MCMC(0). For very high values of α, the algorithm spends too much time flipping the context variable and not enough time exploring the rest of the state space, resulting in poor performance.

Overall, we conclude that Con-MCMC(0.01) is robust to various experimental settings and obtains the best results, significantly outperforming Orbital MCMC and Gibbs. This underscores the importance of our contextual symmetry framework for probabilistic inference.

6 Discussion and Future Work

While our work extends the capability of lifted inference to a wider range of settings, it also raises important questions. In many cases, the set of context variables is known from domain knowledge or the domain description, especially in relational models. An open question is how to automatically compute a good context set, since trying all possible sets can be prohibitive. We have designed a heuristic approach that greedily chooses the most useful context variable in each iteration and adds it to the context set. It uses a few initial rounds of the color passing algorithm [Kersting et al.2009] to approximate the amount of additional symmetry obtained by making a variable part of the context. More experiments are needed to assess the effectiveness of our approach.
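The greedy selection just described might be outlined as follows. Here `symmetry_score` is a hypothetical stand-in for the color-passing-based estimate of how much symmetry a candidate context exposes; the toy score at the bottom is purely illustrative:

```python
def greedy_context(candidates, symmetry_score, max_size):
    """Greedily grow a context set: in each iteration, add the candidate
    variable whose inclusion yields the largest estimated symmetry gain
    (approximated in the paper's heuristic by a few rounds of color
    passing). Stops early when no candidate improves the score, since
    adding context variables can also break existing symmetries.
    """
    context = set()
    best = symmetry_score(context)
    while len(context) < max_size:
        remaining = candidates - context
        if not remaining:
            break
        gains = {v: symmetry_score(context | {v}) for v in remaining}
        v = max(gains, key=gains.get)
        if gains[v] <= best:
            break
        context.add(v)
        best = gains[v]
    return context

# Hypothetical score: 'a' and 'b' each expose extra symmetry; 'z' breaks some.
score = lambda ctx: len(ctx & {'a', 'b'}) - 2 * len(ctx & {'z'})
chosen = greedy_context({'a', 'b', 'z'}, score, max_size=3)
```

On this toy score the loop picks 'a' and 'b' but stops before 'z', since adding 'z' would lower the estimated symmetry.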

Another important observation is that the set of contextual symmetries need not grow monotonically with context size: additional context variables can break existing symmetries, since context variables are forced to undergo the identity mapping. How, then, do we design algorithms whose effectiveness increases monotonically with larger contexts in all cases? This is an important direction for future work.

Another question concerns the robustness of symmetry-based inference algorithms. Over the course of our experiments, we tested our algorithms on several domain variations. While in most cases Con-MCMC(0.01) and Con-MCMC(0) performed much better than Gibbs sampling, in rare cases they performed worse. Further investigation revealed two main sources of lower performance.

The first and more prominent cause is the trade-off between mixing speed and sampling time. Because all symmetry-based algorithms run an expensive product replacement algorithm [Pak2000] to sample from an orbit, Con-MCMC (and Orbital MCMC) generate each successive sample much more slowly than Gibbs. In domains where symmetries are prevalent, the slower sampling is offset by rapid mixing, but in other domains it can result in worse overall performance. An intelligent wrapper that guesses whether or not to exploit symmetries in a given domain will be crucial for developing a robust inference algorithm.

The second cause is subtler. Con-MCMC(α) is able to exploit contextual symmetries (even single-sided ones) in a wide variety of settings, but it can lose to other algorithms when the context variable has a huge Markov blanket, so much so that a single Gibbs move flipping the context variable becomes prohibitively costly. Since Con-MCMC(α) upsamples flips of context variables, this can significantly hurt overall performance, even though mixing is much faster with respect to the number of samples.
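To illustrate why orbit sampling carries a per-sample cost, here is a toy version of the product-replacement idea on permutations of a 3-element domain. This is only an illustrative sketch of the scheme, not Pak's exact analyzed variant:

```python
import random

def compose(p, q):
    """Compose permutations given as tuples: (p o q)[i] = p[q[i]]."""
    return tuple(p[i] for i in q)

def product_replacement(generators, steps, rng):
    """Schematic product replacement: keep a cell of group elements and
    repeatedly replace one entry by its product with another. After
    enough replacement steps, an entry drawn from the cell is close to
    a uniform random element of the generated group, but each sample
    pays for many permutation compositions.
    """
    cell = list(generators)
    for _ in range(steps):
        i, j = rng.sample(range(len(cell)), 2)
        cell[i] = compose(cell[i], cell[j])
    return rng.choice(cell)

# Cyclic group on 3 points, generated by rotations of (0, 1, 2).
rng = random.Random(7)
element = product_replacement([(1, 2, 0), (2, 0, 1)], steps=50, rng=rng)
```

Every returned element is a product of the rotation generators, so it is again a rotation; the repeated composition work per sample is exactly the overhead discussed above.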

Another observation relates to the effect of evidence in a domain. Evidence can both help and hurt symmetries in an inference problem. In some cases, evidence breaks existing symmetries and reduces the relative gain of symmetry-based algorithms; in other cases, it breaks edges, creating new symmetries that help them. While in our experiments we never found Con-MCMC(0.01) to be worse than Gibbs due to additional evidence, such pathological cases can be constructed.

It would be interesting to see how algorithms other than MCMC can benefit from our contextual symmetry framework. In the future, we would also like to explore approximate contextual symmetries, which could make our contribution applicable to domains where exact contextual symmetries cannot be found. We would also like to theoretically analyze the mixing time of Con-MCMC.

7 Related Work

Some papers have discussed methods for computing symmetries under a given evidence [Van den Broeck and Darwiche2013, Venugopal and Gogate2014, Kopp et al.2015]. As discussed in Section 3.2, the algorithm for computing contextual symmetries is closely related to computing evidence-based symmetries. The main difference is in the way we use these symmetries for downstream inference.

While our general notion of contextual symmetries is novel, it has connections to a few recent works. The RockIt system [Noessner et al.2013] identifies contextual symmetries only in a very special case, in which the domain theory contains a set of disjunctive clauses of a specific form sharing a common literal, with each remaining disjunct being a single literal (or its negation). In that setting, the shared literal is a natural context, and symmetries among the remaining literals can be exploited. RockIt does not provide any general notion beyond this special case, and it constructs a reduced ILP for MAP inference rather than marginal inference, as in our case.

There is recent work exploring connections between the exchangeability of random variables and the tractability of probabilistic inference [Niepert and Van den Broeck2014]. Our contextual symmetries can be seen as generalizing their conditional decomposability to conditional partial decomposability, where the sufficient statistics are precisely the contextual orbits. Whereas Niepert and Van den Broeck primarily focus on developing the theory of conditional decomposability, we additionally connect it to the symmetries present in the structure of a graphical model. Further, unlike them, we develop an algorithm to compute these conditional decompositions (contextual symmetries, in our case) and show how they can be used in practice for efficient probabilistic inference.

As discussed in Section 1, our work builds on the recent lifted inference literature that pre-computes explicit domain symmetries using automorphism groups [Niepert2012, Bui et al.2013, Van den Broeck and Niepert2015] and exploits them for efficient inference. Our work is most closely related to Orbital MCMC [Niepert2012], which does not incorporate contextual symmetries; our experimental results show the value of Con-MCMC over it.

Our contextual symmetries are also analogous to conditional symmetries in constraint satisfaction problems (CSPs) [Gent et al.2005, Walsh2006, Gent et al.2007]. CSP symmetries are called conditional if symmetry groups exist only in a sub-problem of the original CSP, i.e., in a CSP with one or more additional constraints. The CSP problem setting and their actual manifestation in algorithms are quite different from lifted inference, but their definition and use of conditional symmetries is in the same spirit as ours.

8 Conclusions

We present a novel framework for contextual symmetries in probabilistic graphical models. Contextual symmetries generalize and extend previous notions of orbital symmetry. Given any context, we can efficiently compute these symmetries by reducing it to the problem of colored graph isomorphism. While our framework is independent of any inference algorithm, we illustrate its applicability by proposing Con-MCMC, an MCMC approach that exploits contextual symmetries. Our experiments on several domains validate the efficacy of Con-MCMC, where it outperforms existing state-of-the-art techniques for symmetry-based MCMC by wide margins. Finally, we have released a reference implementation of Con-MCMC for wider use by the research community.


Acknowledgments

We are grateful to Mathias Niepert for sharing the implementation of Orbital MCMC and for answering our queries about the code. We would also like to thank the anonymous reviewers for their comments and feedback, and Ritesh Noothigattu for discussions and comments on this research. Ankit Anand is supported by the TCS Research Scholars Program. Mausam and Parag Singla are supported by Visvesvaraya faculty research awards from the Govt. of India. Mausam is also supported by Google and Bloomberg research awards.


References

  • [Anand et al.2016] Ankit Anand, Aditya Grover, Mausam, and Parag Singla. Contextual symmetries in probabilistic graphical models. In IJCAI, 2016.
  • [Babai2015] László Babai. Graph isomorphism in quasipolynomial time. arXiv preprint arXiv:1512.03547, 2015.
  • [Boutilier et al.1996] Craig Boutilier, Nir Friedman, Moises Goldszmidt, and Daphne Koller. Context-specific independence in Bayesian networks. In UAI, 1996.
  • [Bui et al.2013] H. Bui, T. Huynh, and S. Riedel. Automorphism groups of graphical models and lifted variational inference. In UAI, 2013.
  • [Darga et al.2008] Paul T Darga, Karem A Sakallah, and Igor L Markov. Faster symmetry discovery using sparsity of symmetries. In Design Automation Conference, 2008.
  • [de Salvo Braz et al.2005] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. In IJCAI, 2005.
  • [Domingos and Lowd2009] Pedro Domingos and Daniel Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Synthesis Lectures on Artificial Intelligence and Machine Learning. Morgan & Claypool Publishers, 2009.
  • [GAP2015] The GAP Group. GAP – Groups, Algorithms, and Programming, Version 4.7.9, 2015.
  • [Gent et al.2005] Ian P Gent, Tom Kelsey, Steve A Linton, Iain McDonald, Ian Miguel, and Barbara M Smith. Conditional symmetry breaking. In Principles and Practice of Constraint Programming. 2005.
  • [Gent et al.2007] Ian P Gent, Tom Kelsey, Stephen A Linton, Justin Pearson, and Colva M Roney-Dougal. Groupoids and conditional symmetry. In Principles and Practice of Constraint Programming. 2007.
  • [Gogate and Domingos2011] V. Gogate and P. Domingos. Probabilistic theorem proving. In UAI, 2011.
  • [Gogate et al.2012] V. Gogate, A. Jha, and D. Venugopal. Advances in lifted importance sampling. In AAAI, 2012.
  • [Jha et al.2010] Abhay Kumar Jha, Vibhav Gogate, Alexandra Meliou, and Dan Suciu. Lifted inference seen from the other side : The tractable features. In NIPS, 2010.
  • [Kersting et al.2009] K. Kersting, B. Ahmadi, and S. Natarajan. Counting belief propagation. In UAI, 2009.
  • [Kimmig et al.2015] A. Kimmig, L. Mihalkova, and L. Getoor. Lifted graphical models: A survey. Machine Learning, 99(1):1–45, 2015.
  • [Koller and Friedman2009] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.
  • [Kopp et al.2015] Timothy Kopp, Parag Singla, and Henry Kautz. Lifted symmetry detection and breaking for map inference. In NIPS, 2015.
  • [McKay and Piperno2014] Brendan D. McKay and Adolfo Piperno. Practical graph isomorphism. Journal of Symbolic Computation, 60(0):94 – 112, 2014.
  • [Mladenov et al.2014] M. Mladenov, K. Kersting, and A. Globerson. Efficient lifting of MAP lp relaxations using k-locality. In AISTATS, 2014.
  • [Niepert and Van den Broeck2014] Mathias Niepert and Guy Van den Broeck. Tractability through exchangeability: A new perspective on efficient probabilistic inference. In AAAI, 2014.
  • [Niepert2012] Mathias Niepert. Markov chains on orbits of permutation groups. In UAI, 2012.
  • [Noessner et al.2013] J. Noessner, M. Niepert, and H. Stuckenschmidt. RockIt: Exploiting parallelism and symmetry for MAP inference in statistical relational models. In AAAI, 2013.
  • [Pak2000] I. Pak. The product replacement algorithm is polynomial. In Foundations of Computer Science, 2000.
  • [Poole2003] D. Poole. First-order probabilistic inference. In IJCAI, 2003.
  • [Richardson and Domingos2006] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62, 2006.
  • [Sarkhel et al.2014] S. Sarkhel, D. Venugopal, P. Singla, and V. Gogate. Lifted MAP inference for Markov logic networks. In AISTATS, 2014.
  • [Singla and Domingos2008] P. Singla and P. Domingos. Lifted first-order belief propagation. In AAAI, 2008.
  • [Singla et al.2014] P. Singla, A. Nath, and P. Domingos. Approximate lifting techniques for belief propagation. In AAAI, 2014.
  • [Van den Broeck and Darwiche2013] Guy Van den Broeck and Adnan Darwiche. On the complexity and approximation of binary evidence in lifted inference. In NIPS, 2013.
  • [Van den Broeck and Niepert2015] G. Van den Broeck and M. Niepert. Lifted probabilistic inference for asymmetric graphical models. In AAAI, 2015.
  • [Van den Broeck et al.2011] G. Van den Broeck, N. Taghipour, W. Meert, J. Davis, and L. De Raedt. Lifted probabilistic inference by first-order knowledge compilation. In IJCAI, 2011.
  • [Venugopal and Gogate2012] D. Venugopal and V. Gogate. On lifting the Gibbs sampling algorithm. In NIPS, 2012.
  • [Venugopal and Gogate2014] Deepak Venugopal and Vibhav G Gogate. Scaling-up importance sampling for Markov logic networks. In NIPS, 2014.
  • [Walsh2006] Toby Walsh. General symmetry breaking constraints. In Principles and Practice of Constraint Programming. 2006.