1 Introduction
A substantial part of human knowledge and reasoning is based on the idea of a part-whole hierarchy in which an entity at a given level represents a part of a whole at a higher level and is itself composed of parts defined at a lower level. This notion is crucial to the natural sciences (elementary particle, atom, molecule, cell, organ, organism, etc.), but is also at work in the structure of documents (letter, word, sentence, paragraph, section, chapter, book), music (single note, chord, chord progression, piece, etc.), and architecture (wall, room, wing, building). Even plans of action are often best represented in terms of hierarchies of goals and subgoals. While it may be debatable whether reality itself can be said to exhibit this kind of structure, it is clear that the human mind frequently resorts to the principle of hierarchy in organizing complex structure. In this paper, we take a mathematical formulation of this idea provided by Michael Leyton [8], based on the group-theoretic notion of wreath product, and show how one can build probabilistic models of shape that discover a hierarchical generative representation, providing the basis for understanding and manipulating the shape.
The view of computer vision as inverse computer graphics is very elegant and has a long history in the field. The key idea is to define a graphics language to describe the generative process for creating a class of images and—given an image—to infer its generative history in terms of that language. To account for irregularities, noise, and ambiguity, we define a stochastic rendering process that mitigates the rift between the platonic graphics language and the reality of the image [9]. While it is difficult to make this paradigm work for general classes of images, we focus on the special case of hand-drawn, highly symmetric geometric sketches. What would be a good graphics language to describe geometric sketches? Typical drawing tools (including those in PowerPoint) provide graphics primitives such as points, lines, circles, and squares. Using grouping, copying, and alignment it is possible to create figures that reuse certain elements of the figure to ensure consistency and regularity. However, the true underlying constraints and regularities are often lost because the full generative history is not represented. As a consequence, it is often difficult to edit a sketch while preserving its underlying structure.
In his book "A Generative Theory of Shape" [8], Michael Leyton proposes a graphics language that is totally generative and captures what he calls the maximization of transfer and recoverability. The key idea is to describe the emergence of shape as a generative process that unfolds structure from previously unfolded substructures, eventually going back to a single point: the origin. The maximization of transfer means that, as far as possible, the shape is "explained" by reusing existing building blocks. Once a given shape is understood in terms of such a totally generative history, it can be intelligently manipulated by changing substructures (which may appear repeatedly in the unfolded shape), completing an incomplete shape based on the inferred regularities, or using it as a building block in a superstructure.
To make Leyton's theory practical, we introduce the stochastic wreath process, which generalizes Leyton's formalism to the case of noisy shapes. While Leyton's generative theory of shape characterizes a given highly regular shape, the stochastic wreath process represents a distribution over shapes, which have irregular appearance but highly regular structure. The noise process factorizes across the different hierarchical levels of the shape (one per group factor in the chain of wreath products) and hence is perfectly aligned with the generative process.
The stochastic wreath process allows us to make Leyton's theory practical in the sense that, for a given hand-drawn sketch of a shape, we can infer a posterior distribution over generative histories in terms of the wreath process. To explore this idea, we define a rendering pipeline based on the wreath process which generates actual pixel images, and we propose a reversible jump MCMC method [4], inspired by approximate Bayesian computation [14], for inference. Note that the model class we describe can also be viewed as a domain-specific probabilistic programming language, with the inference process attempting to synthesize appropriate models.
2 A Generative Model of Shape
2.1 Leyton’s Generative Theory
Leyton [8] characterizes the structure of a shape by the (ordered) sequence of actions on the canvas that led to its creation, its generative history. In a broad sense, geometric objects are seen as memory stores of a set of actions. These actions are modelled by a series of (algebraic) groups of transformations.
2.1.1 Preliminaries

(Groups) A group is a non-empty set $G$ together with a binary operation $\cdot$ on $G$ that satisfies the following properties: closure ($a \cdot b \in G$ for all $a, b \in G$); associativity ($(a \cdot b) \cdot c = a \cdot (b \cdot c)$); the existence of a neutral element $e$ with $e \cdot a = a \cdot e = a$ for all $a \in G$; and the existence, for every $a \in G$, of an inverse $a^{-1}$ with $a \cdot a^{-1} = a^{-1} \cdot a = e$.

(Group action) Let $G$ be a group and $X$ be a set. $G$ is said to act on $X$ if there is a map $\phi: G \times X \to X$ such that $\phi(e, x) = x$ for all $x \in X$, where $e$ is the neutral element of $G$, and $\phi(g, \phi(h, x)) = \phi(g \cdot h, x)$ for all $g, h \in G$.
Commonly the abbreviation $g x = \phi(g, x)$ is used.
In this paper we consider three major families of transformation groups. These transformations act on a two-dimensional vector space $\mathbb{R}^2$. To simplify the presentation, we work within an extended space with elements $(x, y, 1)^\top \in \mathbb{R}^3$, where the third dimension is set to unity such that affine transformations can be expressed as matrix multiplications.
Translations (along the $x$ and $y$ axes):
$$T_x(t) = \begin{pmatrix} 1 & 0 & t \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (1)$$
for translations along the $x$ axis, and analogously $T_y(t)$ for translations along the $y$ axis, where the parameter $t$ can be either continuous ($t \in \mathbb{R}$) or discrete ($t \in \mathbb{Z}$).

Rotations (about the origin) of discretization $n$:
$$R(\theta_k) = \begin{pmatrix} \cos\theta_k & -\sin\theta_k & 0 \\ \sin\theta_k & \cos\theta_k & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (2)$$
where $\theta_k = \frac{2\pi k}{n}$, and the group can be infinite (continuous rotations, $\theta \in [0, 2\pi)$) or discrete ($k \in \{0, \dots, n-1\}$ with $n$ finite), in which case it is isomorphic to $\mathbb{Z}_n$, where $\mathbb{Z}_n$ denotes the cyclic group of order $n$.

Mirroring/Reflection (along the $x$ and $y$ axes):
$$M_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & -1 & 0 \\ 0 & 0 & 1 \end{pmatrix} \quad (3)$$
for mirroring along the $x$ axis, and analogously $M_y$ for the $y$ axis; each generates a group isomorphic to $\mathbb{Z}_2$.
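As a concrete illustration, the three transformation families above can be written down directly as homogeneous-coordinate matrices. The following sketch (function names are ours, not part of the paper's formalism) shows the translation, rotation, and mirroring actions on a point $(x, y, 1)^\top$:

```python
import numpy as np

def translation(tx, ty):
    """Translation along the x and y axes as a 3x3 homogeneous matrix."""
    return np.array([[1.0, 0.0, tx],
                     [0.0, 1.0, ty],
                     [0.0, 0.0, 1.0]])

def rotation(theta):
    """Rotation about the origin by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def mirror_x():
    """Reflection across the x axis (negates the y coordinate)."""
    return np.diag([1.0, -1.0, 1.0])

point = np.array([2.0, 0.0, 1.0])      # (x, y, 1): third coordinate fixed to 1
moved = translation(1.0, 3.0) @ point  # -> (3, 3, 1)
turned = rotation(np.pi / 2) @ point   # -> (0, 2, 1) up to rounding
```

Because the third coordinate is fixed to unity, composing any sequence of these actions is a plain matrix product.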
2.1.2 Generating/Drawing a line
Let us consider the scenario of drawing a horizontal line: we start with a point and, in order to create a line, we translate this point along the horizontal direction. We model this action by a continuous translation group as defined in Eq. 1. More precisely, for each element $g_t$ of the translation group we make a copy of the starting point and then transfer the copy to the desired location by letting $g_t$ act on it: this produces a new point on the canvas at distance $t$ from the starting point.
If we were to apply the whole (continuous) translation group, we would obtain an infinite line through the starting point. This is not always desirable; more commonly we would like to draw a (bounded) line segment. This can be done by noticing that, in order to create a segment, we need only a subset of the elements of the full translation group considered above. We call the subset of indices of these group elements (those that are present in the picture) the occupancy set, $O$, associated with the desired shape. In the case of continuous groups, we allow $O$ to be specified as an interval (e.g., the unit segment centred around the origin has occupancy $[-\tfrac{1}{2}, \tfrac{1}{2}]$), and for the discrete cases $O$ can be an arbitrary selection of the available indices. Furthermore, we will often use the following terminology to describe the occupancy set: full occupancy if all indices are selected; single occupancy if the set contains only one element; and arbitrary occupancy otherwise.
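To make the occupancy idea concrete, here is a minimal sketch (the discretization grid and function name are our own illustration) that applies only the translations selected by an occupancy interval, yielding a segment rather than an infinite line:

```python
import numpy as np

def translated_points(origin, ts):
    """Apply the x-translations indexed by the occupancy set ts to a point."""
    x, y = origin
    return [(x + t, y) for t in ts]

# Full continuous occupancy would give an infinite line; restricting the
# occupancy set to the interval [-0.5, 0.5] (sampled here on a grid) gives
# the unit segment centred at the origin.
occupancy = np.linspace(-0.5, 0.5, 11)
segment = translated_points((0.0, 0.0), occupancy)
```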
2.1.3 Generating/Drawing a square
Let us move on to more complex structures and consider describing the generative process behind drawing a square. Remember that the central idea of this generative process is maximization of transfer. We have previously seen how to characterize the generative process of drawing a line segment, which could be one side of the square. Let us start by drawing the top side of the square. To create a square, we wish to transfer this side via a 4-fold rotation group (Eq. 2). As before, we start by creating copies of the top side, one for each element in the transformation group, for a total of four. Then we let each element of the group act on its copy to create each side of the square. The process is depicted in Figure 1.
The generative process of the square, as described so far, starts with a point on the top side or — given the right choice of occupancy — indeed anywhere on the implied infinite line of which the side is a finite segment. In this paper, all the generative processes we consider will start at the origin (defined as the centre point of the canvas). Thus, the complete generative history of a square will start by translating the origin to a point on the top side of the square. This translation will have an occupancy set of cardinality one, containing only the index corresponding to the translation that maps the origin onto that point — it will leave no trace. After this initial translation of the origin, the generative process continues as described above.
2.1.4 Transfer as a Wreath product
In both scenarios, what happens amounts to two steps: we make copies of the original shape, one for each element in the transformation group, and then for each of these copies we apply the corresponding group element to transfer it to its desired location, form, or orientation. This construction is algebraically modelled by the wreath product between the generative history of the initial shape and the transformation group we want to employ in the transfer.

(Semidirect products) Consider two groups $N$ and $H$, with their respective group operations $\cdot_N$ and $\cdot_H$, and a group homomorphism $\varphi: H \to \mathrm{Aut}(N)$.^{1}^{1}1 $\mathrm{Aut}(N)$ is the automorphism group of $N$. An automorphism of an object is an isomorphic map from the object to itself, i.e., a mapping from the object to itself that preserves its structure. Additionally, it can be shown that the set of all automorphisms of an object forms a group under the composition operation. Now, let $N \times H$
be the set of ordered pairs
$(n, h)$, with $n \in N$, $h \in H$. We can define a binary operation on this set as follows: $(n_1, h_1) \cdot (n_2, h_2) = (n_1 \cdot_N \varphi(h_1)(n_2),\ h_1 \cdot_H h_2)$ for all $n_1, n_2 \in N$ and $h_1, h_2 \in H$. Then under this operation, the set is a group, denoted by $N \rtimes_\varphi H$ and referred to as the semidirect product of $N$ and $H$ under $\varphi$.

(Wreath product) Let $F$ and $C$ be two groups and let $\Omega$ be a set with $C$ acting on it. Let $F^\Omega$ be the direct product of the copies of $F$ indexed by the set $\Omega$: $F^\Omega = \prod_{\omega \in \Omega} F_\omega$. Then we can define the action of $C$ on $F^\Omega$ in a natural way by letting the group action of $C$ act on the indices of the product: $c \cdot (f_\omega)_{\omega \in \Omega} = (f_{c^{-1}\omega})_{\omega \in \Omega}$.
Given this action, the wreath product of $F$ and $C$ is defined as the semidirect product $F^\Omega \rtimes_\varphi C$, denoted $F \wr C$, where $\varphi$ is implicitly given by the action above.
We can see from this definition that in order to define a wreath product we need two groups and a set. The first group, $F$, will correspond to the generative history of the initial shape that we want to transfer, and the second, $C$, will correspond to the transformation group used in the transfer.^{2}^{2}2 Terminology used in Leyton's literature: the generative history of the initial shape that we would like to transfer is referred to as the fibre group and will be denoted by $F$; the group responsible for the transfer of $F$'s generative history will be referred to as the control group and will be denoted by $C$. The set corresponds to the set of indices associated with the transformation group. At this point, it should be clear that the descriptions of the generative structures will be groups under full occupancy. Partial occupancy, i.e., the situation in which only a subset of the elements of the shape is displayed, can be integrated into the group-theoretical framework by appending cyclic switches to the fibre copies before application of the control group. These can be seen as colour or on/off switches for parts of the shape.
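The copy-then-act construction, together with the on/off switches for partial occupancy, can be sketched as follows (a hypothetical example with a 4-fold rotation as control group acting on a line-segment fibre; all names are our own illustration):

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Fibre: a polyline (here the top side of a unit square, drawn at y = 0.5).
fibre = np.array([[-0.5, 0.5], [0.5, 0.5]])

# Control group: the 4-fold rotation group acting on the indices {0,1,2,3}.
control = [rot(k * np.pi / 2) for k in range(4)]

# Transfer = copy the fibre once per group element, then let each element
# act on its copy.  Occupancy switches (on/off per copy) model partial shapes.
occupancy = [True, True, True, False]          # one side left undrawn
copies = [fibre @ g.T for g, on in zip(control, occupancy) if on]
```

Switching all occupancy flags on recovers the full square; the switches are exactly the cyclic on/off switches appended to the fibre copies above.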
2.1.5 Shapes as n-fold wreath products
We would like to apply the same concept of transfer several times in the formation of an image in order to maximize reusability. For instance, suppose that once we have a square we would like to form a circle of four such squares. Using the ideas highlighted above, we could transfer the already constructed square by another 4-fold rotation group. But first, recall that the rotations we consider are only around the origin and the previously formed square is centred about the origin; thus, if we were to apply a 4-fold rotation to this square, we would end up with four coinciding copies of it. Therefore, we first need to translate the square by the intended radius of the circle we want to form. This gives rise to a new shape that can be characterized by a 5-fold wreath product:
where
- $G_1$ is the trivial group corresponding to the origin (it transfers the origin onto itself),
- $G_2$ is the continuous translation group along the $y$ axis, responsible for the vertical translation of the origin to a point on the top side,
- $G_3$ is the continuous translation group along the $x$ axis, responsible for producing an infinite line,
- $G_4$ is the first 4-fold rotation group, responsible for producing a square,
- $G_5$ is the discrete translation group, responsible for translating the formed square away from the origin by the intended radius (anticipating the rotation),
- $G_6$ is the second 4-fold rotation group, responsible for producing the circle of squares.
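Under the occupancies above (unit side length; the radius of 2 is chosen here purely for illustration), the whole chain can be unfolded numerically by applying the levels from the innermost out:

```python
import numpy as np

def T(tx, ty):
    """Translation in homogeneous coordinates."""
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], float)

def R(theta):
    """Rotation about the origin in homogeneous coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]], float)

origin = np.array([0.0, 0.0, 1.0])

# Unfold the levels from the innermost out, mirroring the wreath-product
# chain: translate the origin up, sweep a side, 4-fold rotate into a square,
# translate outwards by the radius, 4-fold rotate into a circle of squares.
side = [T(t, 0.0) @ T(0.0, 0.5) @ origin for t in np.linspace(-0.5, 0.5, 9)]
square = [R(k * np.pi / 2) @ p for k in range(4) for p in side]
circle_of_squares = [R(k * np.pi / 2) @ T(2.0, 0.0) @ p
                     for k in range(4) for p in square]
```

Each level multiplies the number of drawn elements by the size of its occupancy set, which is exactly the copy-then-act semantics of the wreath product.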
2.1.6 A Grammar for Shapes
While Leyton develops his theory as abstract mathematics, our concern is to have a concrete representation suitable for probabilistic inference. Hence, we introduce the following grammar to represent shapes.
A shape denotes (1) the generative history, the $n$-fold wreath product $G_1 \wr G_2 \wr \dots \wr G_n$, where $G_1$ is the implicit trivial group corresponding to the origin, and (2) the associated occupancies for each level, which characterize the elements of this structure observed in the picture.
We write our circle of squares example as follows:
2.2 The Wreath Process: Stochastic Shape
To model hand-drawn sketches, we define a noise process that accounts for the imperfections present in freehand drawings and that arises naturally by perturbing the generative history of the intended shape. More precisely, each transformation present in the generative history of our shape will have a noise level that accounts for the error made by the user when trying to perform the transfer corresponding to that transformation.
Let us consider that the intended structure (shape) is given by an $n$-fold wreath product; then, at each level, a noise instance is sampled to account for each application of each element of the control group. In the generative history, an element of a group is applied multiple times: as many times as it is copied by the levels above it. Although in the exact shape the copies of these transformations are the same, under the noise process each of these copies receives its own perturbation, independent of the noise instances corresponding to the other noisy copies. Each perturbation is indexed by the coordinate of the subtree on which it acts, out of the copies of that fibre group that were created by repeated transfer (each time a new transformation group was applied). To define the probability distribution of possible perturbations for a group, we assume that there exists an embedding of the group into a bigger (continuous) group, on which both noisy and non-noisy transformations are defined.^{3}^{3}3 Usually there is a natural way of defining this embedding, but we could also construct an embedding by another wreath product if no more straightforward embedding exists.
For us, this is trivially the case, as we have already considered this embedding when defining the continuous occupancy: both the continuous and discrete versions of the translation and rotation groups can be embedded in the corresponding continuous groups. The noisy transformations obtained by composing with the sampled noise instances are applied to the corresponding fibre copies, and for each control group we collect the resulting noise actions on the 2D plane into a set. The process is illustrated in Figure 2. Under this interpretation, we can define a noisy shape as the shape obtained by applying these noisy transformations. In the following, we will denote the set of all noise instances corresponding to a shape by $\mathcal{E}$.
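A sketch of this per-copy noise, for a 4-fold rotation control group with perturbations on the angle drawn in the continuous embedding (the concentration value and function name are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_rotation_copies(points, n_fold, kappa=200.0):
    """Transfer `points` by the n-fold rotation group, perturbing each
    copy's rotation angle independently (von Mises noise, living in the
    continuous embedding of the discrete rotation group)."""
    copies = []
    for k in range(n_fold):
        eps = rng.vonmises(0.0, kappa)       # this copy's own noise instance
        theta = 2 * np.pi * k / n_fold + eps
        c, s = np.cos(theta), np.sin(theta)
        g = np.array([[c, -s], [s, c]])
        copies.append(points @ g.T)
    return copies

side = np.array([[-0.5, 0.5], [0.5, 0.5]])
noisy_square = noisy_rotation_copies(side, 4)   # four slightly crooked sides
```

Because every copy draws its own `eps`, the four sides are perturbed independently, which is exactly the factorization of the noise process across copies described above.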
2.3 Prior Over Hand-Drawn Shapes
Under the Bayesian paradigm, we need to specify a prior over the model's parameters:
Prior over generative histories
$$p(S) = \prod_{i=1}^{n} p(G_i)\, p(O_i \mid G_i) \quad (4)$$
where, in our particular case of transformation groups, the prior over the group $G_i$ at any level $i$ is considered to be uniform. The prior over the occupancy $O_i$ reflects a strong preference for single or full occupancy, and with the remaining probability we consider special occupancy, which is sampled as follows: if the group is discrete, each element of the index set is switched on independently with some probability; otherwise, we define a bounding parameter (sampled around the origin) which restricts the occupancy to a finite set, and we then sample uniformly within that restriction. For continuous occupancy, in this work, we restrict ourselves to line segments of unit length and full circles.
Prior over the noise instances:
$$p(\mathcal{E} \mid S, \theta) = \prod_{i=1}^{n} p(\varepsilon_i \mid \theta_i) \quad (5)$$
where $\theta$ are the hyperparameters governing the distribution of the noise instances and $\varepsilon_i$ is the vector of noise instances at level $i$; this vector has one element per copy of the corresponding transformation. The prior over the noise instances is as follows: for translation, Gaussian noise $\varepsilon \sim \mathcal{N}(0, \sigma^2)$; for rotation, von Mises noise $p(\varepsilon) \propto e^{\kappa \cos \varepsilon} / I_0(\kappa)$, where $I_0$ is the modified Bessel function of order $0$; for mirroring, we do not consider an explicit noise action, but by the way we defined the wreath process, a mirrored copy of a given noisy shape will be different from the original, as this copy will have its own noise instances, sampled independently of the noise instances of the original.
3 Inference
Consider the grey-value mapping $D$ of an image. Our aim is to infer the generative history of the shape in this image. This amounts to inferring a structure representing the $n$-fold wreath product describing the underlying symmetries present in the shape, plus their observed occupancy. Using Bayes' rule, we can express the posterior probability as:
$$p(S, \mathcal{E} \mid D) \propto p(D \mid S, \mathcal{E})\, p(S)\, p(\mathcal{E} \mid S) \quad (6)$$
Thus, for this computation, we need to specify and evaluate a prior over generative histories of shapes and a likelihood that evaluates the input data given a shape. The prior encodes our beliefs about the kind of shapes we expect to see and may be chosen conveniently to ensure tractability. That leaves us with the computation of the likelihood, which in this case is non-trivial, as the shape description and the input image occupy very different domains. However, this is a fairly common problem that can be addressed using approximate Bayesian computation (ABC) methods, which bypass the (exact) evaluation of the true likelihood. The idea is as follows: given a parameter setting, a dataset can be simulated from the stochastic model specified by those parameters. Then a distance measure can be defined between the input data and the simulated data (which now live in the same space). If the simulated data does not match the input data within a given tolerance, then the set of parameters is rejected. This idea was particularized to MCMC simulations in [10] and [12]. Note that these require the specification of the tolerance threshold, which might itself require further inference. However, more recently, in [14] and [9], it was shown that the hard specification of this threshold can be replaced by a stochastic likelihood model. Our inference combines these ideas, as described in the next section.
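The basic ABC rejection loop described above can be sketched generically; here it is applied to a toy one-dimensional problem rather than to shapes (all names and the toy model are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(observed, prior_sample, simulate, distance, eps, n_draws=1000):
    """Keep parameter draws whose simulated data falls within eps of the
    observed data -- the basic ABC rejection scheme."""
    accepted = []
    for _ in range(n_draws):
        theta = prior_sample()
        if distance(simulate(theta), observed) < eps:
            accepted.append(theta)
    return accepted

# Toy stand-in for the shape problem: infer the mean of noisy 1-D data.
observed = 3.0
posterior = abc_rejection(
    observed,
    prior_sample=lambda: rng.uniform(-10, 10),
    simulate=lambda theta: theta + rng.normal(0, 0.1),
    distance=lambda a, b: abs(a - b),
    eps=0.5,
)
```

Replacing the hard threshold `eps` with a stochastic likelihood model, as in [14] and [9], is precisely the refinement adopted in the next section.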
3.1 ABC for the Generative History of Shapes
We start by rendering the image corresponding to the proposed shape, to get into the same domain as the data, using a deterministic rendering function. In principle, we could define a measure on the space of images and compute how far the exact rendering of our proposed model is from the input image. But defining such a measure is problematic, as most such measures will induce very sharp distributions on the rendered image. The likelihood will yield high values for exact and almost exact matches, while most other models will be given a likelihood close to zero. In other words, this approach will likely fail to discriminate close solutions from arbitrary proposals, except in the unlikely case that an (almost) perfect match is rendered.
To overcome this problem, we make use of the ideas from approximate Bayesian computation highlighted before and follow the approach in [9]. We estimate the likelihood using a stochastic likelihood model based on a stochastic image renderer. Instead of rendering the exact image, we render a noisy version of it, as illustrated in Table 1. In addition, to increase stochasticity, we apply a Gaussian blur to the rendering, specified by its window size and variance (Eq. 7): the noisy image is generated via the wreath process, given the shape and its noise instances, and the final image results from applying the Gaussian blur. Under this formulation, and keeping in mind that the rendering function, although stochastic by nature, becomes deterministic given the noise instances and the blur parameters, the posterior probability can be approximated accordingly.
Prior over the parameters of the Gaussian blur: both the window size and the variance are assigned priors.
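A minimal sketch of the blur step (a direct 2-D convolution; the kernel construction and edge-padding choice are our own assumptions, not the paper's implementation):

```python
import numpy as np

def gaussian_blur(image, sigma, window):
    """Blur a grey-scale image with a (window x window) Gaussian kernel --
    the extra stochasticity applied on top of the noisy render."""
    r = window // 2
    ax = np.arange(-r, r + 1)
    k1 = np.exp(-ax**2 / (2 * sigma**2))
    kernel = np.outer(k1, k1)
    kernel /= kernel.sum()                      # normalize to preserve mass
    padded = np.pad(image, r, mode="edge")
    out = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + window, j:j + window] * kernel)
    return out
```

The blur spreads probability mass from drawn pixels into their neighbourhood, which is what lets near-miss renders receive non-negligible likelihood.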
3.2 Empirical Likelihood
The stochastic renderer will produce a noisy instance of the model against which the input image can be evaluated. To this end, we define the empirical likelihood, assuming a Bernoulli distribution in pixel space:
$$L(D \mid R) = \prod_{(i,j)} R_{ij}^{\,D_{ij}} \,(1 - R_{ij})^{1 - D_{ij}},$$
where $R_{ij}$ denotes the (greyscale) intensity of the pixel located at $(i, j)$ in the rendered image and is interpreted here as the probability of that pixel being black, and $D_{ij}$ is the corresponding pixel of the input image.
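This pixel-wise Bernoulli likelihood can be sketched directly (in log space for numerical stability; the clipping constant is our own safeguard against zero-probability pixels):

```python
import numpy as np

def log_bernoulli_likelihood(data, render, eps=1e-6):
    """Log-likelihood of a binary input image under the rendered image,
    whose grey value at each pixel is read as the probability of black."""
    p = np.clip(render, eps, 1.0 - eps)        # guard against log(0)
    return float(np.sum(data * np.log(p) + (1 - data) * np.log(1 - p)))

data = np.array([[1, 0], [1, 1]], float)       # observed binary sketch
good = np.array([[0.9, 0.1], [0.9, 0.9]])      # render matching the data
bad = np.array([[0.1, 0.9], [0.1, 0.1]])       # render contradicting it
assert log_bernoulli_likelihood(data, good) > log_bernoulli_likelihood(data, bad)
```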
3.3 Reversible Jump MCMC for the Wreath Process
In this section, we give a general-purpose algorithm for inference in wreath processes, based on reversible jump MCMC (introduced in [4] and refined in [5]), but we particularize the proposals to exploit the structure of the wreath product. The idea is the following: we assume that the upper-level structure (the top-level groups) has the greatest impact on the appearance of a shape and hence should be kept more stable than lower-level parts of the generative history. In other words, lower levels can be explored given upper levels, but changes in upper levels are likely to lead to major revisions in the lower levels. Assuming that the higher-level structure has been detected, we propose objects on which this upper structure acts by transfer.
Auxiliary random variables play the role of matching the dimensionality of the embeddings of the current and proposed models in the acceptance ratio (Eq. 8).
Let us look at what Algorithm 1 amounts to in this case. Given a model whose structure is an $n$-fold wreath product, we wish to propose a new model. First, with a given probability, we choose which parameters to resample: the structure, the noise instances, or the blur parameters. Varying the structure, in most cases the noise instances vary too. The other two types of parameters can be sampled independently and, when possible, we would like to keep all other parameters fixed—we make local changes only in one parameter type at a time. Thus, in our case:
where the global scaling factor is sampled uniformly within a bounded range. This corresponds to our (inferred) unit. Freehand sketches might not respect a standardized unit (like cm), but we postulate that the user has in mind an implicit grid with this as its unit interval.
Noise instances proposal:
Gaussian blur proposal: the blur parameters are sampled from the prior, which was previously described. Let us now turn to the proposals on the structure of the wreath product:
Shape proposal: Given the current structure, we propose a new structure, and implicitly a new shape, as follows. We have two main types of moves: one that changes the dimensionality of the structure (the number of levels), and one that does not. The moves within a model (keeping the number of levels constant) change the occupancy sets or the individual groups, but prefer to stay within the same family of transformation groups. Changes in the dimensionality of the wreath product are as follows: we pick a random level (with higher probability for lower levels) at which to segment the structure. We keep the upper-level structure down to that level and resample (from the prior) the group on which this structure acts. As mentioned before, this corresponds to keeping the higher-level symmetries and changing the object on which these symmetries act. The nature of these proposals maintains a nested structure over models, which, together with sampling components from the prior, greatly simplifies the computation of the acceptance ratio in Eq. 8: the determinant factor is always one, and several other simplifications are possible, as the prior factorizes over levels and new fibre groups are sampled from this prior.
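The keep-the-upper-structure proposal can be sketched on a toy problem (the group vocabulary, likelihood, and geometric level choice are all illustrative stand-ins, not the paper's actual model):

```python
import numpy as np

rng = np.random.default_rng(2)
GROUPS = ["T", "R2", "R4", "M"]               # toy vocabulary of group labels

def sample_suffix(length):
    """Prior over structures: uniform over the toy vocabulary, per level."""
    return [str(rng.choice(GROUPS)) for _ in range(length)]

def log_likelihood(structure, target):
    """Toy stand-in: reward levels that match a hidden target structure."""
    return 5.0 * sum(a == b for a, b in zip(structure, target))

def infer_structure(target, n_iters=300):
    state = sample_suffix(len(target))
    for _ in range(n_iters):
        # Pick a split level, biased towards the lower levels: keep the
        # upper structure and resample everything below it from the prior.
        k = min(int(rng.geometric(0.5)), len(state))
        proposal = state[:len(state) - k] + sample_suffix(k)
        # Proposing from the prior cancels the prior terms, so the
        # acceptance ratio reduces to a likelihood ratio.
        log_a = log_likelihood(proposal, target) - log_likelihood(state, target)
        if np.log(rng.uniform()) < log_a:
            state = proposal
    return state

inferred = infer_structure(["T", "R4", "T", "R4"])
```

The cancellation in the acceptance ratio mirrors the simplification noted above: because new fibre groups are drawn from the factorized prior, only the likelihood ratio remains.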
(Table 1: input and intermediate iterations (a)–(f).)
4 Experiments
In the following, we describe a set of experiments, undertaken to illustrate the usability of the wreath process in discovering symmetry structures in noisy 2D shapes.
Noise instances as part of the model:
Firstly, we constructed a data set of images sampled from the previously defined shape prior, which we regard as ground truth. For each of these, we consider two versions: the exact rendering of the sampled wreath product and a hand-drawn version of it. On this dataset we perform two types of inference: one as described in Section 3.3, and one without accounting for the noise described by the wreath process; in the latter case the sampling procedure is similar to the one presented in Section 3.3, but there are no proposals involving the noise instances. This initial experiment was done to assess the impact and importance of keeping track of the noise instances explicitly in the model. As a result of this experiment, we observed a substantially higher average recoverability rate of structure when modelling the noise, especially for hand-drawn images. Thus, the rest of the experiments were carried out by inferring the full model, although in most cases the quantity we are primarily interested in is the structure.
Measure of performance:
We have defined two measures to quantify our results:

(Full) Recoverability: the inferred structure matches the group structure, or an equivalent version, including the right occupancy sets.

Recoverability up to occupancy: the wreath product has been successfully inferred, but the occupancy is not quite right.
An example of correctly inferred structure with slightly wrong occupancy is given in Table 2, where the top-level translation is inferred correctly, and so is the rotation group before it, but the occupancy there accounts for three elements being switched on whereas we actually observe only two. It is important to note that these measures are by no means optimal for assessing recoverability of structure: for instance, the last two examples in Table 3 score as failures under both measures, although clearly a lot of the structure, in particular the higher-level control groups, is recovered. Unfortunately, quantifying partial recoverability is very problematic because of the equivalence between models under projection onto the canvas and the limited occupancy. In general, there are several possible explanations of a partially observed structure, and various ways of constructing the same object.
4.1 Recovering Samples From the Prior
To evaluate our model and explore its capabilities of recovering structure, we construct a data set of examples sampled from the prior. We limit the number of groups in the composition, since higher complexities tend to lead to highly dense images, for which the number of copies of basic fibre groups tends to be very high, requiring a high resolution to properly distinguish them. On the other hand, if the occupancy is low for such a dense structure, there is very little information to differentiate between possible explanations; such cases usually look random to the naked eye. Sample runs can be seen in Table 3 and quantitative results are reported in Table 4.
(Table 3: input, intermediate iterations, and inferred image (MAP) for exact and hand-drawn samples.)
(Table 4: recoverability and recoverability up to occupancy for the exact-shape and hand-drawn data sets.)
4.2 Applications
Recovering structure from partial occupancy.
In our dataset, we included partial occupancy at different levels, and by the nature of our sampling we revisit and propose occupancy changes with higher probability at the top levels. In the context of sketches, these examples correspond to partially observed structure (unfinished drawings) for which, given enough copies, the intended structure can be inferred and employed to make suggestions or automatic fill-ins. Some examples of such samples can be viewed in Table 5. We also report the sample with the highest likelihood and the sample with the highest occurrence in the posterior. This is not a simple inference problem: usually the full structure is inferred before a perfect recovery is achieved, so the sampler has to be quite confident in the discovered structure in order to overcome not fully explaining the data, which is penalized by the likelihood. This trade-off is mediated by the noise instances and the blurring intensity. Once the model starts to match parts of the input well, the blur width and variance start to decrease, which increases the likelihood. This is only possible if the noise instances match well the perturbations present in the input. This is why accounting for the noise instances in the model was found to be essential for hand-drawn samples.
(Table 5: input, inferred image, inferred structure, and possible expressions.)
Common regular structures.
As most of the examples sampled from the prior looked rather abstract, especially the ones with more complex structure, we now look at some more common regular structures. First, we look into recovering regular polygons. We have already seen the example of the square, and we can express any regular polygon in a similar fashion. One way of doing it is: starting with the origin, we translate it horizontally to make a line, then translate the line vertically, and then perform the $n$-fold rotation. Below is the description in our grammar:
.
The above control group, applied to the origin, will produce an $n$-sided regular polygon of a given side length and height. The structure is quite simple and arises naturally in various samples from the prior, but exact recovery is a more challenging problem, as we have a strong preference for occupancy values that are integer multiples of the scale unit. In general, we found this to be a reasonable assumption, but in this particular case there is a deterministic relation between the side length and the height, and in most cases there is no choice of unit for which both are integer multiples of it.

The recovery of the rotation and translation symmetries is not affected, but the deterministic (constraint) relationship between the side length and the height of exact regular polygons actually encodes additional structure. This can be captured by a slight change to the control group^{4}^{4}4 We define the scaling group as the group of uniform scalings about the origin, represented in homogeneous coordinates by the matrices $\mathrm{diag}(s, s, 1)$, $s > 0$. that incorporates the scaling group:
(9)
The control group of Eq. 9 can be applied to any other fibre group to create a regular polygon using this fibre group as the building block.
Predefined grid-like structures.
We also tried some common regular structures (Table 8), such as grids having regular polygons as fibre groups. To speed up the inference, we predefined the regular polygon control group described in Eq. 9 and used it in the proposal mechanism as a preferred structure. We report the shape scoring the maximum likelihood and the one with the highest posterior. Inference prefers simpler explanations, as can be seen from the third and fourth examples. The complexity of the original shapes is slightly higher, but the simpler explanation accounts well for actual fibre copies in the input, and in fact for most of the pixels in the input. In principle, we could force the likelihood to penalize unexplained pixels more heavily if full recoverability is important. But, as seen before, a more forgiving likelihood allows for inference of structure with unobserved copies. Depending on the application, one would need to trade off complexity against fidelity to the input.
(Table 8: input images and inferred shapes.)
Architectural sketches: Floor Plans
Lastly, we downloaded floor plan sketches of two famous buildings: the Dome of the Rock,^{5}^{5}5http://en.wikipedia.org/wiki/Dome_of_the_Rock representative of Islamic architecture, and the Villa La Rotonda,^{6}^{6}6http://en.wikipedia.org/wiki/Villa_Rotonda a landmark of Palladian architecture, as presented in Table 9.
(Table 9: input floor plans and samples from the posterior.)
5 Related Work
Wreath products have been applied previously in computer vision and image processing, in particular for multiresolution analysis generalizing approaches based on the Haar/Fourier transform [3, 2]. Treating vision as an inverse inference problem aims to estimate the causes and factors that describe a generative history, generally proposing some hierarchical representation. Such approaches usually employ a bottom-up generative process coupled with some kind of top-down validation and have been successfully used in image and scene parsing, but they usually require expert knowledge in setting up the hierarchy and encoding a known high-level structure (like spatial relationships between objects/primitives) [13, 6]. In contrast, the wreath process automatically detects this structure by maximization of transfer.
Most literature on sketch beautification employs beautification by recognition: a vocabulary of primitives is provided, and every object in the data must be represented in that vocabulary, which by itself limits generalization. More recently, the idea of constructing more complex objects out of a group of easily detectable primitives was explored in [11, 7]. Note that such methods could be used in conjunction with the wreath process.
6 Conclusion
In summary, the three main contributions of this paper are: (1) we propose the stochastic wreath process as a new, highly structured random point process, generalizing Leyton’s generative theory of shape; (2) we propose an inference scheme for recovering the structure and parameters of the wreath process from a given observed pixel image; (3) we report experimental results of the inference on both model-generated and hand-drawn images of geometric shapes.
While our experiments were restricted to the domain of two-dimensional monochromatic geometric figures, the same kind of hierarchical generative model can also be applied to three-dimensional shapes. Moreover, as mentioned in Leyton’s book, the action of the group itself can be different from inking and could also include cutting away material or similar shape-creating actions.
Finally, a wreath product representation can be viewed as providing a natural coordinate system for a shape in the most general sense. For example, in the case of the square, the wreath product representation provides a set of natural coordinates for every point on the square specifying which side the point is on, and where on that side it is located. In this sense, discovering the underlying wreath process of a shape can be understood as finding a meaningful coordinate system for describing parts of that shape. This principle can be generalized to other structures, including finite state automata, and the stochastic wreath process and associated inference might find applications in such other domains, for example in the analysis of genetic regulatory networks as outlined in [1].
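The coordinate-system view can be made concrete for the square. The hypothetical helpers below (our own illustration, not part of the paper's formalism) map a perimeter parameter to a pair consisting of a control-group coordinate (which side) and a fiber coordinate (where on that side), and render such coordinates back to a point:

```python
def square_coordinates(t):
    """Map t in [0, 4) along the unit square's perimeter to wreath-style
    coordinates: (side, s), where side in {0,1,2,3} is the control-group
    coordinate (which edge) and s in [0,1) is the fiber coordinate
    (position along that edge)."""
    side = int(t) % 4   # which copy of the edge fiber
    s = t - int(t)      # where on that edge
    return side, s

def to_point(side, s):
    """Inverse rendering: wreath-style coordinates back to a point on the
    unit square with corners (0,0), (1,0), (1,1), (0,1)."""
    corners = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
    x0, y0 = corners[side]
    x1, y1 = corners[(side + 1) % 4]
    return (x0 + s * (x1 - x0), y0 + s * (y1 - y0))
```

The factorization into (side, s) mirrors the wreath product's control/fiber split: editing the fiber coordinate moves a point along its edge, while acting with the control group moves it to the corresponding position on another edge.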
References
 [1] Attila Egri-Nagy and Chrystopher L. Nehaniv. Hierarchical coordinate systems for understanding complexity and its evolution, with applications to genetic regulatory networks. Artificial Life, 14(3):299–312, 2008.
 [2] Richard Foote, Gagan Mirchandani, and Daniel Rockmore. Two-dimensional wreath product group-based image processing. Journal of Symbolic Computation, 37(2):187–207, 2004.
 [3] Richard Foote, Gagan Mirchandani, Daniel N. Rockmore, Dennis Healy, and Tim Olson. A wreath product group approach to signal and image processing. I. Multiresolution analysis. IEEE Transactions on Signal Processing, 48(1):102–132, 2000.
 [4] Peter J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82:711–732, 1995.
 [5] Peter J. Green and David I. Hastie. Reversible jump MCMC. Genetics, 155(3):1391–1403, 2009.
 [6] Feng Han and Song-Chun Zhu. Bottom-up/top-down image parsing with attribute grammar. IEEE Transactions on Pattern Analysis and Machine Intelligence, 31(1):59–73, 2009.
 [7] Levent Burak Kara and Thomas F. Stahovich. Hierarchical parsing and recognition of hand-sketched diagrams. In Proceedings of the 17th Annual ACM Symposium on User Interface Software and Technology, UIST ’04, pages 13–22, New York, NY, USA, 2004. ACM.
 [8] Michael Leyton. A Generative Theory of Shape. Number LNCS 2145 in Lecture Notes in Computer Science. Springer-Verlag, 2001.
 [9] Vikash Mansinghka, Tejas D. Kulkarni, Yura N. Perov, and Josh Tenenbaum. Approximate Bayesian image interpretation using generative probabilistic graphics programs. In Advances in Neural Information Processing Systems, pages 1520–1528, 2013.
 [10] Paul Marjoram, John Molitor, Vincent Plagnol, and Simon Tavaré. Markov chain Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 100(26):15324–15328, 2003.
 [11] Brandon Paulson and Tracy Hammond. PaleoSketch: accurate primitive sketch recognition and beautification. In Proceedings of the 13th International Conference on Intelligent User Interfaces, IUI ’08, pages 1–10, New York, NY, USA, 2008. ACM.
 [12] S. A. Sisson, Y. Fan, and Mark M. Tanaka. Sequential Monte Carlo without likelihoods. Proceedings of the National Academy of Sciences, 104(6):1760–1765, 2007.
 [13] Zhuowen Tu, Xiangrong Chen, Alan L. Yuille, and Song-Chun Zhu. Image parsing: Unifying segmentation, detection, and recognition. International Journal of Computer Vision, 63(2):113–140, 2005.
 [14] R. D. Wilkinson. Approximate Bayesian computation (ABC) gives exact results under the assumption of model error. ArXiv e-prints, November 2008.