# Shadow Simulated Annealing algorithm: a new tool for global optimisation and statistical inference

This paper develops a new global optimisation method that applies to a family of criteria that are not entirely known. This family includes the criteria obtained from the class of posteriors whose normalising constants are analytically intractable. The procedure applies to posterior probability densities that are continuously differentiable with respect to their parameters. The proposed approach avoids the re-sampling needed for classical Monte Carlo maximum likelihood inference, while providing the missing convergence properties of ABC based methods. Results on simulated and real data are presented. The real data application fits an inhomogeneous area interaction point process to cosmological data. The obtained results validate two important aspects of the galaxy distribution in our Universe: the proximity of the galaxies to the cosmic filament network, together with territorial clustering at a given range of interactions. Finally, conclusions and perspectives are outlined.


## 1 Introduction

A large class of the mathematical questions arising in data science can be formulated as optimisation problems. The complexity of the data structures requires more and more elaborate models, with a large number of parameters controlling the different aspects outlined by the data observation process. Within this context, a typical question is: what is the most probable model able to reproduce the behaviour exhibited by the analysed data? Clearly, a possible answer to this question assumes the existence of a class of models together with priors associated to their parameters.

A natural way to answer this question is to compute the global maximum of the induced posterior distribution. The classical simulated annealing framework provides a solution to this problem, provided sampling from the posterior distribution is possible.

Sampling posterior distributions is still a challenging mathematical problem. This is due to the fact that the proposed simulation algorithms may sometimes be required to compute analytically intractable quantities. An example of such a quantity is the normalising constant of the probability density describing the considered model.

If only parameter estimation is considered, the common solution is to perform maximum likelihood computations based on Monte Carlo simulations. This framework allows the user to benefit from the whole theoretical power of likelihood inference, if one can afford the price to pay in terms of computational cost. The computational cost may be excessively high whenever the initial condition is too far away from the desired solution. Furthermore, this phenomenon introduces numerical instability in the proposed solution. The only reliable strategy within this context is to re-sample the model as often as possible, hence increasing the computational cost.

Within the last decade, a new methodological framework for statistical inference, Approximate Bayesian Computation (ABC), has extended Monte Carlo likelihood based inference by providing solutions for sampling approximately from the posterior distribution. The authors in [17] proposed a new algorithm, ABC Shadow, that overcomes the main drawback of the previously mentioned ABC methods: ABC Shadow allows the output distribution of the algorithm to be as close as desired to the aimed posterior distribution.

The work presented in this paper leads directly to a new method of parameter estimation based on a simulated annealing algorithm. To better outline its interest, let us consider the following example, inspired by applications in spatial data analysis.

Let y be an object pattern observed in a compact window W. The observed pattern is supposed to be the realisation of a spatial process. Such a process is given by the probability density

 p(y|θ)=exp[−U(y|θ)]/ζ(θ) (1)

with U(y|θ) the energy function and ζ(θ) the normalising constant. The model given by (1) may be considered as a Gibbs process, and it may represent a random graph, a Markov random field or a marked point process. Let p(θ|y) be the conditional distribution of the model parameters, or the posterior law

 p(θ|y)=exp[−U(y|θ)]p(θ)/[Z(y)ζ(θ)], (2)

where p(θ) is the prior density for the model parameters and Z(y) the normalising constant. The posterior law is defined on the parameter space Θ. For simplicity, the parameter space is considered to be a compact region of ℝ^d, with d the size of the parameter vector. Let ν be the corresponding Lebesgue measure. The parameter space is endowed with its Borel algebra T.

In the following, it is assumed that the probability density p(θ|y) is strictly positive and continuously differentiable with respect to θ. This hypothesis is strong but remains realistic, since it is often required by practical applications.

This paper constructs and develops a simulated annealing method to compute:

 ˆθ=argmaxθ∈Θp(θ|y).

The difficulty of the problem is due to the fact that the normalising constant ζ(θ) is not available in analytic closed form. Hence, special strategies are required to sample from the posterior distribution (2) in order to implement an optimisation procedure.

The plan of the paper is as follows. First, an Ideal Markov chain (IC) is constructed. This chain has the posterior distribution of interest as equilibrium distribution, so, in theory, it can be used to sample from this distribution. Next, an Ideal Simulated Annealing (ISA) process built from a non-homogeneous IC chain is constructed, and the convergence properties of the process are given. Despite its good theoretical properties, the ISA process cannot be used to build algorithms for practical use, since it requires the computation of normalising constants of the form ζ(θ). The fourth section presents a solution to this problem. First, an approximate sampling mechanism called the Shadow chain (SC) is presented; the SC is able to follow the IC closely, within some fixed limits. Based on the SC, a Shadow Simulated Annealing (SSA) process is built. This process is controlled by two parameters evolving slowly to zero. This double control allows the derivation of convergence properties of the process towards the global optimum we are interested in. These theoretical results allow the construction of an SSA algorithm. The algorithm is applied to simulated and real data in the fifth section. The real data application fits an inhomogeneous point process with interactions to a cosmological data set, in order to obtain essential characteristics of the galaxy distribution in our Universe. At the end of the paper, conclusions and perspectives are formulated.

## 2 Ideal Chain for posterior sampling

In theory, Markov Chain Monte Carlo algorithms may be used for sampling p(θ|y). For instance, let us consider the general Metropolis-Hastings algorithm. Assuming the system is in the state θ, this algorithm first chooses a new value ψ according to a proposal density q(θ→ψ). The value is then accepted with probability αi(θ→ψ) given by

 αi(θ→ψ)=min{1,[p(ψ|y)/p(θ|y)]×[q(ψ→θ)/q(θ→ψ)]}. (3)

The transition kernel of the Markov chain simulated by this algorithm is given by, for every (θ,A)∈Θ×T,

 Pi(θ,A)=∫Θαi(θ→ψ)q(θ→ψ)1{ψ∈A}dψ+1{θ∈A}[1−∫Θαi(θ→ψ)q(θ→ψ)dψ]. (4)
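As an illustration of the transition mechanism (3)-(4), here is a minimal Metropolis-Hastings sketch for a generic target known only up to a normalising constant. The Gaussian target and the symmetric uniform proposal below are illustrative choices for the example, not part of the paper's setting.

```python
import math
import random

def mh_step(theta, log_target, propose, log_q):
    """One Metropolis-Hastings transition: draw psi from the proposal, accept
    with probability min{1, p(psi)q(psi->theta) / (p(theta)q(theta->psi))}."""
    psi = propose(theta)
    log_alpha = (log_target(psi) - log_target(theta)
                 + log_q(psi, theta) - log_q(theta, psi))
    return psi if math.log(random.random()) < min(0.0, log_alpha) else theta

# Toy target: standard Gaussian, used only up to its normalising constant.
log_target = lambda t: -0.5 * t * t
propose = lambda t: t + random.uniform(-0.5, 0.5)   # symmetric proposal
log_q = lambda a, b: 0.0                            # its q-ratio cancels

random.seed(0)
theta, samples = 3.0, []
for _ in range(20000):
    theta = mh_step(theta, log_target, propose, log_q)
    samples.append(theta)
mean = sum(samples[5000:]) / len(samples[5000:])    # near 0 after burn-in
```

Note that only ratios of the target enter `mh_step`; this is exactly why the intractable constants Z(y) cancel, while ζ(θ) does not, which motivates the special proposal (6) below.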

Let us recall that the transition kernel of a Markov chain may act on both functions and measures, as it follows

 Pif(x)=∫ΘPi(x,dy)f(y),μPi(A)=∫Θμ(dθ)Pi(θ,A). (5)

The conditions that the proposal density has to meet, so that the simulated Markov chain has a unique equilibrium distribution

 π(A)=∫Ap(θ|y)dν(θ),

are rather mild [24]. Furthermore, if p(θ|y) and q are bounded and bounded away from zero on the compact Θ, then the simulated chain is uniformly ergodic ([24] Prop. 2, [13] Thm. 2.2). Hence, there exist a positive constant M and a constant ρ∈(0,1) such that

 supθ∈Θ∥P^n_i(θ,⋅)−π(⋅)∥≤Mρ^n, n∈ℕ.

For a fixed δ>0, a parameter value θ and a realisation x of the model given by (1), let us consider the proposal density

 q(θ→ψ)=qδ(θ→ψ|x)=[f(x|ψ)/ζ(ψ)]/I(θ,δ,x)·1b(θ,δ/2){ψ} (6)

with f(x|ψ)=exp[−U(x|ψ)]. Here 1b(θ,δ/2){ψ} is the indicator function over b(θ,δ/2), which is the ball of centre θ and radius δ/2. Finally, I(θ,δ,x) is the quantity given by the integral

 I(θ,δ,x)=∫b(θ,δ/2)[f(x|ϕ)/ζ(ϕ)]dϕ.

This choice for q guarantees the convergence of the chain towards π and avoids the evaluation of the normalising constant ratio in (3). We call the chain induced by these proposals the ideal chain. Nevertheless, the proposal (6) requires the computation of integrals such as I(θ,δ,x), and this is as difficult as the computation of the normalising constant ratio. Later in the paper it will be shown how this construction allows a natural approximation of the ideal chain: the shadow chain.

## 3 Ideal Simulated Annealing process

### 3.1 Principle

The construction and the properties of a Simulated Annealing (SA) process in a general state space were investigated by [6, 7].

The SA process is built with the following ingredients: a function h to optimise, a Markov transition kernel with equilibrium distribution π and a cooling schedule for the temperature parameter T.

The function h is assumed to be continuously differentiable in θ, and its global minimum corresponds to the global maximum of the posterior. In the present situation, h is obtained by taking the negative logarithm of (2):

 h(θ)=U(y|θ)+logζ(θ)−logp(θ)+logZ(y). (7)

It represents, up to the sign and an additive constant, the loglikelihood function to which the prior term is added. Clearly, if p(θ) is the uniform distribution over Θ, then minimising h leads to the maximum likelihood estimate. The transition kernel is given by (4). The cooling schedule for the temperature is a logarithmic one, and it results from the proofs of the SA convergence.

The SA process simulates iteratively a sequence of distributions

 πn(A)=∫Apn(θ)dν(θ)=∫A[exp(−h(θ)/Tn)/cn]dν(θ) (8)

with pn the density of πn and cn=∫Θexp(−h(θ)/Tn)dν(θ), while the temperature Tn goes slowly to zero.
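On a toy discretised parameter space, the effect of the cooling in (8) is easy to check numerically: as the temperature decreases, the normalised weights exp(−h(θ)/T)/c pile up on the minimiser of h. The quadratic h below is purely illustrative.

```python
import math

def annealed_weights(h_vals, T):
    """Normalised weights exp(-h/T)/c on a finite grid: a discrete
    version of the annealed distribution (8)."""
    w = [math.exp(-h / T) for h in h_vals]
    c = sum(w)
    return [x / c for x in w]

# h on a grid over [-1, 1], minimised at theta = 0 (grid index 50)
grid = [-1.0 + 0.02 * i for i in range(101)]
h = [t * t for t in grid]

mass_hot = annealed_weights(h, 1.0)[50]     # high T: mass spread out
mass_cold = annealed_weights(h, 0.001)[50]  # low T: mass concentrated
```

The probability mass at the minimiser grows by more than an order of magnitude between the two temperatures, which is the discrete analogue of the weak convergence (9).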

Each distribution πn is simulated using a transition kernel Pn having it as equilibrium distribution. This kernel is obtained by modifying (4) in order to sample from πn. The kernel sequence induces an inhomogeneous Markov chain.

At low temperatures, the process converges weakly towards the global optimum θopt of h, that is

 πn⟶n→∞δθopt (9)

with δθopt the Dirac measure in θopt, while considering the Hausdorff topology [6, 7].

### 3.2 Definition, properties and convergence

In order to define the SA process and to present its main convergence result, the Dobrushin coefficient, its properties and some extra notations are introduced.

Let Θ be a state space endowed with its Borel algebra T, and let P be the set of all probability measures on (Θ,T). Throughout the entire paper, the norm ∥⋅∥ is the total variation norm, that is, for any measure μ:

 ∥μ∥=supA∈T|μ(A)|=sup|g|≤1|μ(g)|=sup|g|≤1|∫Θg(θ)μ(dθ)|. (10)

Let be a transition kernel on . The Dobrushin coefficient is defined as :

 c(P)=supθ,ψ∈Θ∥P(θ,⋅)−P(ψ,⋅)∥=supθ,ψ∈ΘsupA∈T|P(θ,A)−P(ψ,A)|. (11)
###### Lemma 1.

The Dobrushin coefficient, defined by (11), verifies the following properties:

(i) 0≤c(P)≤1;

(ii) ∥μP−νP∥≤c(P)∥μ−ν∥, for all μ,ν∈P;

(iii) c(P1P2)≤c(P1)c(P2).

A proof of the previous result is given below. The reader may also refer to [8, 25, 10, 29] for more details.

Proof: (i). Clearly c(P)≥0 by definition (11). The upper bound is also immediate, by using the property that P is a transition kernel: for all θ,ψ∈Θ and all A∈T, 0≤P(θ,A)≤1, so |P(θ,A)−P(ψ,A)|≤1. Thus, by taking the supremum in θ, ψ and A, it results that c(P)≤1.

(ii). Let us recall first a classical result. If μ and ν are from P and g is a bounded measurable function, we have:

 |μ(g)−ν(g)|≤(1/2)supθ,ψ∈Θ|g(θ)−g(ψ)|∥μ−ν∥. (12)

Let us prove now the inequality. By the definition of the total variation norm (10), the formula (5) and the definition (11) we get:

 ∥μP−νP∥=supg,|g|≤1|(μP)g−(νP)g|≤supg,|g|≤1{12supθ,ψ∈Θ|Pg(θ)−Pg(ψ)|∥μ−ν∥}≤c(P)∥μ−ν∥, (13)

by applying (12).

(iii). It can be directly obtained by using (11) and the property (ii):

 c(P1P2)=supθ,ψ∈Θ∥P1P2(θ,.)−P1P2(ψ,.)∥=supθ,ψ∈Θ∥P1(θ,.)P2−P1(ψ,.)P2∥≤c(P1)c(P2).

An immediate consequence of the previous result is that

 c(P1P2⋯Pn)≤n∏i=1c(Pi). (14)
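On a finite state space, the total variation norm of a difference of rows reduces to half their L1 distance, so the Dobrushin coefficient (11) and the submultiplicativity bound (14) can be checked directly. The two transition matrices below are arbitrary examples.

```python
def dobrushin(P):
    """Dobrushin coefficient of a finite transition matrix: the maximum,
    over pairs of rows, of the total variation distance between them
    (half the L1 distance)."""
    n = len(P)
    return max(
        0.5 * sum(abs(P[i][k] - P[j][k]) for k in range(n))
        for i in range(n) for j in range(n)
    )

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P1 = [[0.9, 0.1, 0.0], [0.2, 0.7, 0.1], [0.0, 0.3, 0.7]]
P2 = [[0.5, 0.5, 0.0], [0.1, 0.8, 0.1], [0.2, 0.2, 0.6]]
c12 = dobrushin(matmul(P1, P2))
bound = dobrushin(P1) * dobrushin(P2)
# submultiplicativity (14): c(P1 P2) <= c(P1) c(P2)
```

For these matrices c(P1)=0.9 and c(P2)=0.6, so products of kernels contract faster than either factor alone; iterating this contraction is precisely what drives the ergodicity arguments below.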

Assume that (Pn)n≥1 is a sequence of transition kernels on (Θ,T) and that μ0 is a given initial distribution. The sequence and the distribution define a discrete time non-homogeneous Markov process with the state space Θ. Here, we are interested in the successive distributions μn of the random variables Xn. Throughout the paper, for m<k, the following notation is used

 P(m,k)=Pm+1Pm+2⋯Pk,

so that one has μk=μmP(m,k).

In the following, it is assumed that the function to optimise h is measurable, positive and bounded. For the sake of simplicity, it is assumed that the global minimum of h is scaled to zero; a global maximum is obtained by considering the function −h. It is denoted by

 Mh={θ∈Θ|h(θ)=0} (15)

the minimum set of h, and by Δh the maximum variation of h.

###### Definition 1.

Let h be the function to be optimised on the state space Θ, (Pj) a sequence of transition kernels and (Tj) a cooling schedule. A simulated annealing process is the non-homogeneous Markov process on Θ defined by

 Pj(θ,A)=∫Θαj(θ→ψ)q(θ→ψ)1{ψ∈A}dψ+1{θ∈A}[1−∫Θαj(θ→ψ)q(θ→ψ)dψ], (16)

with

 αj(θ→ψ)=min{1,[exp(−h(ψ))/exp(−h(θ))]^{1/Tj}×[q(ψ→θ)/q(θ→ψ)]}.

Again there is a lot of freedom in choosing the proposal distribution. Similarly to (6), we consider the distribution

 q(θ→ψ)=qδ(θ→ψ|x)=[f(x|ψ)/ζ(ψ)]^{1/Tj}/Ij(θ,δ,x)·1b(θ,δ/2){ψ} (17)

for a fixed δ>0, a parameter value θ and a realisation x of the model (1). The quantity Ij(θ,δ,x) is given by the integral

 Ij(θ,δ,x)=∫b(θ,δ/2)[f(x|ϕ)/ζ(ϕ)]^{1/Tj}dϕ.

The SA process built with the proposals (17) is called the Ideal Simulated Annealing (ISA) process.

The following result states the convergence of the SA process for the optimisation of our problem, that is, whenever ν(Mh)>0 (see [6, 7]), where Mh is the set defined in (15).

###### Theorem 1.

Let (μj) be a SA process. Suppose that there exist an increasing sequence of integers (nj), a cooling schedule (Tj) decreasing to zero and a number d∈[0,1). Assume also that the following conditions hold:

(i) For each j

 c(P(nj,n_{j+1}))≤1−(1−d)exp[−∑_{l=nj+1}^{n_{j+1}}Δh/Tl]; (18)

(ii) ∑_{j=1}^{∞}exp[−Δh∑_{l=nj+1}^{n_{j+1}}1/Tl]=∞;

(iii) lim_{n→∞}Lh(1/Tn)>0, where Lh(β)=∫Θexp(−βh(θ))dν(θ).

Then

 limj→∞∥μj−πj∥=0. (19)

The proof of this theorem is an adaptation of the proof given in [6].

Proof: Suppose that ε>0 is given. The condition (ii) implies that one can find integers rk<k such that:

 ∑_{j=rk}^{k}exp[−Δh∑_{l=nj+1}^{n_{j+1}}1/Tl]>−log(ε/4)/(1−d). (20)

By using now the standard inequality 1−x≤exp(−x), for x≥0, together with the relation (20), one gets:

 ∏_{j=rk}^{k}[1−(1−d)exp(−Δh∑_{l=nj+1}^{n_{j+1}}1/Tl)]≤exp[−(1−d)∑_{j=rk}^{k}exp(−Δh∑_{l=nj+1}^{n_{j+1}}1/Tl)]<ε/4. (21)

The condition (i) is used to obtain

 c(P(nj,nj+1))≤1−(1−d)exp⎛⎝−Δhnj+1∑l=nj+11Tl⎞⎠.

By taking now the product of these relations for j=rk,…,k, it results

 k∏j=rkc(P(nj,nj+1))≤k∏j=rk⎡⎣1−(1−d)exp⎛⎝−Δhnj+1∑l=nj+11Tl⎞⎠⎤⎦. (22)

And now, by embedding (21) within (22), we obtain:

 c(P_{n_{rk}+1}P_{n_{rk}+2}…P_{n_{k+1}})<ε/4. (23)

By using hypothesis (iii), the previous integers k and rk can also be chosen such that

 2log[Lh(1/T_{n_{rk}})/Lh(1/T_{n_{k+1}})]<ε. (24)

Following ([6], Thm. 5.1), for k≤n we have

 ∑_{i=k+1}^{n}∥πi−πi−1∥≤2log[Lh(1/Tk)/Lh(1/Tn)]. (25)

Thus, by writing (25) with the adequate indexes and putting the result together with (24), it comes out that:

 ∑_{l=n_{rk}}^{n_{k+1}−1}∥πl−πl−1∥<ε. (26)

Consider now two positive integers M and N such that M<N, and recall the notation, for m<k,

 P(m,k)=Pm+1Pm+2…Pk.

Let us prove by induction that

 πMP(M,N)−πN=N−1∑l=M(πl−πl+1)P(l,N). (27)

For the verification step of this relation, consider M fixed and take N=M+1. This gives the following equality:

 πMP(M,M+1)−πM+1=(πM−πM+1)P(M,M+1),

which is verified since πM+1 is a stationary distribution of PM+1 and thus πM+1P(M,M+1)=πM+1.

Assuming now that the relation (27) is verified for fixed M and N with M<N, let us prove that (27) is also true for N+1. This leads to:

 πMP(M,N+1)−πN+1=N∑l=M(πl−πl+1)P(l,N+1) (28)

which is equivalent to

 (πMP(M,N)−πN+1)PN+1=N−1∑l=M(πl−πl+1)P(l,N)PN+1+πNPN+1−πN+1PN+1,

that becomes

 (πMP(M,N)−πN)PN+1=N−1∑l=M(πl−πl+1)P(l,N)PN+1

and this is true since (27) is assumed for M and N. Hence (27) is valid for all M<N.

Let us now complete the proof of the Theorem 1.

Let m be given and take k′′ such that n_{k′′+1}≤m≤n_{k′′+2}. Since μm=μ_{n_{rk′′}}P(n_{rk′′},m), we have:

 ∥μm−πm∥=∥μnrk′′P(nrk′′,m)−πm∥≤∥(μnrk′′−πnrk′′)P(nrk′′,m)∥+∥πnrk′′P(nrk′′,m)−πm∥. (29)

In this last inequality the first term on the right hand side can be controlled by using (13) and (23)

 ∥(μnrk′′−πnrk′′)P(nrk′′,m)∥≤2c(P(nrk′′,m))≤2c(P(nrk′′,nk′′+1))<ε2. (30)

The second term can be expressed in the following form by using (27):

 ∥πnrk′′P(nrk′′,m)−πm∥≤m−1∑l=nrk′′∥(πl+1−πl)P(l,m)∥≤nk′′+2−1∑l=nrk′′∥πl−πl+1∥<ε2, (31)

by making use of the inequality (26). Combining now (30) and (31) in (29) gives finally, for m big enough:

 ∥μm−πm∥<ε.

As ε is arbitrary, the result of the theorem follows. This concludes the proof.

As a consequence of Theorem 1, the following is obtained:

###### Corollary 1.

Suppose that the conditions of Theorem 1 are satisfied and suppose also that we have the weak convergence limj→∞πj=π. Then we have also the weak convergence

 limj→+∞μj=π.

The following important result is also stated.

###### Corollary 2.

Suppose that the hypotheses of Theorem 1 are satisfied for the simulated annealing process (μj). Suppose also that h achieves its global minimum at a unique point θopt∈Θ. Then we have the weak convergence

 limj→+∞μj=δθopt.
###### Remark 1.

The first two conditions in Theorem 1 ensure the weak ergodicity of the SA process. The condition (i) is fulfilled for transition kernels built with rather general proposal densities ([6], Lemma 4.1, Thm. 4.2). The second condition (ii) is fulfilled if a logarithmic cooling schedule is used for the temperature,

 Tj=K/log(j+2) with K>0,

which is similar to the cooling schedules obtained for maximising Markov random fields or marked point process probability densities [4, 16]. The third condition (iii) is a bound for the sum of distances between two equilibrium distributions that correspond to two different temperatures ([6], Thm. 3.2). It is the key condition: whenever ν(Mh)>0, it allows the weak convergence of the process to be reduced to the weak convergence of the equilibrium distributions ([7], p. 44).
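A minimal sketch of this logarithmic schedule (the constant K = 1 is an arbitrary illustrative choice) shows how slowly it decreases:

```python
import math

def log_cooling(j, K=1.0):
    """Logarithmic cooling schedule T_j = K / log(j + 2)."""
    return K / math.log(j + 2)

temps = [log_cooling(j) for j in range(100000)]
# decreasing, but extremely slowly: T_99999 is still about 0.087
```

After 100 000 iterations the temperature has only dropped to roughly 0.09; this slowness is the price of the convergence guarantee, and it explains why practical runs, like the one in Section 5, resort to faster geometric schedules.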

The previous results show that the ISA process given by (17) converges weakly towards the global optimum of the function (7). Still, these results cannot be transformed directly into an optimisation algorithm for practical use, due to the need to compute normalising constants of the form ζ(θ). The next section shows how to overcome this drawback by building an alternative chain able to follow the path of the theoretical ISA process as closely as desired.

## 4 Shadow Simulated Annealing process

The authors in [17] proposed an algorithm, ABC Shadow, able to sample from posterior distributions of the form (2). This section presents this sampling method, builds a SA process based on it and derives convergence results. This new process is called the Shadow Simulated Annealing (SSA) process.

### 4.1 ABC Shadow sampling algorithm

The idea of the ABC Shadow algorithm [17] is to construct a shadow Markov chain able to follow the ideal Markov chain given by (16). The steps of the algorithm are given below.

###### Algorithm 1.

ABC Shadow: fix n∈ℕ and δ>0. Assume the observed pattern is y and the current state is θ0.

1. Generate x according to p(x|θ0).

2. For k=1 to n do

• Generate a new candidate ψ following the density Uδ(θk−1→ψ) defined by

 Uδ(θ→ψ)=(1/Vδ)1b(θ,δ/2){ψ},

with Vδ the volume of the ball b(θ,δ/2).

• The new state θk=ψ is accepted with probability αs(θk−1→θk) given by

 αs(θk−1→θk)= (32) =min{1,[p(θk|y)/p(θk−1|y)]×[f(x|θk−1)ζ(θk)1b(θk,δ/2){θk−1}]/[f(x|θk)ζ(θk−1)1b(θk−1,δ/2){θk}]} =min{1,[f(y|θk)p(θk)/(f(y|θk−1)p(θk−1))]×[f(x|θk−1)/f(x|θk)]},

otherwise θk=θk−1.

3. Return θn.

4. If another sample is needed, go to step 1 with θ0=θn.
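The cycle structure of Algorithm 1 can be sketched on a toy one-dimensional exponential family where exact sampling of the auxiliary variable is easy. This is an illustrative reconstruction, not the paper's implementation: the model f(x|θ) = exp(θx) on [0, 1], the uniform prior on [0.1, 5], the observed value and all tuning constants are assumptions made for the example.

```python
import math
import random

# Toy unnormalised model on [0, 1]: f(x|theta) = exp(theta * x), whose
# normalising constant zeta(theta) = (exp(theta) - 1)/theta is never
# evaluated by the algorithm itself.
def log_f(x, theta):
    return theta * x

def sample_model(theta):
    """Exact sampling by inverse CDF; plays the role of x ~ p(.|theta_0)."""
    u = random.random()
    return math.log(1.0 + u * (math.exp(theta) - 1.0)) / theta

def abc_shadow_cycle(theta0, y, n_steps, delta, lo=0.1, hi=5.0):
    """One cycle of the ABC Shadow sampler with a uniform prior on [lo, hi]."""
    x = sample_model(theta0)               # step 1: refresh auxiliary variable
    theta = theta0
    for _ in range(n_steps):               # step 2
        psi = theta + random.uniform(-delta / 2, delta / 2)
        if psi < lo or psi > hi:
            continue                       # zero prior density: reject
        # acceptance ratio (32): likelihood ratio for y times the
        # auxiliary correction f(x|theta)/f(x|psi); zeta cancels out.
        log_a = (log_f(y, psi) - log_f(y, theta)
                 + log_f(x, theta) - log_f(x, psi))
        if math.log(random.random()) < min(0.0, log_a):
            theta = psi
    return theta                           # step 3

random.seed(1)
y = 0.8                                    # the "observed" data point
theta, trace = 1.0, []
for _ in range(5000):                      # step 4: iterate the cycles
    theta = abc_shadow_cycle(theta, y, n_steps=10, delta=0.2)
    trace.append(theta)
post_mean = sum(trace[1000:]) / len(trace[1000:])
```

Keeping the per-cycle displacement n·δ small is what keeps the shadow chain close to the ideal one; with larger n·δ the bias of the sketch grows, in line with the bound |Pi−Ps|≤K1δ below.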

The transition kernel of the Markov chain simulated by the previous algorithm is given by, for every (θ,A)∈Θ×T,

 Ps(θ,A)=∫Θαs(θ→ψ)Uδ(θ→ψ)1{ψ∈A}dψ+1{θ∈A}[1−∫Θαs(θ→ψ)Uδ(θ→ψ)dψ].

The authors in [17] also show that, since

 |Pi(θ,A)−Ps(θ,A)|≤K1δ

with K1 a constant not depending on δ, there exists δ(ε,m) such that for every δ≤δ(ε,m) we have

 |P^m_i(θ,A)−P^m_s(θ,A)|<ε

uniformly in θ and A. Under additional smoothness assumptions on the posterior density, an explicit description of δ(ε,m) can be provided.

The previous results state that, for any ε and a given m, we may find δ such that the shadow and the ideal chains evolve as closely as desired during a pre-fixed number m of steps. Beyond these m steps, the Algorithm 1 does not closely follow the ideal chain started in θ0 anymore. The ergodicity properties of the ideal chain and the triangle inequality allow to bound the distance, after m steps, between the shadow transition kernel and the equilibrium regime:

 ∥P^{(m)}_s(θ,⋅)−π(⋅)∥≤M(x,δ)ρ^m+ε,

with M(x,δ) a positive constant. Iterating the algorithm for m more steps is possible by refreshing the auxiliary variable. This mechanism allows the algorithm to be re-started for m more steps and, by this, new samples of the approximate posterior distribution to be obtained.

### 4.2 An SA process based on the shadow chain

#### 4.2.1 Process construction and properties

###### Theorem 2.

Let (θj) be an ISA process associated with the ideal transition kernel (4) that samples from (8), with h given by (7), using the proposals (6). The cooling schedule is chosen with respect to Theorem 1. According to the Algorithm 1, to each temperature Tj a shadow chain with kernel (4.1) is attached. Then, there exists a sequence (δj) such that

 |P^m_{i,j}(θ,A)−P^m_{s,j}(θ,A)|<ε

for all j, uniformly in θ and A. Under additional smoothness assumptions on the posterior density, a description of the sequence (δj) can be provided.

Proof: The proof is obtained by considering, for each step j of the algorithm, the Proposition 1 in [17].

The process given by the sequence of shadow chains is named the Shadow Simulated Annealing (SSA) process. Controlling the parameter δ at the same time as the temperature T allows the following convergence properties to be obtained.

###### Theorem 3.

Let us consider the assumptions of Theorem 2 fulfilled and let ε>0 be a fixed value. Then, for the SSA process (θj), the following results hold:

(i) For each ε we have the weak convergence limj→∞μj=δθopt, and also θj∈b(θopt,ε) for j big enough.

(ii) Consider now, for j≥1, a decreasing sequence of radii εj→0. For each εj we can construct parameters (δj,Tj) such that δj→0 and θj∈b(θopt,εj) for j big enough.

Proof: Theorem 2 proves that there exists a sequence (δj) such that

 |Pmi,j(θ,A)−Pms,j(θ,A)|<ε

for all j, uniformly in θ and A.

Combining this with the result of the Corollary 2 allows to conclude.

We end the demonstration with the following observation: under additional smoothness assumptions on the posterior density, a description of the sequence (δj) can be provided.

###### Remark 2.

The first part of the preceding result can be interpreted as follows. Let (θj) be a realisation of the SSA process. Then, for each ε, there exists a rank jmax such that

 {θj,j≥jmax}⊂b(θopt,ε)

where θopt is the global optimum of the function h.

###### Corollary 3.

We have

 δj=K(xj,Tj,nj)/j (33)

and limj→∞δj=0.

###### Remark 3.

The second part of the preceding result states that, to a decreasing sequence of balls around the problem solution, it is possible to associate a sequence of parameters in order to get as close as desired to the global optimum of h.

#### 4.2.2 A new algorithm for global optimisation

The previous results justify the construction of the following algorithm.

###### Algorithm 2.

Shadow Simulated Annealing (SSA) algorithm: fix n, δ and T, together with two positive update functions driving the decrease of δ and T. Assume the observed pattern is y and the current state is θ0.

1. Generate x according to p(x|θ0).

2. For k=1 to n do

• Generate a new candidate ψ following the density Uδ(θk−1→ψ) defined by

 Uδ(θ→ψ)=(1/Vδ)1b(θ,δ/2){ψ},

with Vδ the volume of the ball b(θ,δ/2).

• The new state θk=ψ is accepted with probability αs(θk−1→θk) given by

 αs(θk−1→θk)= (34) =min{1,[p(θk|y)/p(θk−1|y)×f(x|θk−1)ζ(θk)/(f(x|θk)ζ(θk−1))]^{1/T}×1b(θk,δ/2){θk−1}/1b(θk−1,δ/2){θk}} =min{1,[f(y|θk)p(θk)/(f(y|θk−1)p(θk−1))×f(x|θk−1)/f(x|θk)]^{1/T}},

otherwise θk=θk−1.

3. Return θn.

4. Stop the algorithm, or go to step 1 with θ0=θn and with δ and T updated according to their schedules.

It is easy to see that the SSA algorithm is identical to Algorithm 1 whenever T=1 and δ remains unchanged. The SSA algorithm does not have access to the states issued from the ideal chain. When the algorithm is started, the distance between the ideal and the shadow chains depends on the initial conditions. Still, independently of these conditions, as the control parameters δ and T evolve, this distance also evolves, approaching zero.
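Stripped of the auxiliary-variable machinery, the double control of the SSA algorithm (cooling T while shrinking δ) can be sketched on a directly computable toy criterion. The bimodal function h and all schedule constants below are illustrative assumptions, not the paper's setting.

```python
import math
import random

def ssa_optimise(h, theta0, T0=1.0, delta0=1.0, kT=0.999, kd=0.999,
                 n_cycles=4000, n_steps=10, lo=-3.0, hi=3.0):
    """Skeleton of the SSA driver: tempered Metropolis moves inside a ball
    of radius delta/2, with both T and delta shrunk geometrically, as in the
    slow polynomial scheme of Section 5 (T_n = k_T * T_{n-1})."""
    theta, T, delta = theta0, T0, delta0
    for _ in range(n_cycles):
        for _ in range(n_steps):
            psi = theta + random.uniform(-delta / 2, delta / 2)
            if psi < lo or psi > hi:
                continue
            # tempered acceptance: min{1, exp(-(h(psi) - h(theta)) / T)}
            if math.log(random.random()) < -(h(psi) - h(theta)) / T:
                theta = psi
        T *= kT        # cool the temperature
        delta *= kd    # shrink the proposal range
    return theta

# toy criterion with a local minimum near -2 and the global minimum at +2
h = lambda t: 0.05 * ((t * t - 4.0) ** 2 + (t - 2.0) ** 2)
random.seed(2)
theta_hat = ssa_optimise(h, theta0=-2.5)   # started in the wrong basin
```

Early on, the high temperature lets the chain cross the barrier between the two basins; later, the shrinking δ confines it to a narrowing neighbourhood of the global minimum, which is exactly the behaviour described by Theorem 3.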

## 5 Applications

This section illustrates the application of the SSA algorithm. The next part of this section applies the present method to estimating the model parameters of three point processes: the Strauss, the area-interaction and the Candy model. All these models are widely applied in domains such as environmental sciences, image analysis and cosmology [26, 12, 27, 14, 15, 18, 23, 21]. Here the parameter estimation is done on simulated data, and it has two aims. The first is to test the method on complicated models that do not exhibit a closed analytic form for their normalising constants. The second is to give the potential user some hints regarding the tuning of the algorithm. The last part of this section is dedicated to a real data application: model fitting for the galaxy distribution in our Universe.

### 5.1 Simulated data: point patterns and segment networks

The SSA algorithm is applied here to the statistical analysis of patterns simulated from a Strauss model [9, 19]. This model describes random patterns made of points exhibiting repulsion. Its probability density with respect to the standard unit rate Poisson point process is

 p(y|θ)∝β^{n(y)}γ^{s_r(y)}=exp[n(y)logβ+s_r(y)logγ]. (35)

Here y is a point pattern in the finite window W, while t(y)=(n(y),s_r(y)) and θ=(logβ,logγ) are the sufficient statistic and the model parameter vectors, respectively. The sufficient statistics components n(y) and s_r(y) represent, respectively, the number of points in y and the number of pairs of points at a distance closer than r.
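For reference, the sufficient statistics of the Strauss model can be computed from a pattern in a few lines; the point pattern below is a hard-coded illustration.

```python
import math

def strauss_stats(points, r):
    """Sufficient statistics of the Strauss model: n(y), the number of
    points, and s_r(y), the number of pairs closer than r."""
    n = len(points)
    s = sum(
        1
        for i in range(n) for j in range(i + 1, n)
        if math.dist(points[i], points[j]) < r
    )
    return n, s

# small illustrative pattern in the unit square: two close pairs
y = [(0.1, 0.1), (0.15, 0.12), (0.8, 0.3), (0.82, 0.33), (0.5, 0.9)]
n_y, s_r = strauss_stats(y, r=0.1)
```

Only these two numbers enter the exponential-family density (35), which is why the SSA run below can be driven by the empirical means of the sufficient statistics alone.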

The Strauss model on the unit square and with density parameters , and , was considered. This gives for the parameter vector of the exponential model . The CFTP algorithm (see Chapter 11 in [12]) was used to get samples from the model and to compute the empirical means of the sufficient statistics . The SA based on the ABC Shadow algorithm was run using as observed data, while considering the parameter known.

The prior density was the uniform distribution on the interval . Each time, the auxiliary variable was sampled using steps of an MH dynamics [26, 12]. The and parameters were set to and , respectively. The algorithm was run for iterations. The initial temperature was set to . For the cooling schedule a slow polynomial scheme was chosen

 Tn=kT⋅Tn−1

with . A similar scheme was chosen for the parameters, with . Samples were kept every steps. This gave a total of samples.

The choice of the uniform prior was motivated by the fact that the posterior distribution obtained with a uniform prior equals the likelihood restricted to the support of the prior. Hence, following [1, 26, 12] and the argument in [17], since the ML estimate approaches the true model parameters almost surely as the number of samples increases, the results of the SSA algorithm are expected to be close to the model parameters used for the simulation.

Figure 1 shows the time series of the SSA algorithm outputs applied to estimate the parameters of the previous Strauss process. The final values obtained for and were and , respectively. Table 1 presents the empirical quartiles computed from the outputs of the algorithm. All these values are close to the true model parameters.

The area-interaction process introduced by [2] is able to describe point patterns exhibiting clustering or repulsion. Its probability density with respect to the standard unit rate Poisson point process is

 p(y|θ)∝βn(y)γar(y)=exp[n(y)logβ+ar(y)logγ].

with

 ar(y)=−ν[∪_{i=1}^{n}b(yi,r)]/(πr^2).

The model vectors of the sufficient statistics and parameters are