# Continuous time Gaussian process dynamical models in gene regulatory network inference

One of the focus areas of modern scientific research is to reveal mysteries related to genes and their interactions. The dynamic interactions between genes can be encoded into a gene regulatory network (GRN), which can be used to gain understanding of the genetic mechanisms behind observable phenotypes. GRN inference from time series data has recently been a focus area of systems biology. Due to the low sampling frequency of the data, this is a notoriously difficult problem. We tackle the challenge by introducing the so-called continuous-time Gaussian process dynamical model (GPDM), based on the Gaussian process framework that has gained popularity in nonlinear regression problems arising in machine learning. The model dynamics are governed by a stochastic differential equation, where the dynamics function is modelled as a Gaussian process. We prove the existence and uniqueness of solutions of the stochastic differential equation. We derive the probability distribution for the Euler-discretised trajectories and establish the convergence of the discretisation. Based on this framework, we develop a GRN inference method built on MCMC sampling of trajectories of the GPDM and estimation of the hyperparameters of the covariance function of the Gaussian process. Using benchmark data examples, we show that our method is computationally feasible and superior in dealing with poor time resolution.


## 1. Introduction

In 2017, Jeffrey C. Hall, Michael Rosbash, and Michael W. Young were awarded the Nobel prize in physiology or medicine for their discoveries of molecular mechanisms controlling the circadian rhythm. Indeed, one of the focus areas of modern scientific research is to reveal mysteries related to genes, their interactions, and their connection to observable phenotypes. Application areas of analysing interactions of genes are not limited to plants and their circadian clocks. In the field of biomedicine, for example, knowledge of genes and their interactions plays a crucial role in the prevention and cure of diseases.

Interactions between genes are typically represented as a gene regulatory network (GRN) whose nodes correspond to different genes, and a directed edge denotes a direct causal effect of some gene on another gene. The usual problem statement is to infer the network topology from given gene expression data. Classically, GRN inference has been based on analysing steady state data corresponding to gene knockout experiments, based on silencing one gene and observing changes in the steady state expressions of other genes. However, carrying out knockout experiments on a high number of genes is costly and technically infeasible. Moreover, for example circadian clocks of plants are oscillating systems, and in practice it can be difficult to determine whether a particular measurement corresponds to a steady state. In contrast, methods that infer GRNs from time series data can infer the networks on a more global level using data from few experiments. Therefore GRN inference from time series data has gained more and more attention recently. This article presents BINGO (Bayesian Inference of Networks using Gaussian prOcess dynamical models). BINGO is designed for network inference mainly from time series data, but steady state measurements can be straightforwardly implemented as well.

We model the time series data as samples from a continuous trajectory,

 (1.1)   $y_j = x(t_j) + v_j,$

where $v_j$ represents measurement noise. The continuous trajectory $x$ is assumed to satisfy a nonlinear stochastic differential equation

 (1.2)   $dx = f(x)\,dt + dw,$

where $w$ is some driving process noise. Here $x$ is an $\mathbb{R}^n$-valued function of time, and thus also $f$ is vector-valued,

$$f(x) = \begin{bmatrix} f_1(x_1,\dots,x_n) \\ \vdots \\ f_n(x_1,\dots,x_n) \end{bmatrix}.$$

If the function $f_i$ depends on $x_j$, then in the corresponding GRN there is a link from gene $j$ to gene $i$. The task in GRN inference is to discover this regulatory interconnection structure between the variables. It is known that the dynamics of one gene can only be influenced by a few other genes, that is, a component $f_i$ only depends on some components of $x$.
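As a concrete illustration of the model (1.1)–(1.2), the following sketch simulates a hypothetical two-gene system with an Euler–Maruyama scheme and then subsamples it with measurement noise. The dynamics function `f` and all parameter values here are invented for illustration; they are not taken from the paper.

```python
import numpy as np

def simulate_sde(f, x0, q_diag, dt, n_steps, rng):
    """Euler-Maruyama simulation of dx = f(x) dt + dw, where w has a
    diagonal covariance q_diag (variance accumulated per unit time)."""
    x = np.empty((n_steps + 1, len(x0)))
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(scale=np.sqrt(q_diag * dt))   # Brownian increment
        x[k + 1] = x[k] + f(x[k]) * dt + dw
    return x

rng = np.random.default_rng(0)
# toy 2-gene system: gene 1 activates gene 2, both degrade (invented dynamics)
f = lambda x: np.array([1.0 - x[0], x[0] ** 2 / (1 + x[0] ** 2) - x[1]])
traj = simulate_sde(f, np.array([0.0, 0.0]), q_diag=np.array([1e-3, 1e-3]),
                    dt=0.01, n_steps=1000, rng=rng)

# sparse, noisy observations y_j = x(t_j) + v_j, as in (1.1)
obs_idx = np.arange(0, 1001, 100)
y = traj[obs_idx] + rng.normal(scale=0.05, size=(len(obs_idx), 2))
```

The low observation rate (11 samples over 1000 integration steps) mimics the poor temporal resolution that is typical of gene expression time series.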

What is characteristic to the GRN inference problem is that the data tends to be rather poor in terms of temporal resolution and the overall amount of data. Low temporal resolution has a deteriorating effect on network inference: in linear systems ($f(x) = Ax$ in (1.2)) this is illustrated by the fact that the matrices $A$ and $e^{A \Delta t}$ do not share even approximately the same sparsity pattern when the sampling interval $\Delta t$ is big. In addition, many methods use derivatives estimated directly from the time series data using difference approximations or curve fitting techniques. Using different techniques can significantly affect the results, and therefore avoiding such derivative approximations is desirable.

In this article, we introduce a continuous-time version of the so-called Gaussian process dynamical model (GPDM) [43]. Essentially, the dynamics function $f$ in (1.2) is modelled as a Gaussian process with some covariance function [36]. This defines $x$ as a stochastic process. We prove existence and uniqueness of solutions of the corresponding differential equation, and the convergence of the Euler discretisation. We derive a probability distribution for the discretised trajectory, enabling direct MCMC sampling of realisations of this process. This way the derivative approximations are avoided. This trajectory sampling is illustrated in Figure 1, showing samples from $p(x|Y,\theta) \propto p(Y|x,\theta)\,p(x|\theta)$, where $\theta$ represents hyperparameters, and $Y$ comprises the time series measurements. Here $p(Y|x,\theta)$ is the measurement model arising from (1.1), and $p(x|\theta)$ is the probability distribution for the trajectory $x$, arising from (1.2). Network inference is then done by estimating the hyperparameters of the chosen covariance function. The technique of performing variable selection based on estimating variable-specific hyperparameters is known as automatic relevance determination (ARD) [24, 28]. In our approach, missing measurements as well as a non-constant sampling frequency are easy to treat, and these functionalities are already implemented in the code, as is the possibility to include prior information on the network. In the end, we are mainly interested in which of the variable-specific hyperparameters are non-zero. Therefore we introduce an indicator variable matrix for these hyperparameters, which can also be interpreted as the adjacency matrix of the GRN. The pipeline of BINGO is illustrated in Figure 2. We demonstrate that BINGO is superior in dealing with poor time resolution while still remaining computationally feasible.

A linear version of the method is presented in [1], where the MCMC samplers for the trajectory and the network topology are introduced. Other methods that model the dynamics function using a Gaussian process are presented in [3, 20, 31, 32]. The main difference between BINGO and these older methods is that they all treat the problem as a nonlinear regression problem with input-output pairs [3]

$$\left\{ \left( y_{j-1},\ \frac{y_j - y_{j-1}}{t_j - t_{j-1}} \right) \right\}_{j=1}^{m}$$

or using some other method to approximate the derivatives from the time series data. As mentioned above, we avoid this derivative estimation by fitting continuous-time trajectories to the time series data. It should be noted that the GP framework can handle combinatorial effects, meaning nonlinearities that cannot be decomposed into a sum of functions of the individual variables. This is an important property for modelling a chemical system—such as gene expression—where reactions can happen due to combined effects of reactant species. For example, the dynamics corresponding to a chemical reaction with two reactant species cannot, in general, be modelled by a sum of functions of the individual concentrations.
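For contrast, the derivative-based input-output pairs used by the regression-style methods above can be formed in a few lines. The helper name and the toy data are ours, purely for illustration:

```python
import numpy as np

def gradient_matching_pairs(t, y):
    """Form the pairs (y_{j-1}, (y_j - y_{j-1})/(t_j - t_{j-1})) used by
    derivative-based GP regression approaches (which BINGO avoids)."""
    dt = np.diff(t)[:, None]                 # time increments, column vector
    return y[:-1], np.diff(y, axis=0) / dt   # inputs, difference-quotient targets

t = np.array([0.0, 1.0, 2.5, 4.0])           # non-uniform sampling times
y = np.array([[0.0], [0.8], [1.1], [1.15]])  # toy expression measurements
X_in, dY = gradient_matching_pairs(t, y)
```

With sparse sampling the difference quotients `dY` are crude derivative estimates, which is exactly the weakness that motivates fitting continuous-time trajectories instead.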

Several GRN inference problems from different types of data have been posed as competitive challenges by the Dialogue for Reverse Engineering Assessments and Methods (DREAM) project. Results and conclusions, as well as some top performers, are introduced in [26, 18]. The inference problems are based mainly on two types of data, namely time series data, and steady state measurements corresponding to gene knockout or knockdown experiments. A review [32] on methods based on time series data concluded that methods based on nonparametric nonlinear differential equations performed best. Such methods include Gaussian process models and random forest models [19]. Other types of ODE models tend to make rather restrictive assumptions on the dynamics, such as linear dynamics [1, 21, 4], or a library of nonlinear functions [6, 8, 30, 25]. Mechanistic models [2, 29] try to fit the data using dynamical systems constructed from enzyme kinetics equations.

The rest of the article is organised as follows. In Section 2, we introduce the continuous-time Gaussian process dynamical models, and we prove the existence and uniqueness of solutions, and the convergence of the Euler discretisation. This is applied in Section 3 where we derive the probability distribution for the discretised trajectory. The full network inference method BINGO is introduced in Section 4 including incorporating several time series and knockout/knockdown experiments. Section 5 is devoted to benchmark data experiments. We apply BINGO to the DREAM4 In Silico Network Challenge data [26, 27, 35, 39] using either the time series data only, or including also the steady state experiments. The method is compared with the best performers in the DREAM4 challenge, as well as with more recent methods. Moreover, BINGO’s performance with low sampling frequency is demonstrated by performing network inference on simulated data of the circadian clock of Arabidopsis thaliana [34], using different sampling frequencies. Finally, to demonstrate BINGO’s performance using real data, it has been applied to the IRMA in vivo dataset [7], which is obtained from a synthetic network of five genes, and therefore the ground truth network is known. In Section 6, we provide a short summary and discuss future prospects.

## 2. Continuous-time Gaussian process dynamical models

Discrete-time GPDMs (a.k.a. Gaussian process state space models [12, 11]) were originally introduced in [43], whose treatment was based on the GP latent variable models [23]. They are an effective tool for analysing time series data that is produced by a dynamical system that is unknown to us, or somehow too complicated to be presented using classical modelling techniques. In the original paper [43], the method was used for human motion tracking from video data. Motion tracking problems remain the primary use of GPDMs [9, 13], but other types of applications have emerged as well, such as speech analysis [17], traffic flow prediction [44], and electric load prediction [16].

In this section we study theoretical properties of the continuous-time GPDM trajectory, defined as the solution on $[0,T]$ for some fixed $T > 0$ to the stochastic differential equation

 (2.1)   $dx_t = f(u_t, x_t, \omega)\,dt + dw_t, \quad x(0) = x_0,$

where the initial state $x_0$ is Gaussian with some covariance matrix, $u$ is a smooth deterministic input function, and $w$ is an $n$-dimensional Brownian motion with diagonal covariance matrix $Q$. Finally, $f = [f_1, \dots, f_n]^\top$, where each component $f_i$ conditioned on a trajectory is modelled as a Gaussian process. For simplicity, we assume that each $f_i$ is centred (see Remark 2.2) and has a covariance $k_i$ depending on $u$ (input variable) and $x$ (state variable). That is, $f_i$ has mean zero and covariance $\mathbb{E}\left[f_i(u_s, x)\, f_i(u_t, z)\right] = k_i(u_s, u_t, x, z)$.

###### Remark 2.1.

By Mercer’s theorem, each covariance $k$ can be represented as

$$k(u, v, x, z) = \sum_{k=1}^{\infty} \lambda_k^2\, \phi_k(u, x)\, \phi_k(v, z).$$

Hence a Gaussian $f$ with covariance $k$ can be modelled by

$$f(u, x) = \sum_{k=1}^{\infty} \phi_k(u, x)\, \xi_k,$$

where $\xi_k \sim N(0, \lambda_k^2)$ are mutually independent. From this it is clear that for given $(u, x)$, $f(u, x)$ is Gaussian, whereas for random $x$, it is usually not.

Throughout the article we make the following assumption on the covariances $k_i$.

###### Assumption 2.1.

For every $i$, there exists a constant $L_i$ such that

$$|k_i(u_t, u_t, x, x) - k_i(u_t, u_t, x, z)| \le L_i |x - z|^2$$

uniformly in $t$.

###### Example 2.1.

The GRN inference algorithm developed below is based on the squared exponential covariance functions

 (2.2)   $k_i(x, z) = \gamma_i \exp\left( -\sum_{j=1}^{n} \beta_{i,j} (x_j - z_j)^2 \right)$

and estimating the hyperparameters $\gamma_i$ and $\beta_{i,j}$. The hyperparameters satisfy $\gamma_i \ge 0$ and $\beta_{i,j} \ge 0$. If $\beta_{i,j} > 0$, it indicates that gene $j$ is a regulator of gene $i$. Assumption 2.1 is satisfied with the constant $L_i = \gamma_i \max_j \beta_{i,j}$.
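A direct implementation of (2.2) as a kernel matrix, together with a numerical check that setting $\beta_{i,j} = 0$ makes the kernel independent of coordinate $j$ (gene $j$ is then not a regulator). The function name and test values are illustrative, not part of BINGO:

```python
import numpy as np

def se_ard_kernel(X, Z, gamma, beta):
    """Squared exponential covariance (2.2):
    k(x, z) = gamma * exp(-sum_j beta_j (x_j - z_j)^2)."""
    d2 = (X[:, None, :] - Z[None, :, :]) ** 2          # pairwise squared diffs
    return gamma * np.exp(-(d2 * beta).sum(axis=-1))

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 3))
beta = np.array([2.0, 0.0, 0.5])        # middle coordinate: beta = 0
K = se_ard_kernel(X, X, gamma=1.5, beta=beta)

# with beta_j = 0, arbitrarily perturbing coordinate j leaves K unchanged:
# this is the ARD mechanism behind reading the network off the hyperparameters
X2 = X.copy()
X2[:, 1] += rng.normal(size=5)
K2 = se_ard_kernel(X2, X2, gamma=1.5, beta=beta)
```

Here `np.allclose(K, K2)` holds, illustrating why a zero length-scale weight removes the corresponding regulatory link.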

Before stating and proving the existence and uniqueness result for (2.1), we need one technical lemma.

###### Lemma 2.1.

Suppose that Assumption 2.1 holds and let $x, z \in \mathbb{R}^n$ be arbitrary. Then for any $p \ge 1$ there exists a constant $C$ depending on $p$ and the numbers $L_i$ such that

$$\mathbb{E}\,|f(u_t, x, \omega) - f(u_t, z, \omega)|^p \le C|x - z|^p.$$
###### Proof.

Since $f(u_t, x, \omega) - f(u_t, z, \omega)$ is a Gaussian vector, it suffices to prove the claim only for $p = 2$. Furthermore, by the triangle inequality, it suffices to prove that for each component $f_i$ we have

$$\mathbb{E}\,|f_i(u_t, x, \omega) - f_i(u_t, z, \omega)|^2 \le C|x - z|^2.$$

Now

$$\mathbb{E}\,|f_i(u_t, x, \omega) - f_i(u_t, z, \omega)|^2 = k_i(u_t, u_t, x, x) + k_i(u_t, u_t, z, z) - 2k_i(u_t, u_t, x, z),$$

and Assumption 2.1 implies

$$\mathbb{E}\,|f_i(u_t, x, \omega) - f_i(u_t, z, \omega)|^2 \le 2L_i|x - z|^2,$$

which concludes the proof. ∎

###### Corollary 2.1.

Suppose that Assumption 2.1 holds and let $x, z$ be random variables. Then for any $p \ge 1$ there exists a constant $C$ depending on $p$ and the numbers $L_i$ such that

$$\mathbb{E}\left( |f(u_t, x, \omega) - f(u_t, z, \omega)|^p \,\middle|\, x, z \right) \le C|x - z|^p.$$
###### Proof.

The claim follows from Lemma 2.1 together with the fact that, conditioned on $x$ and $z$, $f$ is Gaussian with covariance $k$. ∎

The following existence and uniqueness result for the stochastic differential equation (2.1) justifies the use of the continuous-time GPDM model.

###### Theorem 2.1.

Suppose that Assumption 2.1 is satisfied. Then (2.1) admits a unique solution $x$ on $[0,T]$.

###### Proof.

We use Picard iteration and define

$$x_t^0 = x_0,$$

and for $j \ge 1$ we set

$$x_t^j = x_0 + \int_0^t f(u_s, x_s^{j-1}, \omega)\,ds + w_t - w_0.$$

Then

$$x_t^j - x_t^{j-1} = \int_0^t f(u_s, x_s^{j-1}, \omega) - f(u_s, x_s^{j-2}, \omega)\,ds$$

and

 (2.3)   $|x_t^j - x_t^{j-1}| \le \int_0^t |f(u_s, x_s^{j-1}, \omega) - f(u_s, x_s^{j-2}, \omega)|\,ds.$

Taking expectation, conditioning, and using Corollary 2.1 then gives

 (2.4)   $\mathbb{E}\,|x_t^j - x_t^{j-1}| \le C \int_0^t \mathbb{E}\,|x_s^{j-1} - x_s^{j-2}|\,ds.$

We now claim that

$$\mathbb{E}\,|x_t^j - x_t^{j-1}| \le C_1 \frac{C^j t^j}{j!} + C_2 \frac{C^{j-1} t^{j-1}}{(j-1)!}.$$

This follows by induction. For $j = 1$ we have

$$|x_t^1 - x_0| = \left| \int_0^t f(u_s, x_0)\,ds + w_t - w_0 \right| \le \sup_{s \in [0,T]} |f(u_s, x_0)|\, t + \sup_{s \in [0,T]} |w_s|,$$

which proves the claim for $j = 1$, as the supremum of a Gaussian process and the supremum of the Brownian motion have all moments finite. Suppose then that

$$\mathbb{E}\,|x_s^j - x_s^{j-1}| \le C_1 \frac{C^j s^j}{j!} + C_2 \frac{C^{j-1} s^{j-1}}{(j-1)!}.$$

Then (2.4) implies

$$\mathbb{E}\,|x_t^{j+1} - x_t^j| \le \int_0^t C_1 \frac{C^{j+1} s^j}{j!} + C_2 \frac{C^j s^{j-1}}{(j-1)!}\,ds = C_1 \frac{C^{j+1} t^{j+1}}{(j+1)!} + C_2 \frac{C^j t^j}{j!}.$$

In particular, this gives

$$\sup_{t \in [0,T]} \mathbb{E}\,|x_t^{j+1} - x_t^j| \le C_1 \frac{C^{j+1} T^{j+1}}{(j+1)!} + C_2 \frac{C^j T^j}{j!} \to 0$$

and

$$\sum_{j=0}^{\infty} \sup_{t \in [0,T]} \mathbb{E}\,|x_t^{j+1} - x_t^j| < \infty.$$

On the other hand, from (2.3) we get

$$\sup_{t \in [0,T]} |x_t^j - x_t^{j-1}| \le \int_0^T |f(u_s, x_s^{j-1}, \omega) - f(u_s, x_s^{j-2}, \omega)|\,ds.$$

Consequently, taking expectation gives

$$\mathbb{E}\left[ \sup_{t \in [0,T]} |x_t^j - x_t^{j-1}| \right] \le CT \sup_{t \in [0,T]} \mathbb{E}\,|x_t^{j-1} - x_t^{j-2}|,$$

and thus we also have

$$\sum_{j=0}^{\infty} \mathbb{E}\left[ \sup_{t \in [0,T]} |x_t^{j+1} - x_t^j| \right] < \infty.$$

This implies that

$$\sum_{j=0}^{k} (x_t^{j+1} - x_t^j) = x_t^{k+1} - x_0$$

converges uniformly to an integrable random variable. Finally, since $f$ is continuous in $x$ by Gaussianity and Lemma 2.1, we observe that the limit satisfies (2.1). ∎

###### Remark 2.2.

We stress that while we assumed the Gaussian process $f$ to be centred for the sake of simplicity, the extension to a non-centred case is rather straightforward. Indeed, if for each component $f_i$ the mean function $m_i$ is Lipschitz continuous with respect to $x$ uniformly in $t$, i.e.

$$|m_i(u_t, x) - m_i(u_t, z)| \le L|x - z|,$$

then the existence and uniqueness follows from the above proof by centring first. We leave the details to the reader.

The following result studies the basic properties of the solution.

###### Theorem 2.2.

Suppose that Assumption 2.1 holds. Then the solution $x$ to (2.1) is Hölder continuous of any order $\alpha < \frac{1}{2}$. Furthermore, $\sup_{t \in [0,T]} |x_t|$ has all moments finite.

###### Proof.

Clearly, each $x^j$ in the proof of Theorem 2.1 is continuous. Consequently, the solution $x$ is continuous as a uniform limit of continuous trajectories. The Hölder continuity then follows from (2.1) and the Hölder continuity of the Brownian motion $w$. Indeed, since $f$ is continuous in $x$ and $x$ is bounded as a continuous function on a bounded interval $[0,T]$, it follows that $f(u_t, x_t, \omega)$ is also bounded. Finally, the existence of all moments follows from the fact that $\sup_{t \in [0,T]} |w_t|$ has all moments finite, as does $\sup_{t \in [0,T]} |f(u_t, x_t, \omega)|$. ∎

The method’s numerical implementation will be based on the Euler discretisation of equation (2.1). Define therefore a partition $\pi_M = \{0 = \tau_0 < \tau_1 < \dots < \tau_M = T\}$ of the compact interval of interest $[0,T]$. Denote the discretised trajectory corresponding to the partition by $X^M$, and recall that its dynamics are given by

 (2.5)   $X^M_{\tau_k} = X^M_{\tau_{k-1}} + \delta\tau_k\, f(u_{\tau_{k-1}}, X^M_{\tau_{k-1}}, \omega) + w_{\tau_k} - w_{\tau_{k-1}},$

where $\delta\tau_k = \tau_k - \tau_{k-1}$, and $X^M_{\tau_0} = x_0$. Later, we will obtain a probability distribution for the discrete trajectory $X^M$, but first we show the pointwise (in $t$) convergence to the continuous solution of (2.1) as the temporal discretisation is refined.

We study the continuous version, defined for $t \in [\tau_{k-1}, \tau_k]$ by

 (2.6)   $\overline{X}^M_t = \overline{X}^M_{\tau_{k-1}} + (t - \tau_{k-1})\, f(u_{\tau_{k-1}}, \overline{X}^M_{\tau_{k-1}}, \omega) + w_t - w_{\tau_{k-1}}.$

Note that $\overline{X}^M_{\tau_k} = X^M_{\tau_k}$ for all $k$.

###### Theorem 2.3.

Suppose that Assumption 2.1 holds and consider an arbitrary discretisation $\pi_M$ such that $|\pi_M| M$ is bounded, where $|\pi_M| = \max_k \delta\tau_k$. Then for any $p \ge 1$,

$$\mathbb{E}\left[ \sup_{t \in [0,T]} |x_t - \overline{X}^M_t|^p \right] \le C|\pi_M|^p.$$

Moreover, for any $\epsilon > 0$ we have

$$\sup_{t \in [0,T]} |x_t - \overline{X}^M_t| \le C|\pi_M|^{1-\epsilon}$$

almost surely.

###### Proof.

Let $p \ge 1$ and denote

$$z_k = \left\| \sup_{t \in [\tau_{k-1}, \tau_k]} |x_t - \overline{X}^M_t| \right\|_p,$$

where $\|\cdot\|_p$ denotes the $L^p$-norm. Now

$$x_t - \overline{X}^M_t = x_{\tau_{k-1}} - \overline{X}^M_{\tau_{k-1}} + \int_{\tau_{k-1}}^t f(u_s, x_s, \omega) - f(u_{\tau_{k-1}}, \overline{X}^M_{\tau_{k-1}}, \omega)\,ds.$$

As in the proof of Theorem 2.1, this implies

$$z_k \le z_{k-1} + \int_{\tau_{k-1}}^{\tau_k} \left\| f(u_s, x_s, \omega) - f(u_{\tau_{k-1}}, \overline{X}^M_{\tau_{k-1}}, \omega) \right\|_p ds \le z_{k-1} + C z_k |\pi_M|.$$

Let now $M$ be large enough such that $C|\pi_M| < 1$. We get

$$(1 - C|\pi_M|)\, z_k \le z_{k-1},$$

or equivalently

$$z_k \le \frac{1}{1 - C|\pi_M|}\, z_{k-1}.$$

Iterating then gives

$$z_k \le \left( \frac{1}{1 - C|\pi_M|} \right)^k z_1 = \left( 1 + \frac{C}{|\pi_M|^{-1} - C} \right)^k z_1.$$

Note next that

$$\left( 1 + \frac{C}{|\pi_M|^{-1} - C} \right)^k \le \left( 1 + \tilde{C}|\pi_M| \right)^M = \left( \left( 1 + \tilde{C}|\pi_M| \right)^{|\pi_M|^{-1}} \right)^{|\pi_M| M}$$

for some other constant $\tilde{C}$. Since

$$\left( 1 + \tilde{C}|\pi_M| \right)^{|\pi_M|^{-1}} \to e^{\tilde{C}}$$

as $M \to \infty$, and $|\pi_M| M$ is bounded by assumption, it follows that

$$z_k \le C z_1$$

for some unimportant constant $C$. But now

$$|x_t - \overline{X}^M_t| \le \int_0^{\tau_1} |f(u_s, x_s, \omega) - f(u_{\tau_0}, \overline{X}^M_{\tau_0}, \omega)|\,ds \le 2 \sup_{s \in [0,T]} |f(u_s, x_s, \omega)|\, |\pi_M|$$

for $t \in [0, \tau_1]$, from which it follows that $z_1 \le C|\pi_M|$, proving the first claim. Finally, the second claim is a direct consequence of the Borel–Cantelli lemma. ∎

## 3. Probability distribution of the discretised trajectory

The probability distribution $p(X|\theta)$ is now derived for the discrete trajectory $X = (X_{\tau_0}, \dots, X_{\tau_M})$, where $\theta$ denotes collectively all the hyperparameters. The discretisation-level index $M$ is dropped from now on, since we only use one discretisation level. It holds that

 (3.1)   $p(X|\theta) = \displaystyle\int p(X|f, \theta)\, p(f|\theta)\, df.$

For given $f$, the trajectory is a Markov process, and therefore its distribution satisfies

$$p(X|f, \theta) = p(X_{\tau_0}|\theta) \prod_{k=1}^{M} p(X_{\tau_k} | X_{\tau_{k-1}}, f, \theta).$$

Let us introduce the notation $\overline{X} = (X_{\tau_1}, \dots, X_{\tau_M})$ and $\underline{X} = (X_{\tau_0}, \dots, X_{\tau_{M-1}})$. The same notation is also used for the individual dimensions of the trajectory. Then it holds that

$$\begin{aligned}
p(X|f,\theta) &= \frac{p(X_{\tau_0}|\theta)}{(2\pi)^{Mn/2}|Q|^{M/2}} \prod_{k=1}^{M} \frac{1}{\delta\tau_k^{n/2}} \exp\left( -\frac{1}{2\delta\tau_k} \left| X_{\tau_k} - X_{\tau_{k-1}} - \delta\tau_k f(X_{\tau_{k-1}}) \right|^2_{Q^{-1}} \right) \\
&= \frac{p(X_{\tau_0}|\theta)}{(2\pi)^{Mn/2}|Q|^{M/2}|\Delta\tau|^{n/2}} \exp\left( -\sum_{k=1}^{M} \frac{1}{2\delta\tau_k} \left| X_{\tau_k} - X_{\tau_{k-1}} - \delta\tau_k f(X_{\tau_{k-1}}) \right|^2_{Q^{-1}} \right) \\
&= \frac{p(X_{\tau_0}|\theta)}{(2\pi)^{Mn/2}|Q|^{M/2}|\Delta\tau|^{n/2}} \prod_{i=1}^{n} \exp\left( -\frac{1}{2q_i} \left| \overline{X}^i - \underline{X}^i - \Delta\tau\, f_i(\underline{X}) \right|^2_{\Delta\tau^{-1}} \right),
\end{aligned}$$

where $\Delta\tau$ is a diagonal matrix whose $k$th diagonal element is $\delta\tau_k$, and $|v|^2_A = v^\top A v$.

Now $p(X|f,\theta)$ in the integral (3.1) depends only on the values of $f$ at the points $X_{\tau_0}, \dots, X_{\tau_{M-1}}$. By the definition of a Gaussian process, the integral can equivalently be computed over a collection of finite-dimensional, normally distributed random variables $F_i = f_i(\underline{X})$, where $F_i$ has mean zero and covariance $K_i(\underline{X})$ given elementwise by the covariance function $k_i$ evaluated at the points of $\underline{X}$. The integral in (3.1) can be computed analytically (see Appendix B),

$$\begin{aligned}
\int p(X|f,\theta)\, p(f|\theta)\, df &= \frac{p(X_{\tau_0}|\theta)}{(2\pi)^{Mn}|Q|^{M/2}|\Delta\tau|^{n/2}} \prod_{i=1}^{n} \int \frac{1}{|K_i(\underline{X})|^{1/2}} \exp\left( -\frac{1}{2q_i} \left| \overline{X}^i - \underline{X}^i - \Delta\tau F_i \right|^2_{\Delta\tau^{-1}} - \frac{1}{2} |F_i|^2_{K_i(\underline{X})^{-1}} \right) dF_i \\
&= \frac{p(X_{\tau_0}|\theta)}{(2\pi)^{Mn/2}|Q|^{M/2}|\Delta\tau|^{n/2}} \prod_{i=1}^{n} \frac{1}{|K_i(\underline{X})|^{1/2} \left| \frac{\Delta\tau}{q_i} + K_i(\underline{X})^{-1} \right|^{1/2}} \\
&\qquad \cdot \exp\left( -\frac{1}{2q_i} \left| \overline{X}^i - \underline{X}^i \right|^2_{\Delta\tau^{-1}} + \frac{1}{2q_i^2} \left( \overline{X}^i - \underline{X}^i \right)^\top \left( \frac{\Delta\tau}{q_i} + K_i(\underline{X})^{-1} \right)^{-1} \left( \overline{X}^i - \underline{X}^i \right) \right).
\end{aligned}$$

Applying the Woodbury identity to the exponent gives

$$(q_i\Delta\tau)^{-1} - \frac{1}{q_i} \left( \Delta\tau (q_i\Delta\tau)^{-1} \Delta\tau + K_i(\underline{X})^{-1} \right)^{-1} \frac{1}{q_i} = \left( \Delta\tau K_i(\underline{X}) \Delta\tau + q_i\Delta\tau \right)^{-1},$$

and the determinant lemma gives (recall that $Q$ is a diagonal matrix with the $q_i$'s on the diagonal)

$$|Q|^{M/2}|\Delta\tau|^{n/2} \prod_{i=1}^{n} |K_i(\underline{X})|^{1/2} \left| \frac{\Delta\tau}{q_i} + K_i(\underline{X})^{-1} \right|^{1/2} = \prod_{i=1}^{n} |q_i\Delta\tau|^{1/2} |K_i(\underline{X})|^{1/2} \left| \Delta\tau (q_i\Delta\tau)^{-1} \Delta\tau + K_i(\underline{X})^{-1} \right|^{1/2} = \prod_{i=1}^{n} \left| \Delta\tau K_i(\underline{X}) \Delta\tau + q_i\Delta\tau \right|^{1/2}.$$
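The Woodbury and determinant-lemma manipulations used here are easy to sanity-check numerically on random matrices. The following is only a verification sketch with arbitrary dimensions and values:

```python
import numpy as np

rng = np.random.default_rng(2)
M, q = 4, 0.7                                  # arbitrary size and q_i
dtau = np.diag(rng.uniform(0.1, 1.0, M))       # diagonal Delta_tau
A = rng.normal(size=(M, M))
K = A @ A.T + M * np.eye(M)                    # SPD stand-in for K_i

inv = np.linalg.inv
det = np.linalg.det

# Woodbury form of the exponent
lhs = inv(q * dtau) - (1 / q) * inv(dtau @ inv(q * dtau) @ dtau + inv(K)) * (1 / q)
rhs = inv(dtau @ K @ dtau + q * dtau)

# determinant lemma for the normalising constant
lhs_det = np.sqrt(det(q * dtau) * det(K) * det(inv(K) + dtau / q))
rhs_det = np.sqrt(det(dtau @ K @ dtau + q * dtau))
```

Both `np.allclose(lhs, rhs)` and `np.isclose(lhs_det, rhs_det)` hold, confirming the algebra behind the collapsed covariance $\Delta\tau K_i \Delta\tau + q_i \Delta\tau$.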

Finally, the desired probability distribution is

 (3.2)   $p(X|\theta) = \dfrac{p(X_{\tau_0}|\theta)}{(2\pi)^{Mn/2}} \displaystyle\prod_{i=1}^{n} \dfrac{1}{\left| \Delta\tau K_i(\underline{X}) \Delta\tau + q_i\Delta\tau \right|^{1/2}} \exp\left( -\dfrac{1}{2} \left( \overline{X}^i - \underline{X}^i \right)^\top \left( \Delta\tau K_i(\underline{X}) \Delta\tau + q_i\Delta\tau \right)^{-1} \left( \overline{X}^i - \underline{X}^i \right) \right).$

Note that above it was implicitly assumed that the covariance $K_i(\underline{X})$ is positive definite. This assumption is only violated if $X_{\tau_k} = X_{\tau_l}$ for some $k \ne l$, or if the covariance function is degenerate. In this case the integral should be computed over a lower-dimensional variable $F_i$, but the end result would not change.

Note also that (3.2) corresponds to the finite-dimensional distribution of the continuous Euler scheme (2.6) evaluated at the discretisation points. Since (2.6) converges strongly to the solution of (2.1), the finite-dimensional distributions converge as well. This means that (3.2) is a finite-dimensional approximation of the distribution of the solution of (2.1).
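The distribution (3.2) can be evaluated as a sum of Gaussian log-densities, one per dimension. This minimal sketch assumes the squared exponential covariance (2.2), omits the initial-state term, and is our own illustration rather than BINGO's implementation:

```python
import numpy as np

def log_p_traj(X, dtau, gamma, beta, q):
    """Log of (3.2), up to the initial-state term: for each dimension i,
    the increments Xbar_i - Xund_i are Gaussian with covariance
    Dt K_i(Xund) Dt + q_i Dt."""
    Xund, Xbar = X[:-1], X[1:]                    # underline / overline parts
    Dt = np.diag(dtau)
    d2 = (Xund[:, None, :] - Xund[None, :, :]) ** 2
    M = len(dtau)
    logp = 0.0
    for i in range(X.shape[1]):
        K = gamma[i] * np.exp(-(d2 * beta[i]).sum(-1))   # kernel (2.2)
        C = Dt @ K @ Dt + q[i] * Dt                      # increment covariance
        r = Xbar[:, i] - Xund[:, i]
        _, logdet = np.linalg.slogdet(C)
        logp += -0.5 * (M * np.log(2 * np.pi) + logdet
                        + r @ np.linalg.solve(C, r))
    return logp

rng = np.random.default_rng(4)
X = np.cumsum(0.3 * rng.normal(size=(6, 2)), axis=0)     # toy trajectory
lp = log_p_traj(X, dtau=np.full(5, 0.5),
                gamma=np.array([1.0, 1.0]),
                beta=np.ones((2, 2)),
                q=np.array([0.1, 0.1]))
```

This is the quantity an MCMC sampler needs to evaluate for each proposed trajectory or hyperparameter value, which is why only unnormalised posteriors are required.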

## 4. Network inference method

Consider then the original problem, that is, estimating the hyperparameters from given time series data. Denote $Y = (y_1, \dots, y_m)$, where $y_j$ is assumed to be a noisy sample from the continuous trajectory, that is, $y_j = x(t_j) + v_j$, where $v_j$ is Gaussian noise with zero mean and covariance $R$, and $v_j$ and $v_l$ are independent when $j \ne l$. We intend to draw samples from the parameter posterior distribution using an MCMC scheme. Therefore, we only need the posterior distribution up to a multiplicative constant. Denoting the hyperparameters collectively by $\theta$, the hyperparameter posterior distribution is

$$p(\theta|Y) \propto p(Y, \theta) = \int p(Y, x, \theta)\,dx = \int p(Y|x, \theta)\, p(x|\theta)\, p(\theta)\,dx.$$

Here $p(Y|x,\theta)$ is the Gaussian measurement error distribution, $p(x|\theta)$ will be approximated by (3.2) for the discretised trajectory $X$, and $p(\theta)$ is a prior for the hyperparameters. This prior consists of independent priors for each parameter. The integration with respect to the trajectory is done by MCMC sampling. In the network inference algorithm, we consider only the squared exponential covariance function (2.2). The function $f_i$ has mean

$$m_i(x) = b_i - a_i x_i,$$

where $a_i$ and $b_i$ are regarded as nonnegative hyperparameters corresponding to basal transcription ($b_i$) and mRNA degradation ($a_i$).

For sampling the hyperparameters $\beta_{i,j}$ in the squared exponential covariance function (2.2), we introduce an indicator variable as in [1]. That is, each hyperparameter is a product $\beta_{i,j} = S_{i,j} H_{i,j}$, where $S_{i,j} \in \{0,1\}$ and $H_{i,j} \ge 0$. The state of the sampler consists of the indicator variable $S$, the hyperparameters ($i = 1, \dots, n$) $H_i$, $\gamma_i$, $q_i$, $a_i$, $b_i$, and the discrete trajectory $X$. They are sampled using a Gibbs sampler (or, more precisely, a Metropolis–Hastings within Gibbs sampler) as described below.

For the Gibbs sampler, notice that $p(X|\theta)$ given in (3.2) is readily factorised in the form

 (4.1)   $p(X|\theta) = \dfrac{p(X_{\tau_0}|\theta)}{(2\pi)^{Mn/2}} \displaystyle\prod_{i=1}^{n} P_i(S_i, H_i, \gamma_i, q_i, a_i, b_i, X).$

This factorisation makes it natural to sample $S$, $H$, $\gamma$, $a$, and $b$ one dimension at a time. However, each factor still depends on the full trajectory $X$, so the trajectory sampling is done separately. Also, when using the Crank–Nicolson sampling (see Section A.2), the sampling of $q$ is intertwined with the trajectory sampling, so they are sampled together. This two-phase sampling scheme is described in the following algorithm. Here the algorithm is presented in its basic form. Some ways to make the sampling more efficient are presented in Appendix A. We assume that the initial time coincides with the time of the first measurement $t_1$, so that $\tau_0 = t_1$. In the algorithm, this is included in the data fit term.
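The accept/reject mechanics of a Metropolis–Hastings-within-Gibbs scheme with reflected random-walk proposals (reflection keeps hyperparameters nonnegative) can be illustrated on a toy target. This is an illustrative stand-in with an invented posterior, not BINGO's actual sampler:

```python
import numpy as np

def mh_within_gibbs(log_post, theta0, n_iter, step, rng):
    """Metropolis-Hastings within Gibbs: update one coordinate at a time
    with a random-walk proposal, reflected at 0 to stay nonnegative.
    The reflected proposal is symmetric, so the plain MH ratio applies."""
    theta = np.array(theta0, float)
    chain = np.empty((n_iter, len(theta)))
    lp = log_post(theta)
    for it in range(n_iter):
        for j in range(len(theta)):
            prop = theta.copy()
            prop[j] = abs(prop[j] + rng.normal(scale=step))  # reflect negatives
            lp_prop = log_post(prop)
            if np.log(rng.uniform()) < lp_prop - lp:          # accept/reject
                theta, lp = prop, lp_prop
        chain[it] = theta
    return chain

rng = np.random.default_rng(3)
# toy posterior: two independent Exp(1) variables (mean 1 on [0, inf))
log_post = lambda th: -np.sum(th) if np.all(th >= 0) else -np.inf
chain = mh_within_gibbs(log_post, [1.0, 1.0], n_iter=2000, step=0.5, rng=rng)
```

In BINGO the per-coordinate acceptance ratio is formed from the factor $P_i$ in (4.1) times the hyperparameter prior, rather than this toy `log_post`.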

###### Algorithm 4.1.

Denote the samples by a parenthesised superindex, e.g., $X^{(l)}$ is the trajectory of the $l$th sample. The proposal samples are denoted by a hat.

Indicator and hyperparameter sampling:

For $i = 1, \dots, n$:

• Sample the $i$th row of $S$ by drawing $\hat{j}$ from the uniform distribution over $\{1, \dots, n\}$. Then

$$\hat{S}_{i,j} = \begin{cases} S^{(l)}_{i,j}, & \text{if } j \ne \hat{j}, \\ 1 - S^{(l)}_{i,j}, & \text{if } j = \hat{j}. \end{cases}$$
• Sample $H_i$, $\gamma_i$, $a_i$, and $b_i$ using random walk sampling, that is, add small changes to each component, drawn from a zero-mean normal distribution. If a proposal sample is negative, take its absolute value.

• Accept the proposal samples with probability

$$\frac{P_i(\hat{S}_i, \hat{H}_i, \hat{\gamma}_i, q^{(l)}_i, \hat{a}_i, \hat{b}_i, X^{(l)})\; p(\hat{S}_i, \hat{H}_i, \hat{\gamma}_i, q^{(l)}_i, \hat{a}_i, \hat{b}_i)}{P_i(S^{(l)}_i, H^{(l)}_i, \gamma^{(l)}_i, q^{(l)}_i, a^{(l)}_i, b^{(l)}_i, X^{(l)})\; p(S^{(l)}_i, H^{(l)}_i, \gamma^{(l)}_i, q^{(l)}_i, a^{(l)}_i, b^{(l)}_i)},$$

where $p$ is the hyperparameter prior, and the factors $P_i$ are defined in (4.1).

• Sample $R$ with random walk sampling, with acceptance probability

 p(^R)|