Non-parametric Bayesian modelling of digital gene expression data

01/17/2013 ∙ by Dimitrios V. Vavoulis, et al.

Next-generation sequencing technologies provide a revolutionary tool for generating gene expression data. Starting with a fixed RNA sample, they construct a library of millions of differentially abundant short sequence tags or "reads", which constitute a fundamentally discrete measure of the level of gene expression. A common limitation in experiments using these technologies is the low number or even absence of biological replicates, which complicates the statistical analysis of digital gene expression data. Analysis of this type of data has often been based on modified tests originally devised for analysing microarrays; both these and even de novo methods for the analysis of RNA-seq data are plagued by the common problem of low replication. We propose a novel, non-parametric Bayesian approach for the analysis of digital gene expression data. We begin with a hierarchical model for modelling over-dispersed count data and a blocked Gibbs sampling algorithm for inferring the posterior distribution of model parameters conditional on these counts. The algorithm compensates for the problem of low numbers of biological replicates by clustering together genes with tag counts that are likely sampled from a common distribution and using this augmented sample for estimating the parameters of this distribution. The number of clusters is not decided a priori, but it is inferred along with the remaining model parameters. We demonstrate the ability of this approach to model biological data with high fidelity by applying the algorithm to a public dataset obtained from cancerous and non-cancerous neural tissues.

1 Introduction

It is a common truth that our knowledge in Molecular Biology is only as good as the tools we have at our disposal. Next-generation or high-throughput sequencing technologies provide a revolutionary tool in aid of genomic studies by allowing the generation, in a relatively short time, of millions of short sequence tags, which reflect particular aspects of the molecular state of a biological system. A common application of these technologies is the study of the transcriptome, which involves a family of methodologies, including RNA-seq ([25]), CAGE (Cap Analysis of Gene Expression; [19]) and SAGE (Serial Analysis of Gene Expression; [22]). When compared to microarrays, this class of methodologies offers several advantages, including detection of a wider range of expression levels and independence from prior knowledge of the biological system, which is required by hybridisation-based technologies, such as microarrays.

Typically, an experiment in this category starts with the extraction of a snapshot RNA sample from the biological system of interest and its shearing into a large number of fragments of varying lengths. The population of these fragments is then reverse-transcribed into a cDNA library and sequenced on a high-throughput platform, generating large numbers of short DNA sequences known as “reads”. The ensuing analysis pipeline starts with mapping or aligning these reads to a reference genome. At the next stage, the mapped reads are summarised into gene-, exon- or transcript-level counts, normalised and further analysed for the detection of differential gene expression (see [15] for a review).

It is important to realize that the normalised read (or tag) count data generated from this family of methodologies represent the number of times a particular class of cDNA fragments has been sequenced, which is directly related to their abundance in the library and, in turn, to the abundance of the associated transcripts in the original sample. Thus, these count data are essentially a discrete or digital measure of gene expression, which is fundamentally different in nature (and, in general terms, superior in quality) to the continuous fluorescence intensity measurements obtained from the application of microarray technologies. Due to their better quality, next-generation sequencing assays are gradually replacing microarray-based technologies, despite their higher cost ([4]).

One approach to the analysis of count data of gene expression is to transform the counts to approximate normality and then apply existing methods aimed at the analysis of microarrays (see, for example, [21, 5]). However, as noted in [13], this approach may fail in the case of very small counts (which are far from normally distributed) and also because of the strong mean-variance relationship of count data, which is not taken into account by tests based on a normality assumption. Proper statistical modelling and analysis of count data of gene expression requires novel approaches, rather than the adaptation of existing methodologies, which were designed from the outset to process continuous input.

Formally, the generation of count data using next-generation sequencing assays can be thought of as random sampling of an underlying population of cDNA fragments. Thus, the counts for each tag describing a class of cDNA fragments can, in principle, be modelled using the Poisson distribution, whose variance is, by definition, equal to its mean. However, it has been shown that, in real count data of gene expression, the variance can be larger than what is predicted by the Poisson distribution ([12, 17, 18, 14]). One approach that accounts for this so-called “over-dispersion” in the data is to adopt quasi-likelihood methods, which augment the variance of the Poisson distribution with a scaling factor, thus bypassing the assumption of equality between the mean and variance ([2, 20, 24, 11]). An alternative approach is to use the Negative Binomial distribution, which is derived from the Poisson by assuming a Gamma-distributed rate parameter. The Negative Binomial distribution incorporates both a mean and a variance parameter, thus modelling over-dispersion in a natural way ([1, 7, 13]). An overview of existing methods for the analysis of gene expression count data can be found in [15] and [10].

Despite the decreasing cost of next-generation sequencing assays (and also due to technical and ethical restrictions), digital datasets of gene expression are often characterised by a small number of biological replicates or no replicates at all. Although this complicates any effort to statistically analyse the data, it has led to inventive attempts at estimating the biological variability in the data as accurately as possible given very small samples. One approach is to assume a locally linear relationship between the variance and the mean in the Negative Binomial distribution, which allows estimating the variance by pooling together data from genes with similar expression levels ([1]). Alternatively, one can make the rather restrictive assumption that all genes share the same variance, in which case the over-dispersion parameter in the Negative Binomial distribution can be estimated from a very large set of datapoints ([17]). A further elaboration of this approach is to assume a unique variance per gene and adopt a weighted-likelihood methodology for sharing information between genes, which allows for an improved estimation of the gene-specific over-dispersion parameters ([13]). Yet another distinct empirical Bayes approach is implemented in the software baySeq, which adopts a form of information sharing between genes by assuming the same prior distribution among the parameters of samples demonstrating a large degree of similarity ([7]).

In summary, proper statistical modelling and analysis of digital gene expression data requires the development of novel methods, which take into account both the discrete nature of this data and the typically small number (or even the absence) of biological replicates. The development of such methods is particularly urgent due to the huge amount of data being generated by high-throughput sequencing assays. In this paper, we present a method for modelling digital gene expression data that utilizes a novel form of information sharing between genes (based on non-parametric Bayesian clustering) to compensate for the all-too-common problem of low or no replication, which plagues most current analysis methods.

2 Approach

We propose a novel, non-parametric Bayesian approach for the analysis of digital gene expression data. Our point of departure is a hierarchical model for over-dispersed counts. The model is built around the Negative Binomial distribution, which depends, in our formulation, on two parameters: the mean and an over-dispersion parameter. We assume that these parameters are sampled from a Dirichlet process with a joint Inverse Gamma - Normal base distribution, which we have implemented using stick-breaking priors. By construction, the model imposes a clustering effect on the data, where all genes in the same cluster are statistically described by a unique Negative Binomial distribution. This can be thought of as a form of information sharing between genes, which permits pooling together data from genes in the same cluster for improved estimation of the mean and over-dispersion parameters, thus bypassing the problem of little or no replication. We develop a blocked Gibbs sampling algorithm for estimating the posterior distributions of the various free parameters in the model. These include the mean and over-dispersion for each gene and the number of clusters (and their occupancies), which does not need to be fixed a priori, as in alternative (parametric) clustering methods. In principle, the proposed method can be applied to various forms of digital gene expression data (including RNA-seq, CAGE, SAGE, Tag-seq, etc.) with little or no replication, and it is applied to one such example dataset herein.

3 Modelling over-dispersed count data

The digital gene expression data we are considering is arranged in an $N \times M$ matrix, where each of the $N$ rows corresponds to a different gene and each of the $M$ columns corresponds to a different sample. Furthermore, all samples are grouped in $L$ different classes (i.e. tissues or experimental conditions). It holds that $L \le M$, where the equality is true if there are no replicates in the data.

Figure 1: Format of digital gene expression data. Rows correspond to genes and columns correspond to samples. Samples are grouped into classes (e.g. tissues or experimental conditions). Each element $y_{ij}$ of the data matrix is a whole number indicating the number of counts or reads corresponding to the $i^{th}$ gene in the $j^{th}$ sample. The sum of the reads across all genes in a sample is the depth or exposure of that sample.

We indicate the number of reads for the $i^{th}$ gene in the $j^{th}$ sample with the variable $y_{ij}$. We assume that $y_{ij}$ is Poisson-distributed with a gene- and sample-specific rate parameter $\lambda_{ij}$. The rate parameter is assumed random itself and it is modelled using a Gamma distribution with shape parameter $\alpha_{ic(j)}$ and scale parameter $s_{ij}$. The function $c(j)$ in the subscript of the shape parameter maps the sample index $j$ to an integer indicating the class this sample belongs to. Thus, for a particular gene and class, the shape of the Gamma distribution is the same for all samples. Under this setup, the rate $\lambda_{ij}$ can be integrated (or marginalised) out, which gives rise to the Negative Binomial distribution with parameters $\alpha_{ic(j)}$ and $\mu_{ij}$ for the number of reads $y_{ij}$:

$$ P(y_{ij} \mid \alpha_{ic(j)}, \mu_{ij}) = \frac{\Gamma(y_{ij} + \alpha_{ic(j)})}{\Gamma(\alpha_{ic(j)})\, y_{ij}!} \left( \frac{\alpha_{ic(j)}}{\alpha_{ic(j)} + \mu_{ij}} \right)^{\alpha_{ic(j)}} \left( \frac{\mu_{ij}}{\alpha_{ic(j)} + \mu_{ij}} \right)^{y_{ij}} \qquad (1) $$

where $\mu_{ij}$ is the mean of the Negative Binomial distribution and $\sigma^2_{ij} = \mu_{ij} + \mu_{ij}^2 / \alpha_{ic(j)}$ is the variance. Since the variance is always larger than the mean by the quantity $\mu_{ij}^2 / \alpha_{ic(j)}$, the Negative Binomial distribution can be thought of as a generalisation of the Poisson distribution, which accounts for over-dispersion. Furthermore, we model the mean as $\mu_{ij} = d_j e^{\beta_{ic(j)}}$, where the offset $d_j$ is the depth or exposure of sample $j$ and $\beta_{ic(j)}$ is, similarly to $\alpha_{ic(j)}$, a gene- and class-specific parameter. This formulation ensures that $\mu_{ij}$ is always positive, as it ought to be.
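To make this parametrisation concrete, the following minimal sketch evaluates the Negative Binomial log-probability of Eq. 1 in the mean-dispersion form above (Python with NumPy/SciPy, the tools used for our implementation; the helper name nb_logpmf and the numerical values are illustrative, not part of the model):

```python
import numpy as np
from scipy.special import gammaln

def nb_logpmf(y, alpha, mu):
    """Log-pmf of the Negative Binomial of Eq. 1, parametrised by
    dispersion `alpha` and mean `mu` (variance = mu + mu**2 / alpha)."""
    y, alpha, mu = np.asarray(y), np.asarray(alpha), np.asarray(mu)
    return (gammaln(y + alpha) - gammaln(alpha) - gammaln(y + 1.0)
            + alpha * np.log(alpha / (alpha + mu))
            + y * np.log(mu / (alpha + mu)))

# Example: gene- and class-specific parameters (alpha, beta) and a sample
# with depth d_j give the mean mu_ij = d_j * exp(beta); values are made up.
alpha, beta, depth = 2.0, -9.0, 5.0e6
mu = depth * np.exp(beta)
print(nb_logpmf([3, 10, 100], alpha, mu))
```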

Given the model above, the likelihood of the observed reads $\mathbf{y}_{i\ell}$ for the $i^{th}$ gene in class $\ell$ is written as follows:

$$ p(\mathbf{y}_{i\ell} \mid \alpha_{i\ell}, \beta_{i\ell}) = \prod_{j:\, c(j) = \ell} P(y_{ij} \mid \alpha_{i\ell}, \mu_{ij}) \qquad (2) $$

where the index $j$ satisfies the condition $c(j) = \ell$. By extension, for the $i^{th}$ gene across all sample classes, the likelihood of the observed counts $\mathbf{y}_i$ is written as:

$$ p(\mathbf{y}_{i} \mid \boldsymbol{\alpha}_{i}, \boldsymbol{\beta}_{i}) = \prod_{\ell = 1}^{L} p(\mathbf{y}_{i\ell} \mid \alpha_{i\ell}, \beta_{i\ell}) \qquad (3) $$

where the class indicator $\ell$ runs across all $L$ classes.

3.1 Information sharing between genes

A common feature of digital gene expression data is the small number of biological replicates per class, which makes any attempt to estimate the gene- and class-specific parameters $(\alpha_{i\ell}, \beta_{i\ell})$ through standard likelihood methods a futile exercise. In order to make robust estimation of these parameters feasible, some form of information sharing between different genes is necessary. In the present context, information sharing between genes means that not all values of $(\alpha_{i\ell}, \beta_{i\ell})$ are distinct; different genes (or the same gene across different sample classes) may share the same values for these parameters. This idea can be expressed formally by assuming that $(\alpha_{i\ell}, \beta_{i\ell})$ is random with an infinite mixture of discrete random measures as its prior distribution:

$$ G = \sum_{k=1}^{\infty} \pi_k\, \delta_{(\alpha^*_k, \beta^*_k)} \qquad (4) $$

where $\delta_{(\alpha^*_k, \beta^*_k)}$ indicates a discrete random measure centered at $(\alpha^*_k, \beta^*_k)$ and $\pi_k$ is the corresponding weight. Conceptually, the fact that the above summation goes to infinity expresses our lack of prior knowledge regarding the number of components that appear in the mixture, other than the obvious restriction that their maximum number cannot be larger than the number of genes times the number of sample classes, $N \times L$.

In this formulation, the parameters $(\alpha^*_k, \beta^*_k)$ are sampled from a prior base distribution $G_0$ with hyper-parameters $\boldsymbol{\theta} = (a_0, b_0, m_0, \tau^2_0)$, i.e. $(\alpha^*_k, \beta^*_k) \sim G_0(\cdot \mid \boldsymbol{\theta})$. We assume that $\alpha^*_k$ is distributed according to an Inverse Gamma distribution with shape $a_0$ and scale $b_0$, while $\beta^*_k$ follows the Normal distribution with mean $m_0$ and variance $\tau^2_0$. Thus, $G_0$ is a joint distribution as follows:

$$ G_0(\alpha, \beta \mid \boldsymbol{\theta}) = \mathrm{InvGamma}(\alpha \mid a_0, b_0) \times \mathrm{Normal}(\beta \mid m_0, \tau^2_0) \qquad (5) $$

Given the above, $\alpha$ can take only positive values, as it ought to, while $\beta$ can take both positive and negative values.

What makes the mixture in Eq. 4 special is the procedure for generating the infinite sequence of mixing weights. We set $\pi_1 = v_1$ and $\pi_k = v_k \prod_{l=1}^{k-1} (1 - v_l)$ for $k > 1$, where the $v_k$ are random variables following the Beta distribution, i.e. $v_k \sim \mathrm{Beta}(a_k, b_k)$. This constructive way of sampling new mixing weights resembles a stick-breaking process; generating the first weight corresponds to breaking a stick of length 1 at position $v_1$; generating the second weight corresponds to breaking the remaining piece at position $v_2$ and so on. Thus, we write:

$$ \pi_1 = v_1, \qquad \pi_k = v_k \prod_{l=1}^{k-1} (1 - v_l) \quad \text{for } k > 1 \qquad (6) $$

There are various ways of defining the parameters $a_k$ and $b_k$. Here, we consider only the case where $a_k = 1$ and $b_k = \eta$, with $\eta > 0$. This parametrisation is equivalent to setting the prior of $(\alpha_{i\ell}, \beta_{i\ell})$ to a Dirichlet Process with base distribution $G_0$ and concentration parameter $\eta$. By construction, this procedure leads to a rapidly decreasing sequence of sampled weights, at a rate which depends on $\eta$. For values of $\eta$ much smaller than 1, the weights decrease rapidly with increasing $k$, only one or a few weights have significant mass and the parameters $(\alpha_{i\ell}, \beta_{i\ell})$ share a single or a small number of different values $(\alpha^*_k, \beta^*_k)$. For values of the concentration parameter much larger than 1, the weights decrease slowly with increasing $k$, many weights have significant mass and the values of $(\alpha_{i\ell}, \beta_{i\ell})$ tend to be all distinct from each other and distributed according to $G_0$. Below, we set $\eta = 1$, which results in a balanced decrease of the weight mass with increasing $k$. In particular, for $\eta = 1$, $\pi_k$ decreases (on average) in an unbiased manner with increasing $k$.
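The stick-breaking construction of Eq. 6 is straightforward to simulate. The sketch below draws a finite sequence of weights (the truncation level T anticipates the approximation introduced in the Inference section; the function is illustrative, not our actual implementation):

```python
import numpy as np

def stick_breaking_weights(eta, T, rng=None):
    """Draw T mixing weights pi_k from a truncated stick-breaking process
    with Beta(1, eta) breaks (Eq. 6); setting v_T = 1 ensures the weights
    sum to one."""
    rng = rng or np.random.default_rng(0)
    v = rng.beta(1.0, eta, size=T)
    v[-1] = 1.0                        # truncation: v_T = 1
    pieces = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * pieces                  # pi_k = v_k * prod_{l<k} (1 - v_l)

pi = stick_breaking_weights(eta=1.0, T=20)
print(pi.sum())    # 1.0: the weights add up to one
print(pi[:5])      # a rapidly decreasing sequence for small eta
```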

Given the above formulation, sampling $(\alpha_{i\ell}, \beta_{i\ell})$ from its prior distribution is straightforward. First, we introduce an indicator variable $z_{i\ell}$, which points to the value of $(\alpha^*_k, \beta^*_k)$ corresponding to the $i^{th}$ gene in class $\ell$. We sample such indicator variables for each gene in each class from the Categorical distribution, i.e. $z_{i\ell} \sim \mathrm{Categorical}(\pi_1, \pi_2, \dots)$, and set $(\alpha_{i\ell}, \beta_{i\ell}) = (\alpha^*_{z_{i\ell}}, \beta^*_{z_{i\ell}})$. Although $G_0$ is continuous, the distribution of $(\alpha_{i\ell}, \beta_{i\ell})$ is almost surely discrete and, therefore, its values are not all distinct. Different genes may share the same value of $(\alpha^*_k, \beta^*_k)$ and, thus, all genes are grouped in a finite (unknown) number of clusters, according to the value of $(\alpha^*_k, \beta^*_k)$ they share. Modelling digital gene expression data using this approach is one way to bypass the problem of few (or the absence of) biological replicates, since the data from all genes in the same cluster are pooled together for estimating the parameters that characterise this cluster. The clustering effect described in this section is illustrated in Fig. 2.

Figure 2: The clustering effect that results from imposing a stick-breaking prior on the gene- and class-specific model parameters $(\alpha_{i\ell}, \beta_{i\ell})$. A matrix of indicator variables $z_{i\ell}$ is used to cluster the observed count data into a finite number of groups, where the genes in each group share the same model parameters. The number of clusters is not known a priori. The distribution of weight mass among the various clusters in the model is determined by the concentration parameter $\eta$.

3.2 Generative model

The description in the previous paragraphs suggests a hierarchical model, which presumably underlies the stochastic generation of the data matrix in Fig. 1. This model is explicitly described below:

$$ \begin{aligned} y_{ij} &\sim \mathrm{NegBin}\!\left(\alpha^*_{z_{ic(j)}},\; d_j e^{\beta^*_{z_{ic(j)}}}\right) \\ (\alpha^*_k, \beta^*_k) &\sim G_0(\cdot \mid \boldsymbol{\theta}) \\ z_{i\ell} &\sim \mathrm{Categorical}(\pi_1, \pi_2, \dots) \\ \boldsymbol{\pi} &\sim \mathrm{Stick}(\eta) \end{aligned} \qquad (7) $$

At the bottom of the hierarchy, we identify the measured reads $y_{ij}$ for each gene in each sample, which follow a Negative Binomial distribution with parameters $(\alpha_{i\ell}, \beta_{i\ell})$. The parameters of the Negative Binomial distribution are gene- and class-specific and they are completely determined by an also gene- and class-specific indicator variable $z_{i\ell}$ and the centers $(\alpha^*_k, \beta^*_k)$ of the infinite mixture of point measures in Eq. 4. These centers are distributed according to a joint Inverse Gamma - Normal distribution with hyper-parameters $\boldsymbol{\theta}$, while the indicator variables are sampled from a Categorical distribution with weights $\boldsymbol{\pi} = (\pi_1, \pi_2, \dots)$. These are, in turn, sampled from a stick-breaking process with concentration parameter $\eta$. In this model, $Z = \{z_{i\ell}\}$, $\Theta^* = \{(\alpha^*_k, \beta^*_k)\}$, $\boldsymbol{\pi}$ and $\boldsymbol{\theta}$ are latent variables, which are subject to estimation based on the observed data.

4 Inference

At this point, we introduce some further notation. We indicate the matrix of indicator variables with the letter $Z$; $\Theta^*$ lists the centers $(\alpha^*_k, \beta^*_k)$ of the point measures in Eq. 4 and $\boldsymbol{\pi}$ is the vector of mixing weights.

We are interested in computing the joint posterior density $p(Z, \Theta^*, \boldsymbol{\pi}, \boldsymbol{\theta} \mid Y)$, where $Y$ is a matrix of count data as in Fig. 1. We approximate the above distribution through numerical (Monte Carlo) methods, i.e. by sampling a large number of $(Z, \Theta^*, \boldsymbol{\pi}, \boldsymbol{\theta})$-tuples from it. One way to achieve this is by constructing a Markov chain which admits $p(Z, \Theta^*, \boldsymbol{\pi}, \boldsymbol{\theta} \mid Y)$ as its stationary distribution. Such a Markov chain can be constructed using Gibbs sampling, which consists of alternating repeated sampling from the full conditional posteriors $p(\Theta^* \mid Z, \boldsymbol{\theta}, Y)$, $p(Z \mid \Theta^*, \boldsymbol{\pi}, Y)$, $p(\boldsymbol{\pi} \mid Z)$ and $p(\boldsymbol{\theta} \mid \Theta^*)$. Below, we explain how to sample from each of these conditional distributions.

Sampling from the conditional posterior $p(\Theta^* \mid Z, \boldsymbol{\theta}, Y)$

In order to sample from the above distribution it is convenient to truncate the infinite mixture in Eq. 4 by rejecting all terms with index larger than a truncation level $T$ and setting $\pi_T = 1 - \sum_{k=1}^{T-1} \pi_k$, which is equivalent to setting $v_T = 1$. It has been shown that the error associated with this approximation decays exponentially with increasing $T$ ([8]). For moderately large truncation levels the error is minimal, and the truncation should be virtually indistinguishable from the full (infinite) mixture.

Next, we distinguish between active clusters and inactive clusters: active clusters are those containing at least one gene, while those containing no genes are considered inactive. If $K_a$ is the number of active clusters, the remaining $T - K_a$ clusters are inactive.

Updating the inactive clusters is a simple matter of sampling $T - K_a$ times from the joint distribution in Eq. 5 given the hyper-parameters $\boldsymbol{\theta}$. Sampling the active clusters is more complicated and involves sampling each active cluster center individually from its respective posterior, $p(\alpha^*_k, \beta^*_k \mid Y_k, \boldsymbol{\theta}) \propto G_0(\alpha^*_k, \beta^*_k \mid \boldsymbol{\theta})\, p(Y_k \mid \alpha^*_k, \beta^*_k)$, where $Y_k$ is a matrix of measured count data for all genes in the $k^{th}$ active cluster. Sampling $(\alpha^*_k, \beta^*_k)$ is done using the Metropolis algorithm with acceptance probability:

$$ A = \min\left\{ 1,\; \frac{G_0(\alpha^{*\mathrm{can}}_k, \beta^{*\mathrm{can}}_k \mid \boldsymbol{\theta})\; p(Y_k \mid \alpha^{*\mathrm{can}}_k, \beta^{*\mathrm{can}}_k)}{G_0(\alpha^{*}_k, \beta^{*}_k \mid \boldsymbol{\theta})\; p(Y_k \mid \alpha^{*}_k, \beta^{*}_k)} \right\} \qquad (8) $$

where the superscript $\mathrm{can}$ indicates a candidate vector of parameters. Each of the two elements ($\alpha^*_k$ and $\beta^*_k$) of this vector is drawn from a symmetric proposal of the following form:

$$ \theta^{\mathrm{can}} = \theta^{\mathrm{curr}} + \sigma \epsilon \qquad (9) $$

where the random number $\epsilon$ is sampled from the standard Normal distribution, i.e. $\epsilon \sim \mathrm{Normal}(0, 1)$. The prior of $(\alpha^*_k, \beta^*_k)$ is a joint Inverse Gamma - Normal distribution, as shown in Eq. 5, while the likelihood function $p(Y_k \mid \alpha^*_k, \beta^*_k)$ is a product of Negative Binomial probability distributions, similar to those in Eqs. 2 and 3.
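A minimal sketch of this Metropolis step for a single active cluster is given below (it reuses the nb_logpmf helper from Section 3; the proposal scale sigma and all names are illustrative):

```python
import numpy as np
from scipy.stats import invgamma, norm

def log_post(alpha, beta, y, depths, theta):
    """Unnormalised log-posterior of an active cluster center: the joint
    Inverse Gamma - Normal prior (Eq. 5) times the Negative Binomial
    likelihood of the cluster's counts (cf. Eqs. 2-3)."""
    a0, b0, m0, tau2 = theta
    if alpha <= 0.0:
        return -np.inf                      # the InvGamma prior has no mass here
    lp = (invgamma.logpdf(alpha, a0, scale=b0)
          + norm.logpdf(beta, m0, np.sqrt(tau2)))
    mu = depths * np.exp(beta)              # per-observation means (Eq. 1)
    return lp + nb_logpmf(y, alpha, mu).sum()

def metropolis_step(alpha, beta, y, depths, theta, sigma=0.1, rng=None):
    """One random-walk Metropolis update of (alpha, beta), Eqs. 8-9."""
    rng = rng or np.random.default_rng()
    cand_a, cand_b = np.array([alpha, beta]) + sigma * rng.standard_normal(2)
    log_ratio = (log_post(cand_a, cand_b, y, depths, theta)
                 - log_post(alpha, beta, y, depths, theta))
    if np.log(rng.uniform()) < log_ratio:   # accept with prob min{1, ratio}
        return cand_a, cand_b
    return alpha, beta
```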

Sampling from the conditional posterior $p(Z \mid \Theta^*, \boldsymbol{\pi}, Y)$

Each element $z_{i\ell}$ of the matrix of indicator variables is sampled from a Categorical distribution with weights $\tilde{\pi}_{i\ell k}$, where $k = 1, \dots, T$ and:

$$ \tilde{\pi}_{i\ell k} \propto \pi_k\, p(\mathbf{y}_{i\ell} \mid \alpha^*_k, \beta^*_k) \qquad (10) $$

In the above expression, $\mathbf{y}_{i\ell}$ is the data for the $i^{th}$ gene in class $\ell$, as mentioned in a previous section. Notice that $z_{i\ell}$ can take any integer value between 1 and $T$ and that the weights $\tilde{\pi}_{i\ell k}$ depend both on the cluster weights $\pi_k$ and on the value of the likelihood function $p(\mathbf{y}_{i\ell} \mid \alpha^*_k, \beta^*_k)$.
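In practice, the weights of Eq. 10 are best handled in log-space for numerical stability. A sketch of the update for a single indicator (again reusing nb_logpmf; all names are illustrative):

```python
import numpy as np

def sample_indicator(y_il, depths_l, alphas, betas, pi, rng=None):
    """Draw z_il from the Categorical distribution of Eq. 10: each of the
    T cluster weights pi_k is multiplied by the Negative Binomial
    likelihood of gene i's counts in class l under cluster k."""
    rng = rng or np.random.default_rng()
    T = len(pi)
    logw = np.log(pi)
    for k in range(T):
        mu = depths_l * np.exp(betas[k])
        logw[k] += nb_logpmf(y_il, alphas[k], mu).sum()
    logw -= logw.max()                  # stabilise before exponentiating
    w = np.exp(logw)
    return rng.choice(T, p=w / w.sum())
```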

Sampling from the conditional posterior $p(\boldsymbol{\pi} \mid Z)$

The mixing weights are generated using a truncated stick-breaking process with $v_T = 1$. As pointed out in [8], this implies that $\boldsymbol{\pi}$ follows a generalised Dirichlet distribution. Considering the conjugacy between this and the multinomial distribution, the first step in updating $\boldsymbol{\pi}$ is to generate Beta-distributed random numbers:

$$ v_k \sim \mathrm{Beta}\!\left( 1 + n_k,\; \eta + \sum_{l=k+1}^{T} n_l \right) \qquad (11) $$

for $k = 1, \dots, T-1$, where $n_k$ is the total number of genes in the $k^{th}$ cluster. Notice that $n_k$ can be inferred from $Z$ by simple counting and that $\sum_{k=1}^{T} n_k = N L$, where $N$ is the total number of genes and $L$ the number of classes. $v_T$ is set equal to 1, in order to ensure that the weights add up to 1. The weights themselves are simply generated by setting $\pi_1 = v_1$ and $\pi_k = v_k \prod_{l=1}^{k-1} (1 - v_l)$, as mentioned in a previous section.
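A sketch of this update given the matrix of indicator variables, with the occupancies obtained by simple counting as described above (illustrative code):

```python
import numpy as np

def update_weights(z, T, eta, rng=None):
    """Resample the mixing weights (Eq. 11): draw v_k ~ Beta(1 + n_k,
    eta + sum_{l>k} n_l) from the cluster occupancies n_k implied by the
    indicator matrix z, set v_T = 1 and rebuild pi via Eq. 6."""
    rng = rng or np.random.default_rng()
    n = np.bincount(np.asarray(z).ravel(), minlength=T)          # occupancies n_k
    tail = np.concatenate((np.cumsum(n[::-1])[::-1][1:], [0]))   # sum_{l>k} n_l
    v = rng.beta(1.0 + n, eta + tail)
    v[-1] = 1.0                              # v_T = 1, so the weights sum to one
    return v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
```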

Sampling from the conditional posterior $p(\boldsymbol{\theta} \mid \Theta^*)$

The hyper-parameters $\boldsymbol{\theta} = (a_0, b_0, m_0, \tau^2_0)$ influence the observations indirectly through their effect on the distribution of the active cluster centers, $p(\Theta^*_a \mid \boldsymbol{\theta}) = \prod_{k=1}^{K_a} G_0(\alpha^*_k, \beta^*_k \mid \boldsymbol{\theta})$, where $\Theta^*_a$ denotes the set of active cluster centers. If we further assume independence between $(a_0, b_0)$ and $(m_0, \tau^2_0)$, we can write $p(\boldsymbol{\theta} \mid \Theta^*_a) \propto p(a_0, b_0 \mid \alpha^*_{1:K_a})\, p(m_0, \tau^2_0 \mid \beta^*_{1:K_a})$.

Assuming $K_a$ active clusters and considering that the prior for each $\alpha^*_k$ is an Inverse Gamma distribution (see Eq. 5), it follows that the posterior is:

$$ p(a_0, b_0 \mid \alpha^*_{1:K_a}) \propto p(a_0, b_0) \prod_{k=1}^{K_a} \mathrm{InvGamma}(\alpha^*_k \mid a_0, b_0) \qquad (12) $$

where the parameters of the hyper-prior $p(a_0, b_0)$ are all positive. Since sampling from Eq. 12 cannot be done exactly, we employ a Metropolis algorithm with acceptance probability

$$ A = \min\left\{ 1,\; \frac{p(a^{\mathrm{can}}_0, b^{\mathrm{can}}_0 \mid \alpha^*_{1:K_a})}{p(a_0, b_0 \mid \alpha^*_{1:K_a})} \right\} \qquad (13) $$

where the proposal distribution for sampling new candidate points has the same form as in Eq. 9.

Furthermore, taking advantage of the conjugacy between a Normal likelihood and a Normal - Inverse Gamma prior, the posterior probability for the parameters $m_0$ and $\tau^2_0$ becomes:

$$ (m_0, \tau^2_0) \mid \beta^*_{1:K_a} \sim \mathrm{Normal\text{-}InvGamma}(\mu_n, \lambda_n, c_n, d_n) \qquad (14) $$

The parameters $\mu_n$, $\lambda_n$, $c_n$ and $d_n$ (given initial parameters $\mu_0$, $\lambda_0$, $c_0$ and $d_0$) are as follows:

$$ \lambda_n = \lambda_0 + K_a, \qquad \mu_n = \frac{\lambda_0 \mu_0 + K_a \bar{\beta}}{\lambda_0 + K_a}, \qquad c_n = c_0 + \frac{K_a}{2}, \qquad d_n = d_0 + \frac{1}{2} \sum_{k=1}^{K_a} (\beta^*_k - \bar{\beta})^2 + \frac{\lambda_0 K_a (\bar{\beta} - \mu_0)^2}{2 (\lambda_0 + K_a)} $$

where $\bar{\beta} = K_a^{-1} \sum_k \beta^*_k$. Sampling a $(m_0, \tau^2_0)$-pair from the above posterior takes place in two simple steps: first, we sample $\tau^2_0 \sim \mathrm{InvGamma}(c_n, d_n)$, where $c_n$ and $d_n$ are shape and scale parameters, respectively. Then, we sample $m_0 \sim \mathrm{Normal}(\mu_n, \tau^2_0 / \lambda_n)$.
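This two-step draw is easy to implement; the sketch below codes the standard Normal - Inverse Gamma update stated above (the initial parameters mu0, lam0, c0 and d0 are placeholders):

```python
import numpy as np

def sample_m0_tau2(betas, mu0, lam0, c0, d0, rng=None):
    """Draw (m0, tau^2) from the Normal - Inverse Gamma posterior (Eq. 14)
    given the active cluster centers beta*_1, ..., beta*_Ka."""
    rng = rng or np.random.default_rng()
    betas = np.asarray(betas)
    Ka, bbar = len(betas), betas.mean()
    lam_n = lam0 + Ka
    mu_n = (lam0 * mu0 + Ka * bbar) / lam_n
    c_n = c0 + 0.5 * Ka
    d_n = (d0 + 0.5 * ((betas - bbar) ** 2).sum()
           + 0.5 * lam0 * Ka * (bbar - mu0) ** 2 / lam_n)
    tau2 = 1.0 / rng.gamma(c_n, 1.0 / d_n)   # tau^2 ~ InvGamma(c_n, d_n)
    m0 = rng.normal(mu_n, np.sqrt(tau2 / lam_n))
    return m0, tau2
```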

4.1 Algorithm

We summarise the algorithm for drawing samples from the posterior below. Notice that $x^{(t)}$ indicates the value of a quantity $x$ at the $t^{th}$ iteration of the algorithm; $x^{(0)}$ is the initial value of $x$.

  1. Set $t = 0$

  2. Set $\eta$, the concentration parameter

  3. Set $a^{(0)}_0$, $b^{(0)}_0$, $m^{(0)}_0$, $\tau^{2(0)}_0$

  4. Set $T$, the truncation level

  5. Sample $\Theta^{*(0)}$ from its prior (Eq. 5) conditional on $\boldsymbol{\theta}^{(0)}$

  6. Set all elements of $\boldsymbol{\pi}^{(0)}$ to the same value, i.e. $\pi^{(0)}_k = 1/T$

  7. Sample $Z^{(0)}$ from the Categorical distribution with weights $\boldsymbol{\pi}^{(0)}$

  8. For $t = 1, 2, \dots$

    1. Sample the active cluster centers of $\Theta^{*(t)}$ given $Z^{(t-1)}$, $\boldsymbol{\theta}^{(t-1)}$ and the data matrix $Y$ using a single step of the Metropolis algorithm for each active cluster (see Eq. 8)

    2. Sample the inactive cluster centers from their prior given $\boldsymbol{\theta}^{(t-1)}$ (see Eq. 5)

    3. Sample $Z^{(t)}$ given $\Theta^{*(t)}$, $\boldsymbol{\pi}^{(t-1)}$ and the data matrix $Y$ (see Eq. 10)

    4. Sample $\boldsymbol{\pi}^{(t)}$ given $Z^{(t)}$ (see Eq. 11)

    5. Sample $\boldsymbol{\theta}^{(t)}$ given $\Theta^{*(t)}$ (see Eqs. 12 and 14)

  9. Discard the first $t_b$ samples, which are produced during the burn-in period of the algorithm (i.e. before equilibrium is attained), and work with the remaining samples.

The above procedure implements a form of blocked Gibbs sampling, with embedded Metropolis steps for those conditional distributions that cannot be sampled from directly.
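For concreteness, the steps above can be assembled into the following skeleton for the special case of a single sample class (a sketch built from the illustrative helpers of the previous subsections; the loop organisation, the initial parameters in step 8.5 and the omission of the Metropolis update of Eqs. 12-13 are simplifications of ours, not features of the actual implementation):

```python
import numpy as np

def gibbs_single_class(Y, depths, theta, eta=1.0, T=50, iters=500, burnin=100,
                       rng=None):
    """Skeleton of the blocked Gibbs sampler for a single sample class.
    Y is an N x M count matrix, depths holds the M library sizes and
    theta = (a0, b0, m0, tau2) are the hyper-parameters."""
    rng = rng or np.random.default_rng(0)
    N = Y.shape[0]
    a0, b0, m0, tau2 = theta
    # steps 5-7: centers from the prior, uniform weights, random indicators
    alphas = 1.0 / rng.gamma(a0, 1.0 / b0, size=T)   # alpha* ~ InvGamma(a0, b0)
    betas = rng.normal(m0, np.sqrt(tau2), size=T)    # beta*  ~ Normal(m0, tau2)
    pi = np.full(T, 1.0 / T)
    z = rng.choice(T, p=pi, size=N)
    samples = []
    for t in range(iters):
        active = np.unique(z)
        # 8.1: Metropolis update of each active cluster center (Eqs. 8-9)
        for k in active:
            y_k = Y[z == k].ravel()                  # counts of member genes
            d_k = np.tile(depths, (z == k).sum())    # matching depths
            alphas[k], betas[k] = metropolis_step(alphas[k], betas[k],
                                                  y_k, d_k, theta, rng=rng)
        # 8.2: refresh the inactive centers from the prior (Eq. 5)
        inactive = np.setdiff1d(np.arange(T), active)
        alphas[inactive] = 1.0 / rng.gamma(a0, 1.0 / b0, size=len(inactive))
        betas[inactive] = rng.normal(m0, np.sqrt(tau2), size=len(inactive))
        # 8.3: resample the indicators (Eq. 10)
        for i in range(N):
            z[i] = sample_indicator(Y[i], depths, alphas, betas, pi, rng=rng)
        # 8.4: resample the weights (Eq. 11)
        pi = update_weights(z, T, eta, rng=rng)
        # 8.5: resample (m0, tau2) via Eq. 14 (illustrative initial parameters;
        # the (a0, b0) Metropolis update of Eqs. 12-13 is omitted for brevity)
        m0, tau2 = sample_m0_tau2(betas[active], 0.0, 1.0, 2.0, 1.0, rng=rng)
        theta = (a0, b0, m0, tau2)
        if t >= burnin:                              # step 9: drop burn-in
            samples.append((z.copy(), alphas.copy(), betas.copy()))
    return samples
```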

5 Results and Discussion

We have implemented the methodology described in the preceding sections in software and we have applied it to publicly available digital gene expression data (obtained from control and cancerous tissue cultures of neural stem cells; [6]) for evaluation purposes. The data we used in this study can be found at the following URL: http://genomebiology.com/content/supplementary/gb-2010-11-10-r106-s3.tgz. As shown in Table 1, this dataset consists of four libraries from glioblastoma-derived neural stem cells and two from non-cancerous neural stem cells. Each tissue culture was derived from a different subject. Thus, the samples are divided into two classes (cancerous and non-cancerous) with four and two replicates, respectively.

                Cancerous                      Non-cancerous
Genes       GliNS1   G144   G166   G179      CB541   CB660
13CDNA73         4      0      6      1          0       5
15E1.2          75     74    222    458        215     167
182-FIP        118    127    555    231        334     114

Table 1: Format of the data by [6]. The first four samples are from glioblastoma-derived neural stem cells, while the last two are from non-cancerous neural stem cells. The dataset contains a total of 18760 genes (i.e. rows).

We implemented the algorithm presented above in the programming language Python, using the libraries NumPy, SciPy and MatplotLib. Calculations were expressed as operations between arrays and the multiprocessing Python module was utilised in order to take full advantage of the parallel architecture of modern multicore processors. The algorithm was run for 200K iterations, which took approximately two days to complete on a 12-core desktop computer. Simulation results were saved to the disk every 50 iterations.

The raw simulation output includes chains of random values of the hyper-parameters $\boldsymbol{\theta}$, the gene- and class-specific indicators $Z$ and the active cluster centers $\Theta^*_a$, which constitute an approximation to the corresponding posterior distributions given the data matrix $Y$. The chains corresponding to the four different components of $\boldsymbol{\theta}$ are illustrated in Figure 3. It may be observed that these reached equilibrium early during the simulation (after less than 20K iterations) and they remained stable for the remainder of the simulation. As explained earlier, these hyper-parameters are important, because they determine the prior distributions of the cluster centers $\alpha^*_k$ and $\beta^*_k$ (hyper-parameters $(a_0, b_0)$ and $(m_0, \tau^2_0)$, respectively) and, subsequently, of the gene- and class-specific parameters $\alpha_{i\ell}$ and $\beta_{i\ell}$.

Figure 3: Simulation results after 200K iterations. The chains of random samples correspond to the components of the vector of hyper-parameters $\boldsymbol{\theta}$, i.e. $m_0$ and $\tau^2_0$ (panel A) and $a_0$ and $b_0$ (panel B). The former pair determines the Normal prior distribution of the cluster center parameters $\beta^*_k$, while the latter pair determines the Inverse Gamma prior distribution of the cluster center parameters $\alpha^*_k$. The random samples in each chain are drawn approximately from (and constitute an approximation of) the corresponding posterior distribution conditional on the data matrix $Y$.

From analysis of the chains in Figure 3, we obtain estimates (mean and standard deviation) for the hyper-parameters $a_0$, $b_0$, $m_0$ and $\tau^2_0$. The corresponding Inverse Gamma and Normal distributions, which are the priors of the cluster centers $\alpha^*_k$ and $\beta^*_k$, respectively, are illustrated in Figure 4.

Figure 4: Estimated Inverse Gamma (panel A) and Normal (panel B) prior distributions for the cluster parameters $\alpha^*_k$ and $\beta^*_k$, respectively. The solid lines indicate mean distributions, i.e. those obtained for the mean values of the hyper-parameters $a_0$, $b_0$, $m_0$ and $\tau^2_0$. The dashed lines are distributions obtained by adding or subtracting individually one standard deviation from each relevant hyper-parameter.

A major use of the methodology presented above is that it allows us to estimate the gene- and class-specific parameters $\alpha_{i\ell}$ and $\beta_{i\ell}$, under the assumption that the same values for these parameters are shared between different genes or even by the same gene among different sample classes. This form of information sharing permits pooling together data from different genes and classes for estimating pairs of $\alpha$ and $\beta$ parameters in a robust way, even when only a small number of replicates (or no replicates at all) are available per sample class. As an example, in Figure 5 we illustrate the chains of random samples for $\alpha$ and $\beta$ corresponding to the non-cancerous class of samples for the tag with ID 182-FIP (third row in Table 1). These samples constitute approximations of the posterior distributions of the corresponding parameters. Despite the very small number of replicates (only two), the variance of the random samples is finite. Similar chains were derived for each gene in the dataset, although it should be emphasised that the number of such estimates is smaller than the total number of genes, since more than one gene shares the same parameter estimates.

Figure 5: Chains of random samples approximating the posterior distributions of the parameters $\alpha$ (panel A) and $\beta$ (panel B) corresponding to the non-cancerous class of samples for the tag with ID 182-FIP (third row in Table 1). These samples were generated after 200K iterations of the algorithm. A similar pair of chains exists for each gene at each sample class (i.e. cancerous and non-cancerous), although not all pairs are distinct from each other due to the clustering effect imposed on the data by the algorithm.

It has already been mentioned that the sharing of $\alpha$ and $\beta$ parameter values between different genes can be viewed as a form of clustering (see Figure 2), i.e. there are different groups of genes, where all genes in a particular group share the same $\alpha$ and $\beta$ parameter values. As expected in a Bayesian inference framework, the number of clusters is not constant, but it is itself a random variable, which is characterised by its own posterior distribution and whose value fluctuates randomly from one iteration to the next. In Figure 6, we illustrate the chain of sampled cluster numbers during the course of the simulation (panel A). The first 75K iterations were discarded as burn-in and the remaining samples were used for plotting the histogram in panel B, which approximates the posterior distribution of the number of clusters given the data matrix $Y$. It may be observed that the number of clusters fluctuates between 35 and 55, with a peak at around 42 clusters. The algorithm we present above does not make any particular assumptions regarding the number of clusters, apart from the obvious one that this number cannot exceed the number of genes times the number of sample classes. Although the truncation level $T$ sets an artificial limit on the maximum number of clusters, this is never a problem in practice, since the actual estimated number of clusters is typically much smaller than the truncation level (see the y-axis in Figure 6A). The fact that the number of clusters is not decided a priori, but rather inferred along with the other free parameters in the model, sets the described methodology in an advantageous position with respect to alternative clustering algorithms, which require deciding the number of clusters at the beginning of the simulation ([9]).

Figure 6: Stochastic evolution of the number of clusters during 200K iterations of the simulation (panel A) and the resulting histogram after discarding the first 75K iterations as burn-in (panel B). After reaching equilibrium, the number of clusters fluctuates around a mean of approximately 43 clusters. In general, the estimated number of clusters is much smaller than the truncation level $T$ (see y-axis in panel A). The histogram in panel B approximates the posterior distribution of the number of clusters given the data matrix $Y$.

Similarly to the stochastic fluctuation in the number of clusters, the vector of cluster occupancies (i.e. the number of genes per cluster) is itself random. In Figure 7, we illustrate the cluster occupancies at two different stages of the simulation, i.e. after 100K and 200K iterations, respectively. We may observe that, with the exception of a single super-cluster (containing more than 6000 genes), cluster occupancies range from fewer than 1000 to around 3000 genes.

Figure 7: Cluster occupancies after 100K and 200K iterations of the algorithm. A single super-cluster (including more than 6000 genes) appears at both stages of the simulation. The occupancy of the remaining clusters demonstrates some variability during the course of the simulation, with clusters containing from fewer than 1000 to around 3000 genes.

It should be clarified that each cluster includes many (potentially, hundreds of) genes and it may span several classes. An individual cluster represents a Negative Binomial distribution (with concrete $\alpha^*$ and $\beta^*$ parameters), which models with high probability the count data from all its member genes. This is illustrated in Figure 8, where we show the histogram of the log of the count data from the first sample (sample GliNS1 in Table 1) along with a subset of the estimated clusters after 200K iterations (gray lines) and the fitted model (red line). It may be observed that each cluster models a subset of the gene expression data in the particular sample. The complete model describing the whole sample is a weighted sum of the individual clusters/Negative Binomial distributions. Formally,

$$ p(y_j) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{NegBin}\!\left( y_j \mid \alpha_{ic(j)},\; d_j e^{\beta_{ic(j)}} \right) \qquad (15) $$

where $j$ is the sample and the index $i$ runs over all $N$ genes. We repeat that not all $(\alpha_{ic(j)}, \beta_{ic(j)})$ pairs are distinct. Also, clusters with larger membership (i.e. including a larger number of genes) have larger weight in determining the overall model.
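To make Eq. 15 concrete, the sketch below evaluates the fitted model of a single sample by grouping genes that share the same cluster parameters (reusing nb_logpmf; names and structure are illustrative):

```python
import numpy as np

def sample_model_pmf(y_grid, alphas_g, betas_g, depth):
    """Evaluate the fitted model of one sample (Eq. 15): the average of the
    genes' Negative Binomial pmfs, which collapses to a weighted sum over
    clusters because many genes share the same (alpha, beta) pair."""
    y_grid = np.asarray(y_grid)
    pmf = np.zeros_like(y_grid, dtype=float)
    pairs, counts = np.unique(np.column_stack((alphas_g, betas_g)),
                              axis=0, return_counts=True)
    for (a, b), n in zip(pairs, counts):   # one term per distinct cluster
        pmf += (n / len(alphas_g)) * np.exp(nb_logpmf(y_grid, a,
                                                      depth * np.exp(b)))
    return pmf
```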

Figure 8: Histogram of the log of the number of reads from sample GliNS1, a subset of the estimated clusters (gray lines) and the estimated model of the sample at the end of the simulation (red line). Each cluster (gray line) represents a Negative Binomial distribution with specific $\alpha^*$ and $\beta^*$ parameters, which models a subset of the count data in this particular sample. The complete model (red line) is the weighted sum of all component clusters.

The proposed methodology provides a compact way to model each sample in a digital gene expression dataset following a two-step procedure: first, the dataset is partitioned into a finite number of clusters, where each cluster represents a Negative Binomial distribution (modelling a subset of the data) and the parameters of each such distribution are estimated. Subsequently, each sample in the dataset can be modelled as a weighted sum of Negative Binomial distributions. In Figure 9, we show the log of count data for each sample in the dataset shown in Table 1 along with the fitted models (red lines) after 200K iterations of the algorithm.

Figure 9: Histograms of the log of the number of reads from cancerous (panels Ai-iv) and non-cancerous (panels Bi,ii) samples and the respective estimated models after 200K iterations of the algorithm. As already mentioned, each red line is the weighted sum of many component Negative Binomial distributions/clusters, which model different subsets of each data sample. We may observe that the estimated models fit the corresponding data samples tightly.

6 Conclusion

Next-generation sequencing technologies are routinely being used for generating huge volumes of gene expression data in a relatively short time. These data are fundamentally discrete in nature and their analysis requires the development of novel statistical methods, rather than the modification of existing tests originally aimed at the analysis of microarrays. The development of such methods is an active area of research and several papers have been published on the subject (see [15] and [10] for an overview).

In this paper, we present a novel approach for modelling over-dispersed count data of gene expression (i.e. data with variance larger than the mean predicted by the Poisson distribution) using a hierarchical model based on the Negative Binomial distribution. The novel aspect of our approach is the use of a Dirichlet process, in the form of stick-breaking priors, for modelling the parameters (mean and over-dispersion) of the Negative Binomial distribution. By construction, this formulation forces clustering of the count data, where genes in the same cluster are sampled from the same Negative Binomial distribution, with a common pair of mean and over-dispersion parameters. Through this elegant form of information sharing between genes, we compensate for the problem of little or no replication, which often restricts the analysis of digital gene expression datasets. We have demonstrated the ability of this approach to accurately model real biological data by applying the proposed methodology to a publicly available dataset obtained from cancerous and non-cancerous cultured neural stem cells ([6]).

We show that inference in the proposed model is achieved through the application of a blocked Gibbs sampler, which includes estimating, among others, the gene- and class-specific mean and over-dispersion of the Negative Binomial distribution. Similarly, the number of clusters and their occupancies are inferred along with the remaining free parameters in the model.

Currently, the software implementing the proposed method remains relatively computationally expensive. In particular, 200K iterations require approximately two days to complete on a 12-core desktop computer. This time scale is not disproportionate to the production time of experimental data and it is mainly due to the high volume of the tested data (18760 genes per sample) and the need to obtain long chains of samples for a more accurate estimation of posterior distributions. Long execution times are a characteristic, more generally, of all Monte Carlo approximation methods. Our implementation of the algorithm is completely parallelised and calculations are expressed as operations between vectors, in order to take full advantage of modern multi-core computers. Ongoing work towards reducing execution times aims at the application of variational inference methods ([3]), instead of the blocked Gibbs sampler we currently use. The algorithm can be further improved by avoiding truncation of the infinite summation in Eq. 4, as described in [16] and [23].

This non-parametric Bayesian approach for modelling count data has thus shown great promise in handling over-dispersion and the all-too-common problem of low replication, both in theoretical evaluation and on the example dataset. The software that has been produced will be of great utility for the study of digital gene expression data, and the statistical theory will contribute to the development of non-parametric methods for modelling count data of gene expression in general.

Acknowledgement

The authors would like to thank Prof. Peter Green and Dr. Richard Goldstein for useful discussions. Also, we would like to thank P. G. Engstrom and colleagues for producing the public data we used in this paper.

Funding:

This work was supported by grants EPSRC EP/H032436/1 and BBSRC G022771/1.

References

  • [1] Simon Anders and Wolfgang Huber. Differential expression analysis for sequence count data. Genome Biol, 11(10):R106, 2010.
  • [2] P. L. Auer and R. W. Doerge. A Two-Stage Poisson Model for Testing RNA-Seq Data. Statistical Applications in Genetics and Molecular Biology, 10(1):26, 2011.
  • [3] David M Blei and Michael I Jordan. Variational inference for Dirichlet process mixtures. Bayesian Analysis, 1(1):121–144, 2006.
  • [4] Piero Carninci. Is sequencing enlightenment ending the dark age of the transcriptome? Nat Methods, 6(10):711–13, Oct 2009.
  • [5] Nicole Cloonan, Alistair R R Forrest, Gabriel Kolle, Brooke B A Gardiner, Geoffrey J Faulkner, Mellissa K Brown, Darrin F Taylor, Anita L Steptoe, Shivangi Wani, Graeme Bethel, Alan J Robertson, Andrew C Perkins, Stephen J Bruce, Clarence C Lee, Swati S Ranade, Heather E Peckham, Jonathan M Manning, Kevin J McKernan, and Sean M Grimmond. Stem cell transcriptome profiling via massive-scale mRNA sequencing. Nat Methods, 5(7):613–9, Jul 2008.
  • [6] Pär G Engström, Diva Tommei, Stefan H Stricker, Christine Ender, Steven M Pollard, and Paul Bertone. Digital transcriptome profiling of normal and glioblastoma-derived neural stem cells identifies genes associated with patient survival. Genome Med, 4(10):76, Oct 2012.
  • [7] Thomas J Hardcastle and Krystyna A Kelly. baySeq: empirical Bayesian methods for identifying differential expression in sequence count data. BMC Bioinformatics, 11:422, 2010.
  • [8] Hemant Ishwaran and Lancelot F. James. Gibbs Sampling Methods for Stick-Breaking Priors. Journal of the American Statistical Association, 96(453):161–173, 2001.
  • [9] Daxin Jiang, Chun Tang, and Aidong Zhang. Cluster analysis for gene expression data: A survey. IEEE Trans. Knowl. Data Eng., 16(11):1370–1386, 2004.
  • [10] Vanessa M Kvam, Peng Liu, and Yaqing Si. A comparison of statistical methods for detecting differentially expressed genes from RNA-seq data. Am J Bot, 99(2):248–56, Feb 2012.
  • [11] Ben Langmead, Kasper D Hansen, and Jeffrey T Leek. Cloud-scale RNA-sequencing differential expression analysis with Myrna. Genome Biol, 11(8):R83, 2010.
  • [12] Jun Lu, John K Tomfohr, and Thomas B Kepler. Identifying differential expression in multiple SAGE libraries: an overdispersed log-linear model approach. BMC Bioinformatics, 6:165, 2005.
  • [13] Davis J McCarthy, Yunshun Chen, and Gordon K Smyth. Differential expression analysis of multifactor RNA-seq experiments with respect to biological variation. Nucleic Acids Res, 40(10):4288–97, May 2012.
  • [14] Ugrappa Nagalakshmi, Zhong Wang, Karl Waern, Chong Shou, Debasish Raha, Mark Gerstein, and Michael Snyder. The transcriptional landscape of the yeast genome defined by RNA sequencing. Science, 320(5881):1344–9, Jun 2008.
  • [15] Alicia Oshlack, Mark D Robinson, and Matthew D Young. From RNA-seq reads to differential expression results. Genome Biol, 11(12):220, 2010.
  • [16] O Papaspiliopoulos and G O Roberts. Retrospective MCMC for Dirichlet process hierarchical models. Biometrika, 95:169–186, 2008.
  • [17] Mark D Robinson and Gordon K Smyth. Moderated statistical tests for assessing differences in tag abundance. Bioinformatics, 23(21):2881–7, Nov 2007.
  • [18] Mark D Robinson and Gordon K Smyth. Small-sample estimation of negative binomial dispersion, with applications to SAGE data. Biostatistics, 9(2):321–32, Apr 2008.
  • [19] Toshiyuki Shiraki, Shinji Kondo, Shintaro Katayama, Kazunori Waki, Takeya Kasukawa, Hideya Kawaji, Rimantas Kodzius, Akira Watahiki, Mari Nakamura, Takahiro Arakawa, Shiro Fukuda, Daisuke Sasaki, Anna Podhajska, Matthias Harbers, Jun Kawai, Piero Carninci, and Yoshihide Hayashizaki. Cap analysis gene expression for high-throughput analysis of transcriptional starting point and identification of promoter usage. Proc Natl Acad Sci U S A, 100(26):15776–81, Dec 2003.
  • [20] Sudeep Srivastava and Liang Chen. A two-parameter generalized poisson model to improve the analysis of rna-seq data. Nucleic Acids Res, 38(17):e170, Sep 2010.
  • [21] Peter A C ’t Hoen, Yavuz Ariyurek, Helene H Thygesen, Erno Vreugdenhil, Rolf H A M Vossen, Renée X de Menezes, Judith M Boer, Gert-Jan B van Ommen, and Johan T den Dunnen. Deep sequencing-based expression analysis shows major advances in robustness, resolution and inter-lab portability over five microarray platforms. Nucleic Acids Res, 36(21):e141, Dec 2008.
  • [22] V E Velculescu, L Zhang, B Vogelstein, and K W Kinzler. Serial analysis of gene expression. Science, 270(5235):484–7, Oct 1995.
  • [23] S Walker. Sampling the dirichlet mixture model with slices. Comm Statist Sim Comput, 36:45–54, 2007.
  • [24] Likun Wang, Zhixing Feng, Xi Wang, Xiaowo Wang, and Xuegong Zhang. DEGseq: an R package for identifying differentially expressed genes from RNA-seq data. Bioinformatics, 26(1):136–8, Jan 2010.
  • [25] Zhong Wang, Mark Gerstein, and Michael Snyder. RNA-seq: a revolutionary tool for transcriptomics. Nat Rev Genet, 10(1):57–63, Jan 2009.