dgeclust
Hierarchical Non-Parametric Bayesian Clustering of Digital Expression Data
Next-generation sequencing technologies provide a revolutionary tool for generating gene expression data. Starting with a fixed RNA sample, they construct a library of millions of differentially abundant short sequence tags or "reads", which constitute a fundamentally discrete measure of the level of gene expression. A common limitation in experiments using these technologies is the low number or even absence of biological replicates, which complicates the statistical analysis of digital gene expression data. Analysis of this type of data has often been based on modified tests originally devised for analysing microarrays; both these and even de novo methods for the analysis of RNA-seq data are plagued by the common problem of low replication. We propose a novel, non-parametric Bayesian approach for the analysis of digital gene expression data. We begin with a hierarchical model for modelling over-dispersed count data and a blocked Gibbs sampling algorithm for inferring the posterior distribution of model parameters conditional on these counts. The algorithm compensates for the problem of low numbers of biological replicates by clustering together genes with tag counts that are likely sampled from a common distribution and using this augmented sample for estimating the parameters of this distribution. The number of clusters is not decided a priori, but it is inferred along with the remaining model parameters. We demonstrate the ability of this approach to model biological data with high fidelity by applying the algorithm on a public dataset obtained from cancerous and non-cancerous neural tissues.
It is a common truth that our knowledge in Molecular Biology is only as good as the tools we have at our disposal. Next-generation or high-throughput sequencing technologies provide a revolutionary tool in aid of genomic studies by allowing the generation, in a relatively short time, of millions of short sequence tags, which reflect particular aspects of the molecular state of a biological system. A common application of these technologies is the study of the transcriptome, which involves a family of methodologies, including RNA-seq ([25]), CAGE (Cap Analysis of Gene Expression; [19]) and SAGE (Serial Analysis of Gene Expression; [22]). When compared to microarrays, this class of methodologies offers several advantages, including detection of a wider range of expression levels and independence from prior knowledge of the biological system, which hybridisation-based technologies, such as microarrays, require.
Typically, an experiment in this category starts with the extraction of a snapshot RNA sample from the biological system of interest and its shearing into a large number of fragments of varying lengths. The population of these fragments is then reverse-transcribed into a cDNA library and sequenced on a high-throughput platform, generating large numbers of short DNA sequences known as "reads". The ensuing analysis pipeline starts with mapping or aligning these reads on a reference genome. At the next stage, the mapped reads are summarised into gene-, exon- or transcript-level counts, normalised and further analysed for detecting differential gene expression (see [15] for a review).
It is important to realise that the normalised read (or tag) count data generated from this family of methodologies represents the number of times a particular class of cDNA fragments has been sequenced, which is directly related to their abundance in the library and, in turn, the abundance of the associated transcripts in the original sample. Thus, this count data is essentially a discrete or digital measure of gene expression, which is fundamentally different in nature (and, in general terms, superior in quality) to the continuous fluorescence intensity measurements obtained from the application of microarray technologies. Due to their better quality, next-generation sequencing assays are increasingly replacing microarray-based technologies, despite their higher cost ([4]).
One approach for the analysis of count data of gene expression is to transform the counts to approximate normality and then apply existing methods aimed at the analysis of microarrays (see, for example, [21, 5]). However, as noted in [13], this approach may fail in the case of very small counts (which are far from normally distributed) and also because of the strong mean-variance relationship of count data, which tests based on a normality assumption do not take into account. Proper statistical modelling and analysis of count data of gene expression requires novel approaches, rather than the adaptation of existing methodologies that were designed from the outset for continuous input.
Formally, the generation of count data using next-generation sequencing assays can be thought of as random sampling of an underlying population of cDNA fragments. Thus, the counts for each tag describing a class of cDNA fragments can, in principle, be modelled using the Poisson distribution, whose variance is, by definition, equal to its mean. However, it has been shown that, in real count data of gene expression, the variance can be larger than what is predicted by the Poisson distribution ([12, 17, 18, 14]). One approach that accounts for this so-called "over-dispersion" in the data is to adopt quasi-likelihood methods, which augment the variance of the Poisson distribution with a scaling factor, thus bypassing the assumption of equality between the mean and variance ([2, 20, 24, 11]). An alternative approach is to use the Negative Binomial distribution, which is derived from the Poisson by assuming a Gamma-distributed rate parameter. The Negative Binomial distribution incorporates both a mean and a variance parameter, thus modelling over-dispersion in a natural way ([1, 7, 13]). An overview of existing methods for the analysis of gene expression count data can be found in [15] and [10].

Despite the decreasing cost of next-generation sequencing assays (and also due to technical and ethical restrictions), digital datasets of gene expression are often characterised by a small number of biological replicates or no replicates at all. Although this complicates any effort to statistically analyse the data, it has led to inventive attempts at estimating the biological variability in the data as accurately as possible given very small samples. One approach is to assume a locally linear relationship between the variance and the mean in the Negative Binomial distribution, which allows estimating the variance by pooling together data from genes with similar expression levels ([1]). Alternatively, one can make the rather restrictive assumption that all genes share the same variance, in which case the over-dispersion parameter in the Negative Binomial distribution can be estimated from a very large set of datapoints ([17]). A further elaboration of this approach is to assume a unique variance per gene and adopt a weighted-likelihood methodology for sharing information between genes, which allows for an improved estimation of the gene-specific over-dispersion parameters ([13]). Yet another distinct empirical Bayes approach is implemented in the software baySeq, which adopts a form of information sharing between genes by assuming the same prior distribution among the parameters of samples demonstrating a large degree of similarity ([7]).
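The over-dispersion captured by the Gamma-Poisson (Negative Binomial) construction is easy to demonstrate numerically. The Python sketch below (using NumPy; the sample size, mean and over-dispersion values are arbitrary choices for illustration) simulates plain Poisson counts and Gamma-Poisson counts with the same mean and compares their variances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000            # number of simulated counts (arbitrary)
mean, phi = 50.0, 0.2  # target mean and over-dispersion (arbitrary)

# Pure Poisson: variance equals the mean.
poisson_counts = rng.poisson(mean, size=n)

# Gamma-Poisson mixture: rate ~ Gamma(shape=1/phi, scale=phi*mean),
# so the marginal counts are Negative Binomial with
# variance = mean + phi * mean**2 > mean.
rates = rng.gamma(shape=1.0 / phi, scale=phi * mean, size=n)
nb_counts = rng.poisson(rates)

print(poisson_counts.var())  # close to 50
print(nb_counts.var())       # close to 50 + 0.2 * 50**2 = 550
```

The extra variance of the mixed counts is exactly the quadratic term that the quasi-likelihood and Negative Binomial approaches discussed above are designed to absorb.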
In summary, proper statistical modelling and analysis of digital gene expression data requires the development of novel methods, which take into account both the discrete nature of this data and the typically small number (or even the absence) of biological replicates. The development of such methods is particularly urgent due to the huge amount of data being generated by high-throughput sequencing assays. In this paper, we present a method for modelling digital gene expression data that utilises a novel form of information sharing between genes (based on non-parametric Bayesian clustering) to compensate for the all-too-common problem of low or no replication, which plagues most current analysis methods.
We propose a novel, non-parametric Bayesian approach for the analysis of digital gene expression data. Our point of departure is a hierarchical model for over-dispersed counts. The model is built around the Negative Binomial distribution, which depends, in our formulation, on two parameters: the mean and an over-dispersion parameter. We assume that these parameters are sampled from a Dirichlet process with a joint Inverse Gamma - Normal base distribution, which we have implemented using stick-breaking priors. By construction, the model imposes a clustering effect on the data, where all genes in the same cluster are statistically described by a unique Negative Binomial distribution. This can be thought of as a form of information sharing between genes, which permits pooling together data from genes in the same cluster for improved estimation of the mean and over-dispersion parameters, thus bypassing the problem of little or no replication. We develop a blocked Gibbs sampling algorithm for estimating the posterior distributions of the various free parameters in the model. These include the mean and over-dispersion for each gene and the number of clusters (and their occupancies), which does not need to be fixed a priori, as in alternative (parametric) clustering methods. In principle, the proposed method can be applied to various forms of digital gene expression data (including RNA-seq, CAGE, SAGE, Tag-seq, etc.) with little or no replication, and it is applied to one such example dataset herein.
The digital gene expression data we are considering is arranged in an N × M matrix, where each of the N rows corresponds to a different gene and each of the M columns corresponds to a different sample. Furthermore, all samples are grouped in L different classes (i.e. tissues or experimental conditions). It holds that L ≤ M, where the equality is true if there are no replicates in the data.
We indicate the number of reads for the i-th gene at the j-th sample with the variable y_{ij}. We assume that y_{ij} is Poisson-distributed with a gene- and sample-specific rate parameter λ_{ij}. The rate parameter is assumed random itself and is modelled using a Gamma distribution with shape parameter 1/α_{ic(j)} and scale parameter α_{ic(j)} μ_{ij}. The function c(·) in the subscript of the shape parameter maps the sample index j to an integer indicating the class this sample belongs to. Thus, for a particular gene and class, the shape of the Gamma distribution is the same for all samples. Under this setup, the rate λ_{ij} can be integrated (or marginalised) out, which gives rise to the Negative Binomial distribution with parameters α_{ic(j)} and μ_{ij} for the number of reads y_{ij}:
(1)   p(y_{ij} \mid \alpha, \mu_{ij}) = \frac{\Gamma(y_{ij} + 1/\alpha)}{\Gamma(1/\alpha)\, y_{ij}!} \left( \frac{1}{1 + \alpha \mu_{ij}} \right)^{1/\alpha} \left( \frac{\alpha \mu_{ij}}{1 + \alpha \mu_{ij}} \right)^{y_{ij}}, \qquad \alpha = \alpha_{ic(j)}
where μ_{ij} is the mean of the Negative Binomial distribution and σ²_{ij} = μ_{ij} + α_{ic(j)} μ²_{ij} is the variance. Since the variance is always larger than the mean by the quantity α_{ic(j)} μ²_{ij}, the Negative Binomial distribution can be thought of as a generalisation of the Poisson distribution, which accounts for over-dispersion. Furthermore, we model the mean as μ_{ij} = s_j e^{β_{ic(j)}}, where the offset s_j is the depth or exposure of sample j and β_{ic(j)} is, similarly to α_{ic(j)}, a gene- and class-specific parameter. This formulation ensures that μ_{ij} is always positive, as it ought to be.
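To make the mean/over-dispersion parametrisation concrete, the following standard-library sketch evaluates a Negative Binomial pmf with mean μ = s·e^β and variance μ + αμ² and checks it numerically; the depth, log-mean and over-dispersion values are arbitrary illustrations, not values estimated from the data:

```python
import math

def nb_logpmf(y, mu, alpha):
    """Log-pmf of the Negative Binomial with mean mu and
    variance mu + alpha * mu**2 (the parametrisation assumed above)."""
    r = 1.0 / alpha                       # "size" parameter
    p = alpha * mu / (1.0 + alpha * mu)   # success probability for counts
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(1.0 - p) + y * math.log(p))

# Hypothetical values: library depth s_j, log-mean beta, over-dispersion alpha.
s, beta, alpha = 1.5, 3.0, 0.15
mu = s * math.exp(beta)                   # mean is positive by construction

# Numerical sanity check: the pmf sums to one and reproduces the stated mean.
ys = range(0, 2000)
probs = [math.exp(nb_logpmf(y, mu, alpha)) for y in ys]
print(sum(probs))                             # close to 1.0
print(sum(y * q for y, q in zip(ys, probs)))  # close to mu ≈ 30.1
```

The same log-pmf is the building block of the likelihoods used throughout the remainder of the model.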
Given the model above, the likelihood of the observed reads for the i-th gene in class ℓ is written as follows:
(2)   p(\mathbf{y}_i^{(\ell)} \mid \alpha_{i\ell}, \beta_{i\ell}) = \prod_{j : c(j) = \ell} p(y_{ij} \mid \alpha_{i\ell}, \beta_{i\ell})
where the index j satisfies the condition c(j) = ℓ. By extension, for the i-th gene across all sample classes, the likelihood of the observed counts is written as:
(3)   p(\mathbf{y}_i \mid \boldsymbol{\theta}_i) = \prod_{\ell=1}^{L} p(\mathbf{y}_i^{(\ell)} \mid \alpha_{i\ell}, \beta_{i\ell})
where the class indicator ℓ runs across all L classes.
A common feature of digital gene expression data is the small number of biological replicates per class, which makes any attempt to estimate the gene- and class-specific parameters α_{iℓ} and β_{iℓ} through standard likelihood methods a futile exercise. In order to make robust estimation of these parameters feasible, some form of information sharing between different genes is necessary. In the present context, information sharing between genes means that not all values of θ_{iℓ} = (α_{iℓ}, β_{iℓ}) are distinct; different genes (or the same gene across different sample classes) may share the same values for these parameters. This idea can be expressed formally by assuming that θ_{iℓ} is random, with an infinite mixture of discrete random measures as its prior distribution:
(4)   p(\theta) = \sum_{k=1}^{\infty} \pi_k\, \delta_{\theta^*_k}(\theta)
where δ_{θ*_k} indicates a discrete random measure (a point mass) centered at θ*_k and π_k is the corresponding weight. Conceptually, the fact that the above summation goes to infinity expresses our lack of prior knowledge regarding the number of components that appear in the mixture, other than the obvious restriction that their maximum number cannot be larger than the number of genes times the number of sample classes.
In this formulation, the parameters θ*_k = (α*_k, β*_k) are sampled from a prior base distribution G₀ with hyper-parameters ξ, i.e. θ*_k ~ G₀(ξ). We assume that α*_k is distributed according to an Inverse Gamma distribution with shape a and scale b, while β*_k follows the Normal distribution with mean m and variance τ². Thus, G₀ is a joint distribution as follows:

(5)   p(\alpha^*_k, \beta^*_k \mid \xi) = \mathrm{InvGamma}(\alpha^*_k;\, a, b)\; \mathcal{N}(\beta^*_k;\, m, \tau^2), \qquad \xi = (a, b, m, \tau^2)

Given the above, α*_k can take only positive values, as it ought to, while β*_k can take both positive and negative values.
What makes the mixture in Eq. 4 special is the procedure for generating the infinite sequence of mixing weights. We set π₁ = V₁ and π_k = V_k ∏_{l<k} (1 − V_l) for k > 1, where the V_k are random variables following the Beta distribution, i.e. V_k ~ Beta(ζ_k, η_k). This constructive way of sampling new mixing weights resembles a stick-breaking process: generating the first weight corresponds to breaking a stick of length 1 at position V₁; generating the second weight corresponds to breaking the remaining piece at position V₂, and so on. Thus, we write:

(6)   \pi_k = V_k \prod_{l=1}^{k-1} (1 - V_l), \qquad V_k \sim \mathrm{Beta}(\zeta_k, \eta_k)

There are various ways of defining the parameters ζ_k and η_k. Here, we consider only the case where ζ_k = 1 and η_k = η, with η > 0. This parametrisation is equivalent to setting the prior of θ to a Dirichlet Process with base distribution G₀ and concentration parameter η. By construction, this procedure leads to a rapidly decreasing sequence of sampled weights, at a rate which depends on η. For values of η much smaller than 1, the weights decrease rapidly with increasing k, only one or a few weights have significant mass and the parameters θ_{iℓ} share a single or a small number of different values θ*_k. For values of the concentration parameter much larger than 1, the weights decrease slowly with increasing k, many weights have significant mass and the values of θ_{iℓ} tend to be all distinct from each other and distributed according to G₀. Below, we set η = 1, which results in a balanced decrease of the weight mass with increasing k; in particular, for η = 1, π_k decreases (on average) in an unbiased manner with increasing k.
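The stick-breaking construction can be sketched in a few lines of Python (NumPy; the truncation level and random seed are arbitrary), illustrating how the concentration parameter controls how fast the weights decay:

```python
import numpy as np

def stick_breaking(eta, T, rng):
    """Draw T mixing weights from a truncated stick-breaking process
    with concentration parameter eta (the last break V_T is set to 1,
    so the weights sum to one)."""
    V = rng.beta(1.0, eta, size=T)
    V[-1] = 1.0  # truncation: the last break takes the remaining stick
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    return V * remaining

rng = np.random.default_rng(1)
for eta in (0.1, 1.0, 10.0):
    w = stick_breaking(eta, T=100, rng=rng)
    # Small eta -> mass concentrates on a few components;
    # large eta -> mass spreads over many components.
    print(eta, round(w.max(), 3), (w > 0.01).sum())
```

Running the loop makes the role of η visible directly: the largest weight shrinks and the count of non-negligible weights grows as η increases.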
Given the above formulation, sampling θ from its prior distribution is straightforward. First, we introduce an indicator variable z_{iℓ}, which points to the value of θ* corresponding to the i-th gene in class ℓ. We sample one such indicator variable for each gene in each class from the Categorical distribution, i.e. z_{iℓ} ~ Categorical(π₁, π₂, …), and set θ_{iℓ} = θ*_{z_{iℓ}}. Although G₀ is continuous, the distribution of θ is almost surely discrete and, therefore, its values are not all distinct. Different genes may share the same value of θ*_k and, thus, all genes are grouped in a finite (unknown) number of clusters, according to the value of θ*_k they share. Modelling digital gene expression data using this approach is one way to bypass the problem of few (or the absence of) replicates, since the data from all genes in the same cluster are pooled together for estimating the parameters that characterise this cluster. The clustering effect described in this section is illustrated in Fig. 2.
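The clustering effect induced by the discreteness of the mixture can be illustrated with a small simulation (NumPy; all hyper-parameter values below are hypothetical): many gene/class pairs draw indicators from a finite set of components, so ties, and hence clusters, appear automatically:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 20                               # truncation level (arbitrary)
weights = rng.dirichlet(np.ones(T))  # stand-in for stick-breaking weights

# Draw cluster centers (alpha*, beta*) from the base distribution G0:
# alpha* ~ InverseGamma(a, b) (sampled as 1/Gamma), beta* ~ Normal(m, tau).
a, b, m, tau = 2.0, 1.0, 0.0, 1.0    # hypothetical hyper-parameters
alpha_star = 1.0 / rng.gamma(a, 1.0 / b, size=T)
beta_star = rng.normal(m, tau, size=T)

# Indicator variables for N genes in L classes each pick one center;
# repeated indicator values are what creates the clusters.
N, L = 1000, 2
z = rng.choice(T, size=(N, L), p=weights)
n_clusters = np.unique(z).size
print(n_clusters)  # at most T, typically well below N * L
```

Every gene/class pair whose indicator lands on the same component shares the same (α*, β*) pair, exactly the information-sharing mechanism described above.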
The description in the previous paragraphs suggests a hierarchical model, which presumably underlies the stochastic generation of the data matrix in Fig. 1. This model is explicitly described below:
(7)   \begin{aligned} y_{ij} \mid z_{ic(j)}, \Theta^* &\sim \mathrm{NegBin}\big(\alpha^*_{z_{ic(j)}}, \beta^*_{z_{ic(j)}}\big) \\ \theta^*_k \mid \xi &\sim G_0(\xi) \\ z_{i\ell} \mid \boldsymbol{\pi} &\sim \mathrm{Categorical}(\boldsymbol{\pi}) \\ \boldsymbol{\pi} \mid \eta &\sim \mathrm{Stick}(\eta) \end{aligned}
At the bottom of the hierarchy, we identify the measured reads for each gene in each sample, which follow a Negative Binomial distribution with parameters θ*_{z_{ic(j)}}. The parameters of the Negative Binomial distribution are gene- and class-specific and they are completely determined by an also gene- and class-specific indicator variable z_{iℓ} and the centers θ*_k of the infinite mixture of point measures in Eq. 4. These centers are distributed according to a joint Inverse Gamma and Normal distribution with hyper-parameters ξ, while the indicator variables are sampled from a Categorical distribution with weights π. These are, in turn, sampled from a stick-breaking process with concentration parameter η. In this model, the indicators, the cluster centers, the mixing weights and ξ are latent variables, which are subject to estimation based on the observed data.
At this point, we introduce some further notation. We indicate the matrix of indicator variables with the letter Z; Θ* = (θ*₁, θ*₂, …) lists the centers of the point measures in Eq. 4 and π = (π₁, π₂, …) is the vector of mixing weights.
We are interested in computing the joint posterior density p(Z, Θ*, π, ξ | Y), where Y is a matrix of count data as in Fig. 1. We approximate the above distribution through numerical (Monte Carlo) methods, i.e. by sampling a large number of (Z, Θ*, π, ξ)-tuples from it. One way to achieve this is by constructing a Markov chain which admits p(Z, Θ*, π, ξ | Y) as its stationary distribution. Such a Markov chain can be constructed using Gibbs sampling, which consists of alternating repeated sampling from the full conditional posteriors of Θ*, Z, π and ξ. Below, we explain how to sample from each of these conditional distributions.

In order to sample from the above distribution, it is convenient to truncate the infinite mixture in Eq. 4 by rejecting all terms with index larger than a truncation level T and setting π_T equal to the remaining weight mass, which is equivalent to setting V_T = 1. It has been shown that the error associated with this approximation is bounded above by 4N exp(−(T−1)/η) ([8]). For the truncation levels used here this error is minimal, and the truncation should be virtually indistinguishable from the full (infinite) mixture.
Next, we distinguish between active and inactive clusters. Active clusters are those whose index appears at least once in the matrix of indicators Z, i.e. those containing at least one gene, while those containing no genes are considered inactive.
Updating the inactive clusters is a simple matter of sampling T − |A| times (where |A| is the number of active clusters) from the joint distribution in Eq. 5, given the hyper-parameters ξ. Sampling the active clusters is more complicated and involves sampling each active cluster center θ*_k individually from its respective posterior, p(θ*_k | Y_k, ξ) ∝ p(Y_k | θ*_k) p(θ*_k | ξ), where Y_k is the matrix of measured count data for all genes in the k-th active cluster. Sampling θ*_k is done using the Metropolis algorithm with acceptance probability:

(8)   A(\theta^{*\prime}_k \mid \theta^*_k) = \min\left\{ 1,\; \frac{p(Y_k \mid \theta^{*\prime}_k)\, p(\theta^{*\prime}_k \mid \xi)}{p(Y_k \mid \theta^*_k)\, p(\theta^*_k \mid \xi)} \right\}

where the prime superscript indicates a candidate vector of parameters. Each of the two elements (α*_k and β*_k) of this vector is drawn from a symmetric proposal of the following form:

(9)   \theta^{*\prime} = \theta^* + \sigma \epsilon

where the random number ε is sampled from the standard Normal distribution, i.e. ε ~ N(0, 1). The prior p(θ*_k | ξ) is a joint Inverse Gamma - Normal distribution, as shown in Eq. 5, while the likelihood function p(Y_k | θ*_k) is a product of Negative Binomial probability distributions, similar to those in Eqs. 2 and 3.

Each element z_{iℓ} of the matrix of indicator variables Z is sampled from a Categorical distribution with weights π̃₁, …, π̃_T, where:

(10)   \tilde{\pi}_k \propto \pi_k\, p(\mathbf{y}_i^{(\ell)} \mid \theta^*_k)

In the above expression, y_i^{(ℓ)} is the data for the i-th gene in class ℓ, as mentioned in a previous section. Notice that z_{iℓ} can take any integer value between 1 and T, and that the weights π̃_k depend both on the cluster weights π_k and on the value of the likelihood function p(y_i^{(ℓ)} | θ*_k).
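Conditional on the current weights and cluster centers, updating a single indicator variable therefore reduces to one categorical draw, with weights proportional to the mixing weight times the likelihood of the gene's counts under each cluster. A sketch follows (NumPy plus the standard library; the cluster parameters and weights are hypothetical, and the max-shift guards against underflow when exponentiating log-weights):

```python
import math
import numpy as np

def nb_loglik(counts, alpha, mu):
    """Log-likelihood of counts under NegBin(mean mu, var mu + alpha*mu^2)."""
    r, p = 1.0 / alpha, alpha * mu / (1.0 + alpha * mu)
    return sum(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
               + r * math.log(1.0 - p) + y * math.log(p) for y in counts)

def sample_indicator(counts, weights, alpha_star, mu_star, rng):
    """Draw z for one gene/class: P(z = k) is proportional to
    weights[k] * likelihood(counts | cluster k)."""
    logw = np.log(weights) + np.array(
        [nb_loglik(counts, a, m) for a, m in zip(alpha_star, mu_star)])
    logw -= logw.max()            # log-sum-exp shift against underflow
    probs = np.exp(logw)
    probs /= probs.sum()
    return rng.choice(len(weights), p=probs)

rng = np.random.default_rng(3)
counts = [118, 127, 555, 231]          # e.g. the cancerous replicates of 182-FIP
weights = np.array([0.5, 0.3, 0.2])    # three hypothetical clusters
alpha_star = np.array([0.1, 0.5, 0.05])
mu_star = np.array([10.0, 250.0, 3000.0])
print(sample_indicator(counts, weights, alpha_star, mu_star, rng))
```

With these hypothetical centers, the middle cluster (mean 250) dominates the posterior weights for counts of this magnitude, so the draw is almost deterministic.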
The mixing weights are generated using a truncated stick-breaking process with V_T = 1. As pointed out in [8], this implies that π follows a generalised Dirichlet distribution. Considering the conjugacy between this and the multinomial distribution, the first step in updating π is to generate T − 1 Beta-distributed random numbers:

(11)   V_k \sim \mathrm{Beta}\!\left( 1 + n_k,\; \eta + \sum_{l=k+1}^{T} n_l \right)

for k = 1, …, T − 1, where n_k is the total number of genes in the k-th cluster. Notice that n_k can be inferred from Z by simple counting, and that the n_k add up to the number of genes times the number of sample classes. V_T is set equal to 1, in order to ensure that the weights add up to 1. The weights themselves are then generated by setting π₁ = V₁ and π_k = V_k ∏_{l<k} (1 − V_l), as mentioned in a previous section.
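The conjugate update of the truncated stick-breaking weights can be sketched as follows (NumPy; the occupancy vector is a hypothetical example of cluster sizes counted from the indicator matrix):

```python
import numpy as np

def update_weights(n_k, eta, rng):
    """Posterior draw of the truncated stick-breaking weights:
    V_k ~ Beta(1 + n_k, eta + sum_{l>k} n_l) for k < T, with V_T = 1."""
    n_k = np.asarray(n_k, dtype=float)
    # tail[k] = sum of occupancies of all clusters after k
    tail = np.concatenate((np.cumsum(n_k[::-1])[::-1][1:], [0.0]))
    V = rng.beta(1.0 + n_k, eta + tail)
    V[-1] = 1.0                    # truncation: last break takes the rest
    return V * np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))

rng = np.random.default_rng(4)
occupancies = [6000, 3000, 1200, 400, 0, 0]   # hypothetical cluster sizes
w = update_weights(occupancies, eta=1.0, rng=rng)
print(w.round(3))  # heavily occupied clusters receive most of the mass
```

Empty clusters are not discarded: they keep a small but non-zero weight, which is what allows genes to migrate into them at later iterations.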
The hyper-parameters ξ = (a, b, m, τ²) influence the observations indirectly, through their effect on the distribution of the active cluster centers: p(ξ | Θ*_A) ∝ p(Θ*_A | ξ) p(ξ), where Θ*_A = {θ*_k : k ∈ A} and A is the set of active clusters. If we further assume independence between (a, b) and (m, τ²), we can write p(ξ | Θ*_A) = p(a, b | α*_A) p(m, τ² | β*_A).
Assuming K active clusters and considering that the prior for each α*_k is an Inverse Gamma distribution (see Eq. 5), it follows that the posterior of (a, b) is:

(12)   p(a, b \mid \alpha^*_A) \propto p(a, b) \prod_{k \in A} \mathrm{InvGamma}(\alpha^*_k;\, a, b)

where the prior parameters of a and b are all positive. Since sampling from Eq. 12 cannot be done exactly, we employ a Metropolis algorithm with acceptance probability

(13)   A(a', b' \mid a, b) = \min\left\{ 1,\; \frac{p(a', b' \mid \alpha^*_A)}{p(a, b \mid \alpha^*_A)} \right\}

where the proposal distribution for sampling new candidate points has the same form as in Eq. 9.
Furthermore, taking advantage of the conjugacy between a Normal likelihood and a Normal-Inverse Gamma prior, the posterior probability for the parameters m and τ² becomes:

(14)   p(m, \tau^2 \mid \beta^*_A) = \mathcal{N}\!\left( m;\; m_K,\, \tau^2 / \kappa_K \right)\, \mathrm{InvGamma}\!\left( \tau^2;\; \nu_K,\, s_K \right)

The updated parameters m_K, κ_K, ν_K and s_K (given initial parameters m₀, κ₀, ν₀ and s₀) are as follows:

m_K = (κ₀ m₀ + K β̄) / (κ₀ + K),   κ_K = κ₀ + K,   ν_K = ν₀ + K/2,
s_K = s₀ + (1/2) ∑_{k∈A} (β*_k − β̄)² + κ₀ K (β̄ − m₀)² / (2 (κ₀ + K)),

where β̄ = (1/K) ∑_{k∈A} β*_k. Sampling an (m, τ²)-pair from the above posterior takes place in two simple steps: first, we sample τ² ~ InvGamma(ν_K, s_K), where ν_K and s_K are shape and scale parameters, respectively. Then, we sample m ~ N(m_K, τ²/κ_K).
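Under this Normal-Inverse-Gamma conjugacy, drawing an (m, τ²) pair is a two-step exact simulation. The sketch below uses the standard conjugate update formulas; the initial parameters and the active-cluster values are hypothetical:

```python
import numpy as np

def sample_m_tau(beta_active, m0, k0, nu0, s0, rng):
    """Draw (m, tau2) from the Normal-Inverse-Gamma posterior given the
    beta* values of the active clusters (standard conjugate update)."""
    beta_active = np.asarray(beta_active, dtype=float)
    K, bbar = beta_active.size, beta_active.mean()
    k_n = k0 + K
    m_n = (k0 * m0 + K * bbar) / k_n
    nu_n = nu0 + K / 2.0
    s_n = (s0 + 0.5 * ((beta_active - bbar) ** 2).sum()
           + k0 * K * (bbar - m0) ** 2 / (2.0 * k_n))
    tau2 = 1.0 / rng.gamma(nu_n, 1.0 / s_n)   # InverseGamma(nu_n, s_n) draw
    m = rng.normal(m_n, np.sqrt(tau2 / k_n))  # Normal draw given tau2
    return m, tau2

rng = np.random.default_rng(5)
beta_active = rng.normal(2.0, 0.5, size=40)   # hypothetical active-cluster centers
m, tau2 = sample_m_tau(beta_active, m0=0.0, k0=1.0, nu0=1.0, s0=1.0, rng=rng)
print(round(m, 2), round(tau2, 2))            # a single posterior draw
```

Because this update is exact, no Metropolis step is needed for (m, τ²), in contrast to the (a, b) update above.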
We summarise the algorithm for drawing samples from the posterior below. Notice that a superscript (t) indicates the value of a variable at the t-th iteration of the algorithm, and a superscript (0) its initial value.

Set t = 0
Set the initial hyper-parameters ξ⁽⁰⁾
Set the initial parameters of the hyper-priors, which are all positive where required
Set T, the truncation level
Sample Θ*⁽⁰⁾ from its prior (Eq. 5) conditional on ξ⁽⁰⁾
Set all elements of π⁽⁰⁾ to the same value, i.e. π_k⁽⁰⁾ = 1/T
Sample Z⁽⁰⁾ from the Categorical distribution with weights π⁽⁰⁾
For t = 1, 2, …: sample, in turn, Θ*⁽ᵗ⁾, Z⁽ᵗ⁾, π⁽ᵗ⁾ and ξ⁽ᵗ⁾ from their full conditional posteriors, as described above
Discard the samples produced during the burn-in period of the algorithm (i.e. before equilibrium is attained) and work with the remaining samples.
The above procedure implements a form of blocked Gibbs sampling, with embedded Metropolis steps for those conditional distributions that cannot be sampled from directly.
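At a high level, one sweep of the sampler can be organised as follows (a structural sketch only; the sample_* helpers are hypothetical stand-ins for the conditional updates described above):

```python
# Hypothetical skeleton of the blocked Gibbs sampler; the sample_* callables
# stand in for the conditional updates described in the text.
def blocked_gibbs(Y, T, eta, n_iter, save_every, sample_centers,
                  sample_indicators, sample_weights, sample_hypers, init):
    theta, z, pi, xi = init(Y, T)              # initialisation steps above
    samples = []
    for t in range(1, n_iter + 1):
        theta = sample_centers(Y, z, xi, T)    # active + inactive clusters
        z = sample_indicators(Y, theta, pi)    # categorical draws per gene/class
        pi = sample_weights(z, eta, T)         # truncated stick-breaking update
        xi = sample_hypers(theta, z)           # hyper-parameter update
        if t % save_every == 0:                # thin the chain to disk/memory
            samples.append((theta, z, pi, xi))
    return samples
```

The thinning step mirrors the implementation described below, where results were saved every 50 iterations; burn-in samples are discarded afterwards, before any posterior summaries are computed.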
We have implemented the methodology described in the preceding sections in software, and we have applied it to publicly available digital gene expression data (obtained from control and cancerous tissue cultures of neural stem cells; [6]) for evaluation purposes. The data we used in this study can be found at the following URL: http://genomebiology.com/content/supplementary/gb-2010-11-10-r106-s3.tgz. As shown in Table 1, this dataset consists of four libraries from glioblastoma-derived neural stem cells and two from non-cancerous neural stem cells. Each tissue culture was derived from a different subject. Thus, the samples are divided into two classes (cancerous and non-cancerous) with four and two replicates, respectively.
      | Cancerous                   | Non-cancerous
Genes | GliNS1 | G144 | G166 | G179 | CB541 | CB660
13CDNA73 | 4 | 0 | 6 | 1 | 0 | 5 |
15E1.2 | 75 | 74 | 222 | 458 | 215 | 167 |
182-FIP | 118 | 127 | 555 | 231 | 334 | 114 |
⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ | ⋮ |
We implemented the algorithm presented above in the programming language Python, using the libraries NumPy, SciPy and MatplotLib. Calculations were expressed as operations between arrays and the multiprocessing Python module was utilised in order to take full advantage of the parallel architecture of modern multicore processors. The algorithm was run for 200K iterations, which took approximately two days to complete on a 12-core desktop computer. Simulation results were saved to the disk every 50 iterations.
The raw simulation output includes chains of random values of the hyper-parameters ξ, the gene- and class-specific indicators Z and the active cluster centers Θ*, which constitute an approximation to the corresponding posterior distributions given the data matrix Y. The chains corresponding to the four different components of ξ are illustrated in Figure 3. It may be observed that these reached equilibrium early during the simulation (after fewer than 20K iterations) and remained stable for the remainder of the simulation. As explained earlier, these hyper-parameters are important because they determine the prior distributions of the cluster centers (the Inverse Gamma and Normal components of the base distribution, respectively) and, subsequently, of the gene- and class-specific mean and over-dispersion parameters.
Analysis of the chains in Figure 3 yields point estimates (means and standard deviations) for each of these hyper-parameters. The corresponding Inverse Gamma and Normal distributions, which are the priors of the cluster centers, are illustrated in Figure 4.

A major use of the methodology presented above is that it allows us to estimate the gene- and class-specific mean and over-dispersion parameters, under the assumption that the same values for these parameters are shared between different genes, or even by the same gene among different sample classes. This form of information sharing permits pooling together data from different genes and classes for estimating pairs of mean and over-dispersion parameters in a robust way, even when only a small number of replicates (or no replicates at all) are available per sample class. As an example, in Figure 5 we illustrate the chains of random samples of the two parameters corresponding to the non-cancerous class of samples for the tag with ID 182-FIP (third row in Table 1). These samples constitute approximations of the posterior distributions of the corresponding parameters. Despite the very small number of replicates (only two in this class), the variance of the random samples is finite. Similar chains were derived for each gene in the dataset, although it should be emphasised that the number of distinct estimates is smaller than the total number of genes, since more than one gene may share the same parameter estimates.
It has already been mentioned that the sharing of parameter values between different genes can be viewed as a form of clustering (see Figure 2), i.e. there are different groups of genes, where all genes in a particular group share the same mean and over-dispersion parameter values. As expected in a Bayesian inference framework, the number of clusters is not constant; it is itself a random variable, which is characterised by its own posterior distribution, and its value fluctuates randomly from one iteration to the next. In Figure 6, we illustrate the chain of sampled cluster numbers during the course of the simulation (panel A). The first 75K iterations were discarded as burn-in and the remaining samples were used for plotting the histogram in panel B, which approximates the posterior distribution of the number of clusters given the data matrix Y. It may be observed that the number of clusters fluctuates between 35 and 55, with a peak at around 42 clusters. The algorithm we present above does not make any particular assumptions regarding the number of clusters, apart from the obvious one that this number cannot exceed the number of genes times the number of sample libraries. Although the truncation level sets an artificial limit on the maximum number of clusters, this is never a problem in practice, since the actual estimated number of clusters is typically much smaller than the truncation level (see the y-axis in Figure 6A). The fact that the number of clusters is not decided a priori, but rather inferred along with the other free parameters in the model, sets the described methodology in an advantageous position with respect to alternative clustering algorithms, which require deciding the number of clusters at the beginning of the simulation ([9]).

Similarly to the stochastic fluctuation in the number of clusters, the vector of cluster occupancies (i.e. the number of genes per cluster) is random. In Figure 7, we illustrate the cluster occupancies at two different stages of the simulation, i.e. after 100K and 200K iterations, respectively. We may observe that, with the exception of a single super-cluster (containing more than 6000 genes), cluster occupancies range from around 3000 down to fewer than 1000 genes.
It should be clarified that each cluster includes many (potentially hundreds of) genes and may span several classes. An individual cluster represents a Negative Binomial distribution (with concrete mean and over-dispersion parameters), which models with high probability the count data of all its member genes. This is illustrated in Figure 8, where we show the histogram of the log of the count data from the first sample (sample GliNS1 in Table 1), along with a subset of the estimated clusters after 200K iterations (gray lines) and the fitted model (red line). It may be observed that each cluster models a subset of the gene expression data in the particular sample. The complete model describing the whole sample is a weighted sum of the individual clusters / Negative Binomial distributions. Formally,
(15)   p(y \mid \text{sample } j) = \frac{1}{N} \sum_{i=1}^{N} \mathrm{NegBin}\big( y;\; \alpha_{ic(j)}, \beta_{ic(j)} \big)
where j is the sample index and the index i runs over all N genes. We repeat that not all (α, β) pairs are distinct. Also, clusters with larger membership (i.e. including a larger number of genes) have larger weight in determining the overall model.
The proposed methodology provides a compact way to model each sample in a digital gene expression dataset following a two-step procedure: first, the dataset is partitioned into a finite number of clusters, where each cluster represents a Negative Binomial distribution (modelling a subset of the data) and the parameters of each such distribution are estimated. Subsequently, each sample in the dataset can be modelled as a weighted sum of Negative Binomial distributions. In Figure 9, we show the log of count data for each sample in the dataset shown in Table 1 along with the fitted models (red lines) after 200K iterations of the algorithm.
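A fitted per-sample density of the kind shown in Figure 9 can be assembled from the cluster estimates as an occupancy-weighted sum of Negative Binomial pmfs; the sketch below uses hypothetical cluster parameters and the mean/over-dispersion parametrisation assumed earlier:

```python
import math

def nb_pmf(y, alpha, mu):
    """NegBin pmf with mean mu and variance mu + alpha * mu**2."""
    r, p = 1.0 / alpha, alpha * mu / (1.0 + alpha * mu)
    return math.exp(math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
                    + r * math.log(1.0 - p) + y * math.log(p))

# Hypothetical clusters: (occupancy, over-dispersion, mean for this sample).
clusters = [(6000, 0.8, 20.0), (3000, 0.4, 300.0), (1000, 0.2, 5000.0)]
total = sum(n for n, _, _ in clusters)

def sample_model(y):
    """Occupancy-weighted mixture of the cluster distributions."""
    return sum(n / total * nb_pmf(y, a, mu) for n, a, mu in clusters)

print(sum(sample_model(y) for y in range(200)))  # mass of the low-count region
```

As in the fitted models of Figure 9, the occupancy weights make heavily populated clusters dominate the shape of the per-sample density.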
Next-generation sequencing technologies are routinely being used for generating huge volumes of gene expression data in a relatively short time. These data are fundamentally discrete in nature and their analysis requires the development of novel statistical methods, rather than the modification of existing tests that were originally aimed at the analysis of microarrays. The development of such methods is an active area of research and several papers have been published on the subject (see [15] and [10] for an overview).
In this paper, we present a novel approach for modelling over-dispersed count data of gene expression (i.e. data with variance larger than the mean predicted by the Poisson distribution) using a hierarchical model based on the Negative Binomial distribution. The novel aspect of our approach is the use of a Dirichlet process, in the form of stick-breaking priors, for modelling the parameters (mean and over-dispersion) of the Negative Binomial distribution. By construction, this formulation forces clustering of the count data, where genes in the same cluster are sampled from the same Negative Binomial distribution, with a common pair of mean and over-dispersion parameters. Through this elegant form of information sharing between genes, we compensate for the problem of little or no replication, which often restricts the analysis of digital gene expression datasets. We have demonstrated the ability of this approach to accurately model real biological data by applying the proposed methodology to a publicly available dataset obtained from cancerous and non-cancerous cultured neural stem cells ([6]).
We show that inference in the proposed model is achieved through the application of a blocked Gibbs sampler, which includes estimating, among others, the gene- and class-specific mean and over-dispersion parameters of the Negative Binomial distribution. Similarly, the number of clusters and their occupancies are inferred along with the remaining free parameters in the model.
Currently, the software implementing the proposed method remains relatively computationally expensive. In particular, 200K iterations require approximately two days to complete on a 12-core desktop computer. This time scale is not disproportionate to the production time of experimental data, and it is mainly due to the large number of genes per sample and the need to obtain long chains of samples for a more accurate estimation of posterior distributions. Long execution times are characteristic, more generally, of all Monte Carlo approximation methods. Our implementation of the algorithm is completely parallelised and calculations are expressed as operations between vectors, in order to take full advantage of modern multi-core computers. Ongoing work towards reducing execution times aims at the application of variational inference methods ([3]) instead of the blocked Gibbs sampler we currently use. The algorithm can be further improved by avoiding truncation of the infinite summation described in Equation 4, as described in [16] and in [23].
This non-parametric Bayesian approach for modelling count data has thus shown great promise in handling over-dispersion and the all-too-common problem of low replication, both in theoretical evaluation and on the example dataset. The software that has been produced will be of great utility for the study of digital gene expression data, and the statistical theory will contribute to the development of non-parametric methods for modelling count data of gene expression in general.
The authors would like to thank Prof. Peter Green and Dr. Richard Goldstein for useful discussions. Also, we would like to thank P. G. Engstrom and colleagues for producing the public data we used in this paper.
This work was supported by grants EPSRC EP/H032436/1 and BBSRC G022771/1.