Modelling-based experiment retrieval: A case study with gene expression clustering

05/19/2015
by Paul Blomstedt, et al.

Motivation: Public and private repositories of experimental data are growing to sizes that require dedicated methods for finding relevant data. To improve on the state of the art of keyword searches from annotations, methods for content-based retrieval have been proposed. In the context of gene expression experiments, most methods retrieve gene expression profiles, requiring each experiment to be expressed as a single profile, typically of case vs. control. A more general, recently suggested alternative is to retrieve experiments whose models are good for modelling the query dataset. However, for very noisy and high-dimensional query data, this retrieval criterion turns out to be very noisy as well.

Results: We propose doing retrieval using a denoised model of the query dataset, instead of the original noisy dataset itself. To this end, we introduce a general probabilistic framework, where each experiment is modelled separately and the retrieval is done by finding related models. For retrieval of gene expression experiments, we use a probabilistic model called the product partition model, which induces a clustering of genes that show similar expression patterns across a number of samples. The suggested metric for retrieval using clusterings is the normalized information distance. Empirical results finally suggest that inference for the full probabilistic model can be approximated with good performance using computationally faster heuristic clustering approaches (e.g. k-means). The method is highly scalable and straightforward to apply in constructing a general-purpose gene expression experiment retrieval method.

Availability: The method can be implemented using standard clustering algorithms and normalized information distance, available in many statistical software packages.


1 Introduction

As the use of high-throughput molecular measurement technologies continues to spread, an ever increasing amount of data from biological experiments is being stored in publicly available repositories. It is then often of interest for researchers to retrieve experimental datasets with relevance to a given experiment, in order to increase the power of statistical analyses and to be able to make novel findings not obtainable from one experiment alone. The current standard practice relies on searching for relevant experiments by keyword annotations (e.g. Zhu et al., 2008). However, despite efforts to maintain compliance with standard formats of documenting experiments, e.g. the MIAME standard (Brazma, 2001), information about experiments may often be missing, insufficient or suffer from variations in terminology (e.g. Baumgartner et al., 2007; Schmidberger et al., 2011). In view of the challenges associated with keyword-based retrieval, the complementary task of querying a database of experiments using measurement data, instead of keywords, has recently received increased attention in the literature.

Most earlier content-driven methods used for retrieval of gene expression data represent each experiment in terms of a profile over genes, or alternatively, over known gene sets or gene modules predicted from other data sources, see Hunter et al. (2001); Fujibuchi et al. (2007); Caldas et al. (2009); Engreitz et al. (2010); Georgii et al. (2012) and references therein. A representative example is to compute differential expression profiles of case vs. control, use the correlation between activity profiles as the measure of relevance, and retrieve the experiments with the highest correlations (e.g. Engreitz et al., 2010). This requires auxiliary information about the experiments, namely case and control labels of experiment samples, and possibly additional a priori defined sets of important genes. In the context of gene expression time series, representative examples of retrieving gene expression profiles include Smith et al. (2008) and Hafemeister et al. (2011).

Recently, two feasibility studies have gone beyond reducing experiments into single profiles by using probabilistic modelling of the experiments in the database being queried. Faisal et al. (2014) assumed that the query dataset can be explained as a mixture of the learnt models, each model learnt from one dataset, such that the measure of relevance is given by the inferred mixture weights. In a slightly different approach (Seth et al., 2014), experiments were retrieved by evaluating the marginal likelihoods, given the query data, of individual models stored for the experiments in the database.

In this paper, we introduce a method for retrieving full datasets, i.e. experiments consisting of multiple samples, which is also based on probabilistic modelling. However, instead of using the query dataset itself as a query, we use a model learnt from it. The measure of relevance is therefore not a likelihood, but instead a suitably defined metric between the models. The argument is that for noisy and complex datasets, it is beneficial to extract relevant characteristics of the query dataset in the same way as was done with the datasets that are being queried. We also make explicit the importance of marginalizing out nuisance parameters which are not directly relevant for the retrieval task. For example, in a gene expression study, one is often more interested in how sets of genes are co-regulated, rather than in their exact expression values, which are additionally affected by numerous other influences. We tackle the specific problem of retrieving gene expression experiments by using a product partition model (Jordan et al., 2007) to cluster together genes that show similar expression patterns across a number of samples. By integrating out expression levels of the gene sets (i.e., cluster-specific information), only the co-expression patterns revealed by the clustering structure are retained. The clustering induced by the query dataset is finally compared with the clusterings associated with the experiments in the database using the normalized information distance (Vinh et al., 2010). Notice that, unlike the approach of Seth et al. (2014), this approach does not involve any “training stage”, and, unlike that of Faisal et al. (2014), the retrieval step does not involve solving an optimization problem.

While gene clustering has a long history in characterizing gene expression datasets (Eisen et al., 1999; D’haeseleer, 2005), it appears not to have been used in the context of experiment retrieval before. The use of gene clustering provides a straightforward way of characterizing each experiment with minimal preprocessing of the data while capturing central co-expression patterns. Essentially all previous approaches for retrieving gene expression data have converted the data to differential expression (or gene set enrichments), requiring fixed and known case-control distinctions. In contrast, we have only applied standard quality control and RMA normalization steps carried out in-house at the European Bioinformatics Institute (EBI) for datasets in the Expression Atlas database (see Petryszak et al., 2014). Our experimental evaluation further suggests that, for the current application, inference of the full probabilistic model can be approximated by some computationally faster heuristic clustering algorithm, such as k-means (see Appendix A). The computational simplicity makes the method highly scalable and easy to apply in a black-box manner, as a general-purpose retrieval scheme.

2 Approach

Let $X^\star$ denote a data matrix from some experiment of interest, and let $X_1, \ldots, X_n$ be a database of datasets from previously conducted experiments. The aim is to retrieve datasets from among the $X_i$ with similar characteristics as the query dataset $X^\star$. Due to the complex nature of the data, there is no single sensible or obvious way of comparing datasets (matrices of possibly different sizes). We propose using a model $M_i$ to characterize each dataset $X_i$, with the aim of reducing noise and making relevant aspects of the data more tangible, while making the experiments comparable. The retrieval task then consists in ranking the models $M_1, \ldots, M_n$, inferred from $X_1, \ldots, X_n$, with respect to their similarity with the query model $M^\star$ inferred from $X^\star$. Note that in a broad sense, the commonly used differential expression can be considered as one model type, and clustering as another.

To elaborate on the above idea further, we will now assume that the data generating mechanism of each dataset can be represented in terms of a probabilistic model with density $p(x \mid \theta)$ in some family $\mathcal{F} = \{ p(\cdot \mid \theta) : \theta \in \Theta \}$. Often, the parameter can be decomposed as $\theta = (\rho, \lambda)$, where $\rho$ is the parameter or characteristic of interest (e.g., gene clusters) and $\lambda$ is a nuisance parameter (e.g., average expression level of the gene cluster). Marginalizing out (integrating the density over) $\lambda$ then yields a model family completely determined by $\rho$. Making this operation explicit, the key quantity used in inferring a representative model for a dataset $X$ is the marginal likelihood

$$p(X \mid \rho) = \int p(X \mid \rho, \lambda) \, \pi(\lambda \mid \rho) \, d\lambda, \qquad (1)$$

where $\pi(\lambda \mid \rho)$ is a prior density on $\lambda$. Ideally, we would then proceed with a fully Bayesian approach to infer a posterior density (or distribution) over $\rho$, and use it to characterize $X$. However, for computational reasons we will here choose only a single element of the parameter space to represent $X$. Under zero-one loss, the optimal choice is then the maximum a posteriori (MAP) solution

$$\hat{\rho} = \operatorname*{arg\,max}_{\rho} p(X \mid \rho) \, \pi(\rho), \qquad (2)$$

where $\pi(\rho)$ is a prior over $\rho$. Accordingly, we now define the representative model for $X_i$ as $M_i = \hat{\rho}_i$.

If a suitable function $d : \mathcal{M} \times \mathcal{M} \to \mathbb{R}$ can be defined for the pairwise relations between the elements of the model space $\mathcal{M}$, a natural ranking among $M_1, \ldots, M_n$ will be induced by evaluating $d(M^\star, M_i)$ for all $i = 1, \ldots, n$. For coherence of the ranking scheme, we will make a further assumption that $d$ is a metric. That is, for all $M, M', M'' \in \mathcal{M}$, we require that

(M1) $d(M, M') \geq 0$ (non-negativity);
(M2) $d(M, M') = 0$ if and only if $M = M'$ (identity of indiscernibles);
(M3) $d(M, M') = d(M', M)$ (symmetry);
(M4) $d(M, M'') \leq d(M, M') + d(M', M'')$ (triangle inequality).

With the above conditions satisfied, the function $d$ conforms to the intuition of a distance, and furthermore provides a solid foundation for the design of data structures and algorithms, as the pair $(\mathcal{M}, d)$ forms a metric space. We finally note that metrics are also available for probability distributions, making the described framework applicable in cases where computational resources allow for representing the elements of $\mathcal{M}$ as full posterior distributions.
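To make the two-stage structure concrete, the following minimal sketch (our own illustration in Python; the function names are hypothetical, and the use of k-means as the model-learning step is an assumption standing in for the MAP inference of Equation (2)) shows the principle: each dataset is first summarized by a representative model, and retrieval then reduces to ranking the precomputed database models by their distance to the query model.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import normalized_mutual_info_score

def learn_model(X, n_clusters=10):
    """Stand-in for MAP inference of a representative model (Equation (2)):
    here a gene clustering, obtained with k-means as a fast surrogate for
    the full PPM search (cf. Appendix A)."""
    return KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X).labels_

def retrieve(query_model, db_models, metric):
    """Rank database experiments by the model distances d(M*, M_i)."""
    dists = np.array([metric(query_model, m) for m in db_models])
    return np.argsort(dists)  # most relevant experiments first

# Toy usage: three database experiments (genes x samples) and one query.
rng = np.random.default_rng(1)
database = [rng.normal(size=(100, 8)) for _ in range(3)]
query = rng.normal(size=(100, 8))

db_models = [learn_model(X) for X in database]
query_model = learn_model(query)

# The metric of Section 3.2: normalized information distance,
# 1 - I/max(H, H'), here via scikit-learn's NMI with 'max' normalization.
nid = lambda a, b: 1.0 - normalized_mutual_info_score(a, b, average_method="max")
print(retrieve(query_model, db_models, nid))
```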

3 Methods

3.1 Probabilistic model for gene clustering

The first task in constructing a retrieval scheme is to choose an appropriate model for the experiments. While several different approaches, with varying aims and assumptions, exist for modelling gene expression data, a particularly simple and frequently used approach is that of gene clustering (e.g. D’haeseleer, 2005), which seeks to cluster together genes that show similar expression patterns across a number of samples. Here, we use a probabilistic clustering approach which simultaneously infers both the number of clusters and the optimal clustering structure.

Consider first a gene expression data matrix $X$ of dimension $N \times d$, where $N$ is the number of genes and $d$ is the number of samples. A clustering $S = \{c_1, \ldots, c_K\}$ is a partition of the set $\{1, \ldots, N\}$ into non-empty and non-overlapping subsets, or clusters, such that $\bigcup_{k=1}^{K} c_k = \{1, \ldots, N\}$ and $c_k \cap c_l = \emptyset$, for $k \neq l$. We focus here on a probabilistic formulation of clustering, which makes explicit use of partition structures, namely the product partition model (PPM). Technically, PPM assumes that items in the same cluster are exchangeable and items in different clusters are independent (see Jordan et al., 2007). Using the terminology of Section 2, the parameter of interest for this model is the partition structure $S$, while the nuisance parameter is a vector of cluster-specific model parameters, $\lambda = (\lambda_c)_{c \in S}$. This leads to a marginal likelihood of the form (see Equation (1))

$$p(X \mid S) = \prod_{c \in S} p(X_c), \qquad (3)$$

where $X_c$ denotes the subset of $X$ which is indexed by $c$. Note that the assumption of independence between clusters entails constructing the marginal likelihood as a product of cluster-specific components.

The prior distribution for $S$ will likewise be constructed as a product,

$$\pi(S) = C \prod_{c \in S} \phi(c), \qquad (4)$$

where $C$ ensures normalization to 1 over the model space $\mathcal{S}$ and $\phi(c) > 0$ for all subsets $c$. Note that (4) actually specifies the joint distribution for $S$ and $K$, but since the latter is implied by the former, we omit $K$ from the notation. It can be shown that a PPM with $\phi$ chosen such that

$$\phi(c) = \alpha \, (n_c - 1)!, \qquad (5)$$

where $n_c$ is the number of observations in cluster $c$ and $\alpha > 0$ controls the tendency to form new clusters, can be obtained by integrating out the model parameters in a Dirichlet process mixture model (Dahl, 2009).

The cluster-specific marginal likelihoods $p(X_c)$ in Equation (3) can in principle take any suitable form. Here, we assume that for each cluster $c \in S$ and each sample $j = 1, \ldots, d$, the observations $\{x_{ij} : i \in c\}$ are independently generated from a Gaussian distribution, with a conjugate prior on the unknown model parameters. Furthermore, we make the simplistic assumption that the samples themselves are independent, conditional on a cluster assignment (see Hand and Yu, 2001, for a discussion about the implications of this assumption in a classification context). The resulting cluster-specific marginal likelihoods may then be written as

$$p(X_c) = \prod_{j=1}^{d} p(x_{cj}), \qquad (6)$$

where $p(x_{cj}) = \int \prod_{i \in c} N(x_{ij} \mid \mu_{cj}, \sigma^2_{cj}) \, \pi(\mu_{cj}, \sigma^2_{cj}) \, d(\mu_{cj}, \sigma^2_{cj})$ denotes the marginal likelihood of the observations of sample $j$ restricted to the genes in cluster $c$.

Blomstedt et al. (2015) introduced a PPM for clustering mixed discrete and continuous data, where the continuous component was of form (6). Following their implementation, we normalize each column of the data matrix $X$ to have zero mean and unit variance, and set the hyperparameter values of the conjugate prior as in that work. Furthermore, the model is equipped with a prior of the form (5), with a fixed value of $\alpha$. Finally, combining Equations (3)–(6), an optimal clustering w.r.t. a dataset $X$ is given by the MAP solution (see Equation (2))

$$\hat{S} = \operatorname*{arg\,max}_{S \in \mathcal{S}} p(X \mid S) \, \pi(S). \qquad (7)$$

3.1.1 Inference

To find the optimal clustering as defined in Equation (7), we use a stochastic greedy search algorithm, which moves in the model space by successive application of move, split and merge operators; for further details, see Blomstedt et al. (2015). While more efficient for the optimization task than standard Markov chain Monte Carlo methods, the algorithm still requires a considerable amount of computation time for large amounts of data. To that end, some computational simplifications based on heuristic clustering procedures are discussed in Appendix A.
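As an illustration of the general shape of such a search, the sketch below (our own simplification, not the authors' implementation) proposes random move, split and merge operations and accepts a proposal only when it improves a supplied score. In the actual method, the score would be the unnormalized log posterior of Equations (3)–(5); a toy surrogate is used here to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def greedy_search(X, score, n_iter=2000):
    """Stochastic greedy search over clusterings: repeatedly propose a random
    move/split/merge of the current partition; accept only improvements."""
    n = X.shape[0]
    labels = np.zeros(n, dtype=int)                # start from a single cluster
    best = score(labels, X)
    for _ in range(n_iter):
        prop = labels.copy()
        ks = np.unique(prop)
        op = rng.choice(["move", "split", "merge"])
        if op == "move":                           # reassign one random item
            prop[rng.integers(n)] = rng.choice(np.append(ks, ks.max() + 1))
        elif op == "split":                        # split one cluster in two
            members = np.flatnonzero(prop == rng.choice(ks))
            if len(members) > 1:
                half = rng.choice(members, size=len(members) // 2, replace=False)
                prop[half] = ks.max() + 1
        elif len(ks) > 1:                          # merge two random clusters
            a, b = rng.choice(ks, size=2, replace=False)
            prop[prop == b] = a
        prop = np.unique(prop, return_inverse=True)[1]  # relabel as 0..K-1
        s = score(prop, X)
        if s > best:
            labels, best = prop, s
    return labels

def toy_score(labels, X):
    """Toy surrogate for the unnormalized log posterior of Equation (7):
    within-cluster squared error plus a penalty on the number of clusters."""
    fit = sum(-np.sum((X[labels == k] - X[labels == k].mean(axis=0)) ** 2)
              for k in np.unique(labels))
    return fit - 5.0 * len(np.unique(labels))

# Toy usage: two well-separated groups of 'genes' should be recovered.
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(4, 1, (30, 5))])
print(np.unique(greedy_search(X, toy_score)))
```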

3.2 Distance metric for clusterings

Assuming now that each of the $n$ experiments in a database has been represented with a clustering $\hat{S}_i$, the remaining task is to find a function $d$ which can be defined on the space of clusterings and satisfies conditions (M1)–(M4) above. In recent years, a new generation of information-theoretic distance measures has emerged (see e.g. Meilă, 2007; Vinh et al., 2010), which possess many desirable properties, such as the metric property, and which have been employed because of their strong mathematical foundation and ability to detect non-linear similarities.

Vinh et al. (2010) conducted a systematic comparison of information-theoretic distance measures, concluding that the preferred “general-purpose” measure for comparing clusterings is the normalized information distance, denoted below by $d_{\mathrm{NID}}$. To give a definition of this measure, we first introduce some notation. Briefly, for two clusterings $S = \{c_1, \ldots, c_K\}$ and $S' = \{c'_1, \ldots, c'_{K'}\}$, the number of items co-occurring in clusters $c_k$ and $c'_l$ is given by $n_{kl} = |c_k \cap c'_l|$, with $\sum_{k=1}^{K} \sum_{l=1}^{K'} n_{kl} = N$. The marginal sums are denoted by $n_{k\cdot}$ and $n_{\cdot l}$. A key realization in the derivation of information-theoretic distance measures is that each clustering induces an empirical probability distribution over its set of clusters, such that the probability of a randomly chosen item being in cluster $c_k$ is given by $p_k = n_{k\cdot}/N$. Similarly, the joint probability of the pair co-occurring in clusters $c_k$ and $c'_l$ is given by $p_{kl} = n_{kl}/N$. The entropy of a clustering $S$, describing the uncertainty associated with assigning items into the clusters of $S$, is then formulated as

$$H(S) = - \sum_{k=1}^{K} p_k \log p_k.$$

The mutual information of clusterings $S$ and $S'$, which measures how much knowledge of $S$ reduces the uncertainty about $S'$ (or vice versa), is further defined as

$$I(S, S') = \sum_{k=1}^{K} \sum_{l=1}^{K'} p_{kl} \log \frac{p_{kl}}{p_k \, p'_l}.$$

It can also be interpreted as a measure of dependence in the sense that if $S$ and $S'$ are independent, then $I(S, S') = 0$. Finally, from the above quantities we obtain $d_{\mathrm{NID}}$ as

$$d_{\mathrm{NID}}(S, S') = 1 - \frac{I(S, S')}{\max\{H(S), H(S')\}}. \qquad (8)$$
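For reference, the following sketch computes $d_{\mathrm{NID}}$ directly from two label vectors via the contingency counts $n_{kl}$. It is a direct transcription of the quantities above (the function names are ours, not from the paper), and its output can be cross-checked against library implementations of normalized mutual information.

```python
import numpy as np

def entropy(p):
    """Entropy of a discrete distribution, ignoring zero-probability cells."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def nid(labels_a, labels_b):
    """Normalized information distance between two clusterings given as
    label vectors: d_NID = 1 - I(S, S') / max{H(S), H(S')}, Equation (8)."""
    n = len(labels_a)
    ka = np.unique(labels_a, return_inverse=True)[1]
    kb = np.unique(labels_b, return_inverse=True)[1]
    counts = np.zeros((ka.max() + 1, kb.max() + 1))
    np.add.at(counts, (ka, kb), 1)                 # contingency counts n_kl
    p_joint = counts / n                           # p_kl
    p_a, p_b = p_joint.sum(axis=1), p_joint.sum(axis=0)
    nz = p_joint > 0
    mi = np.sum(p_joint[nz] * np.log(p_joint[nz] / np.outer(p_a, p_b)[nz]))
    denom = max(entropy(p_a), entropy(p_b))
    # Convention: two trivial one-cluster partitions have distance 0.
    return 1.0 - mi / denom if denom > 0 else 0.0

# Identical clusterings have distance 0; disagreement increases the distance.
a = np.array([0, 0, 0, 1, 1, 1, 2, 2, 2, 2])
b = np.array([1, 1, 0, 0, 0, 2, 2, 2, 2, 2])
print(nid(a, a), nid(a, b))
```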

4 Results

4.1 Data and experimental setup

To evaluate the modelling-based retrieval scheme developed in Sections 2 and 3, we used as a starting point all differential expression experiments conducted on the Affymetrix A-AFFY-44 GeneChip available in Expression Atlas (EA; http://www.ebi.ac.uk/gxa, see Petryszak et al., 2014) as of 4-Jun-2014. Only experiments with both measurement data and analytics data available were considered. Furthermore, experiments with a very small number of genes were discarded, with the lower limit set to a gene count exceeded by most experiments in the collection. Based on the above selection process we obtained an initial set of 447 experiments. In a second stage, we selected a subset of these experiments based on the availability of experimental factor ontology (EFO; http://www.ebi.ac.uk/efo/, see Malone et al., 2010) annotations, which were used as ground truth in the evaluation. More specifically, we retained those experiments which had at least one of the EFO types “cell type”, “disease” or “organism part” present. Moreover, experiments having multiple values for a given EFO type were excluded, and finally only experiments with the same EFO value present in at least two experiments were included in this study, resulting in a final set of 251 experiments (for a list of accession numbers, see Appendix C). The number of samples per experiment varied between 6 and 353, the median number of samples being 22.

Out of the final set of 251 experiments, three partly overlapping subsets corresponding to each of the EFO types were formed. These consisted of 103 experiments with values recorded for “cell type”, 76 with values for “disease” and 174 with values for “organism part”. The numbers of different EFO values in these sets of experiments were 23, 19 and 32, respectively. In retrieving full experiments, those experiments having the same EFO value were considered relevant, and other experiments irrelevant. Note that the above EFO types were not the main conditions of interest on which differential gene expression had been studied in the experiments, but were chosen to give a more general description of the experiments. A more complete ground truth was not readily available, as most other EFO types were only present in small subsets of the experiments. Retrieval performance was measured using precision and recall, taken as an average over successively using each of the experiments as a query to retrieve among the remaining experiments.

In order to reduce the number of genes for clustering, we initially selected for each of the 251 experiments the top 5 genes resulting from a ‘non-specific’ search in EA, in which genes with the highest absolute values of t-statistics in any available contrast come first, irrespective of whether they are reported with high t-statistics in the remaining contrasts (for further details about listing genes in EA, see Petryszak et al., 2014). Finally, by taking the union of these genes over all experiments, we arrived at 1125 genes per experiment. The selection process per se is not an essential part of our approach but was done for computational convenience only. In a preliminary stage of our analyses, we experimented with different numbers of genes but found that this only had a minor impact on the results; see Appendix B for further details.

4.2 Comparison of retrieval schemes

We will now proceed to evaluate the performance of the retrieval approach proposed in Section 2. For gene expression data, we learn for each experiment a Gaussian product partition model (PPM), which implies a clustering over genes; see Section 3. The clustering $\hat{S}^\star$ learned from the query data is then related to the clusterings $\hat{S}_1, \ldots, \hat{S}_n$ by evaluating the distances $d_{\mathrm{NID}}(\hat{S}^\star, \hat{S}_i)$, $i = 1, \ldots, n$; see Equation (8). This approach will be contrasted with two alternative approaches for content-based retrieval previously suggested in the literature. The first one of these is closely related to the proposed approach in that it learns a PPM for each experiment in the database. However, instead of evaluating distances, it evaluates the marginal likelihoods of the learnt models, given the query dataset. A higher likelihood is then an indication of a higher relevance to the query dataset. A similar approach, albeit for a different model family, was recently suggested in Seth et al. (2014). The term “modelling-based retrieval” has previously been used by Faisal et al. (2014) to describe an approach based on probabilistic modelling but using a likelihood as the measure of relevance. To make a distinction between the approach proposed here and approaches based on evaluating likelihoods, we will in this comparison refer to the former as model-distance-based retrieval and the latter as likelihood-based retrieval. See Section 5 for a further discussion about the differences between the two approaches.

The second alternative approach, differential expression based retrieval, assumes that a statistical test to detect differentially expressed genes has been conducted beforehand. The method is then based on correlating the gene-specific differential expression p-values of the query experiment with those of the database experiments. An approach similar to this was suggested by Engreitz et al. (2010). If targeted at differential expression profiles obtained under specific conditions known to be important, this scheme has much potential to achieve good retrieval performance. On the other hand, it assumes more background knowledge and preprocessing of the data than the suggested retrieval schemes based on gene clustering. Here, we do not assume a specific condition of interest but choose in each experiment, for the selected 1125 genes, the smallest p-values under any of the conditions tested and reported in Expression Atlas. We also experimented with a much larger set of genes, constituting the maximal common set of genes tested in all experiments, but this resulted in slightly inferior performance. The correlation measure used was Pearson’s correlation. We finally note that differential expression based retrieval schemes can also be formulated under the general framework of Section 2, using some appropriate probabilistic model for differential expression, as formulated in e.g. Do et al. (2006).

The results of the comparison between the retrieval schemes are shown in Figure 1. Here, the model-distance-based retrieval scheme clearly outperforms the two other schemes. A notable feature of the results is the surprisingly poor performance of the likelihood-based approach. This may be due to the well-known fact that gene expression measurements tend to be extremely noisy. In essence, the marginal likelihood measures how well the query dataset $X^\star$ is predicted by a model $M_i$, learnt from dataset $X_i$. Even if experiments $X^\star$ and $X_i$ are in some way related, the idealized model $M_i$ may still not provide a good prediction for the data $X^\star$. Therefore, instead of using the complex and possibly very noisy dataset $X^\star$ as query input, retaining only the characteristics relevant for retrieval in both $X^\star$ and $X_i$ may help to improve performance, as illustrated in the results.

(a) Cell type
(b) Disease
(c) Organism part
Figure 1: Precision-recall curves comparing model-distance-based, likelihood-based, and differential expression (DE) based retrieval using three EFO types (a–c) as ground truth.

4.3 Biological information in gene clustering

Any single EFO type will necessarily capture only one aspect of an experiment, whereas a meaningful retrieval task usually involves an evaluation of relevance between experiments in terms of a combination of aspects. It is therefore of interest to study the effect of composing the ground truth as a combination of multiple EFO types. In the current experimental setup, the ground truth for each of the EFO types “cell type”, “disease” and “organism part” can be represented as a symmetric binary matrix of dimension $251 \times 251$, such that an entry equals 1 if the corresponding pair of experiments is mutually relevant and 0 otherwise. A ground truth which requires a match in at least $m$ EFO types can then be formed by summing the three matrices and requiring the sum to be at least $m$, as sketched below.
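In matrix form, this construction is a few lines of code. The sketch below (with made-up EFO values for four hypothetical experiments) builds one relevance matrix per EFO type and combines them.

```python
import numpy as np

def relevance_matrix(efo_values):
    """Binary relevance matrix: experiments i and j are mutually relevant
    iff they share the same value of a given EFO type."""
    v = np.asarray(efo_values)
    return (v[:, None] == v[None, :]).astype(int)

# Hypothetical EFO annotations for four experiments.
cell    = relevance_matrix(["T-cell", "T-cell", "B-cell", "B-cell"])
disease = relevance_matrix(["asthma", "none", "none", "asthma"])
organ   = relevance_matrix(["lung", "lung", "lung", "blood"])

m = 2  # require a match in at least m EFO types
ground_truth = ((cell + disease + organ) >= m).astype(int)
print(ground_truth)  # the diagonal (self-relevance) is excluded when querying
```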

In Figure 2, the model-distance-based retrieval scheme is evaluated against ground truth relevances requiring (a) any EFO type to match ($m \geq 1$), (b) two or more matches ($m \geq 2$) and (c) all EFO types to match ($m = 3$). The numbers of experiments satisfying these conditions are 251, 54 and 6, respectively. Intuitively, the ground truth can be considered increasingly informative as the number of matching EFO types required to declare relevance increases. A retrieval scheme capturing biologically relevant information should then be in better agreement with a more informative ground truth. Although the curves of Figures 2(a) and 2(b) are not directly comparable due to the differing number of experiments used, the shape of the latter gives an indication of a better agreement. In Figure 2(c), owing to the small number of available experiments, the ground truth is compared with the single most relevant experiment (out of five possible ones) retrieved for each query. Here, the retrieval result matches the ground truth in four of the six queries.

Figure 2: Evaluation of model-distance-based retrieval scheme with respect to a ground truth requiring (a) at least one, (b) at least two, (c) exactly three matching EFO types. The rightmost subfigure compares the ground truth matrix (hollow squares) with the single most relevant retrieved experiment per query (solid squares) for the six experiments having a simultaneous match in all three EFO types. Accession numbers for the experiments are provided as a reference.

4.4 Annotations and gene clustering combined

As noted previously in Section 1, information about experiments may often be missing, insufficient or suffer from variations in terminology (Baumgartner et al., 2007; Schmidberger et al., 2011), despite a formal declaration of compliance with MIAME criteria (Brazma, 2001). Hence, even in cases where keyword-based retrieval is of primary interest, it may be advantageous to complement a query with information provided by gene clustering. A straightforward way of combining these two types of information is the following. Assume that a database of $n$ experiments is being queried and that $m \leq n$ experiments are found to match the keyword query. More formally, the result can be encoded as a binary vector of length $n$ with $m$ elements having value 1. A model-distance-based retrieval scheme, on the other hand, will return a vector of length $n$, with each element representing the distance of the corresponding experiment-specific model to the query model. Element-wise multiplication of these vectors then effectively induces a ranking of the experiments retrieved in the keyword-based query. The underlying idea is that this ranking will reflect information which is not present in the queried keyword(s) alone.
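A minimal sketch of this combination follows (our own illustration; the variable names are hypothetical, and the model distances are assumed to have been computed already).

```python
import numpy as np

def combined_ranking(keyword_match, model_distances):
    """Rank the keyword hits by their model distance to the query.
    keyword_match: binary vector of length n (1 = keyword hit).
    model_distances: vector of length n of d_NID values."""
    scores = keyword_match * model_distances   # element-wise multiplication
    hits = np.flatnonzero(keyword_match)
    return hits[np.argsort(scores[hits])]      # closest models first

# Toy example: five experiments, three of which match the keyword query.
keyword_match = np.array([1, 0, 1, 1, 0])
model_distances = np.array([0.9, 0.2, 0.4, 0.7, 0.1])
print(combined_ranking(keyword_match, model_distances))  # [2, 3, 0]
```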

To test the combined method, we considered all experiments matching in both “cell type” and “organism part”, resulting in a total of 43 experiments (all other combinations of two EFO types resulted in significantly fewer experiments). A match in both of these EFO types was used as ground truth. The idea was then to retrieve experiments assuming only one of the EFO types to be known, complementing keyword-based retrieval with rankings from model-distance-based retrieval. Retrieving experiments assuming only “cell type” to be known resulted in an average precision of 0.55 for keyword-based retrieval and a mean average precision of 0.61 for combined retrieval, the corresponding numbers being 0.81 and 0.84, respectively, when only “organism part” was assumed to be known. In both cases we saw a slight improvement in performance for the combined approach, suggesting that keyword-based retrieval may benefit from being complemented with auxiliary information, such as gene clustering.

5 Discussion

In this paper, we have introduced a general probabilistic framework for content-driven retrieval of experimental datasets. Compared to earlier works which also employ probabilistic modelling (e.g. Caldas et al., 2009, 2012; Faisal et al., 2014; Seth et al., 2014), we do not use the likelihood of the query data as a measure of relevance, but instead learn a model of the query data and compare models. We argue that this reduces noise in the query input. With nuisance parameters further marginalized out, only characteristics relevant for the retrieval task are retained. A special instance of the general framework introduced in this paper has been previously used as a comparative method in a simulation study (Seth et al., 2014) with performance slightly inferior to a likelihood-based approach. The simulation setting in that earlier study was, however, very simplistic compared to datasets encountered in many real-life scenarios, such as that of Section 4, where the model-distance-based approach was now seen to clearly outperform its likelihood-based counterpart.

Contrary to likelihood-based approaches, the model-distance-based approach requires all models under consideration to belong to the same family. Although this may seem somewhat restrictive, in particular for the potential future scenario in which individual researchers independently store models in a repository along with their datasets (e.g. Faisal et al., 2014), there are also scenarios where the assumption is feasible. Datasets which arise as a result of some specific type of experiment are often in practice modelled using a fairly standardized set of approaches. In particular, if the models are constructed automatically, or by a curator of a data repository, the assumption of the models belonging to the same family is feasible.

As a specific application of the general framework, in Sections 3 and 4 we proposed a retrieval scheme for gene expression experiments based on gene clustering. Clustering turned out to be a surprisingly good model for this purpose: with minimal preprocessing and prior knowledge about the experiments, it is able to yield reasonable retrieval performance (Section 4.2) and to capture biologically relevant characteristics of the experiments (Section 4.3). Finally, we showed that it is straightforward to combine model-distance-based (or any modelling-based) retrieval with retrieval using available keywords (Section 4.4).

Acknowledgement

The authors would like to thank Ugis Sarkans for providing useful information about Expression Atlas. This work was financially supported by the Academy of Finland (Finnish Centre of Excellence in Computational Inference Research COIN, grant no 251170).

References

  • Baumgartner et al. (2007) Baumgartner, Jr., W. A., Cohen, K. B., Fox, L. M., Acquaah-Mensah, G., and Hunter, L. (2007). Manual curation is not sufficient for annotation of genomic databases. Bioinformatics, 23, i41–i48.
  • Blomstedt et al. (2015) Blomstedt, P., Tang, J., Xiong, J., Granlund, C., and Corander, J. (2015). A Bayesian predictive model for clustering data of mixed discrete and continuous type. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 489–498.
  • Brazma (2001) Brazma, A. (2001). Minimum information about a microarray experiment (MIAME) – towards standards for microarray data. Nature Genetics, 29, 365–371.
  • Caldas et al. (2009) Caldas, J., Gehlenborg, N., Faisal, A., Brazma, A., and Kaski, S. (2009). Probabilistic retrieval and visualization of biologically relevant microarray experiments. Bioinformatics, 25(12), i145–i153.
  • Caldas et al. (2012) Caldas, J., Gehlenborg, N., Kettunen, E., Faisal, A., Rönty, M., Nicholson, A. G., Knuutila, S., Brazma, A., and Kaski, S. (2012). Data-driven information retrieval in heterogeneous collections of transcriptomics data links SIM2s to malignant pleural mesothelioma. Bioinformatics, 28(2), 246–253.
  • Dahl (2009) Dahl, D. B. (2009). Modal clustering in a class of product partition models. Bayesian Analysis, 4(2), 243–264.
  • D’haeseleer (2005) D’haeseleer, P. (2005). How does gene expression clustering work? Nature Biotechnology, 23(12), 1499–1501.
  • Do et al. (2006) Do, K.-A., Müller, P., and Vannucci, M., editors (2006). Bayesian Inference for Gene Expression and Proteomics. Cambridge University Press, Cambridge, UK.
  • Eisen et al. (1999) Eisen, M. B., Spellman, P. T., Brown, P. O., and Botstein, D. (1999). Cluster analysis and display of genome-wide expression patterns. PNAS, 95, 14863–14868.
  • Engreitz et al. (2010) Engreitz, J. M., Morgan, A. A., Dudley, J. T., Chen, R., Thathoo, R., Altman, R. B., and Butte, A. J. (2010). Content-based microarray search using differential expression profiles. BMC Bioinformatics, 11, 603.
  • Faisal et al. (2014) Faisal, A., Peltonen, J., Georgii, E., Rung, J., and Kaski, S. (2014). Toward computational cumulative biology by combining models of biological datasets. PLoS ONE, 9(11), e113053.
  • Fujibuchi et al. (2007) Fujibuchi, W., Kiseleva, L., Taniguchi, T., Harada, H., and Horton, P. (2007). CellMontage: similar expression profile search server. Bioinformatics, 23(22), 3103–3104.
  • Georgii et al. (2012) Georgii, E., Salojärvi, J., Brosché, M., Kangasjärvi, J., and Kaski, S. (2012). Targeted retrieval of gene expression measurements using regulatory models. Bioinformatics, 28(18), 2349–2356.
  • Hafemeister et al. (2011) Hafemeister, C., Costa, I. G., Schönhuth, A., and Schliep, A. (2011). Classifying short gene expression time-courses with Bayesian estimation of piecewise constant functions. Bioinformatics, 27(7), 946–952.
  • Hand and Yu (2001) Hand, D. J. and Yu, K. (2001). Idiot’s Bayes – Not so stupid after all? International Statistical Review, 69(3), 385–398.
  • Hunter et al. (2001) Hunter, L., Taylor, R. C., Leach, S. M., and Simon, R. (2001). GEST: a gene expression search tool based on a novel Bayesian similarity metric. Bioinformatics, 17(Suppl 1), S115–S122.
  • Jaskowiak et al. (2014) Jaskowiak, P. A., Campello, R. J. G. B., and Costa, I. G. (2014). On the selection of appropriate distances for gene expression data clustering. BMC Bioinformatics, 15(Suppl 2), S2.
  • Jordan et al. (2007) Jordan, C., Livingstone, V., and Barry, D. (2007). Statistical modelling using product partition models. Statistical Modelling, 7(3), 275–295.
  • Malone et al. (2010) Malone, J., Holloway, E., Adamusiak, T., Kapushesky, M., Zheng, J., Kolesnikov, N., Zhukova, A., Brazma, A., and Parkinson, H. (2010). Modeling sample variables with an experimental factor ontology. Bioinformatics, 26(8), 1112–1118.
  • Meilă (2007) Meilă, M. (2007). Comparing clusterings – an information based distance. Journal of Multivariate Analysis, 98, 873–895.
  • Petryszak et al. (2014) Petryszak, R., Burdett, T., Fiorelli, B., Fonseca, N., Gonzalez-Porta, M., Hastings, E., Huber, W., Jupp, S., Keays, M., Kryvych, N., McMurry, J., Marioni, J., Malone, J., Megy, K., Rustici, G., Tang, A. Y., Taubert, J., Williams, E., Mannion, O., Parkinson, H. E., and Brazma, A. (2014). Expression Atlas update–a database of gene and transcript expression from microarray- and sequencing-based functional genomics experiments. Nucleic Acids Research, 42(Database issue), D926–32.
  • Schmidberger et al. (2011) Schmidberger, M., Lennert, S., and Mansmann, U. (2011). Conceptual aspects of large meta-analyses with publicly available microarray data: a case study in oncology. Bioinformatics and Biology Insights, 5, 13–39.
  • Seth et al. (2014) Seth, S., Shawe-Taylor, J., and Kaski, S. (2014). Retrieval of experiments by efficient comparison of marginal likelihoods. In C. Loo, K. Yap, K. Wong, A. Teoh, and K. Huang, editors, Neural Information Processing, volume 8835 of Lecture Notes in Computer Science, pages 135–142. Springer International Publishing.
  • Smith et al. (2008) Smith, A. A., Vollrath, A., Bradfield, C. A., and Craven, M. (2008). Similarity queries for temporal toxicogenomic expression profiles. PLoS Comput Biol, 4(7), e1000116.
  • Vinh et al. (2010) Vinh, N. X., Epps, J., and Bailey, J. (2010). Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11, 2837–2854.
  • Zhu et al. (2008) Zhu, Y., Davis, S., Stephens, R., Meltzer, P. S., and Chen, Y. (2008). GEOmetadb: powerful alternative search engine for the Gene Expression Omnibus. Bioinformatics, 24(23), 2798–2800.


Appendix A Simplified search for an optimal clustering

Recall that a product partition model (PPM) is a probabilistic model which implies a clustering of data items into non-empty and non-overlapping subsets. Given a dataset $X$, an optimal clustering is given by the maximum a posteriori solution

$$\hat{S} = \operatorname*{arg\,max}_{S \in \mathcal{S}} p(S \mid X),$$

where $\mathcal{S}$ denotes the space of all possible clusterings of $\{1, \ldots, N\}$. Since the cardinality of the model space grows very quickly with $N$ (the number of possible partitions of $N$ items is the $N$th Bell number, which grows super-exponentially), an exhaustive evaluation of all posterior probabilities $p(S \mid X)$, $S \in \mathcal{S}$, is not feasible in practice. Therefore, a stochastic greedy search algorithm was implemented in the analyses of Section 4 to find the optimal clustering for each dataset. While more efficient for the optimization task than standard Markov chain Monte Carlo methods, for large amounts of data the algorithm still requires a considerable amount of computation time.

One possible simplification is to restrict the search to a subset $\mathcal{S}_0 \subset \mathcal{S}$ of the model space by choosing a set of potentially good solutions in advance, and then selecting the optimal solution among them as

$$\hat{S}_0 = \operatorname*{arg\,max}_{S \in \mathcal{S}_0} p(S \mid X). \qquad (9)$$

A straightforward way of finding a suitable $\mathcal{S}_0$ is to only consider solutions found by one or several different heuristic clustering algorithms. Such algorithms are usually fast to execute, but they provide no measure of uncertainty regarding the obtained solution and require the number of clusters $K$ to be fixed in advance. Running such an algorithm for all values of $K = 1, \ldots, N$ will reduce the cardinality of the search space to $N$, which in many cases is small enough to enable an exhaustive evaluation of the posterior probabilities of all clusterings in $\mathcal{S}_0$. Even a combination of, say, $r$ different algorithms still yields a model space with a cardinality of only $rN$.
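The sketch below illustrates this restricted search (our own illustration, not the paper's code): k-means is run for a range of values of $K$, and each candidate clustering is scored by an unnormalized log posterior combining Gaussian marginal likelihoods of the form (6), here under a Normal-Inverse-Gamma conjugate prior with illustrative hyperparameter values of our own choosing, with the cohesion prior of Equation (5).

```python
import numpy as np
from scipy.special import gammaln
from sklearn.cluster import KMeans

# Illustrative hyperparameters (not the paper's values): Normal-Inverse-Gamma
# prior per cluster/sample component, and the cohesion parameter of (5).
MU0, KAPPA0, ALPHA0, BETA0, DP_ALPHA = 0.0, 1.0, 1.0, 1.0, 1.0

def log_marginal_gaussian(y):
    """Log marginal likelihood of a 1-d sample under a Gaussian likelihood
    with a Normal-Inverse-Gamma conjugate prior (standard conjugate result)."""
    n, ybar = len(y), y.mean()
    kappa_n = KAPPA0 + n
    alpha_n = ALPHA0 + n / 2.0
    beta_n = (BETA0 + 0.5 * np.sum((y - ybar) ** 2)
              + KAPPA0 * n * (ybar - MU0) ** 2 / (2.0 * kappa_n))
    return (gammaln(alpha_n) - gammaln(ALPHA0)
            + ALPHA0 * np.log(BETA0) - alpha_n * np.log(beta_n)
            + 0.5 * (np.log(KAPPA0) - np.log(kappa_n))
            - 0.5 * n * np.log(2.0 * np.pi))

def log_posterior(labels, X):
    """Unnormalized log posterior log p(X | S) + log pi(S): marginal
    likelihoods multiplied over clusters and samples (Equations (3), (6)),
    with the cohesion prior phi(c) = alpha * (n_c - 1)! of Equation (5)."""
    lp = 0.0
    for k in np.unique(labels):
        Xk = X[labels == k]
        lp += sum(log_marginal_gaussian(Xk[:, j]) for j in range(X.shape[1]))
        lp += np.log(DP_ALPHA) + gammaln(len(Xk))      # log phi(c)
    return lp

def best_clustering(X, k_max=20):
    """Restricted MAP search of Equation (9) over k-means candidates."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)           # column normalization
    candidates = [KMeans(n_clusters=k, n_init=5, random_state=0).fit(X).labels_
                  for k in range(1, k_max + 1)]
    return max(candidates, key=lambda lab: log_posterior(lab, X))

# Toy usage: 60 'genes' in two well-separated groups, five samples.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (30, 5)), rng.normal(3, 1, (30, 5))])
print(len(np.unique(best_clustering(X, k_max=10))), "clusters selected")
```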

To further reduce the scope of the search, the range of $K$ for which heuristic solutions are obtained may be restricted to an interval in which plausible solutions are likely to be found. For instance, in analysing how different distances and clustering methods interact regarding their ability to cluster gene expression, Jaskowiak et al. (2014) conducted a comparison for clusterings generated in a restricted interval of $K$, rather than the full range of possible values. In our current application, we additionally experimented with restricting $K$ to a fixed value, which trivially reduces the model space to a single clustering. In this case, as the number of clusters is not chosen adaptively for each dataset, the clusterings no longer provide biologically meaningful groupings of the genes, but they may still give a sufficient characterization of the experiments for purposes of retrieval. This is demonstrated in Figure 3, where retrieval based on the optimal clustering in the full model space is compared with that in a reduced model space of k-means solutions over an interval of $K$ values, as well as a trivial model space consisting of a single k-means solution with $K$ fixed to the midpoint of the interval used by Jaskowiak et al. (2014).

The quality of the solution in (9) depends on how well the clusterings in $\mathcal{S}_0$ correspond to those clusterings in $\mathcal{S}$ which have a high probability under the PPM formulation. Figure 4 shows a comparison of the retrieval performance of various heuristic clustering algorithms, with the number of clusters fixed for simplicity at a single common value, and using the PPM as baseline. The results indicate that heuristic algorithms which are based on a Euclidean distance measure (e.g. k-means with squared Euclidean distance and complete linkage (CL) with Euclidean distance) yield retrieval performance which most closely matches that obtained using the Gaussian PPM. Although similar behaviour may be expected in other datasets of the same type, the conclusion is data-specific and should not be generalized beyond the scope of the current data without further validation.

(a) Cell type
(b) Disease
(c) Organism part
Figure 3: Retrieval performance using clusterings found in the full, reduced, and trivial model spaces.
(a) Cell type
(b) Disease
(c) Organism part
Figure 4: Retrieval performance for various heuristic clustering approaches, using PPM as baseline.

Appendix B Impact of number of genes

In Section 4.1, the number of genes for clustering was reduced by initially selecting for each experiment the top 5 genes resulting from a ‘non-specific’ search in Expression Atlas (http://www.ebi.ac.uk/gxa, see Petryszak et al., 2014). Taking the union of these genes over all 251 experiments finally resulted in 1125 genes per experiment. To study the impact of the number of genes included in each dataset, we repeated the same procedure for the top 10 and top 25 genes, resulting in 2117 and 4740 genes per experiment, respectively. Due to the large number of genes, in particular in the last group, a simplified search scheme for clusterings was employed as described in the previous section, using k-means with squared Euclidean distance and a fixed number of clusters. Figure 5 suggests that the number of genes chosen has only a minor impact on retrieval performance.

(a) Cell type
(b) Disease
(c) Organism part
Figure 5: Retrieval performance for different numbers of genes included for clustering.

Appendix C Experiment accession numbers

Accession numbers of the 251 experiments selected for the analyses:


E-GEOD-10070, E-GEOD-10233, E-GEOD-10289, E-GEOD-10311, E-GEOD-10315, E-GEOD-10595, E-GEOD-10696, E-GEOD-10718, E-GEOD-10780, E-GEOD-10799, E-GEOD-10820, E-GEOD-10821, E-GEOD-10831, E-GEOD-10879, E-GEOD-10890, E-GEOD-10896, E-GEOD-10916, E-GEOD-10971, E-GEOD-10979, E-GEOD-11057, E-GEOD-11166, E-GEOD-11199, E-GEOD-11281, E-GEOD-11309, E-GEOD-11324, E-GEOD-11348, E-GEOD-11352, E-GEOD-11428, E-GEOD-11755, E-GEOD-11761, E-GEOD-11783, E-GEOD-11839, E-GEOD-11886, E-GEOD-11919, E-GEOD-11941, E-GEOD-11959, E-GEOD-12034, E-GEOD-12108, E-GEOD-12113, E-GEOD-12121, E-GEOD-12172, E-GEOD-12251, E-GEOD-12254, E-GEOD-12264, E-GEOD-12265, E-GEOD-12287, E-GEOD-12355, E-GEOD-12408, E-GEOD-12452, E-GEOD-12710, E-GEOD-13487, E-GEOD-13501, E-GEOD-13548, E-GEOD-13637, E-GEOD-13762, E-GEOD-13763, E-GEOD-13837, E-GEOD-13899, E-GEOD-13911, E-GEOD-13975, E-GEOD-13987, E-GEOD-14001, E-GEOD-14017, E-GEOD-14278, E-GEOD-14383, E-GEOD-14390, E-GEOD-14479, E-GEOD-14924, E-GEOD-14926, E-GEOD-14973, E-GEOD-15271, E-GEOD-15389, E-GEOD-15543, E-GEOD-15645, E-GEOD-15811, E-GEOD-15947, E-GEOD-16020, E-GEOD-16214, E-GEOD-16237, E-GEOD-16363, E-GEOD-1643, E-GEOD-16515, E-GEOD-16728, E-GEOD-16797, E-GEOD-16836, E-GEOD-16837, E-GEOD-17251, E-GEOD-17385, E-GEOD-17400, E-GEOD-17636, E-GEOD-17743, E-GEOD-17763, E-GEOD-18018, E-GEOD-18791, E-GEOD-18842, E-GEOD-18913, E-GEOD-18995, E-GEOD-19067, E-GEOD-19293, E-GEOD-19639, E-GEOD-19665, E-GEOD-19784, E-GEOD-19804, E-GEOD-19826, E-GEOD-19864, E-GEOD-19982, E-GEOD-20114, E-GEOD-20540, E-GEOD-20948, E-GEOD-21261, E-GEOD-22152, E-GEOD-22229, E-GEOD-22513, E-GEOD-22544, E-GEOD-22563, E-GEOD-22779, E-GEOD-23031, E-GEOD-23687, E-GEOD-23806, E-GEOD-2397, E-GEOD-23984, E-GEOD-25518, E-GEOD-2634, E-GEOD-26495, E-GEOD-26673, E-GEOD-27034, E-GEOD-2706, E-GEOD-27187, E-GEOD-31193, E-GEOD-32719, E-GEOD-3467, E-GEOD-34748, E-GEOD-34880, E-GEOD-3526, E-GEOD-35972, E-GEOD-36547, E-GEOD-3678, E-GEOD-3744, E-GEOD-3998, E-GEOD-4567, E-GEOD-4600, E-GEOD-4655, E-GEOD-4883, E-GEOD-4888, E-GEOD-5040, E-GEOD-5230, E-GEOD-5264, E-GEOD-5372, E-GEOD-5679, E-GEOD-6054, E-GEOD-6241, E-GEOD-6400, E-GEOD-6764, E-GEOD-7011, E-GEOD-7216, E-GEOD-7224, E-GEOD-7392, E-GEOD-7440, E-GEOD-7509, E-GEOD-7515, E-GEOD-7538, E-GEOD-7568, E-GEOD-7586, E-GEOD-7696, E-GEOD-7708, E-GEOD-7869, E-GEOD-7890, E-GEOD-8023, E-GEOD-8121, E-GEOD-8167, E-GEOD-8514, E-GEOD-8527, E-GEOD-8597, E-GEOD-8658, E-GEOD-8823, E-GEOD-8961, E-GEOD-8977, E-GEOD-9171, E-GEOD-9489, E-GEOD-9517, E-GEOD-9599, E-GEOD-9649, E-GEOD-9692, E-GEOD-9894, E-MEXP-1103, E-MEXP-1171, E-MEXP-1230, E-MEXP-1243, E-MEXP-1290, E-MEXP-1337, E-MEXP-1372, E-MEXP-1389, E-MEXP-1403, E-MEXP-1412, E-MEXP-1425, E-MEXP-1482, E-MEXP-1512, E-MEXP-1599, E-MEXP-1601, E-MEXP-1741, E-MEXP-1838, E-MEXP-1857, E-MEXP-1956, E-MEXP-1958, E-MEXP-2010, E-MEXP-2034, E-MEXP-2055, E-MEXP-2069, E-MEXP-2083, E-MEXP-2115, E-MEXP-2128, E-MEXP-2236, E-MEXP-2340, E-MEXP-2351, E-MEXP-2360, E-MEXP-2375, E-MEXP-2590, E-MEXP-2657, E-MEXP-3479, E-MEXP-3577, E-MEXP-3756, E-MEXP-3810, E-MEXP-555, E-MEXP-561, E-MEXP-563, E-MEXP-858, E-MEXP-884, E-MEXP-930, E-MEXP-935, E-MEXP-964, E-MEXP-980, E-MEXP-987, E-MEXP-993, E-MTAB-1131, E-MTAB-317, E-MTAB-372, E-MTAB-874, E-TABM-1020, E-TABM-1029, E-TABM-1138, E-TABM-1208, E-TABM-234, E-TABM-276, E-TABM-282, E-TABM-311, E-TABM-440, E-TABM-577, E-TABM-601, E-TABM-666, E-TABM-763, E-TABM-898