1 Introduction
As the use of high-throughput molecular measurement technologies continues to spread, an ever-increasing amount of data from biological experiments is being stored in publicly available repositories. It is then often of interest for researchers to retrieve experimental datasets with relevance to a given experiment, in order to increase the power of statistical analyses and to be able to make novel findings not obtainable from one experiment alone. The current standard practice relies on searching for relevant experiments by keyword annotations (e.g. Zhu et al., 2008). However, despite efforts to maintain compliance with standard formats of documenting experiments, e.g. the MIAME standard (Brazma, 2001), information about experiments may often be missing, insufficient or suffer from variations in terminology (e.g. Baumgartner et al., 2007; Schmidberger et al., 2011). In view of the challenges associated with keyword-based retrieval, the complementary task of querying a database of experiments using measurement data, instead of keywords, has recently received increased attention in the literature.
Most earlier content-driven methods used for retrieval of gene expression data represent each experiment in terms of a profile over genes, or alternatively, over known gene sets or gene modules predicted from other data sources; see Hunter et al. (2001); Fujibuchi et al. (2007); Caldas et al. (2009); Engreitz et al. (2010); Georgii et al. (2012) and references therein. A representative example is to compute differential expression profiles of case vs. control, use the correlation between activity profiles as the measure of relevance, and retrieve the experiments with the highest correlations (e.g. Engreitz et al., 2010). This requires auxiliary information about the experiments, namely case and control labels of experiment samples, and possibly additional a priori defined sets of important genes. In the context of gene expression time series, representative examples of retrieving gene expression profiles include Smith et al. (2008) and Hafemeister et al. (2011).
Recently, two feasibility studies have gone beyond reducing experiments into single profiles by using probabilistic modelling of the experiments in the database being queried. Faisal et al. (2014) assumed that the query dataset can be explained as a mixture of the learnt models, each model learnt from one dataset, such that the measure of relevance is given by the inferred mixture weights. In a slightly different approach (Seth et al., 2014), experiments were retrieved by evaluating the posterior marginal likelihoods, given the query data, of individual models stored for the experiments in the database.
In this paper, we introduce a method for retrieving full datasets, i.e. experiments consisting of multiple samples, which is also based on probabilistic modelling. However, instead of using the query dataset itself as a query, we use a model learnt from it. The measure of relevance is therefore not a likelihood, but instead a suitably defined metric between the models. The argument is that for noisy and complex datasets, it is beneficial to extract relevant characteristics of the query dataset in the same way as was done with the datasets that are being queried. We also make explicit the importance of marginalizing out nuisance parameters which are not directly relevant for the retrieval task. For example, in a gene expression study, one is often more interested in how sets of genes are co-regulated, rather than their exact expression values, which are additionally affected by numerous other influences. We tackle the specific problem of retrieving gene expression experiments by using a product partition model (Jordan et al., 2007) to cluster together genes that show similar expression patterns across a number of samples. By integrating out expression levels of the gene sets (i.e., cluster-specific information), only the co-expression patterns revealed by the clustering structure are retained. The clustering induced by the query dataset is then finally compared with the clusterings associated with the experiments in the database using the normalized information distance (Vinh et al., 2010). Notice that this approach does not involve any “training stage”, compared to that of Seth et al. (2014), and the retrieval step does not involve solving an optimization problem, compared to Faisal et al. (2014).
While gene clustering has a long history in characterizing gene expression datasets (Eisen et al., 1999; D’haeseleer, 2005), it appears not to have been used in the context of experiment retrieval before. The use of gene clustering provides a straightforward way of characterizing each experiment with minimal preprocessing of the data while capturing central co-expression patterns. Essentially all previous approaches for retrieving gene expression data have converted the data to differential expression (or gene set enrichments), requiring fixed and known case-control distinctions. In contrast, we have only applied standard quality control and RMA normalization steps carried out in-house at the European Bioinformatics Institute (EBI) for datasets in the Expression Atlas database (see Petryszak et al., 2014). Our experimental evaluation further suggests that, for the current application, inference of the full probabilistic model can be approximated by a computationally faster heuristic clustering algorithm, such as k-means (see Appendix A). The computational simplicity makes the method highly scalable and easy to apply in a black-box manner, as a general-purpose retrieval scheme.
2 Approach
Let x denote a data matrix from some experiment of interest, and let x_1, …, x_n be a database of datasets from previously conducted experiments. The aim is to retrieve datasets from among the x_i with similar characteristics as the query dataset x. Due to the complex nature of the data, there is no single sensible or obvious way of comparing datasets (matrices of possibly different sizes). We propose using a model to characterize each dataset, with the aim of reducing noise and making relevant aspects of the data more tangible, while making the experiments comparable. The retrieval task then consists in ranking the models M_1, …, M_n, inferred from x_1, …, x_n, with respect to their similarity with the query model M inferred from x. Note that in a broad sense, the commonly used differential expression can be considered as one model type, and clustering as another.
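In outline, this reduces to a generic procedure: fit a model to every dataset once, then rank the database by model-to-model distance at query time. A minimal sketch in Python (function and argument names are illustrative, not from any particular implementation):

```python
def retrieve(query_data, database_data, fit_model, distance, top_k=10):
    """Generic model-distance-based retrieval: fit a model to the query
    dataset and to every database dataset, then rank the database by the
    distance between models (smaller distance = more relevant).
    `fit_model` and `distance` stand for any model family and any metric
    on the model space."""
    query_model = fit_model(query_data)
    models = [fit_model(d) for d in database_data]
    dists = [distance(query_model, m) for m in models]
    order = sorted(range(len(dists)), key=dists.__getitem__)
    return order[:top_k], [dists[i] for i in order[:top_k]]
```

In practice the database models would be fitted once and stored, so that each query costs only one model fit plus n distance evaluations.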
To elaborate on the above idea further, we will now assume that the data generating mechanism of each dataset x can be represented in terms of a probabilistic model with density p(· | θ) in some family F. Often, the parameter θ can be decomposed as θ = (ρ, λ), where ρ is the parameter or characteristic of interest (e.g., gene clusters) and λ is a nuisance parameter (e.g., average expression level of the gene cluster). Marginalizing out (integrating the density over) λ then yields a model family completely determined by ρ. Making this operation explicit, the key quantity used in inferring a representative model for a dataset x is the marginal likelihood,

    p(x | ρ) = ∫ p(x | ρ, λ) π(λ) dλ,    (1)

where π(λ) is a prior density on λ. Ideally, we would then proceed with a fully Bayesian approach to infer a posterior density (or distribution) over ρ, and use it to characterize x. However, for computational reasons we will here choose only a single element of the model family to represent x. Under zero-one loss, the optimal choice is then the maximum a posteriori (MAP) solution

    ρ̂ = arg max_ρ p(x | ρ) π(ρ),    (2)

where π(ρ) is a prior over ρ. Accordingly, we now define the representative model for x as p(· | ρ̂).
If a suitable function d can be defined for the pairwise relations between the elements of the model space, a natural ranking among x_1, …, x_n will be induced by evaluating d(ρ̂, ρ̂_i) for all i = 1, …, n, where ρ̂ and ρ̂_i denote the representative models of x and x_i, respectively. For coherence of the ranking scheme, we will make a further assumption that d is a metric. That is, for all ρ, ρ′, ρ″, we require that

(M1) d(ρ, ρ′) ≥ 0 (non-negativity);
(M2) d(ρ, ρ′) = 0 if and only if ρ = ρ′ (identity of indiscernibles);
(M3) d(ρ, ρ′) = d(ρ′, ρ) (symmetry);
(M4) d(ρ, ρ″) ≤ d(ρ, ρ′) + d(ρ′, ρ″) (triangle inequality).

With the above conditions satisfied, the function d conforms to the intuition of a distance, and furthermore, provides a solid foundation for the design of data structures and algorithms, as the model space forms a metric space. We finally note that metrics are also available for probability distributions, making the described framework applicable in cases where computational resources allow for representing the models as full posterior distributions.

3 Methods
3.1 Probabilistic model for gene clustering
The first task in constructing a retrieval scheme is to choose an appropriate model for the experiments. While several different approaches, with varying aims and assumptions, exist for modelling gene expression data, a particularly simple and frequently used approach is that of gene clustering (e.g. D’haeseleer, 2005), which seeks to cluster together genes that show similar expression patterns across a number of samples. Here, we use a probabilistic clustering approach which simultaneously infers both the number of clusters as well as the optimal clustering structure.
Consider first a gene expression data matrix X of dimension G × d, where G is the number of genes and d is the number of samples. A clustering ρ = {S_1, …, S_k} is a partition of the set {1, …, G} into non-empty and non-overlapping subsets, or clusters, such that S_1 ∪ ⋯ ∪ S_k = {1, …, G} and S_c ∩ S_{c′} = ∅, for c ≠ c′. We focus here on a probabilistic formulation of clustering, which makes explicit use of partition structures, namely the product partition model (PPM). Technically, PPM assumes that items in the same cluster are exchangeable and items in different clusters are independent (see Jordan et al., 2007). Using the terminology of Section 2, the parameter of interest for this model is the partition structure ρ, while the nuisance parameter is a vector of cluster-specific model parameters, λ = (λ_1, …, λ_k). This leads to a marginal likelihood of the form (see Equation (1))

    p(X | ρ) = ∏_{c=1}^{k} p(X_{S_c}),    (3)

where X_{S_c} denotes the subset of X which is indexed by S_c. Note that the assumption of independence between clusters entails constructing the marginal likelihood as a product of cluster-specific components.
The prior distribution for ρ will likewise be constructed as a product,

    π(ρ) = K ∏_{c=1}^{k} c(S_c),    (4)

where K ensures normalization to 1 over the model space and c(S) ≥ 0 for all subsets S. Note that (4) actually specifies the joint distribution for ρ and k, but since the latter is implied by the former, we omit k from the notation. It can be shown that a PPM with cohesion function c(·) chosen such that

    c(S_c) = α (n_c − 1)!,    (5)

where n_c is the number of observations in cluster S_c and α > 0 controls the tendency to form new clusters, can be obtained by integrating out the model parameters in a Dirichlet process mixture model (Dahl, 2009).
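For intuition, the unnormalized prior implied by Dirichlet-process cohesions of this form is easy to evaluate for a given partition: each cluster contributes a factor α (n_c − 1)!. A small sketch (illustrative code, not the authors' implementation):

```python
from math import lgamma, log

def log_ppm_prior_unnorm(partition, alpha=1.0):
    """Unnormalized log-prior of a partition under a PPM with
    Dirichlet-process cohesions c(S) = alpha * (|S| - 1)!;
    alpha controls the tendency to form new clusters.
    `partition` is a list of clusters, each a list of item indices."""
    # (n_c - 1)! = Gamma(n_c), so log (n_c - 1)! = lgamma(n_c)
    return sum(log(alpha) + lgamma(len(cluster)) for cluster in partition)
```

For example, with alpha = 1 the single-cluster partition of four items scores log 3! = log 6, while the all-singletons partition scores 0; a larger alpha shifts the prior towards partitions with more clusters.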
The cluster-specific marginal likelihoods p(X_{S_c}) in Equation (3) can in principle take any suitable form. Here, we assume that for each cluster S_c and each sample j = 1, …, d, the observations x_ij, i ∈ S_c, are independently generated from a Gaussian distribution with a conjugate prior on the unknown model parameters. Furthermore, we make the simplistic assumption that the samples themselves are independent, conditional on a cluster assignment (see Hand and Yu, 2001, for a discussion about the implications of this assumption in a classification context). The resulting cluster-specific marginal likelihoods may then be written as

    p(X_{S_c}) = ∏_{j=1}^{d} ∫ ∏_{i∈S_c} N(x_ij | μ_cj, σ²_cj) π(μ_cj, σ²_cj) d(μ_cj, σ²_cj),    (6)

where π(μ_cj, σ²_cj) denotes the conjugate prior density on the cluster- and sample-specific mean and variance.
Blomstedt et al. (2015) introduced a PPM for clustering mixed discrete and continuous data, where the continuous component was of form (6). Following their implementation, we normalize each column of the data matrix to have zero mean and unit variance, and set the hyperparameter values as specified therein. Furthermore, the model is equipped with a prior of the form (5). Finally, combining Equations (3)–(6), an optimal clustering w.r.t. a dataset X is given by the MAP solution (see Equation (2))

    ρ̂ = arg max_ρ p(X | ρ) π(ρ).    (7)
3.1.1 Inference
To find the optimal clustering as defined in Equation (7), we use a stochastic greedy search algorithm, which moves in the model space by successive application of move, split and merge operators; for further details, see Blomstedt et al. (2015). While more efficient for the optimization task than standard Markov chain Monte Carlo methods, the algorithm still requires a considerable amount of computation time for large amounts of data. To that end, some computational simplifications based on heuristic clustering procedures will be discussed in Appendix A.

3.2 Distance metric for clusterings
Assuming now that each of the experiments in a database has been represented with a clustering ρ̂_i, the remaining task is to find a function d which can be defined on the space of clusterings and satisfies conditions (M1)–(M4) above. In recent years, a new generation of information-theoretic distance measures has emerged (see e.g. Meilă, 2007; Vinh et al., 2010), which possess many desirable properties, such as the metric property, and which have been employed because of their strong mathematical foundation and ability to detect non-linear similarities.
Vinh et al. (2010) conducted a systematic comparison of information-theoretic distance measures, concluding that the preferred “general-purpose” measure for comparing clusterings is the normalized information distance, denoted NID below. To give a definition of this measure, we first introduce some notation. Briefly, for two clusterings ρ = {S_1, …, S_k} and ρ′ = {S′_1, …, S′_{k′}} of the same G items, the number of items co-occurring in clusters S_u and S′_v is given by n_uv = |S_u ∩ S′_v|, with ∑_u ∑_v n_uv = G. The marginal sums are denoted by n_u· = ∑_v n_uv and n_·v = ∑_u n_uv. A key realization in the derivation of information-theoretic distance measures is that each clustering induces an empirical probability distribution over its set of clusters, such that the probability of a randomly chosen item being in cluster S_u is given by p_u = n_u·/G, and similarly p′_v = n_·v/G for ρ′. The joint probability of an item co-occurring in clusters S_u and S′_v is given by p_uv = n_uv/G. The entropy of a clustering ρ, describing the uncertainty associated with assigning items into the clusters of ρ, is then formulated as

    H(ρ) = − ∑_u p_u log p_u.

The mutual information of clusterings ρ and ρ′, which measures how much knowledge of ρ′ reduces the uncertainty about ρ (or vice versa), is further defined as

    I(ρ, ρ′) = ∑_u ∑_v p_uv log ( p_uv / (p_u p′_v) ).

It can also be interpreted as a measure of dependence in the sense that if ρ and ρ′ are independent, then I(ρ, ρ′) = 0. Finally, from the above quantities we obtain NID as

    NID(ρ, ρ′) = 1 − I(ρ, ρ′) / max{H(ρ), H(ρ′)}.    (8)
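The quantities above translate directly into code. A minimal sketch of computing NID from two cluster-label vectors (illustrative; natural logarithms are used, but any base gives the same NID since it cancels in the ratio):

```python
import numpy as np

def nid(labels_a, labels_b):
    """Normalized information distance between two clusterings of the
    same items, given as label vectors:
    NID = 1 - I(A, B) / max{H(A), H(B)} (Equation (8))."""
    labels_a, labels_b = np.asarray(labels_a), np.asarray(labels_b)
    n = labels_a.size
    # contingency table: items co-occurring in cluster u of A and v of B
    cont = np.array([[np.sum((labels_a == u) & (labels_b == v))
                      for v in np.unique(labels_b)]
                     for u in np.unique(labels_a)], dtype=float)
    p_uv = cont / n                                  # joint distribution
    p_u, p_v = p_uv.sum(axis=1), p_uv.sum(axis=0)    # marginals
    h_a = -np.sum(p_u * np.log(p_u))                 # entropies
    h_b = -np.sum(p_v * np.log(p_v))
    nz = p_uv > 0                                    # convention: 0 log 0 = 0
    mi = np.sum(p_uv[nz] * np.log(p_uv[nz] / np.outer(p_u, p_v)[nz]))
    denom = max(h_a, h_b)
    return 1.0 - mi / denom if denom > 0 else 0.0
```

NID is 0 for identical clusterings (up to relabelling of clusters) and 1 for independent ones, in line with the metric conditions of Section 2.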
4 Results
4.1 Data and experimental setup
To evaluate the modelling-based retrieval scheme developed in Sections 2 and 3, we used as a starting point all differential expression experiments conducted on the A-AFFY-44 Affymetrix GeneChip available in Expression Atlas (EA; http://www.ebi.ac.uk/gxa, see Petryszak et al., 2014) as of 4 June 2014. Only experiments with both measurement data and analytics data available were considered. Furthermore, experiments with a very small number of genes were discarded, the lower limit being set at a number of genes for which expression measurements were available in most experiments. Based on the above selection process we obtained an initial set of 447 experiments. In a second stage, we selected a subset of these experiments based on the availability of experimental factor ontologies (EFO; http://www.ebi.ac.uk/efo/, see Malone et al., 2010), which were used as ground truth in the evaluation. More specifically, we retained those experiments which had at least one of the EFO types “cell type”, “disease” or “organism part” present. Moreover, experiments having multiple values for a given EFO type were excluded, and finally only experiments with the same EFO value present in at least two experiments were included in this study, resulting in a final set of 251 experiments (for a list of accession numbers, see Appendix C). The number of samples per experiment varied between 6 and 353, the median number of samples being 22.
Out of the final set of 251 experiments, three partly overlapping subsets corresponding to each of the EFO types were formed. These consisted of 103 experiments with values recorded for “cell type”, 76 with values for “disease” and 174 with values for “organism part”. The numbers of different EFO values in these sets of experiments were 23, 19 and 32, respectively. In retrieving full experiments, those experiments having the same EFO value were considered relevant, and other experiments irrelevant. Note that the above EFO types were not the main conditions of interest on which differential gene expression had been studied in the experiments, but were chosen to give a more general description of the experiments. A more complete ground truth was not readily available, as most other EFO types were only present in small subsets of the experiments. Retrieval performance was measured using precision and recall, taken as an average over successively using each of the experiments as a query to retrieve among the remaining experiments.
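This leave-one-out evaluation protocol can be illustrated with a small sketch of precision at a fixed retrieval depth, averaged over queries (illustrative only; the exact precision-recall computation in the paper may differ in details such as interpolation):

```python
def mean_precision_at_k(rankings, relevant, k):
    """Precision at rank k, averaged over queries.
    `rankings[q]` is the ranked list of retrieved experiment ids for
    query q; `relevant[q]` is the set of ids considered relevant to q
    (here: experiments sharing q's EFO value)."""
    precisions = []
    for q, ranked in rankings.items():
        hits = sum(1 for e in ranked[:k] if e in relevant[q])
        precisions.append(hits / k)
    return sum(precisions) / len(precisions)
```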
In order to reduce the number of genes for clustering, we initially selected for each of the 251 experiments the top 5 genes resulting from a ‘non-specific’ search in EA, in which genes with the highest absolute values of t-statistics in any available contrast come first, irrespective of whether they are reported with high t-statistics in the remaining contrasts (for further details about listing genes in EA, see Petryszak et al., 2014). Finally, by taking the union of these genes over all experiments, we arrived at 1125 genes per experiment. The selection process per se is not an essential part of our approach but was done for computational convenience only. In a preliminary stage of our analyses, we experimented with different numbers of genes but found that this only had a minor impact on the results; see Appendix B for further details.
4.2 Comparison of retrieval schemes
We will now proceed to evaluating the performance of the retrieval approach proposed in Section 2. For gene expression data, we learn for each experiment a Gaussian product partition model (PPM) which implies a clustering over genes, see Section 3. The clustering ρ̂ learned from the query data is then related to the clusterings ρ̂_1, …, ρ̂_n by evaluating the distances NID(ρ̂, ρ̂_i), i = 1, …, n, see Equation (8). This approach will be contrasted with two alternative approaches for content-based retrieval previously suggested in the literature. The first one of these is closely related to the proposed approach in that it learns a PPM for each experiment in the database. However, instead of evaluating distances, it evaluates the marginal likelihoods of the learnt models, given the query dataset. A higher likelihood is then an indication of a higher relevance to the query dataset. A similar approach, albeit for a different model family, was recently suggested in Seth et al. (2014). The term “modelling-based retrieval” has previously been used by Faisal et al. (2014) to describe an approach based on probabilistic modelling but using a likelihood as the measure of relevance. To make a distinction between the approach proposed here and approaches based on evaluating likelihoods, we will in this comparison refer to the former as model-distance-based retrieval and to the latter as likelihood-based retrieval. See Section 5 for a further discussion about the differences between the two approaches.
The second alternative approach, differential expression based retrieval, assumes that a statistical test to detect differentially expressed genes has been conducted beforehand. The method is then based on correlating the gene-specific differential expression values of the query experiment with those of the database experiments. An approach similar to this was suggested by Engreitz et al. (2010). If targeted at differential expression profiles obtained under specific conditions known to be important, this scheme has much potential to achieve good retrieval performance. On the other hand, it assumes more background knowledge and preprocessing of the data than the suggested retrieval schemes based on gene clustering. Here, we do not assume a specific condition of interest, but choose in each experiment, for the selected 1125 genes, the smallest p-values under any of the conditions tested and reported in Expression Atlas. We also experimented with a much larger set of genes, constituting the maximal common set of genes tested in all experiments, but this resulted in slightly inferior performance. The correlation measure used was Pearson’s correlation. We finally note that differential expression based retrieval schemes can also be formulated under the general framework of Section 2 using some appropriate probabilistic model for differential expression, as formulated in e.g. Do et al. (2006).
The results of the comparison between the retrieval schemes are shown in Figure 1. Here, the model-distance-based retrieval scheme clearly outperforms the two other schemes. A notable feature of the results is the surprisingly poor performance of the likelihood-based approach. This may be due to the well-known fact that gene expression measurements tend to be extremely noisy. In essence, the marginal likelihood measures how well the query dataset x is predicted by a model learnt from a database dataset x_i. Even if the experiments underlying x and x_i are in some way related, the idealized model may still not provide a good prediction for the noisy data x. Therefore, instead of using the complex and possibly very noisy dataset itself as query input, retaining only the characteristics relevant for retrieval in both the query and the database experiments may help to improve performance, as illustrated in the results.
4.3 Biological information in gene clustering
Any single EFO type will necessarily capture only one aspect of an experiment, whereas a meaningful retrieval task usually involves an evaluation of relevance between experiments in terms of a combination of aspects. It is therefore of interest to study the effect of composing the ground truth as a combination of multiple EFO types. In the current experimental setup, the ground truth for each of the EFO types “cell type”, “disease” and “organism part” can be represented as a symmetric binary matrix of dimension 251 × 251, in which entry (i, j) equals 1 if experiments i and j are mutually relevant and 0 otherwise. A ground truth which requires a match in at least m EFO types can then be formed by summing the three matrices and requiring each entry of the sum to be at least m.
In Figure 2, the model-distance-based retrieval scheme is evaluated against ground truth relevances requiring (a) any EFO type to match (m ≥ 1), (b) two or more matches (m ≥ 2) and (c) all EFO types to match (m = 3). The numbers of experiments satisfying these conditions are 251, 54 and 6, respectively. Intuitively, the ground truth can be considered increasingly informative as the number of matching EFO types required to declare relevance increases. A retrieval scheme capturing biologically relevant information should then be in better agreement with a more informative ground truth. Although the curves of Figures 2(a) and 2(b) are not directly comparable due to the differing numbers of experiments used, the shape of the latter gives an indication of a better agreement. In Figure 2(c), owing to the small number of available experiments, the ground truth is compared with the single most relevant experiment (out of five possible ones) retrieved for each query. Here, the retrieval result matches the ground truth in four of the six queries.
4.4 Annotations and gene clustering combined
As noted previously in Section 1, information about experiments may often be missing, insufficient or suffer from variations in terminology (Baumgartner et al., 2007; Schmidberger et al., 2011), despite a formal declaration of compliance with MIAME criteria (Brazma, 2001). Hence, even in cases where keyword-based retrieval is of primary interest, it may be advantageous to complement a query with information provided by gene clustering. A straightforward way of combining these two types of information is the following. Assume that a database of n experiments is being queried and that m of the experiments are found to match the keyword query. More formally, the result can be encoded as a binary vector of length n with m elements having value 1. A model-distance-based retrieval scheme, on the other hand, will return a vector of length n with each element representing the distance of the corresponding experiment-specific model to the query model. Element-wise multiplication of these vectors then effectively induces a ranking of the experiments retrieved in the keyword-based query. The underlying idea is that this ranking will reflect some information which is not present in the queried keyword(s) alone.
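Under this encoding, the combination step itself is a one-liner; the only subtlety is that experiments zeroed out by the keyword vector must be excluded rather than ranked first (a zero would otherwise look like a perfect distance). An illustrative sketch:

```python
import numpy as np

def combined_retrieval(keyword_match, model_distance):
    """Combine keyword-based and model-distance-based retrieval:
    `keyword_match` is a binary vector (1 = matches the keyword query),
    `model_distance` holds each experiment's model distance to the query
    model. Element-wise multiplication keeps only keyword matches, which
    are then ranked by model distance (ascending)."""
    keyword_match = np.asarray(keyword_match, dtype=float)
    model_distance = np.asarray(model_distance, dtype=float)
    combined = keyword_match * model_distance
    matched = np.flatnonzero(keyword_match)          # keyword hits only
    return matched[np.argsort(combined[matched])]    # ranked indices
```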
To test the combined method, we considered all experiments matching in both “cell type” and “organism part”, resulting in a total of 43 experiments (all other combinations of two EFO types resulted in considerably fewer experiments). A match in both of these EFO types was used as ground truth. The idea was then to retrieve experiments assuming only one of the EFO types to be known, complementing keyword-based retrieval with rankings from model-distance-based retrieval. Retrieving experiments assuming only “cell type” to be known resulted in an average precision of 0.55 for keyword-based retrieval and a mean average precision of 0.61 for combined retrieval, the corresponding numbers being 0.81 and 0.84, respectively, when only “organism part” was assumed to be known. In both cases we observed a slight improvement in performance for the combined approach, suggesting that keyword-based retrieval may benefit from being complemented with auxiliary information, such as gene clustering.
5 Discussion
In this paper, we have introduced a general probabilistic framework for content-driven retrieval of experimental datasets. Compared to earlier works which also employ probabilistic modelling (e.g. Caldas et al., 2009, 2012; Faisal et al., 2014; Seth et al., 2014), we do not use the likelihood of the query data as a measure of relevance, but instead learn a model of the query data and compare models. We argue that this reduces noise in the query input. With nuisance parameters further marginalized out, only characteristics relevant for the retrieval task are retained. A special instance of the general framework introduced in this paper has previously been used as a comparative method in a simulation study (Seth et al., 2014), with performance slightly inferior to a likelihood-based approach. The simulation setting in that earlier study was, however, very simplistic compared to datasets encountered in many real-life scenarios, such as that of Section 4, where the model-distance-based approach was seen to clearly outperform its likelihood-based counterpart.
Contrary to likelihood-based approaches, the model-distance-based approach requires all models under consideration to belong to the same family. Although this may seem somewhat restrictive, in particular for the potential future scenario in which individual researchers independently store models in a repository along with their datasets (e.g. Faisal et al., 2014), there are also scenarios where the assumption is feasible. Datasets which arise as a result of some specific type of experiment are often in practice modelled using a fairly standardized set of approaches. In particular, the assumption is reasonable if the models are constructed automatically, or by a curator of a data repository.
As a specific application of the general framework, in Sections 3 and 4 we proposed a retrieval scheme for gene expression experiments based on gene clustering. Clustering turned out to be a surprisingly good model for this purpose: with minimal preprocessing and prior knowledge about the experiments, it is able to yield reasonable retrieval performance (Section 4.2) and to capture biologically relevant characteristics of the experiments (Section 4.3). Finally, we showed that it is straightforward to combine model-distance-based (or any modelling-based) retrieval with retrieval using available keywords (Section 4.4).
Acknowledgement
The authors would like to thank Ugis Sarkans for providing useful information about Expression Atlas. This work was financially supported by the Academy of Finland (Finnish Centre of Excellence in Computational Inference Research COIN, grant no 251170).
References
 Baumgartner et al. (2007) Baumgartner, Jr., W. A., Cohen, K. B., Fox, L. M., Acquaah-Mensah, G., and Hunter, L. (2007). Manual curation is not sufficient for annotation of genomic databases. Bioinformatics, 23, i41–i48.
 Blomstedt et al. (2015) Blomstedt, P., Tang, J., Xiong, J., Granlund, C., and Corander, J. (2015). A Bayesian predictive model for clustering data of mixed discrete and continuous type. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37(3), 489–498.
 Brazma (2001) Brazma, A. (2001). Minimum information about a microarray experiment (MIAME) – towards standards for microarray data. Nature Genetics, 29, 365–371.
 Caldas et al. (2009) Caldas, J., Gehlenborg, N., Faisal, A., Brazma, A., and Kaski, S. (2009). Probabilistic retrieval and visualization of biologically relevant microarray experiments. Bioinformatics, 25(12), i145–i153.
 Caldas et al. (2012) Caldas, J., Gehlenborg, N., Kettunen, E., Faisal, A., Rönty, M., Nicholson, A. G., Knuutila, S., Brazma, A., and Kaski, S. (2012). Data-driven information retrieval in heterogeneous collections of transcriptomics data links SIM2s to malignant pleural mesothelioma. Bioinformatics, 28(2), 246–253.
 Dahl (2009) Dahl, D. B. (2009). Modal clustering in a class of product partition models. Bayesian Analysis, 4(2), 243–264.
 D’haeseleer (2005) D’haeseleer, P. (2005). How does gene expression clustering work? Nature Biotechnology, 23(12), 1499–1501.
 Do et al. (2006) Do, K.A., Müller, P., and Vannucci, M., editors (2006). Bayesian Inference for Gene Expression and Proteomics. Cambridge University Press, Cambridge, UK.
 Eisen et al. (1999) Eisen, M. B., Spellman, P. T., Brown, P. O., and Botstein, D. (1999). Cluster analysis and display of genomewide expression patterns. PNAS, 95, 14863–14868.
 Engreitz et al. (2010) Engreitz, J. M., Morgan, A. A., Dudley, J. T., Chen, R., Thathoo, R., Altman, R. B., and Butte, A. J. (2010). Contentbased microarray search using differential expression profiles. BMC Bioinformatics, 11(603).
 Faisal et al. (2014) Faisal, A., Peltonen, J., Georgii, E., Rung, J., and Kaski, S. (2014). Toward computational cumulative biology by combining models of biological datasets. PLoS ONE, 9(11), e113053.
 Fujibuchi et al. (2007) Fujibuchi, W., Kiseleva, L., Taniguchi, T., Harada, H., and Horton, P. (2007). Cellmontage: similar expression profile search server. Bioinformatics, 23(22), 3103–3104.
 Georgii et al. (2012) Georgii, E., Salojärvi, J., Brosché, M., Kangasjärvi, J., and Kaski, S. (2012). Targeted retrieval of gene expression measurements using regulatory models. Bioinformatics, 28(18), 2349–2356.
 Hafemeister et al. (2011) Hafemeister, C., Costa, I. G., Schonhuth, A., and Schliep, A. (2011). Classifying short gene expression timecourses with Bayesian estimation of piecewise constant functions. Bioinformatics, 27(7), 946–952.
 Hand and Yu (2001) Hand, D. J. and Yu, K. (2001). Idiot’s Bayes – Not so stupid after all? International Statistical Review, 69(3), 385–398.
 Hunter et al. (2001) Hunter, L., Taylor, R. C., Leach, S. M., and Simon, R. (2001). GEST: a gene expression search tool based on a novel Bayesian similarity metric. Bioinformatics, 17(Suppl 1), S115–S122.
 Jaskowiak et al. (2014) Jaskowiak, P. A., Campello, R. J. G. B., and Costa, I. G. (2014). On the selection of appropriate distances for gene expression data clustering. BMC Bioinformatics, 15(Suppl 2), S2.
 Jordan et al. (2007) Jordan, C., Livingstone, V., and Barry, D. (2007). Statistical modelling using product partition models. Statistical Modelling, 7(3), 275–295.
 Malone et al. (2010) Malone, J., Holloway, E., Adamusiak, T., Kapushesky, M., Zheng, J., Kolesnikov, N., Zhukova, A., Brazma, A., and Parkinson, H. (2010). Modeling sample variables with an experimental factor ontology. Bioinformatics, 26(8), 1112–1118.

 Meilă (2007) Meilă, M. (2007). Comparing clusterings – an information based distance. Journal of Multivariate Analysis, 98, 873–895.
 Petryszak et al. (2014) Petryszak, R., Burdett, T., Fiorelli, B., Fonseca, N., Gonzalez-Porta, M., Hastings, E., Huber, W., Jupp, S., Keays, M., Kryvych, N., McMurry, J., Marioni, J., Malone, J., Megy, K., Rustici, G., Tang, A. Y., Taubert, J., Williams, E., Mannion, O., Parkinson, H. E., and Brazma, A. (2014). Expression Atlas update – a database of gene and transcript expression from microarray and sequencing-based functional genomics experiments. Nucleic Acids Research, 42(Database issue), D926–D932.
 Schmidberger, M., Lennert, S., and Mansmann, U. (2011). Conceptual aspects of large meta-analyses with publicly available microarray data: a case study in oncology. Bioinformatics and Biology Insights, 5, 13–39.
 Seth et al. (2014) Seth, S., Shawe-Taylor, J., and Kaski, S. (2014). Retrieval of experiments by efficient comparison of marginal likelihoods. In C. Loo, K. Yap, K. Wong, A. Teoh, and K. Huang, editors, Neural Information Processing, volume 8835 of Lecture Notes in Computer Science, pages 135–142. Springer International Publishing.
 Smith et al. (2008) Smith, A. A., Vollrath, A., Bradfield, C. A., and Craven, M. (2008). Similarity queries for temporal toxicogenomic expression profiles. PLoS Comput Biol, 4(7), e1000116.
 Vinh et al. (2010) Vinh, N. X., Epps, J., and Bailey, J. (2010). Information theoretic measures for clusterings comparison: Variants, properties, normalization and correction for chance. Journal of Machine Learning Research, 11, 2837–2854.
 Zhu et al. (2008) Zhu, Y., Davis, S., Stephens, R., Meltzer, P. S., and Chen, Y. (2008). GEOmetadb: powerful alternative search engine for the Gene Expression Omnibus. Bioinformatics, 24(23), 2798–2800.
Appendix
Appendix A Simplified search for an optimal clustering
Recall that a product partition model (PPM) is a probabilistic model which implies a clustering of data items into non-empty and non-overlapping subsets. Given a dataset $\boldsymbol{x}$ of $n$ items, an optimal clustering $\hat{S}$ is given by the maximum a posteriori solution
$$\hat{S} = \arg\max_{S \in \mathcal{S}} p(S \mid \boldsymbol{x}),$$
where $\mathcal{S}$ denotes the space of all possible clusterings of $\boldsymbol{x}$. Since the cardinality of the model space grows very quickly with $n$ (the number of possible clusterings of $n$ items is the Bell number $B_n$; already $B_{20} \approx 5.2 \times 10^{13}$), an exhaustive evaluation of all posterior probabilities $p(S \mid \boldsymbol{x})$, $S \in \mathcal{S}$, is not feasible in practice. Therefore, a stochastic greedy search algorithm was implemented in the analyses of Section 4 to find the optimal clustering for each dataset. While more efficient for this optimization task than standard Markov chain Monte Carlo methods, the algorithm still requires a considerable amount of computation time for large amounts of data. One possible simplification is to restrict the search to a subset $\mathcal{S}' \subset \mathcal{S}$ of the model space by choosing a set of potentially good solutions in advance, and then selecting the optimal solution among them as
$$\hat{S} = \arg\max_{S \in \mathcal{S}'} p(S \mid \boldsymbol{x}). \qquad (9)$$
A straightforward way of constructing a suitable $\mathcal{S}'$ is to consider only solutions found by one or several heuristic clustering algorithms. These algorithms are usually fast to execute but provide no measure of uncertainty regarding the obtained solution and require the number of clusters $K$ to be fixed in advance. Running such an algorithm for all values $K = 1, \dots, n$ reduces the cardinality of the search space to $n$, which in many cases is small enough to enable an exhaustive evaluation of the posterior probabilities of all clusterings in $\mathcal{S}'$. Even a combination of, say, $m$ different algorithms still yields a model space with a cardinality of only $mn$.
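The restricted search in (9) can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a simplified Gaussian product-partition score in which each cluster's values have a $N(\mu_j, \sigma^2)$ likelihood with a $N(0, \tau^2)$ prior on the cluster mean and both variances treated as known (so the per-cluster marginal likelihood has a closed form), and it uses a plain Lloyd's-algorithm `kmeans` as the heuristic clusterer; the function names and parameter values are illustrative assumptions.

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Plain Lloyd's algorithm (illustrative stand-in for any heuristic clusterer)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        new = np.array([X[labels == j].mean(0) if np.any(labels == j) else centers[j]
                        for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

def cluster_logml(y, sigma2=1.0, tau2=10.0):
    """Log marginal likelihood of one cluster's values for a single feature:
    y_i ~ N(mu, sigma2) with mu ~ N(0, tau2); variances assumed known here,
    which is a simplification made for a closed-form expression."""
    n, s1, s2 = len(y), y.sum(), (y ** 2).sum()
    return -0.5 * (n * np.log(2 * np.pi) + (n - 1) * np.log(sigma2)
                   + np.log(sigma2 + n * tau2) + s2 / sigma2
                   - tau2 * s1 ** 2 / (sigma2 * (sigma2 + n * tau2)))

def log_score(X, labels):
    """Product-partition score: sum of per-cluster, per-feature log marginals."""
    return sum(cluster_logml(X[labels == j][:, d])
               for j in np.unique(labels) for d in range(X.shape[1]))

def best_clustering(X, k_max):
    """Equation (9) over a reduced model space: score the k-means solutions
    for K = 1..k_max and return the maximizer."""
    candidates = [kmeans(X, k) for k in range(1, k_max + 1)]
    scores = [log_score(X, S) for S in candidates]
    best = int(np.argmax(scores))
    return best + 1, candidates[best]
```

With $K$ fixed in advance, `best_clustering` degenerates to a single `kmeans` call, which corresponds to the trivial one-clustering model space discussed in this appendix.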
To further reduce the scope of the search, the range of $K$ for which heuristic solutions are obtained may be restricted to an interval in which plausible solutions are likely to be found. For instance, in analysing how different distances and clustering methods interact regarding their ability to cluster gene expression data, Jaskowiak et al. (2014) conducted their comparison for clusterings generated over a restricted interval of $K$, rather than the full range $K = 1, \dots, n$. In our current application, we additionally experimented with restricting $K$ to a fixed value, which trivially reduces the model space to a single clustering. In this case, as the number of clusters is not chosen adaptively for each dataset, the clusterings no longer provide biologically meaningful groupings of the genes, but they may still give a sufficient characterization of the experiments for purposes of retrieval. This is demonstrated in Figure 3, where retrieval based on the optimal clustering in the full model space $\mathcal{S}$ is compared with retrieval in a reduced model space $\mathcal{S}'$ of $k$-means solutions over such an interval, as well as in a trivial model space consisting of a single $k$-means solution with $K$ fixed at the midpoint of the interval used by Jaskowiak et al. (2014).
The quality of the solution in (9) depends on how well the clusterings in the reduced model space $\mathcal{S}'$ correspond to those clusterings in $\mathcal{S}$ which have a high probability under the PPM formulation. Figure 4 shows a comparison of the retrieval performance of various heuristic clustering algorithms, with the number of clusters fixed at a common value for simplicity and using the PPM as a baseline. The results indicate that heuristic algorithms based on a Euclidean distance measure (e.g. $k$-means with squared Euclidean distance and complete linkage (CL) with Euclidean distance) yield the retrieval performance that most closely matches that obtained using the Gaussian PPM. Although similar behaviour may be expected in other datasets of the same type, this conclusion is data-specific and should not be generalized beyond the scope of the current data without further validation.
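How closely two clusterings agree can be quantified with the clustering-comparison measures cited in the reference list (Meilă, 2007; Vinh et al., 2010). As an illustration only (this measure is not claimed to be part of the authors' pipeline), Meilă's variation of information, which is zero exactly when two labelings induce the same partition, can be computed as:

```python
import numpy as np

def variation_of_information(a, b):
    """Meilă's (2007) variation of information between two labelings a, b:
    VI(U, V) = H(U) + H(V) - 2 I(U, V)."""
    a, b = np.asarray(a), np.asarray(b)
    n = len(a)
    # Joint distribution over pairs of cluster labels.
    ua, ia = np.unique(a, return_inverse=True)
    ub, ib = np.unique(b, return_inverse=True)
    joint = np.zeros((len(ua), len(ub)))
    np.add.at(joint, (ia, ib), 1.0)
    joint /= n
    pa, pb = joint.sum(1), joint.sum(0)           # marginals
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))  # entropy
    mask = joint > 0
    mi = np.sum(joint[mask] * np.log(joint[mask]
                / (pa[:, None] * pb[None, :])[mask]))   # mutual information
    return h(pa) + h(pb) - 2 * mi
```

The measure is invariant to relabeling of the clusters, so it compares partitions rather than the arbitrary integer labels returned by a clustering algorithm.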
Appendix B Impact of number of genes
In Section 4.1, the number of genes for clustering was reduced by initially selecting for each experiment the top 5 genes resulting from a ‘non-specific’ search in Expression Atlas (http://www.ebi.ac.uk/gxa; see Petryszak et al., 2014). Taking the union of these genes over all 251 experiments resulted in 1125 genes per experiment. To study the impact of the number of genes included in each dataset, we repeated the same procedure for the top 10 and top 25 genes, resulting in 2117 and 4740 genes per experiment, respectively. Due to the large number of genes, in particular in the last group, the simplified search scheme for clusterings described in the previous section was employed, using $k$-means with the squared Euclidean distance measure and a fixed number of clusters. Figure 5 suggests that the number of genes chosen has only a minor impact on retrieval performance.
Appendix C Experiment accession numbers
Accession numbers of the 251 experiments selected for the analyses:
E-GEOD-10070, E-GEOD-10233, E-GEOD-10289, E-GEOD-10311, E-GEOD-10315, E-GEOD-10595, E-GEOD-10696, E-GEOD-10718, E-GEOD-10780, E-GEOD-10799, E-GEOD-10820, E-GEOD-10821, E-GEOD-10831, E-GEOD-10879, E-GEOD-10890, E-GEOD-10896, E-GEOD-10916, E-GEOD-10971, E-GEOD-10979, E-GEOD-11057, E-GEOD-11166, E-GEOD-11199, E-GEOD-11281, E-GEOD-11309, E-GEOD-11324, E-GEOD-11348, E-GEOD-11352, E-GEOD-11428, E-GEOD-11755, E-GEOD-11761, E-GEOD-11783, E-GEOD-11839, E-GEOD-11886, E-GEOD-11919, E-GEOD-11941, E-GEOD-11959, E-GEOD-12034, E-GEOD-12108, E-GEOD-12113, E-GEOD-12121, E-GEOD-12172, E-GEOD-12251, E-GEOD-12254, E-GEOD-12264, E-GEOD-12265, E-GEOD-12287, E-GEOD-12355, E-GEOD-12408, E-GEOD-12452, E-GEOD-12710, E-GEOD-13487, E-GEOD-13501, E-GEOD-13548, E-GEOD-13637, E-GEOD-13762, E-GEOD-13763, E-GEOD-13837, E-GEOD-13899, E-GEOD-13911, E-GEOD-13975, E-GEOD-13987, E-GEOD-14001, E-GEOD-14017, E-GEOD-14278, E-GEOD-14383, E-GEOD-14390, E-GEOD-14479, E-GEOD-14924, E-GEOD-14926, E-GEOD-14973, E-GEOD-15271, E-GEOD-15389, E-GEOD-15543, E-GEOD-15645, E-GEOD-15811, E-GEOD-15947, E-GEOD-16020, E-GEOD-16214, E-GEOD-16237, E-GEOD-16363, E-GEOD-1643, E-GEOD-16515, E-GEOD-16728, E-GEOD-16797, E-GEOD-16836, E-GEOD-16837, E-GEOD-17251, E-GEOD-17385, E-GEOD-17400, E-GEOD-17636, E-GEOD-17743, E-GEOD-17763, E-GEOD-18018, E-GEOD-18791, E-GEOD-18842, E-GEOD-18913, E-GEOD-18995, E-GEOD-19067, E-GEOD-19293, E-GEOD-19639, E-GEOD-19665, E-GEOD-19784, E-GEOD-19804, E-GEOD-19826, E-GEOD-19864, E-GEOD-19982, E-GEOD-20114, E-GEOD-20540, E-GEOD-20948, E-GEOD-21261, E-GEOD-22152, E-GEOD-22229, E-GEOD-22513, E-GEOD-22544, E-GEOD-22563, E-GEOD-22779, E-GEOD-23031, E-GEOD-23687, E-GEOD-23806, E-GEOD-2397, E-GEOD-23984, E-GEOD-25518, E-GEOD-2634, E-GEOD-26495, E-GEOD-26673, E-GEOD-27034, E-GEOD-2706, E-GEOD-27187, E-GEOD-31193, E-GEOD-32719, E-GEOD-3467, E-GEOD-34748, E-GEOD-34880, E-GEOD-3526, E-GEOD-35972, E-GEOD-36547, E-GEOD-3678, E-GEOD-3744, E-GEOD-3998, E-GEOD-4567, E-GEOD-4600, E-GEOD-4655, E-GEOD-4883, E-GEOD-4888, E-GEOD-5040, E-GEOD-5230, E-GEOD-5264, E-GEOD-5372, E-GEOD-5679, E-GEOD-6054, E-GEOD-6241, E-GEOD-6400, E-GEOD-6764, E-GEOD-7011, E-GEOD-7216, E-GEOD-7224, E-GEOD-7392, E-GEOD-7440, E-GEOD-7509, E-GEOD-7515, E-GEOD-7538, E-GEOD-7568, E-GEOD-7586, E-GEOD-7696, E-GEOD-7708, E-GEOD-7869, E-GEOD-7890, E-GEOD-8023, E-GEOD-8121, E-GEOD-8167, 
E-GEOD-8514, E-GEOD-8527, E-GEOD-8597, E-GEOD-8658, E-GEOD-8823, E-GEOD-8961, E-GEOD-8977, E-GEOD-9171, E-GEOD-9489, E-GEOD-9517, E-GEOD-9599, E-GEOD-9649, E-GEOD-9692, E-GEOD-9894, E-MEXP-1103, E-MEXP-1171, E-MEXP-1230, E-MEXP-1243, E-MEXP-1290, E-MEXP-1337, E-MEXP-1372, E-MEXP-1389, E-MEXP-1403, E-MEXP-1412, E-MEXP-1425, E-MEXP-1482, E-MEXP-1512, E-MEXP-1599, E-MEXP-1601, E-MEXP-1741, E-MEXP-1838, E-MEXP-1857, E-MEXP-1956, E-MEXP-1958, E-MEXP-2010, E-MEXP-2034, E-MEXP-2055, E-MEXP-2069, E-MEXP-2083, E-MEXP-2115, E-MEXP-2128, E-MEXP-2236, E-MEXP-2340, E-MEXP-2351, E-MEXP-2360, E-MEXP-2375, E-MEXP-2590, E-MEXP-2657, E-MEXP-3479, E-MEXP-3577, E-MEXP-3756, E-MEXP-3810, E-MEXP-555, E-MEXP-561, E-MEXP-563, E-MEXP-858, E-MEXP-884, E-MEXP-930, E-MEXP-935, E-MEXP-964, E-MEXP-980, E-MEXP-987, E-MEXP-993, E-MTAB-1131, E-MTAB-317, E-MTAB-372, E-MTAB-874, E-TABM-1020, E-TABM-1029, E-TABM-1138, E-TABM-1208, E-TABM-234, E-TABM-276, E-TABM-282, E-TABM-311, E-TABM-440, E-TABM-577, E-TABM-601, E-TABM-666, E-TABM-763, E-TABM-898