Toward a generic representation of random variables for machine learning

06/02/2015 ∙ by Gautier Marti, et al. ∙ Hellebore Capital Ltd

This paper presents a pre-processing step and a distance which improve the performance of machine learning algorithms working on independent and identically distributed stochastic processes. We introduce a novel non-parametric approach to represent random variables which splits apart dependency and distribution without losing any information. We also propound an associated metric leveraging this representation and its statistical estimate. Besides experiments on synthetic datasets, the benefits of our contribution are illustrated through the example of clustering financial time series, for instance prices from the credit default swaps market. Results are available on the website www.datagrapple.com and an IPython Notebook tutorial is available at www.datagrapple.com/Tech for reproducible research.


1 Introduction

Machine learning on time series is a booming field and, as such, plenty of representations, transformations, normalizations, metrics and other divergences are put at the practitioner's disposal. A further consequence of the recent advances in time series mining is that it is difficult to have a sober look at the state of the art since many papers state contradictory claims, as described in (Ding et al., 2008). To be fair, we should mention that when data, pre-processing steps, distances and algorithms are combined together, they have an intricate behaviour, making it difficult to draw unanimous conclusions, especially in a fast-paced environment. Restricting the scope of time series to independent and identically distributed (i.i.d.) stochastic processes, we propound a method which, contrary to many of its counterparts, is mathematically grounded with respect to the clustering task defined in subsection 1.1. The representation we present in Section 2 exploits a property similar to the seminal result of copula theory, namely Sklar's theorem (Sklar, 1959). This approach leverages the specificities of random variables and thereby solves several shortcomings of more classical data pre-processing and distances that will be detailed in subsection 1.2. Section 3 is dedicated to experiments on synthetic and real datasets to illustrate the benefits of our method, which relies on the hypothesis of i.i.d. sampling of the random variables. Synthetic time series are generated by a simple model yielding correlated random variables following different distributions. The presented approach is also applied to financial time series from the credit default swaps market, whose price dynamics are usually modelled by random walks according to the efficient-market hypothesis (Fama, 1965). This dataset seems more interesting than stocks as credit default swaps are often considered a gauge of investors' fear; these time series are thus subject to more violent moves and may provide more distributional information than those from the stock market. We have made our detailed experiments (cf. Machine Tree on the website www.datagrapple.com) and Python code available (www.datagrapple.com/Tech) for reproducible research. Finally, we conclude the paper with a discussion on the method and propound future research directions.

1.1 Motivation and goal of study

Machine learning methodology usually consists of several pre-processing steps aimed at cleaning the data and preparing it to be fed to a battery of algorithms. Data scientists have the daunting mission of choosing, among a profuse literature, the best possible combination of pre-processing, dissimilarity measure and algorithm to solve the task at hand. In this article, we provide both a pre-processing and a distance for studying i.i.d. random processes which are compatible with basic machine learning algorithms.

Many statistical distances exist to measure the dissimilarity of two random variables, and therefore of two i.i.d. random processes. Such distances can be roughly classified into two families:

  1. distributional distances, for instance (Ryabko, 2010), (Khaleghi et al., 2012) and (Henderson et al., 2015), which focus on dissimilarity between probability distributions and quantify divergences in marginal behaviours;

  2. dependence distances, such as the distance correlation or copula-based kernel dependency measures (Póczos et al., 2012), which focus on the joint behaviours of random variables, generally ignoring their distribution properties.

However, we may want to be able to discriminate random variables both on distribution and dependence. This can be motivated, for instance, from the study of financial assets returns: are two perfectly correlated random variables (assets returns), but one being normally distributed and the other one following a heavy-tailed distribution, similar? From a risk perspective, the answer is no (Kelly and Jiang, 2014), hence the distance propounded in this article. We illustrate its benefits through clustering, a machine learning task which primarily relies on the metric space considered (data representation and associated distance). Besides, clustering has found application in finance, e.g. (Tola et al., 2008), which gives us a framework for benchmarking on real data.

Our objective is therefore to obtain a good clustering of random variables based on a distance which is appropriate and simple enough to be used with basic clustering algorithms, e.g. Ward hierarchical clustering (Ward, 1963), k-means++ (Arthur and Vassilvitskii, 2007), affinity propagation (Frey and Dueck, 2007).

By clustering we mean the task of grouping sets of objects in such a way that objects in the same cluster are more similar to each other than to those in different clusters. More specifically, a cluster of random variables should gather random variables with common dependence between them and with a common distribution. Two clusters should differ either in the dependency between their random variables or in their distributions.

A good clustering is a partition of the data that must be stable to small perturbations of the dataset. “Stability of some kind is clearly a desirable property of clustering methods” (Carlsson and Mémoli, 2010). In the case of random variables, these small perturbations can be obtained from resampling (Levine and Domany, 2001), (Monti et al., 2003), (Lange et al., 2004) in the spirit of the bootstrap method since it preserves the statistical properties of the initial sample (Efron, 1979).

Yet, practitioners and researchers pinpoint that state-of-the-art results of clustering methodology applied to financial time series are very sensitive to perturbations (Lemieux et al., 2014). The observed instability may result from a poor representation of these time series, and thus clusters may not capture all the underlying information.

1.2 Shortcomings of a standard machine learning approach

A naive but often used distance between random variables to measure similarity and to perform clustering is the $L^2$ distance $\mathbb{E}[(X-Y)^2]$. Yet, this distance is not suited to our task.

Example 1 (Distance $\mathbb{E}[(X-Y)^2]$ between two Gaussians).

Let $(X, Y)$ be a bivariate Gaussian vector, with $X \sim \mathcal{N}(\mu_X, \sigma_X^2)$, $Y \sim \mathcal{N}(\mu_Y, \sigma_Y^2)$ and whose correlation is $\rho(X, Y) = \rho \in [-1, 1]$. We obtain $\mathbb{E}[(X-Y)^2] = (\mu_X - \mu_Y)^2 + (\sigma_X - \sigma_Y)^2 + 2\sigma_X \sigma_Y (1 - \rho)$. Now, consider the following values for correlation:

  • $\rho = 0$, so $\mathbb{E}[(X-Y)^2] = (\mu_X - \mu_Y)^2 + \sigma_X^2 + \sigma_Y^2$. The two variables are independent (since uncorrelated and jointly normally distributed), thus we must discriminate on distribution information. Assume $\mu_X = \mu_Y$ and $\sigma_X = \sigma_Y$. For $\sigma_X = \sigma_Y \gg 1$, we obtain $\mathbb{E}[(X-Y)^2] = 2\sigma_X^2 \gg 0$ instead of the distance $0$ expected from comparing two equal Gaussians.

  • $\rho = 1$, so $\mathbb{E}[(X-Y)^2] = (\mu_X - \mu_Y)^2 + (\sigma_X - \sigma_Y)^2$. Since the variables are perfectly correlated, we must discriminate on distributions. We actually compare them with a $L^2$ metric on the mean × standard deviation half-plane. However, this is not an appropriate geometry for comparing two Gaussians (Costa et al., 2014). For instance, if $\sigma_X = \sigma_Y = \sigma$, we find $\mathbb{E}[(X-Y)^2] = (\mu_X - \mu_Y)^2$ for any value of $\sigma$. As $\sigma$ grows, the probability attached by the two Gaussians to a given interval grows similar (cf. Fig. 1), yet this increasing similarity is not taken into account by this distance.


    Figure 1: Probability density functions of three pairs of Gaussians (in green, red and blue); within each pair, the two Gaussians have the same standard deviation but different means, the standard deviation increasing from one pair to the next. Green, red and blue Gaussians are equidistant using the $L^2$ geometry on the parameter space $(\mu, \sigma)$.

$\mathbb{E}[(X-Y)^2]$ considers both dependence and distribution information of the random variables, but not in a relevant way with respect to our task. Yet, we will benchmark against this distance since other more sophisticated distances on time series such as dynamic time warping (Berndt and Clifford, 1994) and representations such as wavelets (Percival and Walden, 2006) or SAX (Lin et al., 2003) were explicitly designed to handle temporal patterns which are nonexistent in i.i.d. random processes.
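To make the two failure cases above concrete, here is a small numerical sketch (not from the original paper) of the closed form $\mathbb{E}[(X-Y)^2] = (\mu_X - \mu_Y)^2 + (\sigma_X - \sigma_Y)^2 + 2\sigma_X\sigma_Y(1-\rho)$; the parameter values are illustrative only.

```python
import numpy as np

def l2_distance_gaussians(mu_x, sigma_x, mu_y, sigma_y, rho):
    """Closed-form E[(X-Y)^2] for a bivariate Gaussian (X, Y) with correlation rho."""
    return (mu_x - mu_y) ** 2 + (sigma_x - sigma_y) ** 2 + 2 * sigma_x * sigma_y * (1 - rho)

# Case rho = 0: two identical N(0, sigma^2) marginals, yet the distance grows as 2*sigma^2.
for sigma in (1.0, 10.0, 100.0):
    print("rho=0, sigma=%6.1f ->" % sigma, l2_distance_gaussians(0.0, sigma, 0.0, sigma, rho=0.0))

# Case rho = 1, sigma_X = sigma_Y: the distance stays at (mu_X - mu_Y)^2 whatever sigma,
# even though the two densities overlap more and more as sigma grows (cf. Fig. 1).
for sigma in (1.0, 10.0, 100.0):
    print("rho=1, sigma=%6.1f ->" % sigma, l2_distance_gaussians(0.0, sigma, 5.0, sigma, rho=1.0))
```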

2 A generic representation for random variables

Our purpose is to introduce a new data representation and a suitable distance which takes into account both distributional proximities and joint behaviours.

2.1 A representation preserving total information

Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space: $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of events, and $\mathbb{P}$ is the probability measure. Let $\mathcal{V}$ be the space of all continuous real-valued random variables defined on $(\Omega, \mathcal{F}, \mathbb{P})$. Let $\mathcal{U}$ be the space of random variables following a uniform distribution on $[0, 1]$ and $\mathcal{G}$ be the space of absolutely continuous cumulative distribution functions (cdf).

Definition 1 (The copula transform).

Let $X = (X_1, \ldots, X_N) \in \mathcal{V}^N$ be a random vector with cdfs $G_X = (G_{X_1}, \ldots, G_{X_N}) \in \mathcal{G}^N$. The random vector $G_X(X) = (G_{X_1}(X_1), \ldots, G_{X_N}(X_N))$ is known as the copula transform.

Property 1 (Uniform margins of the copula transform).

$G_{X_i}(X_i)$, $1 \le i \le N$, are uniformly distributed on $[0, 1]$.

Proof.

For any $u \in [0, 1]$, $\mathbb{P}(G_{X_i}(X_i) \le u) = \mathbb{P}(X_i \le G_{X_i}^{-1}(u)) = G_{X_i}(G_{X_i}^{-1}(u)) = u$. ∎

We define the following representation of random vectors that actually splits the joint behaviours of the marginal variables from their distributional information.

Definition 2 (The dependence ⊕ distribution space projection).

Let $\mathcal{T}$ be a mapping which transforms $X = (X_1, \ldots, X_N)$ into its generic representation, an element of $\mathcal{U}^N \times \mathcal{G}^N$ representing $X$, defined as follows

$\mathcal{T} : \mathcal{V}^N \to \mathcal{U}^N \times \mathcal{G}^N, \qquad X \mapsto (G_X(X), G_X).$ (1)
Property 2.

$\mathcal{T}$ is a bijection.

Proof.

$\mathcal{T}$ is surjective as any element $(U, G) \in \mathcal{U}^N \times \mathcal{G}^N$ has the fiber $G^{-1}(U)$. $\mathcal{T}$ is injective as $\mathcal{T}(X) = \mathcal{T}(Y)$ a.s. in $\mathcal{U}^N \times \mathcal{G}^N$ implies that $X$ and $Y$ have the same cdf $G = G_X = G_Y$ and, since $G_X(X) = G_Y(Y)$ a.s., it follows that $X = G^{-1}(G(X)) = G^{-1}(G(Y)) = Y$ a.s. ∎

This result replicates the seminal result of copula theory, namely Sklar's theorem (Sklar, 1959), which asserts one can split the dependency and distribution apart without losing any information. Fig. 2 illustrates this projection for $N = 2$.


Figure 2: ArcelorMittal and Société Générale prices are projected on the dependence ⊕ distribution space; the scatterplot of the copula transforms $G_X(X)$ and $G_Y(Y)$ encodes the dependence between $X$ and $Y$ (a perfect correlation would be represented by a sharp diagonal on the scatterplot); $G_X$ and $G_Y$ are the margins (their log-densities are displayed above), notice their heavy-tailed exponential distribution (especially for ArcelorMittal).

2.2 A distance between random variables

We leverage the propounded representation to build a suitable yet simple distance between random variables which is invariant under diffeomorphism.

Definition 3 (Distance $d_\theta$ between two random variables).

Let $\theta \in [0, 1]$. Let $(X, Y) \in \mathcal{V}^2$. Let $G = (G_X, G_Y)$, where $G_X$ and $G_Y$ are respectively $X$ and $Y$ marginal cdfs. We define the following distance

$d_\theta^2(X, Y) = \theta\, d_1^2\big(G_X(X), G_Y(Y)\big) + (1 - \theta)\, d_0^2(G_X, G_Y),$ (2)

where

$d_1^2\big(G_X(X), G_Y(Y)\big) = 3\, \mathbb{E}\big[\,|G_X(X) - G_Y(Y)|^2\,\big]$ (3)

and

$d_0^2(G_X, G_Y) = \frac{1}{2} \int_{\mathbb{R}} \left( \sqrt{\frac{dG_X}{d\lambda}} - \sqrt{\frac{dG_Y}{d\lambda}} \right)^2 d\lambda.$ (4)

In particular, $d_0$ is the Hellinger distance, related to the Bhattacharyya (1/2-Chernoff) coefficient upper bounding the Bayes' classification error. To quantify distribution dissimilarity, $d_0$ is used rather than the more general $\alpha$-Chernoff divergences since it satisfies Properties 3, 4 and 5 (significant for practitioners). In addition, $d_0$ can thus be efficiently implemented as a scalar product. $d_1$ is a distance correlation measuring statistical dependence between two random variables: $d_1^2 = \frac{1 - \rho_S(X, Y)}{2}$, where $\rho_S(X, Y)$ is the Spearman correlation between $X$ and $Y$. Notice that $\rho_S$ can be expressed by using the copula $C$ implicitly defined by the relation $G(X, Y) = C(G_X(X), G_Y(Y))$ since $\rho_S(X, Y) = 12 \int_0^1 \int_0^1 C(u, v)\, du\, dv - 3$ (Fredricks and Nelsen, 2007).
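To see why the dependence term reduces to a Spearman-type correlation (a short sanity check added here, not part of the original text), recall from Property 1 that $G_X(X)$ and $G_Y(Y)$ are uniform on $[0, 1]$, so that $\mathbb{E}[G_X(X)^2] = \mathbb{E}[G_Y(Y)^2] = 1/3$ and

\begin{align*}
d_1^2 &= 3\,\mathbb{E}\big[|G_X(X) - G_Y(Y)|^2\big] \\
      &= 3\big(\mathbb{E}[G_X(X)^2] + \mathbb{E}[G_Y(Y)^2]\big) - 6\,\mathbb{E}\big[G_X(X)\, G_Y(Y)\big] \\
      &= 2 - 6\,\mathbb{E}\big[G_X(X)\, G_Y(Y)\big] = \frac{1 - \rho_S(X, Y)}{2},
\end{align*}

using $\rho_S(X, Y) = 12\,\mathbb{E}[G_X(X)\, G_Y(Y)] - 3$, i.e. the Pearson correlation of the copula transforms.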

Example 2 (Distance $d_\theta$ between two Gaussians).

Let $(X, Y)$ be a bivariate Gaussian vector, with $X \sim \mathcal{N}(\mu_X, \sigma_X^2)$, $Y \sim \mathcal{N}(\mu_Y, \sigma_Y^2)$ and $\rho(X, Y) = \rho$. We obtain,

$d_\theta^2(X, Y) = \theta\, \frac{1 - \rho_S}{2} + (1 - \theta) \left( 1 - \sqrt{\frac{2 \sigma_X \sigma_Y}{\sigma_X^2 + \sigma_Y^2}}\; e^{-\frac{(\mu_X - \mu_Y)^2}{4 (\sigma_X^2 + \sigma_Y^2)}} \right), \quad \text{where } \rho_S = \frac{6}{\pi} \arcsin\frac{\rho}{2}.$

Remember that for perfectly correlated Gaussians ($\rho = 1$), we want to discriminate on their distributions. We can observe that

  • for $\mu_X = \mu_Y$ and $\sigma_X = \sigma_Y = \sigma$, then $d_\theta(X, Y) = 0$ whatever $\sigma$; it alleviates a main shortcoming of the basic $\mathbb{E}[(X-Y)^2]$ distance, which diverges to $+\infty$ as $\sigma$ grows when the variables are not perfectly correlated (cf. Example 1);

  • if $\mu_X \neq \mu_Y$, for $\sigma_X, \sigma_Y \to 0$, then $d_0(X, Y) \to 1$, its maximum value, i.e. it means that two Gaussians cannot be more remote from each other than two different Dirac delta functions.

We will refer to the use of this distance as the generic parametric representation (GPR) approach. The GPR distance is a fast and good proxy for the distance $d_\theta$ when the first two moments $\mu$ and $\sigma$ predominate. Nonetheless, for datasets which contain heavy-tailed distributions, GPR fails to capture this information.
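A minimal sketch of the GPR distance in the Gaussian case, assuming the closed form given in Example 2 (a Hellinger term between the two fitted Gaussians and a Spearman term derived from the Pearson correlation); the function and parameter names are ours, not the paper's.

```python
import numpy as np

def gpr_distance_gaussians(mu_x, sigma_x, mu_y, sigma_y, rho, theta=0.5):
    """d_theta between two Gaussian margins with Pearson correlation rho (closed-form sketch)."""
    # Dependence term: Spearman correlation of a bivariate Gaussian is (6/pi) * arcsin(rho / 2).
    rho_s = (6.0 / np.pi) * np.arcsin(rho / 2.0)
    d1_sq = (1.0 - rho_s) / 2.0
    # Distribution term: squared Hellinger distance between N(mu_x, sigma_x^2) and N(mu_y, sigma_y^2).
    bc = np.sqrt(2.0 * sigma_x * sigma_y / (sigma_x**2 + sigma_y**2)) * \
         np.exp(-((mu_x - mu_y) ** 2) / (4.0 * (sigma_x**2 + sigma_y**2)))
    d0_sq = 1.0 - bc
    return np.sqrt(theta * d1_sq + (1.0 - theta) * d0_sq)

# Perfectly correlated Gaussians with distinct means: as sigma grows, the distribution term
# shrinks toward 0 (unlike E[(X-Y)^2]); as sigma -> 0, the distribution term tends to its maximum 1.
for sigma in (0.01, 1.0, 10.0, 100.0):
    print(sigma, gpr_distance_gaussians(0.0, sigma, 5.0, sigma, rho=1.0, theta=0.5))
```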

Property 3.

Let $(X, Y) \in \mathcal{V}^2$. The distance $d_\theta$ verifies $0 \le d_\theta \le 1$.

Proof.

Let $(X, Y) \in \mathcal{V}^2$ and $\theta \in [0, 1]$. We have

  1. $0 \le d_0 \le 1$, a property of the Hellinger distance;

  2. $0 \le d_1 \le 1$, since $d_1^2 = \frac{1 - \rho_S}{2}$ and $-1 \le \rho_S \le 1$.

Finally, by convex combination, $0 \le d_\theta \le 1$. ∎

Property 4.

For $0 < \theta < 1$, $d_\theta$ is a metric.

Proof.

Let $0 < \theta < 1$. $d_\theta$ is non-negative, symmetric and satisfies the triangle inequality as a combination of the two metrics $d_0$ and $d_1$; the only non-trivial property to verify is the separation axiom $d_\theta(X, Y) = 0 \Leftrightarrow X = Y$ a.s.

  1. $X = Y$ a.s.
    $\Rightarrow G_X = G_Y$ and $G_X(X) = G_Y(Y)$ a.s., and thus $d_\theta(X, Y) = 0$,

  2. $d_\theta(X, Y) = 0$
    $\Rightarrow d_0(G_X, G_Y) = 0$ and $d_1(G_X(X), G_Y(Y)) = 0$, i.e. $G_X = G_Y$ and $G_X(X) = G_Y(Y) = G_X(Y)$ a.s. Since $G_X$ is absolutely continuous, it follows $X = Y$ a.s.

Notice that for $\theta \in \{0, 1\}$, this property does not hold. Let $X \sim \mathcal{U}([0, 1])$, $Y = 1 - X$. Then $d_0(X, Y) = 0$ but $X \neq Y$ a.s. Let $X \sim \mathcal{N}(0, 1)$, $Y = 2X$. Then $d_1(X, Y) = 0$ but $X \neq Y$ a.s. ∎

Property 5.

Diffeomorphism invariance. Let $h : \mathbb{R} \to \mathbb{R}$ be a diffeomorphism. Let $(X, Y) \in \mathcal{V}^2$. Distance $d_\theta$ is invariant under diffeomorphism, i.e.

$d_\theta(h(X), h(Y)) = d_\theta(X, Y).$ (5)
Proof.

From the definition, we have

$d_1^2\big(G_{h(X)}(h(X)), G_{h(Y)}(h(Y))\big) = 3\, \mathbb{E}\big[\,|G_{h(X)}(h(X)) - G_{h(Y)}(h(Y))|^2\,\big]$ (6)

and since, for $h$ increasing,

$G_{h(X)}(h(x)) = \mathbb{P}(h(X) \le h(x)) = \mathbb{P}(X \le x) = G_X(x)$ (7)

for all $x$ (the decreasing case gives $1 - G_X(x)$, which leaves the expectation unchanged), we obtain

$d_1\big(G_{h(X)}(h(X)), G_{h(Y)}(h(Y))\big) = d_1\big(G_X(X), G_Y(Y)\big).$ (8)

In addition, writing $g_X = \frac{dG_X}{d\lambda}$ for the density, we have for all $x \in \mathbb{R}$

$g_{h(X)}(h(x))\, |h'(x)| = g_X(x),$ (9)

which implies, by the change of variable $u = h(x)$ in (4), that

$d_0\big(G_{h(X)}, G_{h(Y)}\big) = d_0(G_X, G_Y).$ (10)

Finally, we obtain Property 5 by definition of $d_\theta$. ∎

Thus, $d_\theta$ is invariant under monotonic transformations, a desirable property as it ensures the distance is insensitive to scaling (e.g. choice of units) or measurement scheme (e.g. device, mathematical modelling) of the underlying phenomenon.

2.3 A non-parametric statistical estimation of $d_\theta$

To apply the propounded distance on sampled data without parametric assumptions, we have to define its statistical estimate working on realizations of the i.i.d. random variables. The distance $d_1$, working with continuous uniform distributions, can be approximated by normalized rank statistics yielding discrete uniform distributions, in fact coordinates of the multivariate empirical copula (Deheuvels, 1979), which is a non-parametric estimate converging uniformly toward the underlying copula (Deheuvels, 1981). The distance $d_0$, working with densities, can be approximated by using its discrete form working on histogram density estimates.

Definition 4 (The empirical copula transform).

Let $X^t = (X_1^t, \ldots, X_N^t)$, $t = 1, \ldots, T$, be $T$ observations from a random vector $X = (X_1, \ldots, X_N)$ with continuous margins $G_X = (G_{X_1}, \ldots, G_{X_N})$. Since one cannot directly obtain the corresponding copula observations $(G_{X_1}(X_1^t), \ldots, G_{X_N}(X_N^t))$ without knowing a priori $G_X$, one can instead estimate the $N$ empirical margins $G_{X_i}^T(x) = \frac{1}{T} \sum_{t=1}^{T} \mathbb{1}(X_i^t \le x)$ to obtain the $T$ empirical observations $(G_{X_1}^T(X_1^t), \ldots, G_{X_N}^T(X_N^t))$, which are thus related to normalized rank statistics as $G_{X_i}^T(X_i^t) = \operatorname{rk}(X_i^t)/T$, where $\operatorname{rk}(X_i^t)$ denotes the rank of observation $X_i^t$ among $X_i^1, \ldots, X_i^T$.
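A minimal sketch of the empirical copula transform using normalized rank statistics (our own illustrative code; the normalization convention rank/T is an assumption consistent with the definition above).

```python
import numpy as np
from scipy.stats import rankdata

def empirical_copula_transform(x):
    """Map T observations of N variables (array of shape T x N) to normalized ranks in (0, 1]."""
    x = np.asarray(x, dtype=float)
    ranks = rankdata(x, axis=0)  # each column is ranked independently; ties get average ranks
    return ranks / x.shape[0]

# Example: 1000 observations of 2 correlated variables.
rng = np.random.default_rng(42)
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], size=1000)
u = empirical_copula_transform(z)
print(u.min(), u.max())  # values lie in (0, 1]
```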

Definition 5 (Empirical distance).

Let $(X^t)_{t=1}^{T}$ and $(Y^t)_{t=1}^{T}$ be $T$ realizations of real-valued random variables $X, Y \in \mathcal{V}$ respectively. An empirical distance between realizations of random variables can be defined by

$\tilde{d}_\theta^2\big((X^t)_{t=1}^{T}, (Y^t)_{t=1}^{T}\big) = \theta\, \tilde{d}_1^2 + (1 - \theta)\, \tilde{d}_0^2,$ (11)

where

$\tilde{d}_1^2 = \frac{3}{T (T^2 - 1)} \sum_{t=1}^{T} \big( \operatorname{rk}(X^t) - \operatorname{rk}(Y^t) \big)^2$ (12)

and

$\tilde{d}_0^2 = \frac{1}{2} \sum_{k} \left( \sqrt{g_X^h(k)} - \sqrt{g_Y^h(k)} \right)^2,$ (13)

$h$ being here a suitable bandwidth, and $g_X^h(k)$ being the probability assigned to the $k$-th bin of width $h$ by a density histogram estimating the pdf of $X$ from $(X^t)_{t=1}^{T}$, the $T$ realizations of random variable $X \in \mathcal{V}$.
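A minimal sketch of the empirical distance between two 1-D samples, assuming the rank-based Spearman term and a histogram-based Hellinger term as in equations (12) and (13); the shared binning scheme and names are our own choices, not a reference implementation.

```python
import numpy as np
from scipy.stats import rankdata

def gnpr_distance(x, y, theta=0.5, n_bins=100):
    """Empirical d_theta between two samples of equal length T (a sketch under stated assumptions)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    T = len(x)
    # Dependence term: 3/(T(T^2-1)) * sum of squared rank differences, i.e. (1 - Spearman rho)/2.
    d1_sq = 3.0 * np.sum((rankdata(x) - rankdata(y)) ** 2) / (T * (T**2 - 1))
    # Distribution term: squared Hellinger distance between histogram estimates on a common grid.
    bins = np.histogram_bin_edges(np.concatenate([x, y]), bins=n_bins)
    px = np.histogram(x, bins=bins)[0] / T   # bin probabilities for x
    py = np.histogram(y, bins=bins)[0] / T   # bin probabilities for y
    d0_sq = 0.5 * np.sum((np.sqrt(px) - np.sqrt(py)) ** 2)
    return np.sqrt(theta * d1_sq + (1.0 - theta) * d0_sq)
```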

We will refer henceforth to this distance and its use as the generic non-parametric representation (GNPR) approach. To use effectively and its statistical estimate, it boils down to select a particular value for . We suggest here an exploratory approach where one can test (i) distribution information (), (ii) dependence information (), and (iii) a mix of both information (). Ideally, should reflect the balance of dependence and distribution information in the data. In a supervised setting, one could select an estimate of the right balance

optimizing some loss function by techniques such as cross-validation. Yet, the lack of a clear loss function makes the estimation of

difficult in an unsupervised setting. For clustering, many authors (Lange et al., 2004), (Shamir et al., 2007), (Shamir et al., 2008), (Meinshausen and Bühlmann, 2010) suggest stability as a tool for parameter selection. But, (Ben-David et al., 2006) warn against its irrelevant use for this purpose. Besides, we already use stability for clustering validation and we want to avoid overfitting. Finally, we think that finding an optimal trade-off is important for accelerating the rate of convergence toward the underlying ground truth when working with finite and possibly small samples, but ultimately lose its importance asymptotically as soon as .
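For the exploratory approach, one can simply compute the GNPR distance matrices for the three candidate values of θ; a short sketch, using the hypothetical gnpr_distance helper from the previous snippet (assumed to be in scope):

```python
import numpy as np

def gnpr_distance_matrix(data, theta):
    """Pairwise GNPR distances for data of shape (n_series, T)."""
    n = len(data)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = gnpr_distance(data[i], data[j], theta=theta)
    return dist

# (i) distribution only, (ii) dependence only, (iii) a mix of both:
# matrices = {theta: gnpr_distance_matrix(data, theta) for theta in (0.0, 1.0, 0.5)}
```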

3 Experiments and applications

3.1 Synthetic datasets

We propose the following model for testing the efficiency of the GNPR approach: $N$ time series of length $T$ which are subdivided into $K$ correlation clusters, themselves subdivided into $D$ distribution clusters.

Let , be i.i.d. random variables. Let . Let . Let , , be independent random variables. For , we define

(14)

where

  1. , if (mod ), otherwise;

  2. ,

  3. , if , otherwise.

The time series $(X_i)$ are thus partitioned into clusters of random variables sharing the same dependence and distribution parameters. Playing with the model parameters, we define in Table 1 some interesting test case datasets to study distribution clustering, dependence clustering or a mix of both. We use as the two distribution types a Gaussian and a heavy-tailed distribution sharing the same first two moments. Since both have a mean of 0 and a variance of 1, GPR cannot find any difference between them, but GNPR can discriminate on higher moments, as can be seen in Fig. 3.
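The exact parameterization of equation (14) is not reproduced here; the following is only an illustrative generator in the same spirit (a common factor per correlation cluster plus idiosyncratic noise whose law depends on the distribution cluster), not the paper's model:

```python
import numpy as np

def make_synthetic_dataset(n_series=200, T=5000, k_corr=5, d_dist=2, beta=0.1, seed=0):
    """Illustrative generator: K correlation clusters, each split into D distribution clusters."""
    rng = np.random.default_rng(seed)
    factors = rng.standard_normal((k_corr, T))           # one common factor per correlation cluster
    data, labels = [], []
    for i in range(n_series):
        k = i % k_corr                                   # correlation cluster of series i
        d = (i // k_corr) % d_dist                       # distribution cluster of series i
        # Both noise laws have mean 0 and variance 1, so only higher moments differ.
        noise = rng.standard_normal(T) if d == 0 else rng.standard_t(3, T) / np.sqrt(3)
        data.append(beta * factors[k] + noise)
        labels.append((k, d))
    return np.array(data), labels
```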

Figure 3: GPR and GNPR distance matrices for $\theta \in \{0, 1, 0.5\}$. Both GPR and GNPR highlight the 5 correlation clusters ($\theta = 1$), but only GNPR finds the 2 distributions subdividing them ($\theta = 0$). Finally, by combining both information, GNPR ($\theta = 0.5$) can highlight the 10 original clusters, while GPR ($\theta = 0.5$) simply adds noise on the correlation distance matrix it recovers.
Clustering Dataset
Distribution A 200 5000 4 1 0
Dependence B 200 5000 10 10 0.1
Mix C 200 5000 10 5 0.1
G 32 8 0.1
Table 1: Model parameters for some interesting test case datasets

3.2 Performance of clustering using GNPR

We empirically show that the GNPR approach achieves better results than approaches using common distances, regardless of the algorithm used, on the test cases A, B and C described in Table 1. Test case A illustrates datasets containing only distribution information: there are 4 clusters of distributions. Test case B illustrates datasets containing only dependence information: there are 10 clusters of correlation between random variables which are heavy-tailed. Test case C illustrates datasets containing both kinds of information: it consists of 10 clusters, composed of 5 correlation clusters each divided into 2 distribution clusters. Using the scikit-learn implementation (Pedregosa et al., 2011), we apply clustering algorithms with different paradigms: a hierarchical clustering using average linkage (HC-AL), k-means++ (KM++), and affinity propagation (AP). Experiment results are reported in Table 2; a sketch of the evaluation loop is given below. GNPR performance is due to its proper representation (cf. Fig. 4). Finally, we have noticed increasing precision of clustering using GNPR as the sample length T grows to infinity, all other parameters being fixed. The number N of time series seems rather uninformative, as illustrated in Fig. 5 (left), which plots the ARI (Hubert and Arabie, 1985) between the computed clustering and the ground truth of dataset G as a heatmap for varying N and T. Fig. 5 (right) shows the convergence to the true clustering as a function of T.
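A sketch of the evaluation loop, assuming a precomputed GNPR distance matrix (e.g. from the hypothetical gnpr_distance_matrix helper above) and ground-truth labels; only the two paradigms that accept precomputed dissimilarities are shown.

```python
from sklearn.cluster import AgglomerativeClustering, AffinityPropagation
from sklearn.metrics import adjusted_rand_score

def evaluate_clustering(dist, true_labels, n_clusters):
    """Cluster a precomputed distance matrix with two paradigms and score against ground truth."""
    # Note: older scikit-learn versions name the `metric` parameter `affinity`.
    hc = AgglomerativeClustering(n_clusters=n_clusters, metric="precomputed", linkage="average")
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    return {
        "HC-AL": adjusted_rand_score(true_labels, hc.fit_predict(dist)),
        # Affinity propagation expects similarities, so pass negated distances.
        "AP": adjusted_rand_score(true_labels, ap.fit_predict(-dist)),
    }
```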

Figure 4: Distance matrices obtained on dataset C using distance correlation, the $\mathbb{E}[(X-Y)^2]$ distance, GPR and GNPR. None but GNPR highlights the 10 original clusters, which appear on its diagonal.
Adjusted Rand Index
Algo.   Distance               A      B      C
HC-AL   distance correlation   0.00   0.99   0.56
        E[(X-Y)^2]             0.00   0.09   0.55
        GPR (θ = 0)            0.34   0.01   0.06
        GPR (θ = 1)            0.00   0.99   0.56
        GPR (θ = 0.5)          0.34   0.59   0.57
        GNPR (θ = 0)           1      0.00   0.17
        GNPR (θ = 1)           0.00   1      0.57
        GNPR (θ = 0.5)         0.99   0.25   0.95
KM++    distance correlation   0.00   0.60   0.46
        E[(X-Y)^2]             0.00   0.34   0.48
        GPR (θ = 0)            0.41   0.01   0.06
        GPR (θ = 1)            0.00   0.45   0.43
        GPR (θ = 0.5)          0.27   0.51   0.48
        GNPR (θ = 0)           0.96   0.00   0.14
        GNPR (θ = 1)           0.00   0.65   0.53
        GNPR (θ = 0.5)         0.72   0.21   0.64
AP      distance correlation   0.00   0.99   0.48
        E[(X-Y)^2]             0.14   0.94   0.59
        GPR (θ = 0)            0.25   0.01   0.05
        GPR (θ = 1)            0.00   0.99   0.48
        GPR (θ = 0.5)          0.06   0.80   0.52
        GNPR (θ = 0)           1      0.00   0.18
        GNPR (θ = 1)           0.00   1      0.59
        GNPR (θ = 0.5)         0.39   0.39   1
Table 2: Comparison of distance correlation, the E[(X-Y)^2] distance, GPR and GNPR: the GNPR approach improves clustering performance
Figure 5: Empirical consistency of clustering using GNPR as N and T vary (left) and as T grows (right)

3.3 Application to financial time series clustering

3.3.1 Clustering assets: a (too) strong focus on correlation

It has been noticed that straightforward approaches automatically discover sectors and industries (Mantegna, 1999). Since detected patterns are blatantly correlation-flavoured, this prompted econophysicists to focus on correlations, hierarchies and networks (Tumminello et al., 2010), from the Minimum Spanning Tree and its associated clustering algorithm, Single Linkage, to the state of the art (Musmeci et al., 2014) exploiting the topological properties of the Planar Maximally Filtered Graph (Tumminello et al., 2005) and its associated algorithm, the Directed Bubble Hierarchical Tree (DBHT) technique (Song et al., 2012). In practice, econophysicists consider the assets' log-returns and compute their correlation matrix. The correlation matrix is then filtered thanks to a clustering of the correlation network (Di Matteo et al., 2010), built using similarity and dissimilarity matrices derived from the correlation matrix by convenient ad hoc transformations. Clustering these correlation-based networks (Onnela et al., 2004) aims at filtering the correlation matrix for standard portfolio optimization (Tola et al., 2008). Yet, adopting similar approaches only allows one to retrieve the information given by assets' co-movements and nothing about the specificities of their returns behaviour, whereas we may also want to distinguish assets by their returns distribution. For example, we are interested in knowing whether they undergo fat tails, and to which extent.

3.3.2 Clustering credit default swaps

We apply the GNPR approach on financial time series, namely daily credit default swap (Hull, 2006) (CDS) prices. We consider the most actively traded CDS according to the DTCC (http://www.dtcc.com/). For each CDS, we have observations corresponding to historical daily prices over the last 9 years, amounting to more than one million data points. Since credit default swaps are traded over-the-counter, the closing time for fixing prices can be arbitrarily chosen, here 5pm GMT, i.e. after the London Stock Exchange trading session. This synchronous fixing of CDS prices avoids spurious correlations arising from different closing times: for example, the use of close-to-close stock prices artificially overestimates intra-market correlation and underestimates inter-market dependence since markets have different trading hours (Martens and Poon, 2001). These CDS time series can be consulted on the web portal http://www.datagrapple.com/.

Assuming that CDS prices follow random walks, their increments are i.i.d. random variables, and therefore the GNPR approach can be applied to the time series of price variations, i.e. on the series of daily price differences. Thus, for aggregating the CDS price time series, we use a clustering algorithm (for instance, Ward's method (Ward, 1963)) based on the GNPR distance matrices between their variations.
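A sketch of this aggregation step, assuming daily prices in an array of shape (n_series, T+1) and the hypothetical gnpr_distance_matrix helper sketched in Section 2.3; scipy's linkage is applied to the condensed form of the GNPR distance matrix.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_cds(prices, theta=0.5, n_clusters=10):
    """Ward-like hierarchical clustering of CDS series from the GNPR distances of their variations."""
    variations = np.diff(prices, axis=1)                  # daily price variations, assumed i.i.d.
    dist = gnpr_distance_matrix(variations, theta=theta)  # hypothetical helper from Section 2.3
    condensed = squareform(dist, checks=False)
    # scipy applies the Ward update formula directly to the given dissimilarities.
    link = linkage(condensed, method="ward")
    return fcluster(link, t=n_clusters, criterion="maxclust")
```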

Using GNPR with $\theta = 0$, we look for distribution information in our CDS dataset. We observe that clustering based on the GNPR ($\theta = 0$) distance matrix yields 4 clusters which fit precisely the multi-modal empirical distribution of standard deviations, as can be seen in Fig. 6. For GNPR with $\theta = 1$, we display in Fig. 7 the rank correlation distance matrix obtained. We can notice its hierarchical structure, already described in many papers focusing on stock markets, e.g. (Mantegna, 1999), (Brida and Risso, 2010). There is information in distribution and in correlation, thus taking both into account, i.e. using GNPR with $0 < \theta < 1$, should lead to a meaningful clustering. We verify this claim by using stability as a criterion for validation. Practically, we consider even and odd trading days and perform two independent clusterings, one on even days and the other on odd days; we should obtain the same partitions (a sketch of this check is given below). In Fig. 8, we display the partitions obtained using the GNPR approach next to the ones obtained by applying the $\mathbb{E}[(X-Y)^2]$ distance on price returns. We find that the GNPR clustering is more stable than the clustering on returns. Moreover, clusters obtained from GNPR are more homogeneous in size.
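A sketch of the stability check, assuming the hypothetical cluster_cds helper above; odd and even trading days are clustered independently and the two partitions are compared with the adjusted Rand index, which is invariant to label permutations.

```python
from sklearn.metrics import adjusted_rand_score

def stability_ari(prices, theta=0.5, n_clusters=10):
    """ARI between the clusterings computed on odd and on even trading days."""
    odd_days, even_days = prices[:, 0::2], prices[:, 1::2]
    labels_odd = cluster_cds(odd_days, theta=theta, n_clusters=n_clusters)
    labels_even = cluster_cds(even_days, theta=theta, n_clusters=n_clusters)
    return adjusted_rand_score(labels_odd, labels_even)
```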

Figure 6: Standard deviation histogram. The clusters found using GNPR ($\theta = 0$), represented by the 4 colors, fit precisely the multi-modal distribution of standard deviations.
Figure 7: Centered rank correlation distance matrix. GNPR ($\theta = 1$) exhibits a hierarchical structure of correlations: the first level consists of Europe, Japan and the US; the second level corresponds to credit quality (investment grade or high yield); the third level to industrial sectors.
Figure 8: Better clustering stability using the GNPR approach: GNPR achieves ARI = 0.85; $\mathbb{E}[(X-Y)^2]$ on returns achieves ARI = 0.64. The two leftmost partitions, built from GNPR on the odd/even trading days sampling, look similar: only a few CDS switch clusters. The two rightmost partitions, built using $\mathbb{E}[(X-Y)^2]$ on returns, display very inhomogeneous (odd-2,3,9 vs. odd-4,14,15) and unstable (even-1 splitting into odd-3 and odd-2) clusters.

To conclude on the experiments, we have highlighted through clustering that the presented approach leveraging dependence and distribution information leads to better results: finer partitions on synthetic test cases and more stable partitions on financial time series.

4 Discussion

In this paper, we have exposed a novel representation of random variables which could lead to improvements in applying machine learning techniques to time series describing underlying i.i.d. stochastic processes. We have empirically shown its relevance for dealing with random walks and financial time series. We have conducted a large-scale experiment on the credit derivatives market, notorious for having heavy-tailed rather than Gaussian returns; first results are available on the website www.datagrapple.com. We also intend to conduct such clustering experiments to test the applicability of the method to areas outside finance. On the theoretical side, we plan to improve the aggregation of the correlation and distribution parts by using elements of information geometry and to study the consistency property of our method.

Acknowledgements

We thank Frank Nielsen, the anonymous reviewers, and our colleagues at Hellebore Capital Management for giving feedback and proofreading the article.

References

  • Arthur and Vassilvitskii (2007) Arthur, D., Vassilvitskii, S., 2007. k-means++: the advantages of careful seeding. SODA ’07: Proceedings of the eighteenth annual ACM-SIAM symposium on Discrete algorithms, 1027–1035.
  • Ben-David et al. (2006) Ben-David, S., Von Luxburg, U., Pál, D., 2006. A sober look at clustering stability. Learning Theory.
  • Berndt and Clifford (1994) Berndt, D., Clifford, J., 1994. Using Dynamic Time Warping to Find Patterns in Time Series. KDD workshop 10, 359–370.
  • Brida and Risso (2010) Brida, G., Risso, A., 2010. Hierarchical structure of the German stock market. Expert Systems with Applications 37, 3846–3852.
  • Carlsson and Mémoli (2010) Carlsson, G., Mémoli, F., 2010. Characterization, Stability and Convergence of Hierarchical Clustering Methods. The Journal of Machine Learning Research 11, 1425–1470.
  • Costa et al. (2014) Costa, S., Santos, S., Strapasson, J., 2014. Fisher information distance: a geometrical reading. Discrete Applied Mathematics.
  • Deheuvels (1979) Deheuvels, P., 1979. La fonction de dépendance empirique et ses propriétés. Un test non paramétrique d’indépendance. Acad. Roy. Belg. Bull. Cl. Sci.(5) 65, 274–292.
  • Deheuvels (1981) Deheuvels, P., 1981. An asymptotic decomposition for multivariate distribution-free tests of independence. Journal of Multivariate Analysis 11, 102–113.

  • Di Matteo et al. (2010) Di Matteo, T., Pozzi, F., Aste, T., 2010. The use of dynamical networks to detect the hierarchical organization of financial market sectors. The European Physical Journal B - Condensed Matter and Complex Systems.
  • Ding et al. (2008) Ding, H., Trajcevski, G., Scheuermann, P., Wang, X., Keogh, E., 2008. Querying and Mining of Time Series Data: Experimental Comparison of Representations and Distance Measures.
  • Efron (1979) Efron, B., 1979. Bootstrap Methods: Another Look at the Jackknife. The Annals of Statistics, 1–26.
  • Fama (1965) Fama, E.F., 1965. The Behavior of Stock-Market Prices. The Journal of Business 38, 34–105.
  • Fredricks and Nelsen (2007) Fredricks, G., Nelsen, R., 2007. On the relationship between Spearman’s rho and Kendall’s tau for pairs of continuous random variables. Journal of Statistical Planning and Inference 137, 2143–2150.
  • Frey and Dueck (2007) Frey, B., Dueck, D., 2007. Clustering by passing messages between data points. science 315, 972–976.
  • Henderson et al. (2015) Henderson, K., Gallagher, B., Eliassi-Rad, T., 2015. EP-MEANS: An Efficient Nonparametric Clustering of Empirical Probability Distributions.
  • Hubert and Arabie (1985) Hubert, L., Arabie, P., 1985. Comparing partitions. Journal of classification 2, 193–218.
  • Hull (2006) Hull, John C., 2006. Options, futures, and other derivatives. Pearson Education.
  • Kelly and Jiang (2014) Kelly, B., Jiang, H., 2014. Tail risk and asset prices. Review of Financial Studies.
  • Khaleghi et al. (2012) Khaleghi, A., Ryabko, D., Mary, J., Preux, P., 2012. Online clustering of processes 22, 601–609.
  • Lange et al. (2004) Lange, T., Roth, V., Braun, M., Buhmann, J., 2004. Stability-based model selection. Neural computation 16, 1299–1323.
  • Lemieux et al. (2014) Lemieux, V., Rahmdel, P., Walker, R., Wong, B.L., Flood, M., 2014. Clustering Techniques And their Effect on Portfolio Formation and Risk Analysis. Proceedings of the International Workshop on Data Science for Macro-Modeling, 1–6.

  • Levine and Domany (2001) Levine, E., Domany, E., 2001. Resampling method for unsupervised estimation of cluster validity. Neural computation 13, 2573–2593.
  • Lin et al. (2003) Lin, J., Keogh, E., Lonardi, S., Chiu, B., 2003. A symbolic representation of time series, with implications for streaming algorithms. Proceedings of the 8th ACM SIGMOD workshop on Research issues in data mining and knowledge discovery, 2–11.
  • Mantegna (1999) Mantegna, R.N., 1999. Hierarchical structure in financial markets. European Physical Journal B 11, 193–197.
  • Martens and Poon (2001) Martens, M., Poon, S., 2001. Returns synchronization and daily correlation dynamics between international stock markets. Journal of Banking & Finance 25, 1805–1827.
  • Meinshausen and Bühlmann (2010) Meinshausen, N., Bühlmann, P., 2010. Stability selection. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 72, 417–473.
  • Monti et al. (2003) Monti, S., Tamayo, P., Mesirov, J., Golub, T., 2003. Consensus clustering: a resampling-based method for class discovery and visualization of gene expression microarray data. Machine learning 52, 91–118.
  • Musmeci et al. (2014) Musmeci, N., Aste, T., Di Matteo, T., 2014. Relation between Financial Market Structure and the Real Economy: Comparison between Clustering Methods.
  • Onnela et al. (2004) Onnela, J-P., Kaski, K., Kertész, J., 2004. Clustering and information in correlation based financial networks. The European Physical Journal B-Condensed Matter and Complex Systems 38, 353–362.
  • Pedregosa et al. (2011) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., Vanderplas, J., Passos, A., Cournapeau, D., Brucher, M., Perrot, M., Duchesnay, E., 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research 12, 2825–2830.
  • Percival and Walden (2006) Percival, D., Walden, A., 2006. Wavelet methods for time series analysis.
  • Póczos et al. (2012) Póczos, B., Ghahramani, Z., Schneider, J., 2012. Copula-based Kernel Dependency Measures.
  • Ryabko (2010) Ryabko, D., 2010. Clustering processes. ICML.
  • Shamir et al. (2007) Shamir, O., Tishby, N., 2007. Cluster Stability for Finite Samples. NIPS.
  • Shamir et al. (2008) Shamir, O., Tishby, N., 2008. Model selection and stability in k-means clustering. Learning Theory.
  • Sklar (1959) Sklar, A., 1959. Fonctions de répartition à n dimensions et leurs marges.
  • Song et al. (2012) Song, W.M., Di Matteo, T., Aste, T., 2012. Hierarchical Information Clustering by Means of Topologically Embedded Graphs. PLoS ONE 7, e31929+.
  • Tola et al. (2008) Tola, V., Lillo, F., Gallegati, M., Mantegna, R.N., 2008. Cluster analysis for portfolio optimization. Journal of Economic Dynamics and Control 32, 235–258.
  • Tumminello et al. (2005) Tumminello, M., Aste, T., Di Matteo, T., Mantegna, R.N., 2005. A tool for filtering information in complex systems. Proceedings of the National Academy of Sciences USA 102, 10421–10426.
  • Tumminello et al. (2010) Tumminello, M., Lillo, F., Mantegna, R.N., 2010. Correlation, hierarchies, and networks in financial markets. Journal of Economic Behaviour and Organization.
  • Ward (1963) Ward, J.H., 1963. Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association 58, 236–244.