Introducing higher order correlations to marginals' subset of multivariate data by means of Archimedean copulas

In this paper, we present an algorithm that alters a subset of marginals of multivariate Gaussian distributed data into marginals modelled by an Archimedean copula. The proposed algorithm leaves the correlation matrix almost unchanged, but introduces higher order correlations into the chosen subset of marginals. Our data transformation algorithm can be used to analyse whether a particular machine learning algorithm, especially a dimensionality reduction one, utilises higher order correlations or not. We present an exemplary application on two feature selection algorithms; note that feature selection is one of the approaches to dimensionality reduction. To measure higher order correlations, we use multivariate higher order cumulants; hence, as an algorithm that utilises higher order correlations, we take the Joint Skewness Band Selection (JSBS) algorithm, which uses the third-order multivariate cumulant. We show the robust performance of the JSBS, in contrast to the poor performance of the Maximum Ellipsoid Volume (MEV) algorithm, which does not utilise such higher order correlations. With this result, we confirm the potential application of our data generation algorithm to analysing the performance of various dimensionality reduction algorithms.


1 Introduction

While analysing real-life multivariate data such as financial data, e-commerce data, biomedical data, audio signals or high-resolution images [1, 2, 3, 4, 5], we have many features that carry valuable information. However, if the number of features is large, the computational cost of processing such data is high. To transform such data into a smaller set of features, in such a way that most of the meaningful information is preserved, we can use a dimensionality reduction scheme [6].

Many dimensionality reduction schemes assess the importance of features by analysing their correlation or covariance, hence implicitly assuming that the data follow a multivariate Gaussian distribution [7]. Such methods may be ineffective when dealing with information hidden in higher order correlations. As an example, one can consider the Principal Component Analysis (PCA) [8], which converts data into a subset of independent features via linear transformations determined by the eigenvectors of a covariance matrix. The PCA works well on data for which the second order correlations dominate. However, real data may possess higher order dependencies [9, 10, 11, 12], which in turn may change the optimal result.

While it is usually easy to determine how higher order correlations are taken into account by statistics-based algorithms, in more advanced machine-learning algorithms it is not obvious to what extent higher order correlations influence the output. As a straightforward example, consider a kernel-enhanced PCA [13, 14] or kernel-enhanced discriminant analysis [15], where the method's utility depends on the particular choice of the kernel function for a given data set. For a further discussion of non-linear dimensionality reduction algorithms see also [16]. There is also a neural network approach to dimensionality reduction [17], where dimensionality reduction for outlier detection was performed by means of neural networks on both artificially generated and real data. Another neural network example, using auto-encoders, is discussed in [18, 19]. A properly configured neural network (using auto-encoders) detects subtle anomalies successfully where the PCA fails. This suggests the sensitivity of such neural networks to higher order correlations.

The main contribution of the presented work is an algorithm transforming multivariate Gaussian distributed data into non-Gaussian distributed data. We argue that such an algorithm may be applicable to analysing various dimensionality reduction algorithms. More precisely, we present a method of transforming Gaussian distributed data into data with higher order correlations inside a chosen subset of marginals, but with a covariance (second order correlations) similar to that of the original data. This is done by means of various Archimedean copulas [20]; hence, in addition, all univariate marginal distributions are unchanged. In order to show that our method works properly, we have tested it on two distinct feature selection algorithms: the MEV (Maximum Ellipsoid Volume) [21], which selects features (a subset of marginals) on the basis of second order correlations, and the JSBS (Joint Skewness Band Selection) [22], which selects features on the basis of third order correlations. To measure these correlations we use second and third order multivariate cumulants. We show that, in contrast to the MEV, the JSBS can detect a subset of marginals that exhibits higher order correlations. The MEV is ineffective since the covariance matrix is almost unchanged by the data transformation.

To provide a tool for machine-learning scientists, the presented algorithm is implemented in the Julia programming language [23] and is available in a GitHub repository [24]. Julia is a high-level programming language suitable for scientific computations; it is open source, so the code can be analysed and reviewed by scientists. Apart from this, linear algebra and random sampling operations implemented in Julia take significantly less processor time than similar operations implemented in other well-known programming languages, see [23]. Finally, Julia is easily accessible from Python, which contains a large collection of machine-learning tools.

The paper is organized as follows. In Sec. 2 we provide basic facts concerning multivariate distributions and feature discrimination algorithms. In Sec. 3 we present and analyse our algorithm using the MEV and JSBS algorithms. In Sec. 4 we sum up our results and discuss their applications and extensions.

2 Preliminaries

To measure higher order correlations between features we use higher order multivariate cumulants. A multivariate cumulant is a multivariate extension of the corresponding univariate cumulant [25]. As such, a multivariate cumulant of order d can be represented in the form of a super-symmetric tensor [26, 27]. Such a cumulant's tensor is a d-dimensional array indexed by i = (i_1, …, i_d). The super-symmetry means here that each permutation of i gives an index that refers to the same value of the tensor as i. Note that the first cumulant is an expectation and the second is a covariance matrix; both fully describe a multivariate Gaussian distribution. Importantly, for a multivariate Gaussian distribution cumulants of order higher than 2, called higher order cumulants, are zero [28, 29]. Concluding, we have zero higher order correlations there.
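For illustration, a minimal Julia sketch (not the optimised implementation of [27]) estimating the third cumulant tensor of centred data; for Gaussian samples all of its elements should be close to zero. The function name third_cumulant is ours.

using Statistics

# For centred data the third multivariate cumulant equals the third central
# moment, so its super-symmetric tensor can be estimated directly.
function third_cumulant(x::Matrix{Float64})
    t, n = size(x)
    xc = x .- mean(x, dims = 1)                 # centre each marginal
    c = zeros(n, n, n)
    for i in 1:n, j in 1:n, k in 1:n
        c[i, j, k] = mean(xc[:, i] .* xc[:, j] .* xc[:, k])
    end
    return c                                    # super-symmetric by construction
end

x = randn(100_000, 3)                           # Gaussian data: higher order cumulants ≈ 0
maximum(abs, third_cumulant(x))                 # small value, shrinking with the sample size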

Non-zero higher order correlations can be introduced by means of copulas. A copula is a joint multivariate cumulative distribution function with uniform marginals on the [0, 1] segment [30]. A sub-copula is the joint cumulative distribution of a subset of marginals, indexed by the corresponding subset of indices. There is an important family of copulas called Archimedean copulas, recently used to model various types of real-life data such as financial data [31, 32, 33, 34], hydrological data [35, 36], signals [37], wireless communication data [38] or biomedical data [39]. An Archimedean copula is introduced by a copula generator function ψ.

Let ψ(x): [0, ∞) → [0, 1] be a continuous function, parametrised by θ, such that ψ(0) = 1 and lim_{x→∞} ψ(x) = 0. Furthermore, let ψ be a strictly decreasing function where it is positive, and let ψ^{-1} be its pseudoinverse, fulfilling ψ(ψ^{-1}(u)) = u for u ∈ (0, 1]. Finally, let ψ be n-monotone according to the definition of n-monotonicity in [20]. Having introduced the generator function, the n-variate Archimedean copula takes the following form

C_ψ(u_1, …, u_n) = ψ(ψ^{-1}(u_1) + … + ψ^{-1}(u_n)),   (1)

where (u_1, …, u_n) ∈ [0, 1]^n. Well-known examples of Archimedean copulas are the Gumbel, Clayton and Ali-Mikhail-Haq (AMH) [40] copulas. All of them are parametrized by a single real-valued parameter θ. The properties of those copulas are presented in Table 1. The Spearman's correlation ρ_{i,j} between marginal variables i and j modelled by a copula takes the form [41]

ρ_{i,j} = 12 ∫_0^1 ∫_0^1 C_{i,j}(u_i, u_j) du_i du_j − 3,   (2)

where C_{i,j} is the sub-copula for the marginal variables i and j. For the copulas presented in Tab. 1 the Spearman's correlation depends monotonically on the parameter θ, and hence uniquely determines it, see Fig. 4. However, for Archimedean copulas such a bivariate Spearman's correlation does not carry all information about the dependency between marginals.
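For illustration, a short Julia sketch evaluating Eq. (2) numerically for the textbook Clayton, Gumbel and AMH generators (helper names are ours; the grid integration is a rough approximation, sufficient to reproduce curves like those in Fig. 4).

# textbook generators ψ and their inverses for the three copulas
clayton(θ) = (ψ = (t -> (1 + t)^(-1/θ)), ψinv = (s -> s^(-θ) - 1))
gumbel(θ)  = (ψ = (t -> exp(-t^(1/θ))),  ψinv = (s -> (-log(s))^θ))
amh(θ)     = (ψ = (t -> (1 - θ)/(exp(t) - θ)), ψinv = (s -> log((1 - θ*(1 - s))/s)))

# bivariate Archimedean copula C(u, v) = ψ(ψ⁻¹(u) + ψ⁻¹(v))
cop(g, u, v) = g.ψ(g.ψinv(u) + g.ψinv(v))

# Spearman's ρ from Eq. (2) via midpoint-rule integration on an m × m grid
function spearman_rho(g; m = 400)
    grid = ((1:m) .- 0.5) ./ m
    12 * sum(cop(g, u, v) for u in grid, v in grid) / m^2 - 3
end

spearman_rho(clayton(2.0))                      # a value in (0, 1), compare with Fig. 4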

In contrast to the multivariate Gaussian distribution, higher order cumulants of an Archimedean copula are not necessarily zero tensors, see Fig. 8. This fact implies higher order dependence between marginals. Due to its symmetry, an Archimedean copula with identical univariate marginal distributions produces only three distinct elements of the 3rd cumulant's tensor. Those are the super-diagonal element, the partial-diagonal one and the off-diagonal one. The covariance matrix of such a copula model has only two distinct elements: the diagonal one and the off-diagonal one.

Copula name   Generator ψ(x)        θ values       ρ values
Gumbel        exp(−x^{1/θ})         θ ∈ [1, ∞)     ρ ∈ [0, 1)
Clayton       (1 + x)^{−1/θ}        θ ∈ (0, ∞)     ρ ∈ (0, 1)
AMH           (1 − θ)/(e^x − θ)     θ ∈ [0, 1)     ρ ∈ [0, ≈0.5)
Table 1: Definition of the Gumbel, Clayton and Ali-Mikhail-Haq copulas, and the possible correlation values. Note that Spearman's correlation does not depend on the choice of univariate marginal distributions.
Figure 4: Relation between the copula's parameter θ and Spearman's correlation ρ given by Eq. (2) for the (a) Clayton, (b) Gumbel and (c) AMH copulas.
Figure 8: Distinguishable elements of the 3rd cumulant tensor for (a) the Clayton, (b) the Gumbel and (c) the AMH copula with standard Gaussian univariate marginals, computed numerically. Due to the symmetry properties of the Archimedean copulas only three elements are distinguishable: the super-diagonal element, the partial-diagonal element and the off-diagonal element. In the case of the Clayton copula we present theoretical outcomes [42] that are consistent with the simulation ones; the errors of the simulation outcomes are negligible. The super-diagonal element values are zero due to the univariate Gaussian marginals. Apart from this, the other cumulants' elements are non-zero.

For sampling Archimedean copulas we use a modified Marshall-Olkin algorithm. The original algorithm is presented in Alg. 1. Note that the algorithm requires a sample of the inverse Laplace–Stieltjes transform of the Archimedean copula generator ψ.

1: Input: ψ – the generating function of the Archimedean copula, x_1, …, x_n – samples of the U(0,1) distribution, v – a sample of the distribution F being the inverse Laplace–Stieltjes transform of ψ.
2: Output: a sample u_1, …, u_n of the n-variate Archimedean copula.
3: function arch_sampler(ψ, x, v)
4:     for i = 1, …, n do
5:         u_i ← ψ(−log(x_i)/v)
6:     end for
7:     return u_1, …, u_n
8: end function
Algorithm 1: Marshall-Olkin algorithm [43, 44] sampling an Archimedean copula.
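A minimal Julia sketch of Alg. 1 for the Clayton copula, assuming the standard frailty V ~ Gamma(1/θ, 1) from [44]; the function names are ours, not the repository code.

using Distributions   # Gamma

ψ_clayton(t, θ) = (1 + t)^(-1/θ)

# Sample t realisations of an n-variate Clayton copula with parameter θ > 0.
function clayton_sample(t::Int, n::Int, θ::Float64)
    u = zeros(t, n)
    for l in 1:t
        v = rand(Gamma(1/θ, 1.0))               # frailty sample, v ~ F = LS⁻¹(ψ)
        x = rand(n)                             # independent U(0,1) samples
        u[l, :] = ψ_clayton.(-log.(x) ./ v, θ)  # the Marshall–Olkin step
    end
    return u
end

u = clayton_sample(10_000, 3, 2.0)              # uniform marginals, correlated columns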

2.1 MEV and JSBS algorithms

Having introduced multivariate cumulants, we can now discuss cumulant-based feature selection algorithms. Let X be a t × n dimensional sample of the random vector (X_1, …, X_n). Here the i-th column of X is a vector of t realisations of the i-th feature (marginal), while a row of X is a single realisation of the random vector. A feature selection algorithm chooses a subset of features which should provide as much information as possible compared with the original data. Alternatively, the feature selection problem can be understood as providing a new ordering of the marginals which represents their importance. Then, for a fixed subset size, the top marginals in this ordering form a representative collection of the original features.

MEV (Maximum Ellipsoid Volume) [21] is a feature selection algorithm that iteratively removes the least informative variable. The choice is based on the maximisation of the hyper-ellipsoid volume in the eigenvector space of the covariance matrix of the remaining marginals. In detail, for each marginal variable the algorithm computes the determinant of the sub-covariance matrix constructed from the original one by removing the corresponding column and row. The variable which provides the smallest determinant is considered to be the least informative at this point. Then the MEV algorithm recursively searches for consecutive variables in the remaining collection. This procedure provides the information ordering of the marginal variables. However, as described above, the MEV algorithm is based on the covariance matrix, being the second cumulant of multivariate data. Hence, if for some subset of marginals a cumulant of higher order is non-zero, it may be ignored by the MEV algorithm.
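A Julia sketch of the quantity on which the MEV ranking is based, namely the determinants of the sub-covariance matrices with one marginal removed (the helper name subcov_determinants is ours, not the reference implementation of [21]).

using LinearAlgebra, Statistics

# For each marginal i, the determinant of the sub-covariance matrix of the data
# with the i-th row and column removed; MEV ranks the marginals by these values.
function subcov_determinants(x::Matrix{Float64})
    Σ = cov(x)
    n = size(Σ, 1)
    [det(Σ[[j for j in 1:n if j != i], [j for j in 1:n if j != i]]) for i in 1:n]
end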

The JSBS [22] algorithm is a natural extension of the MEV algorithm, which analyses the third-order multivariate cumulant. Since such a cumulant can be represented as a 3-mode tensor, for which a determinant is not well defined, in [22] the authors optimise the following target function:

JS = √det(C_{3(1)} C_{3(1)}^⊤) / det(C_2)^{3/2},   (3)

where C_{3(1)} is the third cumulant unfolded in mode 1 and C_2 is the covariance matrix. As cumulants are super-symmetric [26, 27], it is not important in which mode they are unfolded. The JS (Joint Skewness) can be interpreted as the product of the singular values taken from the HOSVD of the third cumulant's tensor divided by the product of the covariance matrix eigenvalues raised to the power 3/2.
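A Julia sketch of the target function in Eq. (3) as reconstructed above (the helper name joint_skewness is ours; the reference implementation of [22] may differ by constants or powers, which do not change the induced ranking).

using LinearAlgebra, Statistics

function joint_skewness(x::Matrix{Float64})
    t, n = size(x)
    xc = x .- mean(x, dims = 1)
    # the third cumulant of centred data equals the third central moment tensor
    c3 = [mean(xc[:, i] .* xc[:, j] .* xc[:, k]) for i in 1:n, j in 1:n, k in 1:n]
    m = reshape(c3, n, n * n)                   # mode-1 unfolding; the mode does not matter
    sqrt(det(m * m')) / det(cov(x))^(3/2)
end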

3 The algorithm

In this section we propose and analyse an algorithm which transforms part of normally distributed data. Our goal is to replace part of the originally Gaussian data by samples distributed according to various copulas. Our algorithm allows replacing an arbitrary subset of marginal variables of size k by a chosen Archimedean copula. In particular, we focus on the copulas presented in Sec. 2. We use the proposed algorithm to show the difference in detection between the MEV and the JSBS algorithms.

3.1 Data malformation algorithm

Suppose we have X, a sample of t realisations of a Gaussian n-variate distribution, and we want to replace all realisations of k marginals, i.e. those indexed by the set ind, by realisations modelled by an Archimedean k-variate copula, but leave the other marginals, i.e. those indexed outside ind, unchanged. If we denote the new data by Y, ideally we would expect cov(Y) ≈ cov(X), to make the transformation hard to detect by methods using the second order correlations only.

Our algorithm, presented in Alg. 2, takes several steps: first the data are standardized, so all marginal variables have zero mean and variance equal to one. Then, based on these, we produce new samples of a (k+1)-variate random vector with independent marginals, all distributed according to U(0,1), i.e. the uniform distribution on [0, 1]. Those new core samples are generated in an information-preserving way using Alg. 3, or naively for reference. Then the parametrisation of the Archimedean copula is derived. New samples are produced by means of Alg. 1, given the core samples and the parameter. Finally, the original univariate distributions (Gaussian in our case) are recovered.

Let us first focus on the parameter derivation. While several approaches could be considered here, our method relies on the fact that the correlation matrix of an Archimedean copula is constant outside the diagonal. We calculate the mean value of the upper triangle of the Spearman's correlation matrix of the data to be replaced, and then recover the parameter θ thanks to Eq. (2).
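A Julia sketch of this derivation step for the Clayton copula (the helper names clayton_rho, mean_rho and clayton_theta are ours; the grid integration of Eq. (2) and the bisection bracket are assumptions, not the repository code).

using Statistics: mean
using StatsBase: corspearman

# Spearman's ρ of a bivariate Clayton copula via grid integration of Eq. (2)
function clayton_rho(θ; m = 200)
    C(u, v) = (u^(-θ) + v^(-θ) - 1)^(-1/θ)
    grid = ((1:m) .- 0.5) ./ m
    12 * sum(C(u, v) for u in grid, v in grid) / m^2 - 3
end

# mean upper-triangle Spearman correlation of the marginals to be replaced
function mean_rho(x::Matrix{Float64})
    r = corspearman(x)
    n = size(r, 1)
    mean(r[i, j] for i in 1:n-1 for j in i+1:n)
end

# invert the monotone map θ ↦ ρ(θ) by bisection, assuming ρ ∈ (0, 1)
function clayton_theta(ρ; lo = 1e-3, hi = 50.0)
    for _ in 1:60
        mid = (lo + hi)/2
        clayton_rho(mid) < ρ ? (lo = mid) : (hi = mid)
    end
    (lo + hi)/2
end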

Now let us consider the second part of the algorithm, which is presented in Alg. 3. Observe first that Alg. 1 converts samples of the uniform distribution into a sample of a k-variate Archimedean copula. The naive approach would be to generate independent samples from the uniform distribution, transform them using Alg. 1 into samples of a k-variate Archimedean copula, and finally transform the univariate marginals by the quantile functions of the original univariate frequency distributions (Gaussian in our case). Such a naive approach preserves the univariate marginal distributions due to the copula approach, and roughly preserves the correlation inside the subset if the copula parameter is chosen properly, but it results in almost no correlation between the changed and unchanged marginals. This implies a clear change of the covariance between the two subsets, which would make the detection easy for methods based on the second order correlations (the covariance matrix).
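A minimal Julia sketch of this naive route, assuming a Clayton copula and standard Gaussian univariate marginals (the function name naive_clayton_block and the frailty choice Gamma(1/θ, 1), taken from [44], are ours, not the repository code).

using Distributions   # Gamma, Normal

# naive route: fresh independent uniforms → Marshall–Olkin step for a Clayton
# copula → standard Gaussian univariate marginals via the normal quantile
function naive_clayton_block(t::Int, k::Int, θ::Float64)
    ψ(s) = (1 + s)^(-1/θ)
    u = zeros(t, k)
    for l in 1:t
        v = rand(Gamma(1/θ, 1.0))               # frailty sample
        u[l, :] = ψ.(-log.(rand(k)) ./ v)       # Alg. 1 applied to independent uniforms
    end
    return map(p -> quantile(Normal(), p), u)   # recover standard Gaussian marginals
end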

To overcome this problem we need to collect information about the general correlation between the subsets of changed and unchanged marginals and input it into Alg. 1. To store such information we use one column of the core matrix, i.e. a vector whose elements are used to produce the samples of v in Alg. 1. In this algorithm we use the function

φ(x, v) = ψ(−log(x)/v),   (4)

to compute a sample of the i-th marginal of the Archimedean copula. By the Archimedean copula generator definition, such a function is strictly increasing in v for an arbitrary constant x ∈ (0, 1). Furthermore, v is a sample of the inverse Laplace–Stieltjes transform F of the copula generator ψ, which is a CDF for the range of θ considered in this paper, see Tab. 1 and [44]. Its inverse (a quantile function) is strictly increasing if F is continuous and non-decreasing if F is discrete. Hence the composition y ↦ φ(x, F^{-1}(y)) is strictly increasing for continuous F, and non-decreasing for discrete F.

In the case of the Gumbel and the Clayton copula we have [44]

ψ(t) = E[e^{−tV}] = ∫_0^∞ e^{−tv} dF(v),   (5)

for t ∈ [0, ∞), where V is a random variable distributed according to a continuous distribution F. In this case, if we extract most of the information about the correlation between changed and unchanged marginals into the vector that is mapped to the samples of v, we can carry this information (in the sense of ordering) through Alg. 1. In the case of the AMH copula F is discrete and fulfils a discrete version of the transform. In this case some information may be lost, since the corresponding quantile function is not strictly increasing. Finally, let us take the particular case of the Gumbel copula, where F is a Lévy stable distribution [44] without an analytical form. Still, its element-wise sampling is discussed in [45, 46]. Hence we sample realisations of the appropriate Lévy stable distribution using [45, 46] and sort the outcome according to the ordering of the information-carrying vector. Such generated data are used in Alg. 1. Since for a large number of realisations the outcome converges to the one obtained from the quantile function, we find our approach well motivated. This approach can be used for other copulas not considered in this paper, for which the quantile function of F is not known analytically.

Following this discussion, our approach to preserving the correlation between changed and unchanged marginals consists in preparing the core samples from the data included in the marginals that are changed. For the sake of Alg. 1, the core samples must contain realisations of independent uniform marginals [43, 44]. However, those marginals can be correlated with the remaining marginals of the unchanged data, and this correlation should be preserved in the changed data. Hence, in Alg. 3 the eigenvector decomposition is performed in such a way that the marginal carrying the largest amount of information can be singled out; we include this information in the samples used to produce v. Note that the function (4) is strictly increasing in x for constant v as well. Hence, substituting elements of the remaining core columns for x in Alg. 1 carries information through the algorithm as well. We call it local information, since each such column corresponds to a single marginal of the Archimedean copula. This is in contrast to v, which carries global information via Eq. (4). Concluding, in our approach the global information is more significant than the local one.

1: Input: X – a realisation of an n-variate normal distribution, dist – a label denoting the replacing Archimedean copula, ind – indexes of the variables to be replaced.
2: Output: malformed data with the copula dist on the marginal subset ind.
3: function arch_copula_malformation(X, dist, ind)
4:     store the variance diagonal matrix of X_ind
5:     store the mean vector of X_ind
6:     k ← length of ind
7:     for i ∈ ind do
8:         standardize the i-th marginal of X
9:     end for
10:    produce core samples with core(X_ind) (Alg. 3), or with core_naive(ind) for the naive variant
11:    derive the generator ψ given dist
12:    derive the parameter θ based on the Spearman's correlation of X_ind and Eq. (2)
13:    for each realisation (row) do
14:        apply Alg. 1 to the corresponding core samples, given ψ and θ
15:        recover the original univariate distributions (Gaussian), means and variances
16:    end for
17:    return the malformed data
18: end function
Algorithm 2: Change a part of multivariate normally distributed data into Archimedean copula distributed data with a similar covariance; X_ind is shorthand for 'all elements with the second index in ind'.
1: Input: a realisation of a multivariate normal distribution with standard normal marginals.
2: Output: t realisations of k+1 independent U(0,1) marginals.
3: function core(X)
4:     append an additional, information-carrying column to the marginals to be changed
5:     compute the eigenvector decomposition of the correlation matrix of the extended data
6:     transform the extended data into independent standard normal components
7:     for l = 1, …, t do
8:         take the l-th realisation
9:         for i = 1, …, k+1 do
10:            map its i-th element to [0, 1] with the standard normal CDF
11:        end for
12:    end for
13:    return the resulting t × (k+1) matrix of independent uniform marginals
14: end function
Algorithm 3: The core function generating the initial data from a multivariate uniform distribution.

The algorithms presented here are implemented in the Julia programming language [23] and can be found in a GitHub repository [24]; see the gcop2arch function therein.

3.2 MEV vs JSBS algorithms

In this section we show that our algorithm generates data which distinguish the MEV and JSBS algorithms. Our experiment is based on an n-dimensional random vector. We chose a random covariance matrix Σ with ones on the diagonal, and generated multivariate normal samples with mean vector μ and covariance Σ. The covariance matrix was chosen as follows. First we chose a random matrix A whose elements are distributed according to the uniform distribution on [0, 1]. Let D be the diagonal matrix of A A^⊤. Our covariance matrix takes the form Σ = D^{-1/2} A A^⊤ D^{-1/2}.
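One possible Julia sketch of such a construction (an assumption matching the description above, not necessarily the exact generator used in the experiments): normalise A A^⊤ by its diagonal so the result has a unit diagonal.

using LinearAlgebra

function random_correlation(n::Int)
    a = rand(n, n)                              # entries from the uniform distribution on [0, 1]
    s = a * a'                                  # symmetric, positive definite (almost surely)
    d = Diagonal(1 ./ sqrt.(diag(s)))
    Matrix(Symmetric(d * s * d))                # unit diagonal
end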

Then 8 randomly chosen variables were changed according to our algorithm, using both the core function presented in Alg. 3 and core_naive, which generates samples independently from the uniform distribution. The first kind of samples was analysed using both the MEV and JSBS algorithms; the second kind was analysed using the MEV algorithm only.

In Fig. 11 we present exemplary results on data malformed using the Gumbel copula. The figure presents how many of the 8 detected marginals are the malformed ones. We observe that in each case the JSBS algorithm has detected all malformed marginals, while the MEV yields results similar to a random guess. In contrast, we can see that data malformed by means of the naive algorithm core_naive were partially detected by the MEV. We explain this fact by the large influence of the naive algorithm on the covariance matrix, see Fig. 11(b). As discussed in the previous subsection, in the naive algorithm case we have zero correlation between the malformed and unaffected subsets, which is not true in the core algorithm case.

3.3 Algorithm analysis

Figure 11: Comparison of the core and the naive algorithms on data malformed using the Gumbel copula. (a): the number of experiments for which a given number of malformed marginals was found; 'theoretical' is the expected number of 'detected' marginals in the case of a random guess. In all cases 8 marginals were malformed and the 8 detected ones (left after the elimination procedure) were analysed. (b): the relative change of the covariance matrix for all experiments.
Figure 14: Comparison of the (a) MEV and (b) JSBS algorithms on data malformed using the core algorithm, different copulas, and different methods of covariance matrix generation. Note that for the constant and random covariance matrices the JSBS achieves full efficiency.
Figure 17: Efficiency of (a) the MEV and (b) the JSBS algorithm for the noised Toeplitz correlation matrix. Note that the MEV discriminability is not affected significantly by the noise parameter value, while the JSBS discriminability in general rises with the noise strength.

In this section we analyse our algorithm using the Clayton, Gumbel and AMH Archimedean copulas and different covariance matrix generators. In particular, we have analysed

  1. a truly random covariance matrix, generated as described in Sec. 3.2,

  2. a constant correlation matrix, where the correlation between two different marginal random variables equals a free parameter ρ,

  3. a Toeplitz correlation matrix, where the correlation between the i-th and j-th marginals equals a^{|i−j|} for a parameter a ∈ [0, 1),

  4. a noised version of the Toeplitz correlation matrix [47], where a parameter ε corresponds to the noise impact.

Our measure of the goodness of discrimination is the percentage of correctly discriminated variables, i.e. if the l-th experiment (out of N) has correctly recognized m_l of the k malformed variables, then the mean discriminability is defined as

D = (1 / (N k)) ∑_{l=1}^{N} m_l.   (6)

Note that D = 1 is achieved when all malformed variables were correctly recognized in all experiments. Remark that a random guess would give an expected discriminability of k/n. We claim that our algorithm works well if the discriminability of the MEV algorithm is close to this theoretical value, while for the JSBS it is much higher.
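A one-line worked example of Eq. (6) as reconstructed above (notation and the example numbers are ours).

discriminability(found, k) = sum(found) / (k * length(found))

discriminability([8, 8, 7, 8], 8)               # = 31/32 ≈ 0.97 for four example experiments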

Let us analyse the algorithm. In Fig. 14 we show that its efficiency depends both on the choice of the correlation matrix generation method and on the Archimedean copula used in the algorithm. For the constant correlation matrix, the JSBS achieves full discriminability for all copulas; however, the MEV still produces a discriminability much higher than the random choice, especially in the AMH copula case for large values of the constant correlation parameter. For the random correlation matrix generation, the JSBS achieved almost full discriminability for all copulas. In the case of the MEV the efficiency was high for the AMH copula, but small for the Gumbel copula.

Note that the AMH copula can achieve only limited correlation values (see Tab. 1); hence covariance matrices implying high overall correlations may be visibly affected by the algorithm with the AMH copula. Further, as discussed in Subsection 3.1, the algorithm with the AMH copula may lose some information due to the discrete inverse Laplace–Stieltjes transform of the AMH copula generator. These observations may explain the higher detectability by the MEV of data malformed using the AMH copula and a constant correlation matrix with a large parameter ρ.

The best stability of results can be observed for the Toeplitz correlation matrices. Here the MEV achieves efficiency similar to the random choice over the considered parameter range, while the JSBS achieves good discriminability for all copulas from moderate values of the Toeplitz parameter upwards. Here the Clayton copula produces almost full efficiency.

In Fig. 17 we present the impact of the noise in the Toeplitz correlation matrix on the detection performance. We observe that the MEV performance does not depend significantly on the noise parameter. This is not the case for the JSBS, where for the Gumbel and AMH copulas the larger the noise parameter ε, the higher the discriminability. The Clayton copula outcomes are good for all considered values of the noise parameter.

To conclude, the introduction of noise improves the detection of both the JSBS and the MEV, but in the latter case the change is negligible. In our opinion, this is due to the fact that we lose some information while malforming the data using the core algorithm; however, this loss is small enough to be hidden in the noise.

4 Conclusions

In this paper, we presented and analysed an algorithm which replaces a chosen subset of marginals, initially distributed according to a multivariate Gaussian distribution, by marginals distributed according to an Archimedean copula. While our algorithm is designed for particular Archimedean families, it can easily be generalised to other Archimedean copulas. The correctness of our algorithm was numerically confirmed by comparing two distinct feature selection algorithms: the JSBS, which utilises higher order correlations, and the MEV, which does not. The comparison was performed for various choices of correlation matrices. While for some correlation matrices the MEV provided almost random results, the JSBS in most cases gave almost full discrimination. Hence, in our opinion, the algorithm can be used to provide data for the analysis of various complicated machine-learning algorithms. Such an analysis would determine whether the algorithm utilises higher order correlations or not. One should note that our algorithm does not affect the covariance matrix significantly, leaves all univariate marginal distributions unchanged, and does not reshuffle the realisations of the data. This is why the algorithm does not introduce other factors apart from higher order correlations.

The resulting algorithm can be generalised in various directions. First, we believe that the effect on the covariance matrix can be diminished further, which would reduce the discriminability of the MEV algorithm. Second, the algorithm can be generalised to a more extensive collection of copulas, not necessarily Archimedean ones. We can mention here the Marshall-Olkin bivariate copula or the Fréchet maximal copula derived from the Fréchet–Hoeffding upper copula bound.

Acknowledgments

The research was partially financed by the National Science Centre, Poland – project number 2014/15/B/ST6/05204. The authors would like to thank Jarosław Adam Miszczak for revising the manuscript and discussion.

References

  • [1] J. Fan, J. Lv, and L. Qi, “Sparse high-dimensional models in economics,” Annual Review of Economics, vol. 3, pp. 291–317, 2011.
  • [2] T. Ando and J. Bai, “Clustering huge number of financial time series: A panel data approach with high-dimensional predictors and factor structures,” Journal of the American Statistical Association, vol. 112, no. 519, pp. 1182–1198, 2017.
  • [3] Y. Dai, B. Hu, Y. Su, C. Mao, J. Chen, X. Zhang, P. Moore, L. Xu, and H. Cai, “Feature selection of high-dimensional biomedical data using improved SFLA for disease diagnosis,” in Bioinformatics and Biomedicine (BIBM), 2015 IEEE International Conference on, pp. 458–463, IEEE, 2015.
  • [4] W. Li, G. Wang, and K. Li, “Clustering algorithm for audio signals based on the sequential Psim matrix and Tabu Search,” EURASIP Journal on Audio, Speech, and Music Processing, vol. 2017, no. 1, p. 26, 2017.
  • [5] K. Cordes, L. Grundmann, and J. Ostermann, “Feature evaluation with high-resolution images,” in International Conference on Computer Analysis of Images and Patterns, pp. 374–386, Springer, 2015.
  • [6] S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” science, vol. 290, no. 5500, pp. 2323–2326, 2000.
  • [7] R. O. Duda, P. E. Hart, and D. G. Stork, Pattern classification. John Wiley & Sons, 2012.
  • [8] I. T. Jolliffe, “Principal components as a small number of interpretable variables: some examples,” Principal Component Analysis, pp. 63–77, 2002.
  • [9] E. Jondeau, E. Jurczenko, and M. Rockinger, “Moment component analysis: An illustration with international stock markets,” Journal of Business & Economic Statistics, pp. 1–23, 2017.
  • [10] J. C. Arismendi and H. Kimura, “Monte Carlo Approximate Tensor Moment Simulations,” Available at SSRN 2491639, 2014.
  • [11] H. Becker, L. Albera, P. Comon, M. Haardt, G. Birot, F. Wendling, M. Gavaret, C.-G. Bénar, and I. Merlet, “EEG extended source localization: tensor-based vs. conventional methods,” NeuroImage, vol. 96, pp. 143–157, 2014.
  • [12] M. Geng, H. Liang, and J. Wang, “Research on methods of higher-order statistics for phase difference detection and frequency estimation,” in Image and Signal Processing (CISP), 2011 4th International Congress on, vol. 4, pp. 2189–2193, IEEE, 2011.
  • [13] B. Schölkopf, A. Smola, and K.-R. Müller, “Nonlinear component analysis as a kernel eigenvalue problem,” Neural computation, vol. 10, no. 5, pp. 1299–1319, 1998.
  • [14] H. Hoffmann, “Kernel PCA for novelty detection,” Pattern Recognition, vol. 40, no. 3, pp. 863–874, 2007.
  • [15] G. Baudat and F. Anouar, “Generalized discriminant analysis using a kernel approach,” Neural computation, vol. 12, no. 10, pp. 2385–2404, 2000.
  • [16] J. A. Lee and M. Verleysen, Nonlinear dimensionality reduction. Springer Science & Business Media, 2007.
  • [17] M. Sakurada and T. Yairi, “Anomaly detection using autoencoders with nonlinear dimensionality reduction,” in Proceedings of the MLSDA 2014 2nd Workshop on Machine Learning for Sensory Data Analysis, p. 4, ACM, 2014.
  • [18] W. Wang, Y. Huang, Y. Wang, and L. Wang, “Generalized autoencoder: A neural network framework for dimensionality reduction,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 490–497, 2014.
  • [19] G. E. Hinton and R. R. Salakhutdinov, “Reducing the dimensionality of data with neural networks,” science, vol. 313, no. 5786, pp. 504–507, 2006.
  • [20] A. J. McNeil and J. Nešlehová, “Multivariate Archimedean copulas, d-monotone functions and ℓ₁-norm symmetric distributions,” The Annals of Statistics, pp. 3059–3097, 2009.
  • [21] C. Sheffield, “Selecting band combinations from multispectral data,” Photogrammetric Engineering and Remote Sensing, vol. 51, pp. 681–687, 1985.
  • [22] X. Geng, K. Sun, L. Ji, H. Tang, and Y. Zhao, “Joint Skewness and Its Application in Unsupervised Band Selection for Small Target Detection,” Scientific reports, vol. 5, 2015.
  • [23] J. Bezanson, A. Edelman, S. Karpinski, and V. B. Shah, “Julia: A fresh approach to numerical computing,” SIAM Review, vol. 59, no. 1, pp. 65–98, 2017.
  • [24] K. Domino and A. Glos, “DatagenCopulaBased.jl.” https://doi.org/10.5281/zenodo.1213710, 2018.
  • [25] P. McCullagh and J. Kolassa, “Cumulants,” Scholarpedia, vol. 4, no. 3, p. 4699, 2009.
  • [26] M. D. Schatz, T. M. Low, R. A. van de Geijn, and T. G. Kolda, “Exploiting symmetry in tensors for high performance: Multiplication with symmetric tensors,” SIAM Journal on Scientific Computing, vol. 36, no. 5, pp. C453–C479, 2014.
  • [27] K. Domino, Ł. Pawela, and P. Gawron, “Efficient computation of higher-order cumulant tensors,” SIAM Journal on Scientific Computing, vol. 40, no. 3, pp. A1590–A1610, 2018.
  • [28] M. G. Kendall et al., The Advanced Theory of Statistics, 2nd ed., 1946.
  • [29] E. Lukacs, Characteristic Functions. Griffin, London, 1970.
  • [30] R. B. Nelsen, An introduction to copulas. Springer Science & Business Media, 2007.
  • [31] U. Cherubini, E. Luciano, and W. Vecchiato, Copula methods in finance. John Wiley & Sons, 2004.
  • [32] P. Embrechts, F. Lindskog, and A. McNeil, “Modelling dependence with copulas,” Rapport technique, Département de mathématiques, Institut Fédéral de Technologie de Zurich, Zurich, 2001.
  • [33] N. Naifar, “Modelling dependence structure with Archimedean copulas and applications to the iTraxx CDS index,” Journal of Computational and Applied Mathematics, vol. 235, no. 8, pp. 2459–2466, 2011.
  • [34] K. Domino and T. Błachowicz, “The use of copula functions for modeling the risk of investment in shares traded on the Warsaw Stock Exchange,” Physica A: Statistical Mechanics and its Applications, vol. 413, pp. 77–85, 2014.
  • [35] Q. Zhang, J. Li, and V. P. Singh, “Application of Archimedean copulas in the analysis of the precipitation extremes: effects of precipitation changes,” Theoretical and applied climatology, vol. 107, no. 1-2, pp. 255–264, 2012.
  • [36] G. Tsakiris, N. Kordalis, and V. Tsakiris, “Flood double frequency analysis: 2D-Archimedean copulas vs bivariate probability distributions,” Environmental Processes, vol. 2, no. 4, pp. 705–716, 2015.
  • [37] X. Zeng, J. Ren, Z. Wang, S. Marshall, and T. Durrani, “Copulas for statistical signal processing (part i): Extensions and generalization,” Signal Processing, vol. 94, pp. 691–702, 2014.
  • [38] G. W. Peters, T. A. Myrvoll, T. Matsui, I. Nevat, and F. Septier, “Communications meets copula modeling: Non-standard dependence features in wireless fading channels,” in Signal and Information Processing (GlobalSIP), 2014 IEEE Global Conference on, pp. 1224–1228, IEEE, 2014.
  • [39] R. F. Silva, S. M. Plis, T. Adalı, and V. D. Calhoun, “A statistically motivated framework for simulation of stochastic data fusion models applied to multimodal neuroimaging,” NeuroImage, vol. 102, pp. 92–117, 2014.
  • [40] P. Kumar, “Probability distributions and estimation of Ali-Mikhail-Haq copula,” Applied Mathematical Sciences, vol. 4, no. 14, pp. 657–666, 2010.
  • [41] B. Schweizer and E. F. Wolff, “On nonparametric measures of dependence for random variables,” The annals of statistics, pp. 879–885, 1981.
  • [42] E. de Amo, M. D. Carrillo, J. F. Sánchez, and A. Salmerón, “Moments and associated measures of copulas with fractal support,” Applied Mathematics and Computation, vol. 218, no. 17, pp. 8634–8644, 2012.
  • [43] A. W. Marshall and I. Olkin, “Families of multivariate distributions,” Journal of the American statistical association, vol. 83, no. 403, pp. 834–841, 1988.
  • [44] M. Hofert, “Sampling Archimedean copulas,” Computational Statistics & Data Analysis, vol. 52, no. 12, pp. 5163–5174, 2008.
  • [45] A. J. McNeil, “Sampling nested Archimedean copulas,” Journal of Statistical Computation and Simulation, vol. 78, no. 6, pp. 567–581, 2008.
  • [46] J. Nolan, Stable distributions: models for heavy-tailed data. Birkhauser New York, 2003.
  • [47] J. Hardin, S. R. Garcia, and D. Golan, “A method for generating realistic correlation matrices,” The Annals of Applied Statistics, pp. 1733–1762, 2013.