A Communication-Efficient Distributed Algorithm for Kernel Principal Component Analysis

05/06/2020, by Fan He et al., Shanghai Jiao Tong University

Principal Component Analysis (PCA) is a fundamental technique in machine learning. Nowadays, many high-dimensional, large-scale datasets are acquired in a distributed manner, which precludes centralized PCA due to its high communication cost and privacy risk. Thus, many distributed PCA algorithms have been proposed, most of which, however, focus on the linear case. To efficiently extract non-linear features, this brief proposes a communication-efficient distributed kernel PCA algorithm for linear and RBF kernels. The key is to estimate the global empirical kernel matrix from the eigenvectors of the local kernel matrices. The approximation error of the estimators is theoretically analyzed for both linear and RBF kernels. The result suggests that when the eigenvalues decay fast, which is common for RBF kernels, the proposed algorithm gives high-quality results with low communication cost. Simulation experiments verify our theoretical analysis, and experiments on the GSE2187 dataset show the effectiveness of the proposed algorithm.


I Introduction

Principal Component Analysis (PCA) is a fundamental technique in the machine learning community. Research on PCA and its variants, including sparse PCA [1], robust PCA [2], and kernel PCA [3], has been active for decades, with wide applications in data de-noising [4], low-rank subspace factorization [5], feature extraction [6], etc. For different data settings, different algorithms have been designed for PCA, for example, centralized algorithms for small datasets and stochastic algorithms for large datasets [7].

Nowadays, massive datasets are acquired in a distributed manner, bringing new challenges to traditional data analysis. When the data scale is large, transmitting all data to a single machine requires high communication cost and large memory, which is quite inefficient. Moreover, in many scenarios, such as medical, biomedical, and financial tasks, data privacy and security are significant concerns, which make it impossible to gather the global data. For these reasons, distributed learning, which learns locally and synthesizes information globally, has become very important, and many effective algorithms already exist [8, 9]. According to the structure of the data, the distribution can be generally categorized into two regimes, namely horizontally and vertically partitioned data [10, 11]. The two regimes are shown in Fig. 1: when data are partitioned horizontally, each local machine contains a subset of samples with complete features, while in the vertical regime, each machine contains all samples but only a subset of features.

Fig. 1: Categorization of data partitioning in the distributed setting.

For PCA problems, most current research focuses on the horizontal regime [11, 12, 13, 14, 15]. The key property is the consistency between the sum of local covariance matrices and the global covariance matrix, which benefits both algorithm design and theoretical analysis. For example, the power and inverse power methods can be extended to the distributed setting [14, 15], where the global empirical covariance matrix is implicitly computed through distributed matrix-vector products. Besides, [14] adds noise during transmission in the proposed multi-round algorithm to protect data privacy, which, however, incurs a provable loss in accuracy. For both efficient communication and privacy protection, [15, 11] propose one-shot aggregation algorithms. In [15], the eigenvectors of the global covariance matrix are estimated by averaging the local empirical risk minimizers with sign correction. [11] gives another method that estimates the eigenspace of the global covariance matrix by averaging the local eigenspaces.

However, although the vertical regime is also common in practice, e.g., in wireless sensor networks [16, 17] and ranking or evaluation systems [18, 19], applicable distributed PCA algorithms are few. In fact, in this setting the data dimension is usually high and PCA is in high demand. Most distributed PCA methods in the vertical regime are rooted in a separability property such that the global projection matrix can be calculated locally, but only through an iterative procedure. For example, in [20, 21], the power method and Oja's method are combined with the average consensus algorithm. In [12], the latest work in the vertical regime, distributed PCA is solved by coordinate descent combined with the alternating direction method of multipliers (ADMM). Generally, the above methods can solve distributed PCA in the vertical regime, but they require multiple communication rounds, which leaves room for improvement in terms of both efficiency and privacy.

In this brief, inspired by the fact that the kernel trick transfers the optimization variables from primal weights corresponding to features to dual variables corresponding to samples, we establish a distributed PCA for the vertical regime. Generally speaking, we solve the eigenproblem of the kernel matrix rather than that of the covariance matrix. Since the central kernel matrix can be reformulated as a linear/non-linear combination of local kernel matrices, a one-shot fusion strategy can be used, achieving high efficiency. From the viewpoint of duality, a data covariance matrix in the horizontal regime corresponds to a kernel matrix in the vertical regime, from which it follows that the developed method shares similar properties with primal PCA in the horizontal regime. Besides, since the kernel trick is used, the proposed method can be readily extended to nonlinear PCA, e.g., by applying the RBF kernel.

Notice that the existing research on distributed KPCA targets the horizontal regime, and most of it requires multiple communication rounds. There, the main aim of introducing the kernel trick is to extend distributed PCA from the linear to the non-linear case. However, as pointed out above, the kernel trick actually breaks the good properties of the horizontal regime, so an iterative procedure is required. For example, [22] proposes to solve kernel PCA with an EM algorithm. [23] combines subspace embedding and adaptive sampling to generate a representative subset of the original data and then performs local KPCA on it, which needs multiple communication rounds to determine the subset.

To fill this gap, we propose a communication-efficient distributed algorithm for KPCA in the vertical regime. Specifically, the first eigenvectors and their corresponding eigenvalues of the local kernel matrices are calculated and sent to a fusion center, where they are aggregated to reproduce local estimators. Both linear and RBF kernels (for which the global RBF kernel matrix is the Hadamard (element-wise) product of the local RBF kernel matrices) are applicable. For linear kernels, the estimator of the global kernel matrix is computed by adding up the local estimators; for RBF kernels, it is the Hadamard product of the local estimators. Hence, the proposed algorithm needs only one privacy-preserving communication round. The theoretical discussion will show that the approximation error is related to the -th eigenvalue of the local matrices. Thus, when eigenvalues decay fast, which is common for RBF kernels, the proposed algorithm gives high-quality results.

The rest of this brief is organized as follows. We briefly review the kernel trick for PCA and set up the problem in Section II. Section III gives the algorithm in detail. The approximation analysis is presented in Section IV. In Section V, numerical experiments verify the theory and evaluate the proposed methodology. A short conclusion in Section VI ends this brief.

II Preliminaries and Problem Setup

Throughout this brief, we use regular letters for scalars, bold capital letters for matrices, and bold lowercase letters for vectors. For a matrix , denotes the Frobenius norm. We use to denote the -th eigenvalue of the symmetric matrix . In this brief, we consider solving the KPCA problem in a distributed setting, where the data are partitioned in the vertical regime and stored across local machines. In addition, without loss of generality, we set the first machine to be the fusion center. For , machine acquires at time a zero-mean data vector , drawn independently and identically. is the feature dimension of the data and we have . Let denote the central empirical data collected by all machines, which are not stored together but introduced for convenience. For the central empirical kernel matrix and its approximation , and denote their -th eigenvalues. The kernel matrix in the -th local machine is denoted by , with corresponding eigenvectors .
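To fix notation for the vertical regime, the following is a minimal sketch in our own symbols (not necessarily those of the original): the global data matrix is the column-wise concatenation of the local feature blocks,

\[
\mathbf{X} = \big[\mathbf{X}^{(1)}, \mathbf{X}^{(2)}, \dots, \mathbf{X}^{(m)}\big] \in \mathbb{R}^{N \times d},
\qquad \mathbf{X}^{(l)} \in \mathbb{R}^{N \times d_l}, \qquad \sum_{l=1}^{m} d_l = d,
\]

where machine $l$ stores only $\mathbf{X}^{(l)}$, i.e., all $N$ samples but only $d_l$ of the $d$ features.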

Before introducing our distributed algorithm, we first briefly review the KPCA problem, whose basic idea is to map the original data space into a feature space via an implicit nonlinear mapping . The dot product in the feature space can be computed by a kernel function, i.e.,

(1)

The goal of KPCA is to diagonalize the covariance matrix in the feature space by solving the following optimization problem,

(2)

The solution is the eigenvectors of , i.e.,

(3)

Since can be rewritten as , (3) becomes

In other words, is an eigenvector of the kernel matrix , which means that we can solve the eigenproblem on instead of on . This kernel trick sidesteps the problem of computing the unknown mapping and, moreover, makes distributed computation on vertically partitioned data more convenient:

  • is not separable and the approximation by local features is not accurate, i.e., .

  • itself (linear kernel) or its main computational part (RBF kernel) is separable across local machines, e.g., a linear kernel (made explicit in the sketch below).
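To make the second point explicit, here is a minimal sketch in our own notation (machine $l$ holds the block $\mathbf{x}_i^{(l)}$ of sample $\mathbf{x}_i$; the RBF width convention $\sigma^2$ is an assumption): the linear kernel decomposes additively over feature blocks, while the RBF kernel decomposes multiplicatively,

\[
\mathbf{x}_i^\top \mathbf{x}_j = \sum_{l=1}^{m} \mathbf{x}_i^{(l)\top} \mathbf{x}_j^{(l)}
\;\Longrightarrow\; \mathbf{K} = \sum_{l=1}^{m} \mathbf{K}_l,
\qquad
\exp\!\Big(\!-\tfrac{\|\mathbf{x}_i - \mathbf{x}_j\|^2}{\sigma^2}\Big)
= \prod_{l=1}^{m} \exp\!\Big(\!-\tfrac{\|\mathbf{x}_i^{(l)} - \mathbf{x}_j^{(l)}\|^2}{\sigma^2}\Big)
\;\Longrightarrow\; \mathbf{K} = \mathbf{K}_1 \circ \cdots \circ \mathbf{K}_m,
\]

where $\mathbf{K}_l$ is the kernel matrix built from the $l$-th feature block alone and $\circ$ denotes the Hadamard product.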

III Algorithm

In this brief, we propose a communication-efficient, privacy-preserving distributed algorithm for KPCA with linear and RBF kernels. The algorithm produces a good estimate of the global optimum in one communication round while protecting privacy. The algorithmic details are introduced in this section, and the approximation error is analyzed in Section IV.

Our basic idea is to use the eigenvectors of the local kernel matrices to represent the central empirical kernel matrix . Specifically, we first calculate the top eigenvectors of with the corresponding eigenvalues in each local machine and then send them to the fusion center, where they are aggregated by a function that depends on the kernel in use. The calculation of the estimator is as follows.

For linear kernels, it holds that

(4)

Thus, the estimator is calculated as follows.

(5)

For RBF kernels, we decompose the function as follows.

(6)

where is the kernel width. Using to denote the Hadamard (element-wise) product operator, we rewrite (6) as follows,

(7)

Once the eigenvectors of each local kernel matrix are obtained, we can approximate the whole kernel matrix as follows,

(8)

Finally, we compute the first eigenvectors of , denoted as , and the projection matrix . Note that for this calculation, is unknown but can be computed in the distributed system.

For linear kernels, we have

(9)

Thus, each local machine calculates and the center machine adds up the results.

For RBF kernels, we have

(10)

where can be calculated in a distributed manner.

The overall algorithm is summarized in Alg. 1; a code sketch follows the listing.

1:On the local nodes, calculate the local kernel matrix . Solve the eigenvalue problem on , where the leading eigenvectors are and the corresponding eigenvalues are . Send and to the center node.
2:On the center node, calculate as the approximation of . For the linear kernel, . For the RBF kernel, .
3:Compute the leading eigenvectors of the approximate matrix .
4:return .
Algorithm 1 One-shot distributed algorithm for kernel PCA in the vertically partitioned regime.
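To make the one-shot scheme concrete, the following is a minimal NumPy sketch of Alg. 1. This is our own illustrative Python, not the authors' Matlab code from the repository; the function names, the truncation level p, and the RBF width convention are assumptions.

```python
import numpy as np

def local_sketch(X_local, kernel="linear", p=20, sigma=1.0):
    """On one machine: build the local kernel matrix from the local feature
    block X_local (N x d_l) and return its p leading eigenpairs. Only the
    N x p eigenvector block and the p eigenvalues are sent to the center."""
    if kernel == "linear":
        K = X_local @ X_local.T
    else:  # RBF on the local feature block (width convention assumed)
        sq = np.sum(X_local**2, axis=1)
        K = np.exp(-(sq[:, None] + sq[None, :] - 2 * X_local @ X_local.T) / sigma**2)
    lam, U = np.linalg.eigh(K)            # eigenvalues in ascending order
    idx = np.argsort(lam)[::-1][:p]       # keep the p leading eigenpairs
    return U[:, idx], lam[idx]

def fuse(sketches, kernel="linear"):
    """On the fusion center: rebuild each local estimator U diag(lam) U^T and
    aggregate: a sum for the linear kernel, a Hadamard product for RBF."""
    mats = [U @ np.diag(lam) @ U.T for U, lam in sketches]
    if kernel == "linear":
        return sum(mats)
    K_hat = mats[0]
    for M in mats[1:]:
        K_hat = K_hat * M                 # element-wise (Hadamard) product
    return K_hat

def distributed_kpca(X_blocks, k=5, kernel="linear", p=20, sigma=1.0):
    """One-shot distributed KPCA: return the k leading eigenvectors of K_hat."""
    sketches = [local_sketch(Xl, kernel, p, sigma) for Xl in X_blocks]
    K_hat = fuse(sketches, kernel)
    lam, V = np.linalg.eigh(K_hat)
    return V[:, np.argsort(lam)[::-1][:k]]
```

In a usage example, one would split the columns of a zero-mean N x d data matrix into m blocks and pass them as X_blocks; only the N x p factor and p eigenvalues leave each machine.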

Intuitively, Algorithm 1 requires only one communication round and is therefore quite efficient. To give a rigorous analysis, we restrict our discussion to the uniformly distributed situation, i.e., each local machine holds features of dimension and there is no statistical difference between nodes. The discussion for the more general case is similar but involves additional terms.

Note that Alg. 1 has only one communication round, in which the local machines send eigenvectors with their corresponding eigenvalues to the fusion center. Thus, the communication cost of Alg. 1 is . For centralized algorithms, where all data are sent to the fusion center, the communication cost is . Since we want high communication efficiency, tends to be much smaller than .
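As a hedged back-of-envelope count under the uniform split (our own reconstruction, not the paper's exact expressions): each machine ships an $N \times p$ eigenvector block plus $p$ eigenvalues, whereas centralization ships the raw local feature block, so

\[
\text{per-machine communication of Alg.~1} = O(Np), \qquad \text{per-machine communication of centralization} = O\!\left(\frac{Nd}{m}\right),
\]

and the one-shot scheme is cheaper whenever $p \ll d/m$.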

The computation process consists of three main parts:

  • the computation cost of calculating the kernel matrix locally is .

  • the computation cost of solving the eigenproblem is (for a general SVD algorithm).

  • the computation cost of estimating the global kernel matrix in the fusion center is .

Thus, the total computation cost of Alg. 1 is . Compared with centralized algorithms, Alg. 1 needs the additional estimation and fusion steps, i.e., it sacrifices computation efficiency for communication efficiency.

Let us further discuss the computation cost. When is relatively small, can be ignored and the computation cost becomes . For given data, if

then the computation cost is , the same as for centralized algorithms. Notice that the required condition is not strict. For example, when , , , then , which is a large range, will meet the above requirement and the computation cost is .

IV Approximation Analysis

We present the approximation analysis for Alg. 1 in both the linear and RBF cases. Specifically, we study the distance between the eigenspace spanned by , the eigenvectors of the global kernel matrix , and that of the estimator calculated by Alg. 1. This distance is well defined and widely used for measuring the distance between two linear spaces [24, 11]. Let be the singular values of and define as follows.

(11)

The eigengap can then be given as follows

(12)
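For reference, a commonly used form of these two quantities, following the conventions in [24, 11] (the paper's own (11)–(12) may differ in details), is

\[
\rho(\mathbf{U}, \hat{\mathbf{U}}) = \Big(\sum_{i=1}^{k} \big(1 - \sigma_i^2\big)\Big)^{1/2} = \big\|\sin\Theta(\mathbf{U}, \hat{\mathbf{U}})\big\|_F,
\qquad
\delta_k = \lambda_k(\mathbf{K}) - \lambda_{k+1}(\mathbf{K}),
\]

where $\sigma_1 \ge \dots \ge \sigma_k$ are the singular values of $\mathbf{U}^\top \hat{\mathbf{U}}$ for two $N \times k$ orthonormal bases $\mathbf{U}$ and $\hat{\mathbf{U}}$.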

Before giving the main result, we need the following lemmas.

Lemma 1.

(Davis–Kahan theorem) Let and be two real symmetric matrices, whose leading eigenvectors are and , respectively. Let denote the -th eigenvalue of and let be defined as in (12). Then it holds that

(13)
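For reference, the variant of the Davis–Kahan $\sin\Theta$ theorem given in [24], specialized to the leading $k$-dimensional eigenspaces, reads as follows; the constant and the gap convention in the paper's own (13) may differ.

\[
\rho\big(\mathbf{V}_k(\mathbf{A}), \mathbf{V}_k(\hat{\mathbf{A}})\big) \;\le\; \frac{2\,\|\hat{\mathbf{A}} - \mathbf{A}\|_F}{\lambda_k(\mathbf{A}) - \lambda_{k+1}(\mathbf{A})},
\]

provided the eigengap in the denominator is positive.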
Lemma 2.

Let be a kernel matrix derived from a kernel function , and let be its approximation computed by Alg. 1. If is a linear kernel, then it holds that

(14)

If is an RBF kernel, then it holds that

(15)

Lemma 1 is the Davis–Kahan theorem, and the proof of Lemma 2 is given in the appendix. With these lemmas, we can now present the main result in the following theorem.

Theorem 3.

Let be the first eigenvectors of the global kernel matrix derived from a kernel function , and let be its approximation computed by Alg. 1. If is a linear kernel, then satisfies

(16)

If is an RBF kernel, then satisfies

(17)

V Experiments

The performance of Alg. 1 is evaluated in this section from three aspects. First, simulation experiments are conducted to show the relationship between the communication cost and the number of local machines. Second, we compare the proposed algorithm with DPCA [12], a state-of-the-art distributed PCA algorithm for the vertical case, which, however, can only handle linear PCA. Third, classification experiments on a real dataset are conducted to show the effectiveness of Alg. 1.

Simulation and real data are used. The simulation data are generated as follows. (i) Generate the covariance matrix and two orthonormal matrices and . (ii) Calculate the total data by . Details of will be described later.

Real data from a dataset of drug and toxicant responses in rats are used. This dataset is publicly available at the NIH GEO under accession number GSE2187. It was collected on cRNA microarray chips with 8565 probes (features) and corresponds to four categories: fibrates (107 samples), statins (93 samples), azoles (156 samples), and toxicants (181 samples). Features are removed if more than of the samples have missing values; the remaining missing values are filled with mean values (a sketch of this preprocessing is given below).
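A minimal Python sketch of this preprocessing, under assumptions: the missing-fraction cutoff is not recoverable from the text above, so the value below is a placeholder, and the NaN encoding of missing probes is our own convention.

```python
import numpy as np

def preprocess(X, missing_frac_threshold=0.1):
    """X: samples x probes, with NaN marking missing microarray values.
    Drop probes whose missing fraction exceeds the (placeholder) threshold,
    then impute the remaining gaps with the per-probe mean."""
    frac_missing = np.mean(np.isnan(X), axis=0)
    X = X[:, frac_missing <= missing_frac_threshold]   # copy with kept probes
    col_mean = np.nanmean(X, axis=0)
    rows, cols = np.where(np.isnan(X))
    X[rows, cols] = col_mean[cols]                     # mean imputation
    return X
```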

For both simulation and real data, the results calculated by performing SVD on the whole underlying kernel matrix are regarded as the ground truth. All simulations are run in Matlab R2016b on a Core i5-7300HQ 2.50GHz machine with 8GB RAM. The code for Alg. 1, together with the experiments, is available at https://github.com/hefansjtu/DKPCA.

V-A Communication Cost, Estimation Error, and the Number of Local Machines

We vary the number of features in the local machines and the number of samples to see how the error and running time change. The distance computed by (11) is used to measure the estimation error; a code sketch of this computation follows.
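A minimal NumPy sketch of the subspace-distance computation (our own implementation of the standard sin-Theta distance; function and variable names are assumptions):

```python
import numpy as np

def rho_distance(U, U_hat):
    """Distance between the subspaces spanned by two N x k orthonormal bases,
    computed from the singular values of U^T U_hat (the cosines of the
    principal angles)."""
    s = np.linalg.svd(U.T @ U_hat, compute_uv=False)
    s = np.clip(s, 0.0, 1.0)              # guard against round-off above 1
    return float(np.sqrt(np.sum(1.0 - s**2)))
```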

Simulation data are used to evaluate Alg. 1 in the linear case. We set the rank of to , where the first diagonal elements are and the others are zero. The error and running time of Alg. 1 are reported. In Fig. 2 (a), we fix and change . One can see that as the number of local machines increases, the estimation error stays similar except in the extreme cases, i.e., or . When the number of local features changes from to , the computation time changes little. Recall that when , Alg. 1 is the same as the centralized algorithm. This means that in practical use, where we recommend setting the number of local features around the sample size, the computation cost is similar to that of centralized algorithms. In Fig. 2 (b), we fix but change the sample size, showing that the error decays as the sample size increases. It also shows that the computation time grows at a rate similar to .

For the non-linear case, we use dataset GSE2187 (, ). We set and the width of the RBF kernel to . The results are reported in Fig. 3, which indicates that the trends of both the computation cost and the error in (a) are the same as those for linear kernels, while in (b) the error shows little trend as the sample size increases.

Fig. 2: The log of the mean error (left, blue) and the mean running time (right, red) of Alg. 1 with respect to (a) the number of local features when and , and (b) the number of samples when and . Simulation data and the linear kernel are used.
Fig. 3: The log of the mean error (left, blue) and the mean running time (right, red) of Alg. 1 with respect to (a) the number of local features when and , and (b) the number of samples when and . The data are uniformly selected from dataset GSE2187 at the NIH GEO. The RBF kernel is used with .
Fig. 4: The log of the mean error of Alg. 1 (the dotted line) and DPCA (the solid lines) when and (a) , (b) . The number of neighbors in DPCA is , denoted as DPCA3, DPCA5, and DPCA10. Simulation data and the linear kernel are used.
Number of Features:        1              5              10             20             50             100            150            200

Toxicant vs Fibrate
  Alg. 1:  0.3675±0.0383  0.0323±0.0218  0.0164±0.0128  0.0150±0.0118  0.0148±0.0111  0.0145±0.0110  0.0148±0.0115  0.0148±0.0115
  KPCA:    0.3675±0.0383  0.0259±0.0151  0.0173±0.0124  0.0148±0.0111  0.0150±0.0111  0.0145±0.0110  0.0148±0.0115  0.0148±0.0115
  PCA:     0.5048±0.0932  0.2009±0.1121  0.0352±0.0517  0.0259±0.0551  0.0277±0.0702  0.0264±0.0587  0.0214±0.0527  0.0243±0.0562

Toxicant vs Azole
  Alg. 1:  0.4790±0.0410  0.3966±0.0305  0.2778±0.0361  0.1907±0.0376  0.1416±0.0331  0.1099±0.0283  0.1019±0.0271  0.0969±0.0259
  KPCA:    0.4889±0.0350  0.3912±0.0341  0.2778±0.0379  0.1844±0.0341  0.1391±0.0325  0.1102±0.0283  0.1026±0.0272  0.0969±0.0259
  PCA:     0.5019±0.0616  0.5124±0.0415  0.4655±0.0620  0.3781±0.0782  0.2207±0.1309  0.1982±0.1502  0.2212±0.1581  0.2134±0.1671

Toxicant vs Others
  Alg. 1:  0.3341±0.0148  0.3326±0.0202  0.2477±0.0209  0.2004±0.0231  0.1499±0.0197  0.1428±0.0169  0.1409±0.0164  0.1410±0.0172
  KPCA:    0.3341±0.0148  0.3284±0.0267  0.2420±0.0229  0.1973±0.0207  0.1488±0.0190  0.1430±0.0176  0.1408±0.0168  0.1410±0.0172
  PCA:     0.4578±0.1367  0.4928±0.0992  0.4129±0.0763  0.3526±0.0888  0.2045±0.1324  0.1953±0.1613  0.2069±0.1523  0.2077±0.1584

TABLE I: Classification error rate (mean±standard deviation) on the GSE2187 dataset using L-SVM with Alg. 1, KPCA, and PCA.

V-B Comparison with DPCA in the Linear Case

In this subsection, we compare the proposed Alg. 1 with DPCA [12], a state-of-the-art distributed PCA method for the vertical regime, which, however, can only deal with the linear problem. Hence, we only compare the performance of Alg. 1 (DKPCA) with DPCA in the linear case. Simulation data are used here, where , , and the rank of is . The first diagonal elements of are , where we change to control the eigengap and see how the accuracy changes with different data.

DPCA solves PCA in a decentralized setting, and the number of neighbors of each local machine influences the result significantly. We set this number to and denote the corresponding results as DPCA3, DPCA5, and DPCA10 in Fig. 4, respectively. In DPCA, a coordinate descent method is used with an inner ADMM cycle. Following the experimental setting in [12], we set the ADMM parameter to and the maximum number of inner ADMM iterations to . The following error metric is used to measure the accuracy of the principal subspace estimation,

where is the ground truth and is its estimation.

Fig. 4 shows the log of the mean error of DKPCA and DPCA with respect to the iteration number. DKPCA works in a one-shot manner, so its error is independent of the iteration number. The eigengap in Fig. 4 (a) is bigger than that in (b); hence, DKPCA performs better in (a), which coincides with our theoretical analysis in Section IV. Although the accuracy of DPCA changes little with respect to the eigengap, its convergence speed is significantly affected. Note that in each outer iteration of DPCA, the inner ADMM iterates times with communication rounds. Hence, from this point of view, DKPCA is more economical in both communication and computation cost.

V-C Comparison Between Distributed and Full-Sample KPCA on a Real Dataset

The aim of PCA is to keep useful information during data projection, and thus its performance can be observed in a downstream learning task on the projected data. In this subsection, we first map the data into a low-dimensional feature space by Alg. 1 (DKPCA), the centralized kernel algorithm (KPCA), or the centralized linear algorithm (PCA), and then feed the projected data to a linear support vector machine (L-SVM). Dataset GSE2187 provides classification tasks: toxicants vs. fibrates (), toxicants vs. azoles (), and toxicants vs. others (). We randomly choose data as the training set and use the rest for testing. The RBF kernel width is . A sketch of this evaluation pipeline is given below.

The average classification error and its standard deviation over 50 trials are reported in Table I. Since they use the full data, KPCA can be expected to outperform the proposed DKPCA, and likewise PCA can be expected to outperform existing distributed algorithms for the linear case. From Table I, one can observe that the performance of DKPCA approaches that of KPCA and is generally better than that of PCA, showing the benefit of extending distributed PCA from the linear to the nonlinear case in the vertical regime.

VI Conclusion

This brief introduces a communication-efficient, privacy-preserving algorithm for distributed kernel PCA in the vertical regime, which estimates the underlying kernel matrix from the eigenvectors of local kernel matrices. The theoretical analysis of the approximation error shows that the proposed algorithm gives high-quality results with low communication cost when the eigenvalues decay fast. As a one-shot method, the proposed algorithm sacrifices computation efficiency for communication efficiency, but both theoretical and experimental results show that when the number of local machines falls in a suitable range, the computation cost is similar to that of centralized algorithms. Experiments on the real dataset GSE2187 also verify the effectiveness of the proposed method in practice.

References

  • [1] H. Zou, T. Hastie, and R. Tibshirani, “Sparse principal component analysis,” Journal of computational and graphical statistics, vol. 15, no. 2, pp. 265–286, 2006.
  • [2] E. J. Candès, X. Li, Y. Ma, and J. Wright, “Robust principal component analysis?” Journal of the ACM (JACM), vol. 58, no. 3, pp. 1–37, 2011.
  • [3] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch, “Kernel pca and de-noising in feature spaces,” in Advances in neural information processing systems, 1999, pp. 536–542.
  • [4] L. Zhang, W. Dong, D. Zhang, and G. Shi, “Two-stage image denoising by principal component analysis with local pixel grouping,” Pattern Recognition, vol. 43, no. 4, pp. 1531–1549, 2010.
  • [5] C. Ding, D. Zhou, X. He, and H. Zha, “R1-pca: rotational invariant l1-norm principal component analysis for robust subspace factorization,” in Proceedings of the 23rd international conference on Machine learning, 2006, pp. 281–288.
  • [6] R. Rosipal, M. Girolami, L. J. Trejo, and A. Cichocki, “Kernel pca for feature extraction and de-noising in nonlinear regression,” Neural Computing & Applications, vol. 10, no. 3, pp. 231–243, 2001.
  • [7] J. Goes, T. Zhang, R. Arora, and G. Lerman, “Robust stochastic principal component analysis,” in Artificial Intelligence and Statistics, 2014, pp. 266–274.
  • [8] Y. Zhang, J. Duchi, and M. Wainwright, “Divide and conquer kernel ridge regression,” in Conference on learning theory, 2013, pp. 592–617.
  • [9] J. D. Lee, Q. Liu, Y. Sun, and J. Taylor, “Communication-efficient sparse regression,” Journal of Machine Learning Research, vol. 18, no. 5, pp. 115–144, 2017.
  • [10] Q. Yang, Y. Liu, T. Chen, and Y. Tong, “Federated machine learning: Concept and applications,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1–19, 2019.
  • [11] J. Fan, D. Wang, K. Wang, and Z. Zhu, “Distributed estimation of principal eigenspaces,” Annals of statistics, vol. 47, no. 6, p. 3009, 2019.
  • [12] I. D. Schizas and A. Aduroja, “A distributed framework for dimensionality reduction and denoising,” IEEE Transactions on Signal Processing, vol. 63, no. 23, pp. 6379–6394, 2015.
  • [13] A. Bertrand and M. Moonen, “Distributed adaptive estimation of covariance matrix eigenvectors in wireless sensor networks with application to distributed pca,” Signal Processing, vol. 104, pp. 120–135, 2014.
  • [14] J. Ge, Z. Wang, M. Wang, and H. Liu, “Minimax-optimal privacy-preserving sparse pca in distributed systems,” in International Conference on Artificial Intelligence and Statistics, 2018, pp. 1589–1598.
  • [15] D. Garber, O. Shamir, and N. Srebro, “Communication-efficient algorithms for distributed stochastic principal component analysis,” in Proceedings of the 34th International Conference on Machine Learning-Volume 70.   JMLR. org, 2017, pp. 1203–1212.
  • [16] O. Ghorbel, M. W. Jmal, M. Abid, and H. Snoussi, “Distributed and efficient one-class outliers detection classifier in wireless sensors networks,” in International Conference on Wired/Wireless Internet Communication. Springer, 2015, pp. 259–273.
  • [17] Y.-A. Le Borgne, S. Raybaud, and G. Bontempi, “Distributed principal component analysis for wireless sensor networks,” Sensors, vol. 8, no. 8, pp. 4821–4850, 2008.
  • [18] H. Kargupta, W. Huang, K. Sivakumar, and E. Johnson, “Distributed clustering using collective principal component analysis,” Knowledge and Information Systems, vol. 3, no. 4, pp. 422–448, 2001.
  • [19] H. B. McMahan, E. Moore, D. Ramage, S. Hampson et al., “Communication-efficient learning of deep networks from decentralized data,” arXiv preprint arXiv:1602.05629, 2016.
  • [20] A. Scaglione, R. Pagliari, and H. Krim, “The decentralized estimation of the sample covariance,” in 2008 42nd Asilomar Conference on Signals, Systems and Computers.   IEEE, 2008, pp. 1722–1726.
  • [21] L. Li, A. Scaglione, and J. H. Manton, “Distributed principal subspace estimation in wireless sensor networks,” IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 4, pp. 725–738, 2011.
  • [22] R. Rosipal and M. Girolami, “An expectation-maximization approach to nonlinear component analysis,” Neural Computation, vol. 13, no. 3, pp. 505–510, 2001.
  • [23] M. F. Balcan, Y. Liang, L. Song, D. Woodruff, and B. Xie, “Communication efficient distributed kernel principal component analysis,” in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 725–734.
  • [24] Y. Yu, T. Wang, and R. J. Samworth, “A useful variant of the davis–kahan theorem for statisticians,” Biometrika, vol. 102, no. 2, pp. 315–323, 2015.
  • [25] R. A. Horn and C. R. Johnson, Topics in matrix analysis.   Cambridge University Press, 1991.

Appendix A

A-A Proof of Lemma 2

In this section, we present the proof of Lemma 2, where the following lemmas on the Hadamard product of positive semidefinite matrices [25] are needed.

Lemma 4.

If are two positive semidefinite matrices, then so is .

Lemma 5.

Let and be two positive semidefinite matrices; then any eigenvalue of satisfies

We are now ready to prove Lemma 2.

Proof.

Recall that in the fusion center, the local kernel matrices are represented by their first eigenvectors. Therefore, we first estimate the local approximation error. In machine , denotes the first eigenvectors of with the corresponding eigenvalues , collected in a diagonal matrix with diagonal elements equal to . Then the local approximation error satisfies

(18)

When the linear kernel is used, from (5) we have

(19)

Then, we focus on the RBF kernel. For convenience, we use to denote . Recalling the definition of , we know that both and are positive semidefinite and .

We define matrices for as follows

According to Lemma 4 and Lemma 5, it holds that is positive semidefinite and satisfies

(20)

because and .

We decompose as follows

Note that when is a positive semidefinite matrix. Combining this with (20), we obtain Lemma 2. ∎
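As a hedged reconstruction of the decomposition step above (our own notation; the paper's intermediate matrices may be defined differently), a standard telescoping identity for the Hadamard product is

\[
\mathbf{K}_1 \circ \cdots \circ \mathbf{K}_m \;-\; \hat{\mathbf{K}}_1 \circ \cdots \circ \hat{\mathbf{K}}_m
\;=\; \sum_{j=1}^{m} \hat{\mathbf{K}}_1 \circ \cdots \circ \hat{\mathbf{K}}_{j-1} \circ \big(\mathbf{K}_j - \hat{\mathbf{K}}_j\big) \circ \mathbf{K}_{j+1} \circ \cdots \circ \mathbf{K}_m,
\]

after which each summand can be controlled through Lemmas 4 and 5 together with the local bound (18).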