Mode decomposition has been an active and important topic in science and engineering [STRICHARTZ198951; tensor_decomp_ml; tensor_kolda; H_Tu_2014; wold1987principal; hyvarinen2000independent; dict_learning; nmf]. The main goal of mode decomposition is to find a set of bases that can represent a group of target vectors or functions in a compact form. Mode decomposition methods can be classified into two categories: model-based methods [STRICHARTZ198951; tensor_decomp_ml; tensor_kolda] and data-driven methods (unsupervised learning) [H_Tu_2014; wold1987principal; hyvarinen2000independent; dict_learning; nmf].
Model-based methods have long been recognized as being rooted deeply in physics [physics_emd; dmd_physics]; e.g., spherical harmonic functions are eigenstates of a quantum free particle on a sphere [carinena2011quantum]. In terms of data-driven models, there are also intriguing connections between unsupervised learning and physics [fischer2012introduction; Lin_2017; Han_2018; born_machine; nguyen2017inverse; mehta2014exact]; e.g., researchers have discovered an equivalence between the Boltzmann machine and statistical physics [nguyen2017inverse; mehta2014exact].
We motivate this work by revealing the limitation of PCA for spatial Gaussian processes (GP), which are useful in various fields [neal1997monte; pen1997generating; novak2014determining; gp_robot; gramacy2008bayesian], e.g., modelling of the big bang [pen1997generating]. It is worth mentioning that PCA can be implemented as a fast algorithm with high accuracy, based on incomplete Cholesky factorization and low-rank approximation of covariance/Green functions; see for example [schafer2020sparse]. We attempt to build a novel connection between principal component analysis (PCA) and the Schrödinger equation (SE). Based on this mapping, we propose a novel algorithm called Schrödinger principal component analysis (Schrödinger PCA). This method can be viewed as a decoupled substitute for vanilla PCA. Namely, we use a second-order elliptic operator, i.e., the steady-state Schrödinger equation, as a second-order approximation to the covariance kernel required in the PCA framework. Information about the small-scale fluctuations of the field is well approximated by the elliptic operator, and characteristics of the modes are captured by obtaining the eigenstates and energies of the Schrödinger equation in a decoupled manner. The key contributions of this paper are highlighted as follows:
We identify the interesting connection between PCA and the Schrödinger equation as a second-order approximation of the covariance operator, which can serve as a bridge connecting the machine learning community and the computational physics community.
The proposed Schrödinger PCA algorithm requires far fewer spatial anchor points than PCA, and computes only the variance of the GP at each anchor point. Consequently, it is much more sample-efficient and computationally cheaper than PCA and other covariance-based methods.
Although the motivation of the Schrödinger PCA algorithm is based on the theory of Gaussian processes in Euclidean space, Schrödinger PCA is also applicable to models with other types of local correlation (e.g., exponential, as in Appendix B and C) and on various types of manifolds or graphs (see the climate example in section 4).
This paper is organized as follows: In section 2, we define notation and provide a brief introduction to Gaussian processes, principal component analysis, and the Schrödinger equation. In section 3, a novel connection is drawn between PCA and the Schrödinger equation, and we propose the Schrödinger principal component analysis (Schrödinger PCA) algorithm. In section 4, we demonstrate the effectiveness of Schrödinger PCA over PCA via a two-dimensional Gaussian process and a global climate model, followed by conclusions in section 5.
2 Background and Problem setting
2.1 Gaussian Process (GP)
A Gaussian process is a stochastic process where any finite collection of random variables
$\{\phi(\mathbf{x}_1), \ldots, \phi(\mathbf{x}_n)\}$ has a multivariate Gaussian distribution, where $\phi(\mathbf{x})$ is a stochastic function, which we refer to as the field below. In particular, the two-point covariance and one-point variance of the GP are:
where the Gaussian term in the covariance is identified as the correlation kernel, and the positive definite matrix $\boldsymbol{\Sigma}(\mathbf{x})$ captures the spatial Gaussian correlation structure. Without loss of generality, we have assumed $\langle\phi(\mathbf{x})\rangle = 0$ (footnote: principal component analysis removes the mean value before solving the eigenproblem), and $\langle\cdot\rangle$ denotes an average over the probability distribution or over many realizations of the GP. In the special case where $\boldsymbol{\Sigma}(\mathbf{x}) = \ell^2\mathbf{I}$,
$\ell$ characterizes the correlation length of the GP and can often be physically 'quantized' as a microscopic degree of freedom, e.g., a particle [miller2007glauber], as shown in Fig. 1(a). A large class of spatial data can be modeled with Gaussian processes [neal1997monte; pen1997generating; novak2014determining; gp_robot; gramacy2008bayesian], in that a Gaussian process captures two key features of a macroscopic fluctuating system: statistical fluctuations and local correlations.
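To make the setting concrete, here is a minimal Python sketch of how such a spatially correlated field can be produced (an illustrative stand-in, not the exact generator of Appendix D): smoothing i.i.d. white noise with a Gaussian filter imprints a Gaussian correlation kernel whose correlation length is proportional to the filter width.

```python
import numpy as np

def gaussian_kernel(sigma, truncate=4.0):
    """1-D Gaussian kernel, truncated at `truncate` standard deviations."""
    r = int(truncate * sigma + 0.5)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def correlated_field(shape, sigma, rng):
    """White noise smoothed by a separable Gaussian filter.

    Smoothing i.i.d. noise with a Gaussian of width `sigma` yields a field
    whose two-point correlation is itself Gaussian, with correlation
    length proportional to `sigma`.
    """
    field = rng.standard_normal(shape)
    k = gaussian_kernel(sigma)
    # separable convolution: filter columns, then rows ('same' keeps the size)
    field = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, field)
    field = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, field)
    return field

rng = np.random.default_rng(0)
phi = correlated_field((101, 101), sigma=3.0, rng=rng)
```

Neighbouring values of `phi` are strongly correlated, while values sampled far apart are nearly independent, which is exactly the local-correlation structure described above.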
In practice, only a finite set of anchor points is available for field measurements (Figure 1(b)). As required by the Nyquist sampling theorem [landau1967sampling], two neighboring anchor points should be close enough (e.g., distance $\lesssim \ell$) to make sure the covariance is significant enough to reveal local correlation. If the whole system has size $L$ in $d$ dimensions, the number of samples required scales as $(L/\ell)^d$, which diverges as (1) $L/\ell \to \infty$ (multi-scale) or (2) $d \to \infty$ (curse of dimensionality). In this paper, we aim to resolve the multi-scale issue.
2.2 Principal Component Analysis (PCA)
PCA plays an important role in dimensionality reduction, pattern recognition, and partial differential equations (PDEs) [zhang2016big; hoffmann2007kernel; perlibakas2004distance; ding2004k; owhadi2015bayesian]. The key idea of PCA is to find eigenmodes that maximize the variance of features among data samples. For the GP described by Eq. (1), the eigenmodes should satisfy the eigen equation and orthogonality constraints:
where the eigenvalues $\lambda_i$ are ordered as $\lambda_1 \geq \lambda_2 \geq \cdots$. The non-negativity of the eigenvalues and the existence of orthogonal eigenmodes are due to the positive definiteness of the covariance.
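In the discrete setting, this eigenproblem amounts to an eigendecomposition of the empirical covariance matrix of field snapshots. A minimal sketch (the function name is ours):

```python
import numpy as np

def pca_modes(samples):
    """PCA of field realizations.

    `samples` has shape (n_realizations, n_points); rows are flattened
    field snapshots.  Returns eigenvalues (descending) and the matching
    orthonormal eigenmodes of the empirical covariance matrix.
    """
    x = samples - samples.mean(axis=0)      # remove the mean, as in PCA
    cov = x.T @ x / (len(x) - 1)            # empirical covariance matrix
    evals, evecs = np.linalg.eigh(cov)      # ascending for symmetric matrices
    order = np.argsort(evals)[::-1]         # reorder: lambda_1 >= lambda_2 >= ...
    return evals[order], evecs[:, order]

rng = np.random.default_rng(1)
samples = rng.standard_normal((500, 20)) @ rng.standard_normal((20, 20))
lam, modes = pca_modes(samples)
```

The covariance matrix is symmetric positive semi-definite, so the eigenvalues are non-negative and the eigenmodes are orthonormal, matching the constraints above.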
Remark: Why do we need PCA for Gaussian processes?
We would like to provide the physical motivation for applying PCA to a Gaussian process. As illustrated in Figure 1, many particles (wave packets) are superposed to form a spatially correlated density distribution, which can be modeled as a Gaussian process. For a huge fluctuating system with a large number of particles, physicists are only interested in statistical properties or collective behavior rather than all microscopic details. This motivates a line of work in physics aiming to identify collective modes (or 'quasi-particles') in a wide range of systems with the help of PCA [liu2019principal; zh_pca_1; wang2016discovering; huang2019modelling]. In a word, Gaussian processes and PCA are the microscopic and macroscopic descriptions of spatial data respectively, so applying PCA to a GP is equivalent to transforming local features into global features. Beyond physics, Gaussian processes are a general tool to model spatially correlated data [neal1997monte; pen1997generating; novak2014determining; gp_robot; gramacy2008bayesian], and collective modes extracted from data can help with either (1) data compression or (2) data interpretation.
However, because the success of PCA relies on accurate estimation of the covariance among anchor points, a large number of anchor points is required, as discussed in section 2.1. To resolve this, in section 3 we transform the PCA problem into a Schrödinger equation, which can be solved accurately using information from far fewer anchor points. The rationale of our method is inspired by a good identification of the underlying Gaussian field and its associated scales. Since we are mainly concerned with the case when the correlation length is relatively small, we first extrapolate information about the Gaussian correlation kernel: namely, we obtain a good approximation of $\boldsymbol{\Sigma}(\mathbf{x})$, either from the ground truth data, when its exact values are available, or by local approximation, where we use locally distributed anchor points to infer it. We then extract the information of the covariance field based on the variance at the small number of anchor points combined with the inferred $\boldsymbol{\Sigma}(\mathbf{x})$, instead of resorting to the whole PCA of the covariance kernel. By invoking a local second-order approximation via the steady-state Schrödinger equation, we can obtain good approximations of the eigenvalues and eigenmodes by solving the associated eigenproblem.
2.3 Schrödinger Equation
The Schrödinger equation (SE) [griffiths2018introduction] uses a wave function to characterize a quantum particle. The steady-state SE describes a particle trapped in a potential well with constant energy $E$:
where the positive definite matrix $\mathbf{M}^{-1}$ is the inverse mass matrix of the particle, and $\nabla^2$ is the Laplacian operator, i.e., $\nabla^2 = \sum_i \partial^2/\partial x_i^2$. The first and second terms correspond to the kinetic and potential energy of the particle, respectively. Because the Hamiltonian is a Hermitian (self-adjoint) operator, we have a complete set of real eigenstates that satisfy:
where $\psi_i$ and $E_i$ are the $i$-th eigenstate and eigenenergy, respectively.
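As a concrete check of this eigenproblem, a 1-D, scalar-mass special case of the steady-state SE can be discretized by central finite differences; for a harmonic well, the exact spectrum $E_n = (n + 1/2)\,\omega$ (with $\hbar = 1$) is available for comparison. A sketch, with hypothetical names:

```python
import numpy as np

def solve_schrodinger_1d(V, x, inv_mass=1.0):
    """Lowest eigenpairs of H = -(inv_mass/2) d^2/dx^2 + V(x).

    Central finite differences on a uniform grid; a 1-D scalar-mass
    special case of the steady-state Schrodinger equation.
    """
    h = x[1] - x[0]
    n = len(x)
    # tridiagonal discrete Laplacian with Dirichlet boundaries
    lap = (np.diag(np.full(n - 1, 1.0), -1)
           - 2.0 * np.eye(n)
           + np.diag(np.full(n - 1, 1.0), 1)) / h**2
    H = -0.5 * inv_mass * lap + np.diag(V)
    energies, states = np.linalg.eigh(H)    # ascending eigenenergies
    return energies, states

x = np.linspace(-10, 10, 400)
E, psi = solve_schrodinger_1d(0.5 * x**2, x)    # harmonic well, omega = 1
# exact spectrum: E_n = n + 1/2 (hbar = omega = m = 1)
```

The Dirichlet boundary at the edge of the grid is harmless here because the low-lying wavefunctions decay long before reaching it.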
3 Schrödinger Principal Component Analysis
In this section, we point out the correspondence between PCA and the SE under some mild assumptions. To be precise, we establish a suitable SE as a second-order approximation to the covariance operator required in the PCA framework. Comparing Eq. (2) and (4), similar structures can be observed, and intuitively the largest eigenvalues in the PCA method can be related to the first few energy states in the Schrödinger equation by a minus sign. Still, it is not a one-to-one mapping, because the covariance correlates any two points, while the Hamiltonian contains the Laplacian operator, which only connects a point to its neighborhood. We establish this second-order decoupled approximation in Theorem 1.
The PCA problem described by Eq. (2) can be approximated by the Schrödinger problem described by Eq. (4), i.e., the two systems have the same equation, hence the same eigenvalues and eigenmodes up to a (possible) minus sign (footnote: the eigenvalues of PCA and SE match each other by a minus sign; in terms of eigenmodes, there is a possible minus sign, as in most eigenproblems), up to a second-order approximation, provided that these quantities are equal:
We summarize the key idea of the proof here; details can be found in Appendix A. We characterize the operator by its Rayleigh quotient:
The Rayleigh quotients of PCA and SE are defined respectively as:
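The extremal property of the Rayleigh quotient that underlies this characterization — its value always lies between the extreme eigenvalues, and eigenvectors attain their eigenvalues exactly — can be verified numerically on a toy symmetric matrix (illustrative only):

```python
import numpy as np

def rayleigh(A, v):
    """Rayleigh quotient v^T A v / v^T v of a symmetric matrix A."""
    return float(v @ A @ v / (v @ v))

rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = B @ B.T                           # symmetric positive semi-definite
evals, evecs = np.linalg.eigh(A)      # ascending eigenvalues

# the quotient of an arbitrary vector is bounded by the extreme
# eigenvalues, and the top eigenvector attains the maximum exactly
v = rng.standard_normal(6)
```

Maximizing the Rayleigh quotient over unit vectors therefore recovers the leading PCA mode, while minimizing the SE quotient recovers the ground state; the minus sign in Theorem 1 links the two.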
(1) The only approximation made in the proof of Theorem 1 lies in Eq. (15), where the two factors are Taylor expanded to first and second order, respectively. The underlying assumptions are that (a) the correlation lengths (eigenvalues of $\boldsymbol{\Sigma}$) should be much smaller than the size of the whole integration domain, so that the exponential factor decays fast enough before the Taylor expansion fails; and (b) the variation of the variance is milder than the variation of the correlation kernel.
(2) Although we have assumed the correlation kernel to be Gaussian, our proof also works for any kernel that can be decomposed into a linear combination of Gaussian radial basis functions. The equivalence between PCA and SE remains valid when the covariance function takes an exponential form (please refer to Appendix B and C), which is reminiscent of many condensed matter systems in physics, e.g., the Ising model.
(3) The corresponding relations in Eq. (5) are particularly interesting in terms of physics. Normally the system at hand is supported on a compact domain, i.e., the variance is positive inside the domain and vanishes outside; Eq. (5) then implies that the quantum particle is trapped in a potential well on this domain. Further, if the variance can be approximated by a quadratic form around its global maximum, the Schrödinger equation is readily identified as a well-studied system in physics: the quantum harmonic oscillator, whose properties are summarized in Appendix E.
As mentioned in section 2, spatial mode decomposition methods on the market (e.g., PCA) require a large number of anchor points and/or realizations (samples) because they rely heavily on covariance information among different spatial points. The procedure of PCA for mode decomposition is shown in Alg. 1.
Instead of trying to detect local correlations with densely distributed anchor points, we bypass this issue by transforming the PCA problem into a Schrödinger equation, which can be constructed with an extremely small number of anchor points. We call our method Schrödinger PCA and summarize the key steps here (also in Alg. 2): (1) compute only the variance (no covariance needed) of the field at each anchor point; (2) leverage Eq. (5) and interpolation to obtain the continuous potential function and inverse mass matrix; (3) solve the Schrödinger equation (Eq. (3)) with the finite element method.
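A minimal 1-D sketch of these three steps follows. The data and names are hypothetical; taking the potential to be the negated interpolated variance is only an illustrative stand-in for the exact identification of Eq. (5), and we use finite differences rather than finite elements for brevity:

```python
import numpy as np

# Step 1 (hypothetical data): variance of the field measured at a handful
# of anchor points -- no covariances between anchors are needed.
anchors = np.linspace(-10, 10, 11)
var_at_anchors = np.exp(-anchors**2 / 25.0)     # stand-in for the variance profile

# Step 2: interpolate the coarse variance onto a fine grid and build the
# potential.  V = -variance is only an illustrative stand-in for the
# exact identification of Eq. (5).
x = np.linspace(-10, 10, 401)
V = -np.interp(x, anchors, var_at_anchors)

# Step 3: solve the steady-state Schrodinger equation by finite differences.
h = x[1] - x[0]
n = len(x)
lap = (np.diag(np.ones(n - 1), -1) - 2 * np.eye(n) + np.diag(np.ones(n - 1), 1)) / h**2
ell = 1.0                                        # assumed known correlation length
H = -0.5 * ell**2 * lap + np.diag(V)
energies, modes = np.linalg.eigh(H)              # lowest states ~ leading PCA modes
```

Only eleven anchor points enter the construction; the fine grid is used solely to discretize the differential operator, which is cheap compared to estimating a dense covariance matrix.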
It is worth emphasizing that the key factor contributing to the success of Schrödinger PCA is scale separation, i.e., the correlation kernel varies on smaller scales than the variance. While PCA requires anchor points dense enough to characterize the correlation kernel (small scale), Schrödinger PCA only requires anchor points dense enough to characterize the variance (large scale). For this scale separation to work, Schrödinger PCA requires an extra estimate of the small-scale correlation kernel in order to determine the coefficients in the Schrödinger equation, but this is cheap because either (1) the correlation kernel can be derived from known microscopic mechanisms, or (2) we can add a few detector points locally around each anchor point to estimate the local correlation kernel. In the isotropic case $\boldsymbol{\Sigma} = \ell^2\mathbf{I}$, only one detector point is required for each anchor point, so the extra computational cost can be ignored. To simplify, we choose the isotropic case and assume $\ell$ is known a priori in the following experiment section.
4 Numerical Experiments
In all the following numerical experiments, we use the eigenvalues and eigenmodes obtained from the PCA method on the fine grid as the ground truth and reference, and compare the various methods in terms of the eigenvalues and eigenmodes they compute.
Firstly, we use the example of a two-dimensional Gaussian process to test the correctness of the correspondence between PCA and SE stated in Theorem 1. We find that PCA and Schrödinger PCA have similar eigenvalues and eigenmodes on the fine grid (i.e., a large number of anchor points), but PCA fails on the coarse grid (i.e., a small number of anchor points), while Schrödinger PCA can still obtain accurate eigenvalues and eigenmodes. Secondly, we apply our method to a sphere on which a global climate is simulated. The climate modes discovered by Schrödinger PCA admit a nice physical interpretation, which demonstrates the potential of our method to attack physics or real-life problems on graphs and manifolds beyond Euclidean space. Numerical experiments with the exponential kernel can also be found in Appendix C. Code is available at https://github.com/KindXiaoming/schrodinger-pca.
4.1 Two-dimensional Gaussian Process
The two-dimensional Gaussian process lies on the grid $[-50,50]\times[-50,50]$ (size $101\times101$). The field generation process can be found in Appendix D. The generated profiles are denoted $\phi_k(x,y)$, where $k$ indexes the different realizations. The field satisfies these statistical properties:
4.1.1 Oversampling regime with fine grids
In this part, we use numerical results to verify the equivalence between PCA and the Schrödinger equation, as pointed out in Theorem 1.
PCA (fine grid): We use all 10201 grid points as anchor points and generate 40000 realizations of the GP satisfying Eq. (9). We apply PCA to obtain the leading eigenmodes and corresponding eigenvalues. The distance between two neighboring anchor points is 1 (smaller than the correlation length). In this sense, the anchor points oversample the Gaussian process, so covariance information among different features can be captured.
SE (fine grid): We use all 10201 anchor points to evaluate the variance and obtain the potential function and inverse mass matrix as indicated in Theorem 1; the Schrödinger equation Eq. (3) is then discretized on the same grid and solved with the finite difference method. We collect the lowest eigenstates and eigenenergies. These eigenenergies are negative and are negated below for comparison with the PCA results.
SE (harmonic approximation): The potential function has a quadratic form that is reminiscent of the harmonic oscillator in quantum physics. Although the coefficients depend explicitly on position, they can be nicely approximated as constant for the ground state and low excited states (eigenstates with low energies), because these wavefunctions are centered around the global minimum of the potential energy. Once the system is identified as a harmonic oscillator, the eigenenergies and eigenstates are immediately accessible analytically, as detailed in Appendix E.
As shown in Fig. 2, the three methods listed above agree well in terms of eigenvalues for the first modes. For higher modes, the staircase degeneracy structure still remains the same for all methods, while the values deviate somewhat, because the approximation becomes poorer for states with higher energies. The staircase degeneracy structure can be elegantly understood via the energy spectrum of the 2-d harmonic oscillator: the energy level with total quantum number $n$ admits $n+1$ ways to choose two non-negative integers $(n_x, n_y)$ such that $n_x + n_y = n$. Likewise, the eigenmodes of all methods are illustrated and compared in Fig. 3, and they show similar behavior for the three aforementioned methods.
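The staircase degeneracy can be reproduced by direct enumeration of the 2-d oscillator spectrum $E_{n_x,n_y} = (n_x + n_y + 1)\,\omega$ (with $\hbar = 1$):

```python
from collections import Counter

def degeneracies_2d_oscillator(n_max, omega=1.0):
    """Count how many (nx, ny) pairs share each 2-D oscillator energy.

    E_{nx,ny} = (nx + ny + 1) * omega  (hbar = 1), so the level with
    total quantum number n = nx + ny has degeneracy n + 1 -- the
    staircase structure seen in the spectrum.
    """
    levels = Counter()
    for nx in range(n_max + 1):
        for ny in range(n_max + 1):
            if nx + ny <= n_max:
                levels[(nx + ny + 1) * omega] += 1
    return dict(sorted(levels.items()))

print(degeneracies_2d_oscillator(4))
# each level n has degeneracy n + 1: {1.0: 1, 2.0: 2, 3.0: 3, 4.0: 4, 5.0: 5}
```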
4.1.2 Undersampling regime with coarse grids
In the above oversampling regime, i.e., when the number of anchor points is large enough to capture spatial correlations, the eigenmodes and eigenvalues obtained from PCA and Schrödinger PCA show great similarity. However, when the number of anchor points decreases, PCA fails in terms of both eigenvalues and eigenmodes; by contrast, Schrödinger PCA remains quite robust and behaves nearly the same as Schrödinger PCA in the oversampling regime, as illustrated in Figures 2 and 3.
PCA (coarse grid 1): We place anchor points on a uniform coarse grid (121 anchor points in total). The same 40000 GP realizations are used as in the oversampling test, but only the field values on the coarse grid are available. We apply PCA to obtain eigenmodes and eigenvalues defined on the coarse grid, and interpolate the eigenmodes back to the fine grid.
PCA (coarse grid 2): In contrast to PCA (coarse grid 1), where we apply PCA before interpolation, here we apply interpolation before PCA.
SE (coarse grid): We evaluate the variance at the 121 anchor points (coarse grid) and interpolate it back to the fine grid. Correspondingly, the values of the potential and the inverse mass matrix can be determined on the fine grid based on Eq. (5). Finally, the SE is solved with the finite difference method on the fine grid.
4.2 Global Climate Modes
In this example, we demonstrate the effectiveness of the proposed Schrödinger PCA in solving a mode decomposition problem on the unit sphere.
Our earth has various kinds of climates, depending on the longitude and latitude of the location. In the following, we use spherical coordinates, and the field refers to any climate-related scalar quantity (e.g., temperature). Both the fluctuation magnitude and the (local) correlation length depend explicitly on location: (1) the fluctuation magnitude is small for a place where it is like spring all year round, and large for a place with scorching summers and cold winters; (2) the correlation length is large over oceans due to ocean circulation, and small over land without winds.
We consider two "earths": an isotropic earth and an anisotropic earth, defined as follows:
To discretize the Laplacian operator on a graph, we utilize an icosahedron mesh which contains 2562 vertices, 7680 edges and 5120 faces. Here $h$ is the (averaged) length of all edges. The details of generating the mesh can be found in Appendix F. We replace the Laplacian operator in Eq. (3) with the graph Laplacian matrix and obtain the eigenmodes on the graph. Finally, we interpolate the eigenmodes back to the sphere and show them (top view) in Fig. 4, ordered by the index of the eigenstate from lowest to highest energy. The isotropic case provides a baseline for spherical function decomposition, and in fact its modes correspond to spherical harmonic functions. We observe that the eigenmodes of the anisotropic earth shed light on the fluctuation pattern: (1) fluctuations are large around the poles, so the first few patterns concentrate only around the poles; (2) intermediate patterns are particularly interesting because they have similar magnitude around the pole and the equator, but finer structure is observed around the pole, revealing that the correlation length is smaller around the pole than around the equator; (3) the last few patterns capture local fluctuations around the equator, not revealing collective behavior of global temperature (footnote: although we do not provide PCA results here, the averaged edge length is much greater than the averaged correlation length, which in principle prevents PCA from working).
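The graph-Laplacian construction used here (L = D − A from the degree and adjacency matrices) can be sketched on a tiny stand-in for the icosahedron mesh, a tetrahedron; the properties relied upon (symmetry, zero row sums, positive semi-definiteness) are easy to check:

```python
import numpy as np

def graph_laplacian(edges, n):
    """L = D - A from an edge list; the discrete stand-in for -nabla^2."""
    A = np.zeros((n, n))
    for i, j in edges:
        A[i, j] = A[j, i] = 1.0
    D = np.diag(A.sum(axis=1))
    return D - A

# tetrahedron: 4 vertices, 6 edges (a miniature analogue of the icosahedron)
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
L = graph_laplacian(edges, 4)
```

The eigenvector with eigenvalue 0 is the constant vector, mirroring the constant function in the kernel of the continuum Laplacian on a closed surface.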
In this paper, we first point out the limitation of PCA in tackling spatial data with many scales, e.g., when the local correlation length of the Gaussian process (small scale) is much smaller than the size of the whole system (large scale). While PCA requires anchor points dense enough to characterize the covariance information at the small scale, we perform a second-order approximation to PCA via a steady-state Schrödinger equation, which only requires variance information at the large scale plus an estimate of the small scale (the correlation matrix $\boldsymbol{\Sigma}$ or correlation length $\ell$). Therefore, our proposed algorithm is not only accurate, in the sense that it can capture local small-scale information, but also efficient and computationally cheaper, since the associated eigenproblem of the Schrödinger equation only requires pointwise variances instead of covariances, in a decoupled manner. After building the mapping from PCA to the Schrödinger equation in section 3, we propose the Schrödinger PCA algorithm, which is then applied to a 2-dimensional Gaussian process and a global climate model on a sphere. Numerical experiments testify that the proposed Schrödinger PCA algorithm can obtain accurate eigenvalues and eigenmodes with roughly 100× fewer anchor points than required by PCA. Furthermore, we push the method beyond the Gaussian setting, and obtain theories with corroborating experiments in other physically interesting cases such as exponential correlations (please refer to Appendix B and C).
In the future, we would like to extend the Schrödinger PCA algorithm to solve higher-dimensional problems, or problems on graphs and Riemannian manifolds, with the help of state-of-the-art Schrödinger solvers. Beyond this second-order approximation of the covariance operator, we also aim to use higher-order approximations to design other fast PCA algorithms motivated by higher-order elliptic equations.
Metrics for AI systems: For unsupervised learning tasks, different metrics can seem at odds: accuracy (reconstruction error) and interpretability, depending on the goals of one's task. Take the auto-encoder for example: an over-parameterized network can have low reconstruction error on the dataset, but its interpretability is probably low. The trade-off between accuracy and interpretability in general AI systems needs to be studied in detail. Although the definition of 'interpretability' for AI systems is still unclear, it sounds promising to start from problems in physics, which are believed to be interpretable. In this paper, we did not calculate the reconstruction error, but rather compared the obtained modes with the solutions of the Schrödinger equation to achieve interpretability.
Societal implications: As with this paper and others aiming to solve multi-scale problems, their key ideas can shed light on distributed computing and privacy: local agents are only responsible for collecting local data, processing it, and then handing related summaries to higher-level agents. All local agents can work in parallel (distributed computing), and higher-level agents do not know the exact information collected by local agents (privacy).
We would like to thank Zhihan Li, Yifan Chen, and Houman Owhadi for valuable discussions. We also thank Huichao Song for organizing discussions during which this project is motivated and initiated.
Appendix A Proof of Theorem 1
From Eq. (16) to (17), the two factors are Taylor expanded to first order and second order, respectively. From Eq. (17) to (18), odd terms vanish due to symmetry. From Eq. (19) to (20), we leverage the Gaussian integral:
We insert the trace term back to Eq. (7) and invoke integration by parts:
After integration by parts, Eq. (4) now becomes
Appendix B Generalization of Theorem 1 to Exponential Correlations
In the physics community, we are often concerned with covariance fields of the following type:
where the kernel is a positive definite function, together with the surface area of the unit sphere and the normalization factor of the covariance function.
Similar to the Gaussian case, we shall use the Schrödinger equation to perform the PCA approximation. Here we identify when the PCA problem described by Eq. (2) can be well approximated by the Schrödinger problem described by Eq. (4), as in Theorem 1.
From Eq. (30) to (31), the two factors are Taylor expanded to first order and second order, respectively. From Eq. (31) to (32), odd terms vanish due to symmetry. From Eq. (33) to (34), we leverage the exponential integral:
We insert the trace term back to Eq. (7) and invoke integration by parts:
After integration by parts, Eq. (4) now becomes
Appendix C Experimental results for Exponential kernel
Appendix D Generation of Gaussian process
In this section, we discuss the generation of the rectangular Gaussian process (GP) in our numerical experiments. We simply set in Eq. (1).
The generation process contains two stages: (1) we apply Gaussian filtering to match the correlation kernel in Eq. (1); (2) we re-scale the field depending on position to match the variance and the magnitude of the covariance in Eq. (1).
D.1 Covariance Generation
Since the two-dimensional GP lies on the grid $[-50,50]\times[-50,50]$ (size $101\times101$), we start by generating a $101\times101$ matrix with each component independently sampled from a standard normal distribution.
As we mentioned, an isotropic correlation length is introduced in our experiment. The width of the Gaussian filter is set to match the correlation length; it is easy to see that the two are proportional.
For the standard Gaussian filtering algorithm, a truncation along each direction is introduced for efficient calculation. In our numerical experiment, we set a truncation level for the filter (footnote: this truncation results in a small relative error).
Here $\Theta$ denotes the step function.
With this truncation, the Gaussian filtering can be achieved by convolution with a truncated Gaussian kernel.
D.2 Modifying magnitudes
Then, due to the scaling property of the Gaussian distribution, we design a window function to modulate the original GP: the field value at each point is multiplied by the window value there.
D.3 The overall algorithm
Based on the above discussion, the overall algorithm to generate a number of GP samples with pre-defined variance and Gaussian-like covariance with pre-defined isotropic correlation length is given as Algorithm 3.
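A compact sketch of the two-stage generator follows (illustrative; for simplicity we rescale with the empirical standard deviation rather than the exact filter normalization of Algorithm 3):

```python
import numpy as np

def generate_gp(n_samples, size, sigma_f, target_var, rng):
    """Two-stage GP generator sketched from Algorithm 3 (illustrative).

    Stage 1: Gaussian-filter white noise to imprint the correlation kernel.
    Stage 2: rescale pointwise by a window so the variance matches
    `target_var` (shape (size, size)).
    """
    r = int(4 * sigma_f + 0.5)
    k = np.exp(-np.arange(-r, r + 1)**2 / (2 * sigma_f**2))
    k /= k.sum()
    fields = []
    for _ in range(n_samples):
        f = rng.standard_normal((size, size))
        f = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, f)
        f = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, f)
        fields.append(f)
    fields = np.array(fields)
    # stage 2: window the samples so the empirical variance tracks target_var
    std = fields.std(axis=0)
    window = np.sqrt(target_var) / np.maximum(std, 1e-12)
    return fields * window

rng = np.random.default_rng(3)
tv = np.full((32, 32), 2.0)
gp = generate_gp(200, 32, sigma_f=2.0, target_var=tv, rng=rng)
```

Because the window rescales each grid point independently, the correlation structure imprinted by the filter is preserved while the pointwise variance is matched exactly.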
Appendix E Properties of quantum harmonic oscillator
The standard equation (in physics) of a 1-dimensional harmonic oscillator is written as (footnote: we have set the Planck constant $\hbar = 1$):
whose eigenstates and eigenenergies are:
where $H_n$ are Hermite polynomials. For the 2-dimensional harmonic oscillator in our case, it can be decoupled into two independent oscillators along $x$ and $y$ respectively. The total energy is the sum of the energies in both directions, and the total wavefunction is the product of the wavefunctions in both directions.
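These eigenstates can be evaluated directly with NumPy's Hermite module; as a sanity check, a simple quadrature confirms their orthonormality (illustrative sketch):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, pi, sqrt

def psi(n, x):
    """n-th eigenstate of the 1-D harmonic oscillator (hbar = m = omega = 1)."""
    coeffs = [0.0] * n + [1.0]                 # selects H_n in hermval
    norm = 1.0 / sqrt(2.0**n * factorial(n) * sqrt(pi))
    return norm * hermval(x, coeffs) * np.exp(-x**2 / 2)

x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]
# overlap matrix <psi_m | psi_n>, approximated by a Riemann sum; the
# integrand decays like exp(-x^2), so the truncated domain is harmless
S = np.array([[np.sum(psi(m, x) * psi(n, x)) * dx for n in range(4)]
              for m in range(4)])
```

The overlap matrix `S` is the identity to high accuracy, confirming the normalization factor $1/\sqrt{2^n n! \sqrt{\pi}}$ for the Hermite-function eigenstates.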
Appendix F Generation of Icosahedron mesh
There are many ways to build meshes on a sphere: UV sphere, normalized cube, spherified cube, and icosahedron. The UV sphere has singularities around the two poles, and the normalized cube and spherified cube do not sustain enough rotational symmetry. Given these considerations, we choose the icosahedron mesh to discretize the sphere.
Our mesh starts from an icosahedron (20 faces, 30 edges and 12 vertices). Each face of the icosahedron is an equilateral triangle. Subdivision partitions one triangle into four smaller equilateral triangles, as shown in Figure 7. The middle points are then re-scaled to project onto the surface of the sphere. A larger number of subdivisions generates finer meshes, as shown in Figure 8.
After the icosahedron mesh is generated, the corresponding degree matrix, adjacency matrix, and Laplacian matrix can be computed. In the global climate example (section 4.2), we replace the Laplacian operator with the graph Laplacian matrix in the Schrödinger equation, i.e.,
where $h$ is the averaged length of the edges.
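The quality of this substitution can be illustrated on the simplest closed mesh, a ring of $n$ vertices, where both the graph eigenvalues and the continuum eigenvalues of $-d^2/dx^2$ are known in closed form:

```python
import numpy as np

n, h = 200, 0.1                        # ring with n vertices, edge length h
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A         # graph Laplacian of the cycle
evals = np.sort(np.linalg.eigvalsh(L / h**2))

# the continuum operator -d^2/dx^2 on a circle of circumference n*h has
# eigenvalues (2*pi*k/(n*h))^2; the low-lying graph eigenvalues
# 2*(1 - cos(2*pi*k/n))/h^2 converge to them as the mesh is refined
continuum_k1 = (2 * np.pi / (n * h))**2
```

Dividing the graph Laplacian by $h^2$ is what makes its low-lying spectrum converge to the continuum Laplacian as the mesh is refined, justifying the replacement used in the climate example.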