Principal Separable Component Analysis via the Partial Inner Product

07/23/2020 ∙ by Tomas Masak, et al.

The non-parametric estimation of covariance lies at the heart of functional data analysis, whether for curve- or surface-valued data. The case of a two-dimensional domain poses both statistical and computational challenges, which are typically alleviated by assuming separability. However, separability is often questionable, and sometimes demonstrably inadequate. We propose a framework for the analysis of covariance operators of random surfaces that generalises separability while retaining its major advantages. Our approach is based on the additive decomposition of the covariance into a series of separable components. The decomposition is valid for any covariance over a two-dimensional domain. Leveraging the key notion of the partial inner product, we generalise the power iteration method to general Hilbert spaces and show how the aforementioned decomposition can be efficiently constructed in practice. Truncating the decomposition and retaining the principal separable components automatically induces a non-parametric estimator of the covariance, whose parsimony is dictated by the truncation level. The resulting estimator can be calculated, stored and manipulated with little computational overhead relative to separability. The framework and estimation method are genuinely non-parametric, since the considered decomposition holds for any covariance. Consistency and rates of convergence are derived under mild regularity assumptions, illustrating the trade-off between bias and variance regulated by the truncation level. The merits and practical performance of the proposed methodology are demonstrated in a comprehensive simulation study.
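To make the idea concrete, here is a minimal discretised sketch (not the paper's implementation; the toy covariance, grid sizes and variable names are all illustrative). On a finite grid the covariance becomes a 4-way tensor C[s, t, s', t']; the partial inner product against a matrix reduces to a tensor contraction over one pair of arguments, and alternating the two contractions is a power iteration that extracts the leading separable component sigma * (A ⊗ B):

```python
import numpy as np

rng = np.random.default_rng(0)
K1, K2 = 6, 5  # illustrative grid sizes for the two domain dimensions

def rand_psd(k):
    """A random positive semi-definite matrix (toy ingredient)."""
    m = rng.standard_normal((k, k))
    return m @ m.T

# A synthetic NON-separable covariance: a sum of two separable terms.
A1, A2 = rand_psd(K1), rand_psd(K1)
B1, B2 = rand_psd(K2), rand_psd(K2)
C = (np.einsum('ij,kl->ikjl', A1, B1)
     + 0.3 * np.einsum('ij,kl->ikjl', A2, B2))  # shape (K1, K2, K1, K2)

# Generalised power iteration via partial inner products:
# contract out the (t, t') arguments against B, then the (s, s')
# arguments against A, renormalising each time.
B = np.eye(K2)
for _ in range(50):
    A = np.einsum('sktl,kl->st', C, B)   # partial inner product over t, t'
    A /= np.linalg.norm(A)
    B = np.einsum('sktl,st->kl', C, A)   # partial inner product over s, s'
    sigma = np.linalg.norm(B)
    B /= sigma
```

At convergence, sigma * (A ⊗ B) is the leading separable component: it coincides with the top term of the singular value decomposition of the rearranged ("matricised") covariance, with rows indexed by (s, s') and columns by (t, t') — but the iteration never forms or factorises that large matrix explicitly, which is the computational point.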





