Non-asymptotic Optimal Prediction Error for RKHS-based Partially Functional Linear Models

by Xiaoyu Lei, et al.

Under the framework of reproducing kernel Hilbert spaces (RKHS), we consider penalized least-squares estimation for partially functional linear models (PFLM), whose predictor contains both a functional part and a traditional multivariate part, with the multivariate part allowed a divergent number of parameters. From a non-asymptotic point of view, we focus on rate-optimal upper and lower bounds for the prediction error. An exact upper bound for the excess prediction risk is established in non-asymptotic form under a general assumption on the effective dimension of the model, from which we also obtain prediction consistency when the number of multivariate covariates p grows slowly with the sample size n. Our new finding implies a trade-off between the number of non-functional predictors and the effective dimension of the kernel principal components that is required to ensure prediction consistency in the increasing-dimensional setting. The analysis in our proof hinges on a spectral condition for the sandwich operator formed from the covariance operator and the reproducing kernel, and on concentration inequalities for random elements in Hilbert space. Finally, we derive a non-asymptotic minimax lower bound under a regularity assumption on the Kullback-Leibler divergence between models.
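To make the estimator concrete, the following is a minimal NumPy sketch of a penalized least-squares fit for a PFLM of the form y = Z'β + ∫X(t)g(t)dt + ε, with an RKHS roughness penalty on the slope function g. All specifics here are illustrative assumptions, not the paper's construction: the Gaussian kernel, its bandwidth, the simulated data, the grid quadrature, and the regularization level λ are chosen only to show the shape of the computation. By the representer theorem, g can be searched in the span of the kernel sections smoothed against the observed curves, which reduces the fit to a finite linear system in (β, α).

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, m = 200, 3, 50          # samples, multivariate dimension, grid size
t = np.linspace(0, 1, m)
w = np.full(m, 1.0 / m)       # quadrature weights for integrals over [0, 1]

# Simulated data (purely illustrative): smooth random functional covariates X,
# multivariate covariates Z, true slope function g, and scalar responses y.
X = np.cumsum(rng.normal(size=(n, m)), axis=1) / np.sqrt(m)
Z = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.25])
g_true = np.sin(2 * np.pi * t)
y = Z @ beta_true + X @ (w * g_true) + 0.1 * rng.normal(size=n)

# Reproducing kernel on [0, 1]; a Gaussian kernel is an assumed example choice.
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.1)

# Gram matrix G_ij = ∫∫ x_i(s) K(s, u) x_j(u) ds du via quadrature.
G = (X * w) @ K @ (X * w).T

# Joint normal equations of the penalized criterion
#   (1/n) ||y - Zβ - Gα||² + λ α'Gα   in the unknowns (β, α).
lam = 1e-3
A = np.block([[Z.T @ Z, Z.T @ G],
              [G.T @ Z, G.T @ G + n * lam * G]])
b = np.concatenate([Z.T @ y, G.T @ y])
coef = np.linalg.lstsq(A, b, rcond=None)[0]
beta_hat, alpha_hat = coef[:p], coef[p:]

# In-sample fit; the paper's object of study is the excess prediction risk.
y_hat = Z @ beta_hat + G @ alpha_hat
mse = np.mean((y - y_hat) ** 2)
```

Increasing λ shrinks the functional component toward zero and trades variance for bias, which is the knob the non-asymptotic upper bound in the abstract is balancing against the effective dimension.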






