Fast Convergence on Perfect Classification for Functional Data

04/07/2021
by Tomoya Wakayama, et al.

In this study, we investigate whether perfect classification of functional data is attainable with finite samples. The seminal work of Delaigle and Hall (2012) showed that perfect classification is easier to achieve for functional data than for finite-dimensional data. This result rests on their finding that a sufficient condition for the existence of a perfect classifier, called the Delaigle–Hall (DH) condition, can be satisfied only by functional data. However, even when the DH condition holds, a very large sample size may be required to approach perfect classification, because misclassification errors for functional data can converge quite slowly: the minimax convergence rate of the error for functional data is of logarithmic order in the sample size. This study resolves this complication by proving that the DH condition also implies fast convergence of the misclassification error in the sample size. Specifically, we study a classifier based on empirical risk minimization over a reproducing kernel Hilbert space (RKHS) and analyse its convergence rate under the DH condition. The result shows that the misclassification error of the RKHS classifier converges at an exponential rate in the sample size. Technically, the proof relies on two ingredients: (i) connecting the DH condition to a margin condition on classifiers, and (ii) controlling the metric entropy of functional data. Experimentally, we validate that the DH condition and the associated margin condition have a clear impact on the convergence rate of the RKHS classifier, and we find that several other classifiers for functional data exhibit a similar property.
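For readers unfamiliar with the condition, here is a hedged sketch of the DH condition as it is usually stated in the literature following Delaigle and Hall (2012); this is our paraphrase, not a formula quoted from the present paper. Suppose both classes share a covariance operator with eigenvalues \theta_j and eigenfunctions \phi_j, and let \mu_j denote the coefficients of the mean difference in that eigenbasis. Perfect classification becomes attainable when the accumulated discriminatory signal diverges:

```latex
% Hedged paraphrase of the Delaigle--Hall (DH) condition (our sketch,
% not quoted from this paper): with common covariance eigenvalues
% \theta_j and mean-difference coefficients
% \mu_j = \langle \mu_1 - \mu_0, \phi_j \rangle,
\sum_{j=1}^{\infty} \frac{\mu_j^{2}}{\theta_j} = \infty .
```

Because a fixed finite-dimensional model truncates this sum at a finite index, the divergence can occur only for functional (infinite-dimensional) data, which is why the condition is available only in the functional setting.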

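To make the error-versus-sample-size discussion concrete, the following is a minimal simulation sketch (ours, not the authors' code): two classes of Gaussian-process curves differing by a smooth mean shift are classified with an RBF-kernel support vector machine, which performs regularized empirical risk minimization over an RKHS with the hinge loss. All names and parameters (grid size, lengthscale, gamma) are illustrative assumptions, and the setup is meant to mimic, not verify, the DH regime.

```python
# Illustrative sketch: kernel (RKHS-type) classification of simulated
# functional data, observing how the test error shrinks with sample size.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
grid = np.linspace(0.0, 1.0, 100)          # observation grid on [0, 1]

# Smooth Gaussian-process noise via a squared-exponential covariance
# (lengthscale 0.1); a small jitter keeps the Cholesky factor stable.
cov = np.exp(-(grid[:, None] - grid[None, :]) ** 2 / (2 * 0.1 ** 2))
chol = np.linalg.cholesky(cov + 1e-8 * np.eye(grid.size))

def sample_curves(n, label):
    """Draw n curves; class 1 is shifted by a smooth mean function."""
    mean = label * np.sin(2 * np.pi * grid)
    return mean + rng.standard_normal((n, grid.size)) @ chol.T

def error_rate(n_train, n_test=2000):
    """Test error of an RBF-kernel SVM fit on n_train curves per class.

    The Euclidean distance between discretized curves approximates the
    L2 distance between the underlying functions, so the RBF kernel on
    the grid values acts as a Gaussian kernel on the function space.
    """
    X = np.vstack([sample_curves(n_train, 0), sample_curves(n_train, 1)])
    y = np.repeat([0, 1], n_train)
    clf = SVC(kernel="rbf", gamma=1.0 / grid.size).fit(X, y)
    Xt = np.vstack([sample_curves(n_test, 0), sample_curves(n_test, 1)])
    yt = np.repeat([0, 1], n_test)
    return np.mean(clf.predict(Xt) != yt)

for n in (5, 10, 20, 40, 80):
    print(f"n = {n:3d} per class, test error = {error_rate(n):.4f}")
```

In this kind of mean-shift setup the measured test error typically drops quickly as n grows; checking the exponential rate established in the paper would require its exact assumptions and a far more careful experiment.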