Convergence rates for smooth k-means change-point detection

02/21/2018
by   Aurélie Fischer, et al.

In this paper, we consider the estimation of a change-point for possibly high-dimensional data in a Gaussian model, using a k-means method. We prove that, up to a logarithmic term, this change-point estimator achieves the minimax rate of convergence. Then, considering the case of sparse data with Sobolev regularity, we propose a smoothing procedure based on Lepski's method and show that the resulting estimator attains the optimal rate of convergence. Our results are illustrated by simulations. Since the theoretical statement relying on Lepski's method depends on an unknown constant, we suggest practical strategies for performing the smoothing optimally.
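Below is a minimal sketch of a 2-means-type (least-squares) estimator for a single mean change, the flavor of procedure the abstract describes: the criterion is the total within-segment sum of squares over the two contiguous segments, i.e. the constrained 2-means objective for this setting. The function name, the cumulative-sum implementation, and the simulated example are illustrative assumptions, not the paper's actual code or exact procedure.

```python
import numpy as np

def changepoint_2means(X):
    """Least-squares (2-means-type) estimate of a single mean change
    in a sequence X of shape (n, d).

    For each candidate split t, the criterion is the total within-segment
    sum of squares of X[:t] and X[t:]; the estimate is the argmin over t.
    Illustrative sketch only, not the paper's exact procedure.
    """
    X = np.asarray(X, dtype=float)
    n, d = X.shape
    # Cumulative sums let every segment mean be formed in O(1) per split.
    S = np.vstack([np.zeros(d), np.cumsum(X, axis=0)])            # S[t] = sum of the first t rows
    Q = np.concatenate([[0.0], np.cumsum(np.sum(X**2, axis=1))])  # Q[t] = sum of squared norms
    best_t, best_cost = 1, np.inf
    for t in range(1, n):
        left = Q[t] - np.dot(S[t], S[t]) / t                              # SSQ of X[:t] around its mean
        right = (Q[n] - Q[t]) - np.dot(S[n] - S[t], S[n] - S[t]) / (n - t)  # SSQ of X[t:]
        if left + right < best_cost:
            best_t, best_cost = t, left + right
    return best_t  # mean is estimated to shift after observation best_t

# Toy check on simulated Gaussian data with a shift after observation 60.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (60, 20)), rng.normal(0.5, 1.0, (40, 20))])
print(changepoint_2means(X))  # typically close to 60
```

With the cumulative sums precomputed, the full scan over candidate splits costs O(nd), so the sketch remains usable for moderately high-dimensional sequences.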


