Rejoinder on: Minimal penalties and the slope heuristics: a survey

09/30/2019 ∙ by Sylvain Arlot, et al.

This text is the rejoinder following the discussion of a survey paper about minimal penalties and the slope heuristics (Arlot, 2019. Minimal penalties and the slope heuristics: a survey. Journal de la SFDS). While commenting on the remarks made by the discussants, it provides two new results about the slope heuristics for model selection among a collection of projection estimators in least-squares fixed-design regression. First, we prove that the slope heuristics works even when all models are significantly biased. Second, when the noise is Gaussian with a general dependence structure, we compute expectations of key quantities, showing that the slope heuristics is certainly valid in this setting as well.
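For readers unfamiliar with the method discussed above, here is a minimal sketch of the slope heuristics, in its dimension-jump form, applied to model selection among projection estimators in least-squares fixed-design regression. In this classical setting the minimal penalty is approximately σ²D/n and the optimal penalty is about twice that. The signal, noise level, cosine basis, model collection, and penalty grid below are illustrative assumptions, not taken from the paper or the rejoinder.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Fixed-design regression setup (illustrative choices, not from the paper) ---
n = 500
x = (np.arange(n) + 0.5) / n                  # fixed design points
f = np.sin(4 * np.pi * x) + 0.5 * x           # hypothetical regression function
sigma = 0.5
Y = f + sigma * rng.standard_normal(n)

# --- Collection of projection estimators: span of the first D cosine basis vectors ---
Dmax = 100
Phi = np.empty((n, Dmax))
Phi[:, 0] = 1.0
for j in range(1, Dmax):
    Phi[:, j] = np.sqrt(2.0) * np.cos(np.pi * j * x)

dims = np.arange(1, Dmax + 1)
risks = np.empty(Dmax)                        # empirical risk (1/n) * ||Y - f_hat_D||^2
for k, D in enumerate(dims):
    coef, *_ = np.linalg.lstsq(Phi[:, :D], Y, rcond=None)
    resid = Y - Phi[:, :D] @ coef
    risks[k] = np.mean(resid ** 2)

# --- Slope heuristics via the dimension-jump method ---
# For each penalty constant C, select the D minimizing risk + C * D / n,
# locate the largest jump of the selected dimension as C grows,
# and use twice the corresponding constant as the final penalty.
C_grid = np.linspace(0.0, 5.0 * sigma ** 2, 2000)
selected = np.array([dims[np.argmin(risks + C * dims / n)] for C in C_grid])
jump = np.argmax(np.abs(np.diff(selected.astype(float))))
C_min = C_grid[jump + 1]                      # estimated minimal-penalty constant
D_final = dims[np.argmin(risks + 2.0 * C_min * dims / n)]

print(f"estimated C_min ~= {C_min:.3f} (ideal ~ sigma^2 = {sigma**2:.3f})")
print(f"dimension selected with penalty 2*C_min*D/n: {D_final}")
```

In this sketch the estimated constant C_min should land near the true noise variance σ², which is what makes the method usable when σ² is unknown; the paper and rejoinder study when and why this calibration remains valid, including with biased models and dependent Gaussian noise.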
