On Statistical Efficiency in Learning

12/24/2020
by Jie Ding, et al.

A central issue in many statistical learning problems is selecting an appropriate model from a set of candidate models. For a given dataset, large models tend to inflate variance (overfitting), while small models tend to introduce bias (underfitting). In this work, we address the critical challenge of model selection to strike a balance between model fit and model complexity, thereby gaining reliable predictive power. We consider the task of approaching the theoretical limit of statistical learning, meaning that the selected model has predictive performance as good as that of the best possible model in a class of potentially misspecified candidate models. We propose a generalized notion of Takeuchi's information criterion and prove that the proposed method can asymptotically achieve the optimal out-of-sample prediction loss under reasonable assumptions. To the best of our knowledge, this is the first proof of this asymptotic property of Takeuchi's information criterion. Our proof applies to a wide variety of nonlinear models, loss functions, and high-dimensional settings (in the sense that model complexity can grow with sample size). The proposed method can be used as a computationally efficient surrogate for leave-one-out cross-validation. Moreover, for modeling streaming data, we propose an online algorithm that sequentially expands the model complexity to enhance selection stability and reduce computational cost. Experimental studies show that the proposed method has desirable predictive power and significantly lower computational cost than some popular methods.
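For context, the classical Takeuchi information criterion that the paper generalizes penalizes the maximized log-likelihood with tr(J�archive⁻¹V̂), where Ĵ is the observed information matrix and V̂ is the empirical covariance of the per-sample score vectors; under correct specification the penalty reduces to the parameter count, recovering AIC. Below is a minimal, hypothetical Python sketch of this classical criterion applied to choosing a polynomial degree in a Gaussian linear model. It is not the paper's generalized procedure (which covers broader losses and growing dimension), the helper names (design, tic_score) are illustrative, and the noise variance is profiled out for simplicity.

```python
# Sketch of classical TIC = -2*loglik + 2*tr(J^{-1} V) for Gaussian
# linear models, used to pick a polynomial degree. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def design(x, degree):
    # Polynomial design matrix [1, x, ..., x^degree].
    return np.vander(x, degree + 1, increasing=True)

def tic_score(X, y):
    # MLE for y = X beta + eps, eps ~ N(0, sigma2).
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    n = len(y)
    sigma2 = r @ r / n                        # MLE of the noise variance
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    # J: observed information for beta (sigma2 profiled out, a
    # simplification relative to the full-parameter TIC).
    J = X.T @ X / sigma2
    # V: sum of outer products of per-sample score vectors x_i r_i / sigma2.
    S = (X * r[:, None]) / sigma2
    V = S.T @ S
    penalty = np.trace(np.linalg.solve(J, V))
    return -2 * loglik + 2 * penalty

# Toy data: cubic signal plus noise; TIC should typically favor degree 3.
x = rng.uniform(-2, 2, size=200)
y = 1 - 2 * x + 0.5 * x**3 + rng.normal(scale=0.5, size=x.size)
scores = {d: tic_score(design(x, d), y) for d in range(1, 9)}
print("selected degree:", min(scores, key=scores.get))
```

Because the TIC penalty estimates the same out-of-sample loss that leave-one-out cross-validation targets, a one-pass computation like this illustrates how such a criterion can serve as the kind of inexpensive cross-validation surrogate the abstract describes.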


