Adaptive Noisy Data Augmentation for Regularized Estimation and Inference in Generalized Linear Models

04/18/2022
by Yinan Li, et al.

We propose the AdaPtive Noise Augmentation (PANDA) procedure to regularize the estimation and inference of generalized linear models (GLMs). PANDA iteratively optimizes the objective function on noise-augmented data until convergence to obtain the regularized model estimates. The augmented noises are designed to achieve various regularization effects, including l_0, bridge (lasso and ridge included), elastic net, adaptive lasso, and SCAD, as well as group lasso and fused ridge. We examine the tail bound of the noise-augmented loss function and establish the almost sure convergence of the noise-augmented loss function and its minimizer to the expected penalized loss function and its minimizer, respectively. We derive the asymptotic distributions of the regularized parameter estimates, from which inference can be obtained simultaneously with variable selection. PANDA exhibits ensemble-learning behavior that helps further decrease the generalization error. Computationally, PANDA is easy to code, leveraging existing software for fitting GLMs without resorting to complicated optimization techniques. We demonstrate the superior or similar performance of PANDA relative to existing approaches with the same types of regularizers on simulated and real-life data, and show that inference through PANDA achieves nominal or near-nominal coverage and is far more efficient than a popular existing post-selection procedure.
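To illustrate the general idea of regularization by noisy data augmentation, the sketch below implements a simplified, hypothetical variant for a Gaussian linear model (the simplest GLM): at each iteration, noise rows with zero responses are appended to the data, with per-coefficient noise variances chosen so that the augmented least-squares loss mimics a lasso-type penalty in expectation, and late iterates are averaged in the spirit of the ensemble behavior mentioned in the abstract. The function name, noise design, and all tuning constants are assumptions for illustration only, not the authors' exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a sparse Gaussian linear model (illustrative data, not from the paper).
n, p = 100, 10
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.5, 1.0] + [0.0] * (p - 3))
y = X @ beta_true + rng.standard_normal(n)

def panda_like_lasso(X, y, lam=1.0, n_e=200, n_iter=300, burn_in=150, eps=1e-4):
    """Hypothetical noisy-data-augmentation loop with a lasso-type
    (coefficient-adaptive variance) noise design; a sketch, not PANDA itself."""
    n, p = X.shape
    theta = np.linalg.lstsq(X, y, rcond=None)[0]        # unpenalized starting value
    trace = []
    for t in range(n_iter):
        # Noise variance ~ lam / (n_e * |theta_j|): small coefficients receive
        # stronger shrinkage, mimicking an l_1 penalty in expectation (assumed design).
        scale = np.sqrt(lam / (n_e * np.maximum(np.abs(theta), eps)))
        X_aug = np.zeros((n_e * p, p))
        for j in range(p):
            X_aug[j * n_e:(j + 1) * n_e, j] = rng.normal(0.0, scale[j], size=n_e)
        y_aug = np.zeros(n_e * p)                        # augmented responses are zero
        X_full = np.vstack([X, X_aug])
        y_full = np.concatenate([y, y_aug])
        theta = np.linalg.lstsq(X_full, y_full, rcond=None)[0]
        if t >= burn_in:
            trace.append(theta.copy())
    return np.mean(trace, axis=0)                        # average late iterates

beta_hat = panda_like_lasso(X, y)
print(np.round(beta_hat, 3))
```

Because each refit is an ordinary (weighted) least-squares or GLM call on the augmented data, the same loop could be run with standard GLM software in place of `np.linalg.lstsq`, which reflects the computational simplicity claimed in the abstract.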


