Statistical inference in massive datasets by empirical likelihood

04/18/2020
by Xuejun Ma, et al.

In this paper, we propose a new statistical inference method for massive data sets that combines the divide-and-conquer method with empirical likelihood, making it both simple and computationally efficient. Compared with two popular methods, the bag of little bootstraps and the subsampled double bootstrap, our method makes full use of the data and reduces the computational burden. Extensive numerical studies and a real data analysis demonstrate the effectiveness and flexibility of the proposed method. Furthermore, the asymptotic properties of our method are derived.
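To make the combination concrete, below is a minimal sketch of a divide-and-conquer empirical likelihood procedure for a univariate mean. It assumes the combination step applies a standard empirical likelihood ratio to per-block estimates; the function names (el_log_ratio, dc_el), the block-mean combination rule, and the block count are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu):
    # -2 log empirical likelihood ratio for the mean of x at candidate value mu.
    z = np.asarray(x) - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf  # mu outside the convex hull of the data: EL is zero
    eps = 1e-10
    lo = (-1 + eps) / z.max()  # bracket keeps all weights 1 + lam*z_i positive
    hi = (-1 + eps) / z.min()
    # The Lagrange multiplier solves sum_i z_i / (1 + lam*z_i) = 0,
    # which is monotone in lam, so a bracketing root-finder suffices.
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2.0 * np.sum(np.log1p(lam * z))

def dc_el(x, n_blocks=20):
    # Hypothetical divide-and-conquer step: one cheap pass over each block,
    # then empirical likelihood applied to the per-block estimates.
    blocks = np.array_split(np.asarray(x), n_blocks)
    means = np.array([b.mean() for b in blocks])
    return means.mean(), lambda mu: el_log_ratio(means, mu)

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=100_000)
est, llr = dc_el(x)
print("point estimate:", est)
# Under the null mu = 1.0, -2 log R is asymptotically chi-square(1),
# so {mu : llr(mu) <= 3.84} is an approximate 95% confidence set.
print("-2 log R at mu = 1.0:", llr(1.0))
```

Each block is visited once and only the K block estimates are retained, which is what avoids the repeated resampling passes that bootstrap-based alternatives require.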


Related research

Robust, scalable and fast bootstrap method for analyzing large scale data (04/09/2015)
In this paper we address the problem of performing statistical inference...

Scalable Resampling in Massive Generalized Linear Models via Subsampled Residual Bootstrap (07/13/2023)
Residual bootstrap is a classical method for statistical inference in re...

Distributed Statistical Inference for Massive Data (05/29/2018)
This paper considers distributed statistical inference for general symme...

Online Bootstrap Inference For Policy Evaluation in Reinforcement Learning (08/08/2021)
The recent emergence of reinforcement learning has created a demand for ...

Equivariant Passing-Bablok regression in quasilinear time (02/16/2022)
Passing-Bablok regression is a standard tool for method and assay compar...

A Sequential Addressing Subsampling Method for Massive Data Analysis under Memory Constraint (10/03/2021)
The emergence of massive data in recent years brings challenges to autom...

Generalized Bayesian Updating and the Loss-Likelihood Bootstrap (09/22/2017)
In this paper, we revisit the weighted likelihood bootstrap and show tha...
