
Statistical inference in massive datasets by empirical likelihood

by Xuejun Ma, et al.
National University of Singapore

In this paper, we propose a new statistical inference method for massive data sets that is simple and efficient, combining the divide-and-conquer strategy with empirical likelihood. Compared with two popular methods, the bag of little bootstraps and the subsampled double bootstrap, our approach makes full use of the data while reducing the computational burden. Extensive numerical studies and a real data analysis demonstrate the effectiveness and flexibility of the proposed method. Furthermore, we derive its asymptotic properties.
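The abstract does not specify how the block-level empirical likelihoods are combined, so the following is only a minimal sketch of one plausible divide-and-conquer scheme: compute the empirical-likelihood (EL) log-ratio statistic for a scalar mean within each block and sum the block statistics, which is asymptotically chi-squared with one degree of freedom per block under the null. The function names `el_log_ratio` and `dnc_el_stat` are illustrative, not from the paper, and the paper's actual combination rule may differ.

```python
import math

def el_log_ratio(x, mu):
    """Empirical-likelihood statistic -2 log R(mu) for a scalar mean.

    Solves the Lagrange-multiplier equation sum d_i / (1 + lam * d_i) = 0
    with d_i = x_i - mu by bisection (the left side is strictly decreasing
    in lam). Requires mu strictly inside (min(x), max(x)).
    """
    d = [xi - mu for xi in x]
    dmax, dmin = max(d), min(d)
    if dmax <= 0 or dmin >= 0:
        # mu lies outside the convex hull of the data: EL is undefined
        return float("inf")
    # Feasible lam keeps every weight positive: lam in (-1/dmax, -1/dmin)
    lo = (-1.0 + 1e-10) / dmax
    hi = (-1.0 + 1e-10) / dmin
    for _ in range(200):
        lam = 0.5 * (lo + hi)
        g = sum(di / (1.0 + lam * di) for di in d)
        if g > 0:
            lo = lam
        else:
            hi = lam
    lam = 0.5 * (lo + hi)
    # With p_i = 1 / (n * (1 + lam * d_i)), -2 log R(mu) = 2 * sum log(1 + lam * d_i)
    return 2.0 * sum(math.log(1.0 + lam * di) for di in d)

def dnc_el_stat(x, mu, n_blocks=4):
    """Divide-and-conquer EL: sum of block-level statistics, approximately
    chi-squared with n_blocks degrees of freedom under the null."""
    m = len(x) // n_blocks
    return sum(el_log_ratio(x[k * m:(k + 1) * m], mu) for k in range(n_blocks))
```

Because each block statistic depends only on that block's observations, the blocks can be processed on separate machines and only the scalar statistics need to be communicated, which is the computational appeal of the divide-and-conquer approach.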




Robust, scalable and fast bootstrap method for analyzing large scale data

In this paper we address the problem of performing statistical inference...

Scalable Resampling in Massive Generalized Linear Models via Subsampled Residual Bootstrap

Residual bootstrap is a classical method for statistical inference in re...

Distributed Statistical Inference for Massive Data

This paper considers distributed statistical inference for general symme...

Online Bootstrap Inference For Policy Evaluation in Reinforcement Learning

The recent emergence of reinforcement learning has created a demand for ...

Equivariant Passing-Bablok regression in quasilinear time

Passing-Bablok regression is a standard tool for method and assay compar...

A Sequential Addressing Subsampling Method for Massive Data Analysis under Memory Constraint

The emergence of massive data in recent years brings challenges to autom...

Generalized Bayesian Updating and the Loss-Likelihood Bootstrap

In this paper, we revisit the weighted likelihood bootstrap and show tha...