
A hypothesis-testing perspective on the G-normal distribution theory

by Shige Peng, et al.

The G-normal distribution was introduced by Peng [2007] as the limiting distribution in the central limit theorem for sublinear expectation spaces. Equivalently, it can be interpreted through a stochastic control problem: a sequence of random variables is summed, and the variance of each term may be chosen adaptively based on all past information. In this note we study the tail behavior of the G-normal distribution by analyzing a nonlinear heat equation. Asymptotic results are provided so that the tail "probabilities" can be evaluated easily and with high accuracy. This study also has significant implications for hypothesis testing with heteroscedastic data: we show that even if the data are generated under the null hypothesis, it is possible to cheat and attain statistical significance by sequentially manipulating the error variances of the observations.
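The variance-manipulation effect can be illustrated with a small Monte Carlo experiment. The sketch below is not from the paper; it uses a simple heuristic bang-bang rule (gamble with the high variance until the partial sum crosses the rejection threshold, then lock it in with the low variance), with per-step variances chosen from {0.5, 1.5} so that the average variance matches the unit variance assumed by a standard two-sided z-test. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, z = 100, 20_000, 1.96        # sample size, Monte Carlo reps, 5% critical value
var_lo, var_hi = 0.5, 1.5             # allowed variances; their average is 1

def rejection_rate(adaptive: bool) -> float:
    """Fraction of runs where |S_n / sqrt(n)| exceeds z under H0 (zero mean)."""
    s = np.zeros(reps)                # partial sums, one per Monte Carlo run
    for _ in range(n):
        if adaptive:
            # Heuristic adversary: high variance while still below the
            # rejection threshold, low variance once significance is reached.
            var = np.where(np.abs(s) < z * np.sqrt(n), var_hi, var_lo)
        else:
            # Honest benchmark: variance picked at random, averaging to 1.
            var = rng.choice([var_lo, var_hi], size=reps)
        s += rng.normal(0.0, np.sqrt(var))
    return float(np.mean(np.abs(s) / np.sqrt(n) > z))

honest = rejection_rate(adaptive=False)   # close to the nominal 5% level
cheat = rejection_rate(adaptive=True)     # noticeably inflated type-I error
print(f"honest: {honest:.3f}, adaptive: {cheat:.3f}")
```

Even this crude strategy pushes the type-I error well above the nominal 5%; the paper's tail asymptotics for the G-normal distribution quantify the exact worst case over all adaptive strategies.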



