A Note on Taylor's Expansion and Mean Value Theorem With Respect to a Random Variable

02/20/2021 · Yifan Yang, et al.

We present a stochastic version of Taylor's expansion and the Mean Value Theorem, originally proved by Aliprantis and Border (1999), and extend them to the multivariate case. In the univariate case, the theorem asserts: "Suppose a real-valued function f has a continuous derivative f' on a closed interval I, and X is a random variable on a probability space (Ω, ℱ, P). Fix a ∈ I; then there exists a random variable ξ such that ξ(ω) ∈ I for every ω ∈ Ω and f(X(ω)) = f(a) + f'(ξ(ω))(X(ω) − a)." The proof is not trivial. Applying these results in statistics simplifies some details in the proofs of the Delta method and of the asymptotic properties of a maximum likelihood estimator. In particular, when one writes "there exists θ* between θ̂ (a maximum likelihood estimator) and θ_0 (the true value)", the stochastic version of the Mean Value Theorem guarantees that θ* is a random variable (or a random vector).
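As a minimal numerical sketch (not from the paper), consider f(x) = x² on I = [0, 1]. Here the intermediate point can be computed in closed form: f(X) − f(a) = (X + a)(X − a) = f'(ξ)(X − a) with f'(x) = 2x gives ξ(ω) = (X(ω) + a)/2 pointwise in ω. Since ξ is a continuous function of X, it is measurable, i.e. a random variable, which is exactly what the stochastic Mean Value Theorem guarantees in general (the choice of f, a, and the uniform sampling below are illustrative assumptions).

```python
import numpy as np

# Illustrative example: f(x) = x^2 on I = [0, 1].
f = lambda x: x ** 2
fprime = lambda x: 2.0 * x

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=5)  # sampled realizations X(omega)
a = 0.5                            # fixed point a in I

# For this f, the mean value point is xi(omega) = (X(omega) + a) / 2,
# a measurable function of X, hence itself a random variable.
xi = (X + a) / 2.0

# Verify f(X) = f(a) + f'(xi) * (X - a) for every sampled omega.
assert np.allclose(f(X), f(a) + fprime(xi) * (X - a))
# xi stays inside the interval I, as the theorem requires.
assert np.all((0.0 <= xi) & (xi <= 1.0))
```

For a general f the intermediate point has no closed form and need not be unique; the content of the theorem is that a measurable selection of ξ always exists.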
