Adaptive truncation of infinite sums: applications to Statistics

02/12/2022
by Luiz Max Carvalho, et al.

It is often the case in Statistics that one needs to compute sums of infinite series, especially when marginalising over discrete latent variables. This has become more relevant with the popularization of gradient-based techniques (e.g. Hamiltonian Monte Carlo) in Bayesian inference, where discrete latent variables are hard or impossible to handle directly. For many commonly used infinite series, custom algorithms have been developed which exploit specific features of each problem. General techniques, suitable for a large class of problems with limited input from the user, are less established. We employ basic results from the theory of infinite series to investigate general, problem-agnostic algorithms that truncate infinite sums within an arbitrary tolerance ε > 0 and provide robust computational implementations with provable guarantees. We compare three tentative solutions for estimating the infinite sum of interest: (i) a "naive" approach that sums terms until the terms fall below the threshold ε; (ii) a "bounding pair" strategy based on trapping the true value between two partial sums; and (iii) a "batch" strategy that computes the partial sums at regular intervals and stops when their difference is less than ε. We show under which conditions each strategy guarantees the truncated sum is within the required tolerance, and compare both the error achieved by each approach and the number of function evaluations each one requires. A detailed discussion of numerical issues in practical implementations is also provided. The paper also provides a theoretical discussion of a variety of statistical applications, including raw and factorial moments and count models with observation error. Finally, detailed illustrations in the form of noisy MCMC for Bayesian inference and maximum marginal likelihood estimation are presented.
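To make the comparison concrete, below is a minimal sketch, not the authors' implementation, of the "naive" and "batch" truncation strategies for a series with non-negative terms that eventually decrease monotonically. The function names `naive_sum` and `batch_sum`, the default tolerances, and the Poisson toy example are illustrative assumptions; the "bounding pair" strategy additionally requires problem-specific lower and upper bounds on the tail and is therefore not sketched here.

```python
import math


def naive_sum(a, eps=1e-12, max_terms=10_000_000):
    """Add terms a(0), a(1), ... and stop once a term falls below eps."""
    total = 0.0
    for n in range(max_terms):
        term = a(n)
        total += term
        if term < eps:
            return total
    raise RuntimeError("tolerance not reached within max_terms")


def batch_sum(a, eps=1e-12, batch=20, max_terms=10_000_000):
    """Accumulate terms in fixed-size batches and stop when an entire batch,
    i.e. the difference between consecutive partial sums, adds less than eps."""
    total, n = 0.0, 0
    while n < max_terms:
        increment = sum(a(n + k) for k in range(batch))
        total += increment
        n += batch
        if increment < eps:
            return total
    raise RuntimeError("tolerance not reached within max_terms")


# Toy check (an assumption for illustration): summing the Poisson(10)
# probability mass function over all n should give a value close to 1.
# Terms are computed on the log scale for numerical stability.
poisson_term = lambda n: math.exp(n * math.log(10.0) - math.lgamma(n + 1) - 10.0)
print(naive_sum(poisson_term))
print(batch_sum(poisson_term))
```

As the abstract indicates, stopping rules like these only guarantee an error below ε under additional conditions on how fast the tail of the series decays; the paper makes those conditions precise for each strategy.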

