On statistics, computation and scalability

09/30/2013
by Michael I. Jordan, et al.

How should statistical procedures be designed so as to be scalable computationally to the massive datasets that are increasingly the norm? When coupled with the requirement that an answer to an inferential question be delivered within a certain time budget, this question has significant repercussions for the field of statistics. With the goal of identifying "time-data tradeoffs," we investigate some of the statistical consequences of computational perspectives on scalability, in particular divide-and-conquer methodology and hierarchies of convex relaxations.
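The divide-and-conquer idea behind such time-data tradeoffs can be illustrated with a minimal sketch (an assumption for illustration only, not the paper's exact procedure): partition a large sample into blocks, estimate on each block independently, and average the block estimates. Each block can be processed in parallel, so wall-clock time shrinks with the number of blocks while the averaged estimator stays close to the full-data one.

```python
import random
import statistics

def divide_and_conquer_mean(data, n_blocks):
    """Average per-block sample means.

    Each block could be processed on a separate machine; the
    combined estimate trades a little statistical efficiency
    (when blocks are unequal or estimators are nonlinear) for
    a large reduction in per-machine computation time.
    """
    block_size = len(data) // n_blocks
    blocks = [data[i * block_size:(i + 1) * block_size]
              for i in range(n_blocks)]
    block_estimates = [statistics.fmean(b) for b in blocks]
    return statistics.fmean(block_estimates)

# Toy data: 100,000 draws from a Gaussian with mean 2.0.
random.seed(0)
data = [random.gauss(2.0, 1.0) for _ in range(100_000)]
est = divide_and_conquer_mean(data, n_blocks=10)
```

For the sample mean the averaged estimator coincides with the full-data estimate when blocks are equal-sized; for more complex estimators (e.g., M-estimators), averaging introduces a bias that shrinks as block sizes grow, which is where the statistical side of the tradeoff appears.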


