Beyond Application End-Point Results: Quantifying Statistical Robustness of MCMC Accelerators

03/05/2020
by Xiangyu Zhang, et al.

Statistical machine learning often relies on probabilistic algorithms, such as Markov Chain Monte Carlo (MCMC), to solve a wide range of problems. Probabilistic computations, often considered too slow on conventional processors, can be accelerated with specialized hardware by exploiting parallelism and optimizing the design with various approximation techniques. Current methodologies for evaluating the correctness of probabilistic accelerators are often incomplete, focusing mostly on end-point result quality ("accuracy"). It is important for hardware designers and domain experts to look beyond end-point "accuracy" and understand the impact of hardware optimizations on other statistical properties. This work takes a first step toward defining metrics and a methodology for quantitatively evaluating the correctness of probabilistic accelerators beyond end-point result quality. We propose three pillars of statistical robustness: 1) sampling quality, 2) convergence diagnostics, and 3) goodness of fit. We apply our framework to a representative MCMC accelerator and surface design issues that cannot be exposed using application end-point result quality alone. Applying the framework to guide design space exploration shows that statistical robustness comparable to floating-point software can be achieved by slightly widening the bit representation, without requiring floating-point hardware.
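To make the "convergence diagnostics" pillar concrete, below is a minimal sketch of one standard diagnostic, the Gelman-Rubin potential scale reduction factor (R-hat), computed over multiple chains. This is an illustration only: the paper's abstract does not specify which diagnostics its framework uses, and the function name and NumPy-based implementation here are assumptions.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R-hat).

    chains: array of shape (m, n) -- m independent MCMC chains,
    each with n samples of a scalar quantity. An R-hat close to
    1.0 suggests the chains have mixed into the same distribution;
    values well above 1.0 indicate non-convergence.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # Between-chain variance (scaled by chain length)
    B = n * chain_means.var(ddof=1)
    # Average within-chain variance
    W = chains.var(axis=1, ddof=1).mean()
    # Pooled estimate of the marginal posterior variance
    var_hat = (n - 1) / n * W + B / n
    return float(np.sqrt(var_hat / W))

# Four well-mixed chains drawn from the same distribution
rng = np.random.default_rng(0)
chains = rng.normal(size=(4, 1000))
print(gelman_rubin(chains))  # close to 1.0
```

A hardware-aware version of such a diagnostic would run the same computation on sample streams produced by the accelerator under different bit widths, flagging configurations whose chains fail to mix even when end-point accuracy looks acceptable.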


