Accelerating Inference: towards a full Language, Compiler and Hardware stack

12/12/2012
by Shawn Hershey, et al.

We introduce Dimple, a fully open-source API for probabilistic modeling. Dimple lets the user specify probabilistic models as graphical models, Bayesian networks, or factor graphs, and performs inference on the model by automatically deriving an inference engine from a variety of algorithms. Dimple also serves as a compiler for GP5, a hardware accelerator for inference.
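To make the workflow concrete, below is a minimal Python sketch of the pattern the abstract describes: declare variables, attach factors to form a factor graph, then query an inference engine for marginals. This is not Dimple's actual API; the FactorGraph class, its method names, and the brute-force enumeration used for inference are illustrative assumptions (Dimple itself supports a range of inference algorithms and much larger models).

```python
# Illustrative sketch only, not Dimple's API: a tiny factor graph with
# exact marginal inference by enumeration (feasible only for small models).
from itertools import product

class FactorGraph:
    def __init__(self):
        self.variables = {}      # name -> list of possible values
        self.factors = []        # (variable names, weight function)

    def add_variable(self, name, domain):
        self.variables[name] = list(domain)

    def add_factor(self, names, table):
        """table maps a tuple of values (one per name) to a non-negative weight."""
        self.factors.append((names, table))

    def marginal(self, name):
        """Exact marginal of one variable, summing over all joint assignments."""
        order = list(self.variables)
        scores = {v: 0.0 for v in self.variables[name]}
        for assignment in product(*(self.variables[v] for v in order)):
            world = dict(zip(order, assignment))
            weight = 1.0
            for names, table in self.factors:
                weight *= table(tuple(world[n] for n in names))
            scores[world[name]] += weight
        z = sum(scores.values())
        return {v: s / z for v, s in scores.items()}

# Example: two binary variables that prefer to agree, with a prior pulling a toward 1.
g = FactorGraph()
g.add_variable("a", [0, 1])
g.add_variable("b", [0, 1])
g.add_factor(("a",), lambda v: {(0,): 0.3, (1,): 0.7}[v])
g.add_factor(("a", "b"), lambda v: 0.9 if v[0] == v[1] else 0.1)
print(g.marginal("b"))  # b is likelier to be 1 because it tends to agree with a
```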

Related research

05/08/2019  Research Note: An Open Source Bluespec Compiler
In this Research Note we report on an open-source compiler for the Blues...

12/12/2013  Augur: a Modeling Language for Data-Parallel Probabilistic Inference
It is time-consuming and error-prone to implement inference procedures f...

03/05/2020  Compiling Neural Networks for a Computational Memory Accelerator
Computational memory (CM) is a promising approach for accelerating infer...

09/12/2022  BayesLDM: A Domain-Specific Language for Probabilistic Modeling of Longitudinal Data
In this paper we present BayesLDM, a system for Bayesian longitudinal da...

06/02/2020  Vyasa: A High-Performance Vectorizing Compiler for Tensor Convolutions on the Xilinx AI Engine
Xilinx's AI Engine is a recent industry example of energy-efficient vect...

04/05/2018  Hypertree Decompositions Revisited for PGMs
We revisit the classical problem of exact inference on probabilistic gra...

08/26/2022  An Open-Source P4_16 Compiler Backend for Reconfigurable Match-Action Table Switches
The P4 language has become the dominant choice for programming the recon...
