VIBNN: Hardware Acceleration of Bayesian Neural Networks

02/02/2018
by Ruizhe Cai, et al.

Bayesian Neural Networks (BNNs) have been proposed to address the problem of model uncertainty in training and inference. By associating weights with probability distributions, BNNs mitigate the overfitting commonly seen in conventional neural networks and allow training on small datasets through variational inference. This process makes heavy use of Gaussian random variables and therefore requires a well-optimized Gaussian Random Number Generator (GRNG); the high hardware cost of conventional GRNGs makes hardware implementation of BNNs challenging. In this paper, we propose VIBNN, an FPGA-based hardware accelerator for variational inference on BNNs. We explore the design space for the massive number of Gaussian variable sampling tasks in BNNs. Specifically, we introduce two high-performance Gaussian (pseudo) random number generators: (1) the RAM-based Linear Feedback Gaussian Random Number Generator (RLF-GRNG), inspired by the properties of the binomial distribution and linear feedback logic; and (2) the Bayesian Neural Network-oriented Wallace Gaussian Random Number Generator. To achieve high scalability and efficient memory access, we propose a deeply pipelined accelerator architecture with fast execution and good hardware utilization. Experimental results demonstrate that the proposed VIBNN implementations on an FPGA can achieve a throughput of 321,543.4 Images/s and an energy efficiency of up to 52,694.8 Images/J while maintaining accuracy similar to that of their software counterpart.
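The binomial-distribution idea behind the RLF-GRNG can be illustrated in software: summing many fair Bernoulli bits yields a Binomial(n, 0.5) value, which by the central limit theorem approximates a Gaussian and can be normalized to zero mean and unit variance. The sketch below is a minimal illustration of this principle, not the paper's actual RAM-based hardware design; the function name and bit count are assumptions for demonstration.

```python
import random

def clt_gaussian(num_bits: int = 64, rng=random.random) -> float:
    """Approximate a standard Gaussian sample by summing fair random bits.

    The sum of num_bits Bernoulli(0.5) draws is Binomial(num_bits, 0.5),
    which the central limit theorem says is approximately
    N(num_bits / 2, num_bits / 4). Normalizing gives roughly N(0, 1).
    """
    # Count how many of the num_bits fair coin flips come up 1.
    bit_sum = sum(1 for _ in range(num_bits) if rng() < 0.5)
    mean = num_bits / 2.0
    std = (num_bits / 4.0) ** 0.5
    return (bit_sum - mean) / std
```

In hardware, the per-sample random bits would come from linear feedback shift registers rather than a software PRNG, and larger bit counts trade area for a closer Gaussian approximation; the Wallace method avoids bit summation entirely by repeatedly applying orthogonal transforms to a pool of existing Gaussian samples.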


