Scaling Binarized Neural Networks on Reconfigurable Logic

01/12/2017
by Nicholas J. Fraser, et al.

Binarized neural networks (BNNs) are gaining interest in the deep learning community due to their significantly lower computational and memory cost. They are particularly well suited to reconfigurable logic devices, which contain an abundance of fine-grained compute resources and can result in smaller, lower-power implementations, or conversely in higher classification rates. Towards this end, the FINN framework was recently proposed for building fast and flexible field-programmable gate array (FPGA) accelerators for BNNs. FINN utilized a novel set of optimizations that enable efficient mapping of BNNs to hardware and implemented fully connected, non-padded convolutional, and pooling layers, with per-layer compute resources tailored to user-provided throughput requirements. However, FINN was not evaluated on larger topologies due to the size of the chosen FPGA, and exhibited decreased accuracy due to the lack of padding. In this paper, we improve upon FINN to show how padding can be employed on BNNs while still maintaining a 1-bit datapath and high accuracy. Based on this technique, we demonstrate numerous experiments to illustrate the flexibility and scalability of the approach. In particular, we show that a large BNN requiring 1.2 billion operations per frame running on an ADM-PCIE-8K5 platform can classify images at 12 kFPS with 671 µs latency while drawing less than 41 W board power and classifying CIFAR-10 images at 88.7% accuracy. Our implementation of this network achieves 14.8 trillion operations per second. We believe this is the fastest classification rate reported to date on this benchmark at this level of accuracy.
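To make the padding idea concrete, below is a minimal NumPy sketch of an XNOR-popcount convolution with single-bit padding. This is an illustration, not the authors' implementation: it assumes the usual bipolar encoding in which the values {-1, +1} are stored as the bits {0, 1}, so a dot product of length n reduces to 2·popcount(XNOR(a, w)) − n, and padding the feature map with 0-bits amounts to padding with the value −1 rather than 0, keeping the datapath 1 bit wide (conventional zero-padding would introduce a third value and require a 2-bit representation). The helper names binarize, xnor_popcount_dot, and conv2d_bnn are hypothetical.

```python
import numpy as np

def binarize(x):
    # Map real values to bipolar {-1, +1}, stored as bits {0, 1}.
    return (x >= 0).astype(np.uint8)

def xnor_popcount_dot(a_bits, w_bits):
    # Bipolar dot product from 1-bit encodings: each matching bit pair
    # contributes +1, each mismatch -1, so dot = 2*matches - n.
    matches = int(np.sum(a_bits == w_bits))
    n = a_bits.size
    return 2 * matches - n

def conv2d_bnn(act_bits, w_bits):
    # act_bits: (H, W) 1-bit feature map; w_bits: (k, k) 1-bit kernel, k odd.
    k = w_bits.shape[0]
    p = k // 2
    # Pad with 0-bits, i.e. the value -1, so the datapath stays 1 bit wide.
    padded = np.pad(act_bits, p, constant_values=0)
    H, W = act_bits.shape
    out = np.empty((H, W), dtype=np.int32)
    for i in range(H):
        for j in range(W):
            out[i, j] = xnor_popcount_dot(padded[i:i + k, j:j + k], w_bits)
    return out

# Example: "same"-size integer pre-activations from a 3x3 binarized kernel.
rng = np.random.default_rng(0)
act = binarize(rng.standard_normal((6, 6)))
w = binarize(rng.standard_normal((3, 3)))
print(conv2d_bnn(act, w))
```

In hardware, this popcount-based dot product maps onto LUTs rather than DSP multipliers, which is a large part of why BNNs are such a good fit for the fine-grained resources of FPGAs.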

Related research

- FINN: A Framework for Fast, Scalable Binarized Neural Network Inference (12/01/2016)
- MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency (02/27/2020)
- Reconfigurable co-processor architecture with limited numerical precision to accelerate deep convolutional neural networks (08/21/2021)
- NullaNet Tiny: Ultra-low-latency DNN Inference Through Fixed-function Combinational Logic (04/07/2021)
- Accelerating ODE-Based Neural Networks on Low-Cost FPGAs (12/31/2020)
- A Holistic Approach for Optimizing DSP Block Utilization of a CNN implementation on FPGA (03/21/2017)
- Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference (12/04/2021)
