Unrolling Ternary Neural Networks

09/09/2019
by   Stephen Tridgell, et al.

The computational complexity of neural networks for large-scale or real-time applications necessitates hardware acceleration. Most approaches assume that the network architecture and parameters are unknown at design time, permitting usage in a large number of applications. This paper demonstrates, for the case where the neural network architecture and ternary weight values are known a priori, that extremely high throughput implementations of neural network inference can be made by customising the datapath and routing to remove unnecessary computations and data movement. This approach is ideally suited to FPGA implementations, as a specialized implementation of a trained network improves efficiency while still retaining generality through the reconfigurability of the FPGA. A VGG-style network with ternary weights and fixed-point activations is implemented for the CIFAR10 dataset on Amazon's AWS F1 instance. This paper demonstrates how to remove 90% of the operations in the convolutional layers by exploiting sparsity and compile-time optimizations. The hardware implementation achieves a throughput of 90.9 +/- 0.1 k frames/s with a latency of only 29 us, which is the fastest CNN inference implementation reported so far on an FPGA.
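As a rough illustration of the compile-time specialization the abstract describes (not the paper's actual FPGA toolflow), the Python sketch below shows how a dot product with ternary weights known a priori can be reduced to only the adds and subtracts for the nonzero weights, so that zero weights generate no work at all. All function names and values here are hypothetical.

```python
# Illustrative sketch: with ternary weights fixed at "compile time", each dot
# product can be specialized so zero weights cost nothing and +1/-1 weights
# become plain adds/subtracts with no multipliers.
import numpy as np

def specialize_dot_product(ternary_weights):
    """Return a function computing dot(w, x) using only the nonzero weights.

    `ternary_weights` is a 1-D array with entries in {-1, 0, +1}. The indices
    of the +1 and -1 weights are resolved once, here, mirroring how a fully
    unrolled datapath would instantiate only the adders it actually needs.
    """
    plus_idx = [i for i, w in enumerate(ternary_weights) if w == 1]
    minus_idx = [i for i, w in enumerate(ternary_weights) if w == -1]

    def dot(x):
        # Only adds and subtracts remain; zero weights were dropped above.
        return sum(x[i] for i in plus_idx) - sum(x[i] for i in minus_idx)

    return dot

# Example: a weight vector that is ~90% zeros needs only a handful of adders.
rng = np.random.default_rng(0)
w = rng.choice([-1, 0, 1], size=27, p=[0.05, 0.9, 0.05])
x = rng.integers(-8, 8, size=27)          # fixed-point-style activations
f = specialize_dot_product(w)
assert f(x) == int(np.dot(w, x))
```

In an unrolled hardware datapath the same idea goes further: each remaining add or subtract becomes a dedicated operator, so weight sparsity translates directly into saved logic and routing rather than merely saved instructions.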


research
08/29/2018

FPGA Implementation of Convolutional Neural Networks with Fixed-Point Calculations

Neural network-based methods for image processing are becoming widely us...
research
12/04/2018

Pre-Defined Sparse Neural Networks with Hardware Acceleration

Neural networks have proven to be extremely powerful tools for modern ar...
research
12/01/2016

FINN: A Framework for Fast, Scalable Binarized Neural Network Inference

Research has shown that convolutional neural networks contain significan...
research
07/07/2023

BlendNet: Design and Optimization of a Neural Network-Based Inference Engine Blending Binary and Fixed-Point Convolutions

This paper presents BlendNet, a neural network architecture employing a ...
research
06/13/2020

RoadNet-RT: High Throughput CNN Architecture and SoC Design for Real-Time Road Segmentation

In recent years, convolutional neural network has gained popularity in m...
research
12/15/2020

Optimization Techniques to Improve Inference Performance of a Forward Propagating Neural Network on an FPGA

This paper describes an optimized implementation of a Forward Propagatin...
research
04/06/2020

LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications

Deployment of deep neural networks for applications that require very hi...
