LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference

10/24/2019
by Erwei Wang, et al.

Research has shown that deep neural networks contain significant redundancy, and thus that high classification accuracy can be achieved even when weights and activations are quantised down to binary values. Network binarisation on FPGAs greatly increases area efficiency by replacing resource-hungry multipliers with lightweight XNOR gates. However, an FPGA's fundamental building block, the K-LUT, is capable of implementing far more than an XNOR: it can perform any K-input Boolean operation. Inspired by this observation, we propose LUTNet, an end-to-end hardware-software framework for the construction of area-efficient FPGA-based neural network accelerators using the native LUTs as inference operators. We describe the realisation of both unrolled and tiled LUTNet architectures, with the latter facilitating smaller, less power-hungry deployment over the former while sacrificing area and energy efficiency along with throughput. For both varieties, we demonstrate that the exploitation of LUT flexibility allows for far heavier pruning than possible in prior works, resulting in significant area savings while achieving comparable accuracy. Against the state-of-the-art binarised neural network implementation, we achieve up to twice the area efficiency for several standard network models when inferencing popular datasets. We also demonstrate that even greater energy efficiency improvements are obtainable.
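To make the core observation concrete, below is a minimal Python sketch (not the authors' implementation; the function names, array shapes, and truth tables are illustrative assumptions) contrasting the XNOR-popcount dot product used in binarised networks with a K-input LUT node, which can realise any of the 2^(2^K) Boolean functions of its K inputs:

```python
import numpy as np

rng = np.random.default_rng(0)

def xnor_popcount(weights, activations):
    """Binary dot product: every +/-1 multiply collapses to an XNOR gate."""
    # Encode +1 as True and -1 as False; XNOR is then bit equality.
    agree = (weights > 0) == (activations > 0)
    # Popcount of agreements, rescaled to the +/-1 dot-product value.
    return 2 * int(agree.sum()) - len(weights)

def k_lut_node(truth_table, inputs):
    """A K-input LUT: the K input bits simply index a 2^K-entry truth
    table, so it can realise ANY Boolean function of its inputs."""
    index = 0
    for bit in inputs:
        index = (index << 1) | int(bit)
    return int(truth_table[index])

w = rng.choice([-1, 1], size=8)
x = rng.choice([-1, 1], size=8)
print("XNOR-popcount dot product:", xnor_popcount(w, x))

# A 2-LUT programmed as XNOR (truth table [1, 0, 0, 1]) reproduces one
# +/-1 multiply, showing XNOR is just one function a LUT can hold.
xnor_table = [1, 0, 0, 1]
a, b = w[0] > 0, x[0] > 0
assert k_lut_node(xnor_table, [a, b]) == int(a == b)

# The same hardware could instead hold an arbitrary 4-input function over
# a pruned subset of activations -- the flexibility LUTNet trains
# directly. This random table merely stands in for learned LUT contents.
learned_table = rng.integers(0, 2, size=2 ** 4)
print("4-LUT output:", k_lut_node(learned_table, x[:4] > 0))
```

On an FPGA this per-node flexibility comes at no extra cost: a K-LUT occupies the same resource whether it holds an XNOR or any other K-input function, which is why exploiting it allows the heavier pruning and area savings the abstract describes.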
