LUTNet: Rethinking Inference in FPGA Soft Logic

04/01/2019
by Erwei Wang, et al.

Research has shown that deep neural networks contain significant redundancy, and that high classification accuracies can be achieved even when weights and activations are quantised down to binary values. Network binarisation on FPGAs greatly increases area efficiency by replacing resource-hungry multipliers with lightweight XNOR gates. However, an FPGA's fundamental building block, the K-LUT, is capable of implementing far more than an XNOR: it can perform any K-input Boolean operation. Inspired by this observation, we propose LUTNet, an end-to-end hardware-software framework for the construction of area-efficient FPGA-based neural network accelerators using the native LUTs as inference operators. We demonstrate that the exploitation of LUT flexibility allows for far heavier pruning than was possible in prior works, resulting in significant area savings while achieving comparable accuracy. Against the state-of-the-art binarised neural network implementation, we achieve twice the area efficiency for several standard network models when performing inference on popular datasets. We also demonstrate that even greater energy efficiency improvements are obtainable.
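The contrast the abstract draws can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the function names, the choice of K = 4, and the example truth table are assumptions made here; in LUTNet the truth tables are learned during training and mapped onto the FPGA's physical LUTs.

```python
def xnor_popcount_dot(weights, activations):
    """Binarised dot product: each multiply becomes an XNOR gate and the
    sum becomes a popcount. Bits in {0, 1} encode the values {-1, +1}."""
    matches = sum(w == a for w, a in zip(weights, activations))  # XNOR is 1 when bits agree
    return 2 * matches - len(weights)  # recover the signed sum from the popcount


def lut_node(truth_table, inputs):
    """LUTNet-style operator: a K-LUT evaluates an arbitrary Boolean
    function of its K inputs by indexing a 2**K-entry truth table."""
    index = sum(bit << i for i, bit in enumerate(inputs))
    return truth_table[index]


# A 2-input XNOR is one fixed function; a 4-LUT can realise any of the
# 2**16 Boolean functions of 4 inputs. The truth table below is a
# hypothetical learned configuration, shown only for illustration.
learned_tt = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 0, 0, 0, 1, 1]

print(xnor_popcount_dot([1, 0, 1], [1, 1, 1]))  # -> 1 (two agreements, one disagreement)
print(lut_node(learned_tt, [1, 0, 1, 1]))       # -> learned_tt[13] = 0
```

Intuitively, because a single K-LUT can absorb the work of several XNOR gates and part of the accumulation logic, fewer operators are needed per output, which is what makes the heavier pruning described in the abstract possible.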


Related research

10/24/2019 · LUTNet: Learning FPGA Configurations for Highly Efficient Neural Network Inference
Research has shown that deep neural networks contain significant redunda...

12/04/2021 · Logic Shrinkage: Learned FPGA Netlist Sparsity for Efficient Neural Network Inference
FPGA-specific DNN architectures using the native LUTs as independently t...

12/01/2016 · FINN: A Framework for Fast, Scalable Binarized Neural Network Inference
Research has shown that convolutional neural networks contain significan...

12/24/2017 · A Survey of FPGA Based Neural Network Accelerator
Recent research on neural networks has shown great advantages in comput...

09/05/2023 · PolyLUT: Learning Piecewise Polynomials for Ultra-Low Latency FPGA LUT-based Inference
Field-programmable gate arrays (FPGAs) are widely used to implement deep...

07/07/2023 · BlendNet: Design and Optimization of a Neural Network-Based Inference Engine Blending Binary and Fixed-Point Convolutions
This paper presents BlendNet, a neural network architecture employing a ...
