Benchmarking Quantized Neural Networks on FPGAs with FINN

02/02/2021
by Quentin Ducasse, et al.

The ever-growing cost of both training and inference for state-of-the-art neural networks has led the literature to explore ways to reduce resource usage with minimal impact on accuracy. Using lower-precision arithmetic is one such approach, typically incurring only a negligible loss in accuracy. While training neural networks may require a powerful setup, a deployed network must be able to run on low-power, low-resource hardware. Reconfigurable architectures have proven more efficient and flexible than GPUs when targeting a specific application. This article assesses the impact of mixed precision on neural networks deployed on FPGAs. While several frameworks provide tools to deploy reduced-precision neural networks, few of them evaluate the impact of quantization or the quality of the framework itself. FINN and Brevitas, two frameworks from Xilinx labs, are used to assess the impact of quantizing networks to 2- to 8-bit precision under several parallelization configurations. Equivalent accuracy can be obtained with lower-precision representations given enough training. Moreover, the compressed network can be parallelized more aggressively, making the deployed network's throughput up to 62 times higher. The benchmark set up in this work is available in a public repository (https://github.com/QDucasse/nn_benchmark).
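To make the idea of 2- to 8-bit weight quantization concrete, here is a minimal sketch of uniform symmetric quantization in plain Python. This is only an illustration of the general technique, not the FINN/Brevitas implementation; the function name and the example weights are invented for this sketch.

```python
def quantize_symmetric(weights, n_bits):
    """Uniform symmetric quantization of a list of float weights to n_bits.

    Illustrative sketch only; not the FINN/Brevitas implementation.
    """
    # Largest representable integer level, e.g. 127 for 8 bits, 1 for 2 bits.
    qmax = 2 ** (n_bits - 1) - 1
    # One scale factor shared by all weights (per-tensor quantization).
    scale = max(abs(w) for w in weights) / qmax
    # Round each weight to its nearest integer level, then map back to the
    # real domain ("fake quantization", as used during quantization-aware
    # training).
    quantized = [round(w / scale) * scale for w in weights]
    return quantized, scale

# Example: quantizing four weights to 4 bits (levels in [-7, 7] * scale).
w = [0.9, -0.35, 0.05, -0.7]
qw, s = quantize_symmetric(w, 4)
```

Lower bit widths shrink both the storage per weight and the multiplier logic on the FPGA, which is what frees resources for the wider parallelization (and hence the higher throughput) reported in the article.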


