Full-stack Optimization for Accelerating CNNs with FPGA Validation

05/01/2019
by   Bradley McDanel, et al.

We present a full-stack optimization framework for accelerating inference of CNNs (Convolutional Neural Networks) and validate the approach with field-programmable gate array (FPGA) implementations. By jointly optimizing CNN models, computing architectures, and hardware implementations, our full-stack approach achieves unprecedented performance in the trade-off space characterized by inference latency, energy efficiency, hardware utilization, and inference accuracy. As a validation vehicle, we have implemented a 170 MHz FPGA inference chip that achieves 2.28 ms latency on the ImageNet benchmark. This latency is among the lowest reported in the literature at comparable accuracy; moreover, our chip stands out with 9x higher energy efficiency than other implementations achieving similar latency. A highlight of our full-stack approach, which contributes to the achieved high energy efficiency, is an efficient Selector-Accumulator (SAC) architecture for implementing the multiplier-accumulator (MAC) operation present in any digital CNN hardware. For instance, compared to an FPGA implementation of a traditional 8-bit MAC, SAC substantially reduces the required hardware resources (4.85x fewer look-up tables) and power consumption (2.48x).
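The MAC-versus-SAC contrast can be illustrated with a minimal sketch. The abstract does not spell out how SAC works internally, so the following assumes (as an illustration, not the paper's exact design) that weights are quantized to signed powers of two, which lets the multiplier be replaced by selecting a bit-shifted copy of the input before accumulating:

```python
# Hedged sketch: conventional MAC vs. a SAC-style select-and-accumulate.
# Assumption (not stated in the abstract): weights are signed powers of
# two, w = sign * 2**shift, so multiplication reduces to a shift selection.

def mac(acc, x, w):
    """Conventional multiplier-accumulator: acc += x * w."""
    return acc + x * w

def sac(acc, x, shift, sign=1):
    """Selector-accumulator: instead of multiplying, select a shifted
    copy of x (equivalent weight: sign * 2**shift) and accumulate it."""
    return acc + sign * (x << shift)

# For power-of-two weights the two operations agree:
x = 13
for shift in range(4):
    assert mac(0, x, 2 ** shift) == sac(0, x, shift)
```

In hardware, the shift in this sketch costs only wiring and a selector rather than a full multiplier array, which is consistent with the reported LUT and power savings, though the actual circuit-level design is described in the full paper.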


Related research

01/17/2023  An Energy-Efficient Reconfigurable Autoencoder Implementation on FPGA
Autoencoders are unsupervised neural networks that are used to process a...

02/27/2020  MajorityNets: BNNs Utilising Approximate Popcount for Improved Efficiency
Binarized neural networks (BNNs) have shown exciting potential for utili...

04/24/2023  Design optimization for high-performance computing using FPGA
Reconfigurable architectures like Field Programmable Gate Arrays (FPGAs)...

11/07/2018  Packing Sparse Convolutional Neural Networks for Efficient Systolic Array Implementations: Column Combining Under Joint Optimization
This paper describes a novel approach of packing sparse convolutional ne...

01/17/2022  Low hardware consumption, resolution-configurable Gray code oscillator time-to-digital converters implemented in 16nm, 20nm and 28nm FPGAs
This paper presents a low hardware consumption, resolution-configurable,...

03/24/2020  Evolutionary Bin Packing for Memory-Efficient Dataflow Inference Acceleration on FPGA
Convolutional neural network (CNN) dataflow inference accelerators imple...

09/02/2022  PulseDL-II: A System-on-Chip Neural Network Accelerator for Timing and Energy Extraction of Nuclear Detector Signals
Front-end electronics equipped with high-speed digitizers are being used...
