Throughput Optimizations for FPGA-based Deep Neural Network Inference

09/28/2018
by   Thorbjörn Posewsky, et al.

Deep neural networks are an extremely successful and widely used technique for various pattern recognition and machine learning tasks. Due to power and resource constraints, these computationally intensive networks are difficult to implement in embedded systems. Yet the number of embedded applications that could benefit from deep neural networks is rapidly rising. In this paper, we propose novel architectures for the inference of previously learned, arbitrary deep neural networks on FPGA-based SoCs that overcome these limitations. Our key contributions are the reuse of previously transferred weight matrices across multiple input samples, which we refer to as batch processing, and the use of compressed weight matrices, also known as pruning. We present an extensive evaluation of both optimizations. The two techniques significantly reduce data transfers and speed up network inference by one order of magnitude. At the same time, we surpass the data throughput of fully-featured x86-based systems while using only a fraction of their energy consumption.
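The two ideas in the abstract can be illustrated in a few lines. The sketch below is a hypothetical NumPy illustration, not the paper's FPGA implementation: batch processing amortizes the cost of transferring a layer's weight matrix by reusing it across several input samples, and magnitude pruning zeroes out small weights so the matrix can be stored and transferred in a compressed form. All function names and the threshold value are illustrative assumptions.

```python
import numpy as np

def layer_single(W, b, x):
    # One sample: the weight matrix W must be (re)transferred for every call.
    return np.maximum(W @ x + b, 0.0)  # fully-connected layer with ReLU

def layer_batched(W, b, X):
    # Batch of samples (columns of X): W is transferred once and reused.
    return np.maximum(W @ X + b[:, None], 0.0)

def prune(W, threshold):
    # Magnitude pruning: zero out weights below the threshold so W can be
    # stored in a sparse, compressed representation.
    return np.where(np.abs(W) >= threshold, W, 0.0)

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))
b = rng.standard_normal(4)
X = rng.standard_normal((8, 3))  # a batch of 3 input samples

# Batched inference matches per-sample inference, with far fewer
# weight transfers per sample.
batched = layer_batched(W, b, X)
singles = np.column_stack([layer_single(W, b, X[:, i]) for i in range(3)])
assert np.allclose(batched, singles)

Wp = prune(W, 0.5)
print("kept weights:", np.count_nonzero(Wp), "of", W.size)
```

On an FPGA, the same effect means that each weight matrix streamed from external memory serves a whole batch of inputs, and a pruned matrix needs fewer words to be streamed in the first place.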


