Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs

10/21/2019
by Yiren Zhao, et al.

Modern deep Convolutional Neural Networks (CNNs) are computationally demanding, yet real applications often require high throughput and low latency. To help tackle these problems, we propose Tomato, a framework designed to automate the process of generating efficient CNN accelerators. The generated design is pipelined, and each convolution layer uses a different arithmetic at a different precision. Using Tomato, we showcase state-of-the-art multi-precision multi-arithmetic networks, including MobileNet-V1, running on FPGAs. To our knowledge, this is the first multi-precision multi-arithmetic auto-generation framework for CNNs. In software, Tomato fine-tunes pretrained networks to use a mixture of short powers-of-2 and fixed-point weights with a minimal loss in classification accuracy. The fine-tuned parameters are combined with templated hardware designs to automatically produce efficient inference circuits on FPGAs. We demonstrate how our approach significantly reduces model sizes and computational complexity, and, for the first time, permits us to pack a complete ImageNet network onto a single FPGA without accessing off-chip memories. Furthermore, we show how Tomato produces implementations of networks of various sizes running on single or multiple FPGAs. To the best of our knowledge, our automatically generated accelerators outperform the closest FPGA-based competitors by at least 2-4x in both latency and throughput; the generated accelerator runs ImageNet classification at a rate of more than 3000 frames per second.
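The abstract describes quantising each convolution layer's weights to either short powers-of-2 (so multiplications become bit shifts in hardware) or low-precision fixed-point, followed by fine-tuning to recover accuracy. The sketch below is not Tomato's actual code; the function names, bit widths, and normalisation assumptions are illustrative only. It shows how the two rounding schemes might be applied to different layers of a pretrained network.

```python
# Minimal sketch (illustrative, not Tomato's implementation) of the two
# weight quantisers the paper describes: short powers-of-2 and fixed-point.
import numpy as np

def quantize_power_of_2(w, exp_bits=3):
    """Round each weight to the nearest signed power of two.

    exp_bits limits how many distinct exponents are representable,
    which is what makes the representation 'short'. Assumes weights
    are roughly normalised into (-1, 1).
    """
    sign = np.sign(w)
    magnitude = np.abs(w)
    nonzero = magnitude > 0          # avoid log2(0); zeros stay zero
    exp = np.zeros_like(w)
    exp[nonzero] = np.round(np.log2(magnitude[nonzero]))
    exp = np.clip(exp, -(2 ** exp_bits) + 1, 0)
    q = sign * (2.0 ** exp)
    q[~nonzero] = 0.0
    return q

def quantize_fixed_point(w, total_bits=8, frac_bits=6):
    """Round each weight to signed fixed-point with frac_bits fraction bits."""
    scale = 2.0 ** frac_bits
    qmin = -(2 ** (total_bits - 1))
    qmax = 2 ** (total_bits - 1) - 1
    return np.clip(np.round(w * scale), qmin, qmax) / scale

# Example: mix arithmetics per layer, as the framework allows.
rng = np.random.default_rng(0)
w_conv1 = rng.normal(0, 0.1, size=(3, 3, 3, 16))
w_conv2 = rng.normal(0, 0.1, size=(3, 3, 16, 32))
w_conv1_q = quantize_power_of_2(w_conv1, exp_bits=3)   # shift-based layer
w_conv2_q = quantize_fixed_point(w_conv2, 8, 6)        # fixed-point layer
```

In a per-layer scheme like this, powers-of-2 weights let the generated circuit replace multipliers with shifters, while fixed-point is retained for layers that are more sensitive to quantisation; fine-tuning after rounding (not shown) recovers most of the lost accuracy.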
