LogicNets: Co-Designed Neural Networks and Circuits for Extreme-Throughput Applications

04/06/2020
by Yaman Umuroglu et al.

Deployment of deep neural networks for applications that require very high throughput or extremely low latency is a severe computational challenge, further exacerbated by inefficiencies in mapping the computation to hardware. We present a novel method for designing neural network topologies that directly map to a highly efficient FPGA implementation. By exploiting the equivalence of artificial neurons with quantized inputs/outputs and truth tables, we can train quantized neural networks that can be directly converted to a netlist of truth tables, and subsequently deployed as a highly pipelinable, massively parallel FPGA circuit. However, the neural network topology requires careful consideration since the hardware cost of truth tables grows exponentially with neuron fan-in. To obtain smaller networks where the whole netlist can be placed-and-routed onto a single FPGA, we derive a fan-in driven hardware cost model to guide topology design, and combine high sparsity with low-bit activation quantization to limit the neuron fan-in. We evaluate our approach on two tasks with very high intrinsic throughput requirements in high-energy physics and network intrusion detection. We show that the combination of sparsity and low-bit activation quantization results in high-speed circuits with small logic depth and low LUT cost, demonstrating competitive accuracy with less than 15 ns of inference latency and throughput in the hundreds of millions of inferences per second.
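To make the fan-in argument concrete: a neuron whose F inputs and output are quantized to β bits is just a Boolean function of F·β input bits, so its truth table has 2^(F·β) rows. For example, fan-in 3 with 2-bit activations needs 2^6 = 64 rows, while fan-in 7 with 4-bit activations already needs 2^28 (roughly 270 million). The Python sketch below enumerates such a table for a toy neuron; the helper names and the quantization ranges are illustrative assumptions, not code from the paper.

    import itertools

    def quantize(x, bits, lo, hi):
        # Map a real value in [lo, hi] to an integer code in [0, 2**bits - 1].
        levels = 2 ** bits
        step = (hi - lo) / (levels - 1)
        x = min(max(x, lo), hi)
        return int(round((x - lo) / step))

    def neuron_truth_table(weights, bits):
        # Enumerate the output code for every combination of quantized input
        # codes. With fan-in F = len(weights) this loop runs 2 ** (F * bits)
        # times, which is exactly the exponential truth-table cost the paper
        # limits via sparsity and low-bit activations.
        fan_in = len(weights)
        levels = 2 ** bits
        step = 2.0 / (levels - 1)          # input codes decode to [-1, 1]
        table = {}
        for codes in itertools.product(range(levels), repeat=fan_in):
            xs = [-1.0 + c * step for c in codes]
            pre_act = sum(w * x for w, x in zip(weights, xs))
            act = max(0.0, pre_act)        # ReLU before re-quantizing
            # Output clip range (assumed here): the neuron's maximum activation.
            table[codes] = quantize(act, bits, 0.0, sum(abs(w) for w in weights))
        return table

    # Fan-in 3 with 2-bit activations -> 2 ** 6 = 64 truth-table rows.
    tt = neuron_truth_table([0.5, -0.25, 1.0], bits=2)
    print(len(tt))   # 64

Because the row count is exponential in F·β, enforcing high sparsity (small F) together with low-bit activations (small β), as the paper proposes, is what keeps each neuron's table small enough to map to a handful of physical LUTs.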


Related research

- Exposing Hardware Building Blocks to Machine Learning Frameworks (04/10/2020): There are a plethora of applications that demand high throughput and low...
- HiKonv: High Throughput Quantized Convolution With Novel Bit-wise Management and Computation (12/28/2021): Quantization for Convolutional Neural Network (CNN) has shown significan...
- Dopant Network Processing Units: Towards Efficient Neural-network Emulators with High-capacity Nanoelectronic Nodes (07/24/2020): The rapidly growing computational demands of deep neural networks requir...
- HiKonv: Maximizing the Throughput of Quantized Convolution With Novel Bit-wise Management and Computation (07/22/2022): Quantization for CNN has shown significant progress with the intention o...
- Unrolling Ternary Neural Networks (09/09/2019): The computational complexity of neural networks for large scale or real-...
- Design Flow of Accelerating Hybrid Extremely Low Bit-width Neural Network in Embedded FPGA (07/31/2018): Neural network accelerators with low latency and low energy consumption ...
- Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference (03/05/2022): Multiparty computation approaches to secure neural network inference tra...
