XNOR Neural Engine: a Hardware Accelerator IP for 21.6 fJ/op Binary Neural Network Inference

07/09/2018
by Francesco Conti, et al.

Binary Neural Networks (BNNs) promise to deliver accuracy comparable to conventional deep neural networks at a fraction of the cost in memory and energy. In this paper, we introduce the XNOR Neural Engine (XNE), a fully digital, configurable hardware accelerator IP for BNNs, integrated within a microcontroller unit (MCU) equipped with an autonomous I/O subsystem and hybrid SRAM / standard-cell memory. The XNE can compute convolutional and dense layers fully autonomously, or in cooperation with the core in the MCU to realize more complex behaviors. We show post-synthesis results in 65nm and 22nm technology for the XNE IP, and post-layout results in 22nm for the full MCU, indicating that this system can drop the energy cost to 21.6 fJ per binary operation at 0.4V while remaining flexible and performant enough to execute state-of-the-art BNN topologies such as ResNet-34 in less than 2.2 mJ per frame at 8.9 fps.
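The reason BNNs are so cheap in hardware is that a multiply-accumulate over {-1, +1} values collapses into an XNOR followed by a popcount. The paper's accelerator is RTL; as a purely illustrative sketch (not the XNE design), the core arithmetic can be written as:

```python
# Illustrative sketch of the XNOR-popcount trick underlying BNN
# accelerators such as the XNE. Function name and bit encoding are
# assumptions for this example, not part of the paper.

def xnor_popcount_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two n-element {-1, +1} vectors packed into
    integers, with bit i = 1 encoding +1 and bit i = 0 encoding -1."""
    mask = (1 << n) - 1
    # XNOR marks positions where the two vectors agree; popcount counts them.
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")
    # Each match contributes +1 to the dot product, each mismatch -1.
    return 2 * matches - n

# a = [+1, -1, +1, +1] -> 0b1101;  b = [+1, +1, -1, +1] -> 0b1011
print(xnor_popcount_dot(0b1101, 0b1011, 4))  # -> 0 (2 matches, 2 mismatches)
```

In hardware, the XNOR gates and the popcount tree replace full multipliers and wide adders, which is where the fJ-per-operation figures come from.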


Related research

02/04/2021
A 5 μW Standard Cell Memory-based Configurable Hyperdimensional Computing Accelerator for Always-on Smart Sensing
Hyperdimensional computing (HDC) is a brain-inspired computing paradigm ...

05/03/2020
Lupulus: A Flexible Hardware Accelerator for Neural Networks
Neural networks have become indispensable for a wide range of applicatio...

03/05/2020
Compiling Neural Networks for a Computational Memory Accelerator
Computational memory (CM) is a promising approach for accelerating infer...

05/12/2020
ChewBaccaNN: A Flexible 223 TOPS/W BNN Accelerator
Binary Neural Networks enable smart IoT devices, as they significantly r...

11/03/2020
CUTIE: Beyond PetaOp/s/W Ternary DNN Inference Acceleration with Better-than-Binary Energy Efficiency
We present a 3.1 POp/s/W fully digital hardware accelerator for ternary ...

11/03/2017
ResBinNet: Residual Binary Neural Network
Recent efforts on training light-weight binary neural networks offer pro...

11/18/2020
Larq Compute Engine: Design, Benchmark, and Deploy State-of-the-Art Binarized Neural Networks
We introduce Larq Compute Engine, the world's fastest Binarized Neural N...
