Vau da muntanialas: Energy-efficient multi-die scalable acceleration of RNN inference

02/14/2022
by Gianna Paulin, et al.

Recurrent neural networks (RNNs) such as Long Short-Term Memories (LSTMs) learn temporal dependencies by keeping an internal state, making them well suited to time-series problems such as speech recognition. However, the output-to-input feedback creates distinctive memory-bandwidth and scalability challenges in designing accelerators for RNNs. We present Muntaniala, an RNN accelerator architecture for LSTM inference with a silicon-measured energy efficiency of 3.25 TOP/s/W and a performance of 30.53 GOP/s in UMC 65 nm technology. Muntaniala's scalable design allows running large RNN models by combining multiple tiles in a systolic array. We keep all parameters stationary on every die in the array, drastically reducing I/O communication to loading new features and sharing partial results with other dies. To quantify the overall system power, including I/O power, we built Vau da Muntanialas, to the best of our knowledge the first demonstration of a systolic multi-chip-on-PCB array of RNN accelerators. Our multi-die prototype performs LSTM inference with 192 hidden states in 330 μs with a total system power of 9.0 mW at 10 MHz, consuming 2.95 μJ. For the 8/16-bit quantization implemented in Muntaniala, we show a phoneme error rate (PER) degradation of approximately 3% with respect to floating point (FP) on a 3L-384NH-123NI LSTM network on the TIMIT dataset.
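To make the parameter-stationary, multi-die dataflow concrete, here is a minimal NumPy sketch. It assumes a symmetric fixed-point quantizer (8-bit weights, 16-bit activations) and a split of the 192 hidden states across 4 tiles; all names (Die, quantize, N_DIES) and the random weights are illustrative and not from the paper, which describes the silicon implementation rather than this software model.

```python
import numpy as np

NI, NH = 16, 192        # input features / hidden states (192 as in the prototype)
N_DIES = 4              # hypothetical split: 4 dies, 48 hidden units each

def quantize(x, bits):
    """Symmetric fixed-point quantization to `bits` bits (illustrative)."""
    scale = 2 ** (bits - 1) - 1
    return np.clip(np.round(x * scale), -scale, scale) / scale

class Die:
    """One tile: keeps its slice of the LSTM weights stationary on-die."""
    def __init__(self, rows, seed=0):
        rng = np.random.default_rng(seed)
        self.rows = np.asarray(rows)
        # 8-bit stationary parameters for the four LSTM gates (i, f, g, o)
        self.W = quantize(rng.standard_normal((4 * len(self.rows), NI + NH)) * 0.1, 8)
        self.b = quantize(rng.standard_normal(4 * len(self.rows)) * 0.1, 8)

    def step(self, x, h, c):
        # Only features x and recurrent state h cross the die boundary;
        # the weights never move off-die.
        z = self.W @ np.concatenate([x, h]) + self.b
        i, f, g, o = np.split(z, 4)
        sig = lambda v: 1.0 / (1.0 + np.exp(-v))
        c_new = sig(f) * c[self.rows] + sig(i) * np.tanh(g)
        h_new = sig(o) * np.tanh(c_new)
        # 16-bit activations are what gets shared with the other dies
        return quantize(h_new, 16), c_new

dies = [Die(range(k * NH // N_DIES, (k + 1) * NH // N_DIES), seed=k)
        for k in range(N_DIES)]
x = np.random.default_rng(99).standard_normal(NI)
h, c = np.zeros(NH), np.zeros(NH)
for _ in range(3):                              # a few LSTM time steps
    outs = [d.step(x, h, c) for d in dies]      # dies compute disjoint slices
    for d, (hs, cs) in zip(dies, outs):         # then exchange h-slices
        h[d.rows], c[d.rows] = hs, cs
```

The design point the sketch illustrates is that per-step I/O scales with the activation vectors, not with the weight matrices, which is why keeping parameters stationary cuts communication so sharply. As a sanity check on the reported figures, 9.0 mW sustained for 330 μs corresponds to roughly 2.97 μJ, consistent with the 2.95 μJ per inference the abstract reports.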


Related research

11/15/2017
Chipmunk: A Systolically Scalable 0.9 mm^2, 3.08 Gop/s/mW @ 1.2 mW Accelerator for Near-Sensor Recurrent Neural Network Inference
Recurrent neural networks (RNNs) are state-of-the-art in voice awareness...

02/25/2020
Non-Volatile Memory Array Based Quantization- and Noise-Resilient LSTM Neural Networks
In cloud and edge computing models, it is important that compute devices...

08/31/2022
RecLight: A Recurrent Neural Network Accelerator with Integrated Silicon Photonics
Recurrent Neural Networks (RNNs) are used in applications that learn dep...

10/26/2020
RNNAccel: A Fusion Recurrent Neural Network Accelerator for Edge Intelligence
Many edge devices employ Recurrent Neural Networks (RNN) to enhance thei...

11/04/2019
LSTM-Sharp: An Adaptable, Energy-Efficient Hardware Accelerator for Long Short-Term Memory
The effectiveness of LSTM neural networks for popular tasks such as Auto...

08/28/2022
Bayesian Neural Network Language Modeling for Speech Recognition
State-of-the-art neural network language models (NNLMs) represented by l...

09/22/2020
E-BATCH: Energy-Efficient and High-Throughput RNN Batching
Recurrent Neural Network (RNN) inference exhibits low hardware utilizati...
