Algorithm and Hardware Co-Design of Energy-Efficient LSTM Networks for Video Recognition with Hierarchical Tucker Tensor Decomposition

12/05/2022
by   Yu Gong, et al.

Long short-term memory (LSTM) is a powerful type of deep neural network widely used in sequence analysis and modeling applications. However, the large model size of LSTM networks makes their practical deployment challenging, especially for video recognition tasks that require high-dimensional input data. To overcome this limitation and fully unlock the potential of LSTM models, in this paper we propose an algorithm and hardware co-design for high-performance, energy-efficient LSTM networks. At the algorithm level, we develop a fully decomposed hierarchical Tucker (FDHT) structure-based LSTM, namely FDHT-LSTM, which enjoys ultra-low model complexity while still achieving high accuracy. To fully reap this algorithmic benefit, we further develop a customized hardware architecture that supports efficient execution of the proposed FDHT-LSTM model. With a carefully designed memory access scheme, the underlying hardware performs the complicated matrix transformations on the fly without any access conflicts. Our evaluation shows that both the ultra-compact FDHT-LSTM models and the corresponding hardware accelerator achieve very high performance. Compared with state-of-the-art compressed LSTM models, FDHT-LSTM delivers an order-of-magnitude reduction in model size together with significant accuracy improvements across different video recognition datasets. Meanwhile, compared with TIE, the state-of-the-art hardware accelerator for tensor-decomposed models, our proposed FDHT-LSTM architecture achieves higher throughput, area efficiency, and energy efficiency on the LSTM-Youtube workload. On the LSTM-UCF workload, our design also outperforms TIE, with higher throughput, higher energy efficiency, and comparable area efficiency.
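To give a rough sense of why a hierarchical Tucker (HT) factorization can shrink an LSTM weight matrix by orders of magnitude, the sketch below reconstructs a 4-way weight tensor from HT factors (a balanced dimension tree with leaves {1}, {2}, {3}, {4} and internal nodes {1,2}, {3,4}) and compares parameter counts against the dense matrix. All shapes and ranks here are hypothetical, chosen only for illustration; this is a minimal numpy sketch of the general HT format, not the authors' FDHT-LSTM implementation.

import numpy as np

# Hypothetical factorization of an LSTM input-to-hidden weight matrix
# W in R^{M x N}: split M = m1*m2*m3*m4 and N = n1*n2*n3*n4, then pair
# the factors so mode i of the 4-way tensor has size d_i = m_i * n_i
# (the usual "tensorizing" trick).
m = (4, 4, 6, 6)        # 4*4*6*6 = 576 input features (hypothetical)
n = (4, 4, 4, 4)        # 4*4*4*4 = 256 hidden units   (hypothetical)
d = tuple(mi * ni for mi, ni in zip(m, n))  # mode sizes of the tensor

r_leaf = 4              # hypothetical HT rank at the four leaves
r_int = 4               # hypothetical HT rank at the two internal nodes

rng = np.random.default_rng(0)

# Leaf frame matrices U_i (d_i x r_leaf) and transfer tensors for the
# internal nodes {1,2}, {3,4} and the root {1,2,3,4}.
U = [rng.standard_normal((di, r_leaf)) for di in d]
B12 = rng.standard_normal((r_leaf, r_leaf, r_int))
B34 = rng.standard_normal((r_leaf, r_leaf, r_int))
Broot = rng.standard_normal((r_int, r_int))  # root transfer matrix

def ht_reconstruct(U, B12, B34, Broot):
    """Rebuild the full 4-way tensor from its HT factors."""
    U12 = np.einsum('ai,bj,ijk->abk', U[0], U[1], B12)  # node {1,2}
    U34 = np.einsum('ci,dj,ijk->cdk', U[2], U[3], B34)  # node {3,4}
    return np.einsum('abk,cdl,kl->abcd', U12, U34, Broot)

T = ht_reconstruct(U, B12, B34, Broot)
assert T.shape == d

dense_params = np.prod(m) * np.prod(n)  # 576 * 256 = 147,456
ht_params = (sum(di * r_leaf for di in d)
             + 2 * r_leaf * r_leaf * r_int + r_int * r_int)
print(f'dense: {dense_params:,} parameters')
print(f'HT:    {ht_params:,} parameters '
      f'(~{dense_params / ht_params:,.0f}x smaller)')

With these toy shapes the HT format stores a few hundred parameters in place of roughly 150,000, and the gap widens rapidly as the dense dimensions grow. The paper's "fully decomposed" variant factorizes the layer more aggressively than this per-matrix sketch; see the paper for the exact structure and the reported compression ratios.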


Related research

12/01/2016
ESE: Efficient Speech Recognition Engine with Sparse LSTM on FPGA
Long Short-Term Memory (LSTM) is widely used in speech recognition. In o...

10/16/2022
Data-Model-Circuit Tri-Design for Ultra-Light Video Intelligence on Edge Devices
In this paper, we propose a data-model-hardware tri-design framework for...

10/19/2019
ELSA: A Throughput-Optimized Design of an LSTM Accelerator for Energy-Constrained Devices
The next significant step in the evolution and proliferation of artifici...

09/02/2023
Accelerating LSTM-based High-Rate Dynamic System Models
In this paper, we evaluate the use of a trained Long Short-Term Memory (...

08/04/2021
Spartus: A 9.4 TOp/s FPGA-based LSTM Accelerator Exploiting Spatio-temporal Sparsity
Long Short-Term Memory (LSTM) recurrent networks are frequently used for...

01/30/2019
Hardware-Guided Symbiotic Training for Compact, Accurate, yet Execution-Efficient LSTM
Many long short-term memory (LSTM) applications need fast yet compact mo...

05/31/2019
Design Light-weight 3D Convolutional Networks for Video Recognition: Temporal Residual, Fully Separable Block, and Fast Algorithm
Deep 3-dimensional (3D) Convolutional Network (ConvNet) has shown promis...
