RAPIDNN: In-Memory Deep Neural Network Acceleration Framework

06/15/2018
by Mohsen Imani, et al.

Deep neural networks (DNNs) have demonstrated effectiveness for various applications such as image processing, video segmentation, and speech recognition. Running state-of-the-art DNNs on current systems mostly relies on general-purpose processors, ASIC designs, or FPGA accelerators, all of which suffer from costly data movement due to limited on-chip memory and data transfer bandwidth. In this work, we propose a novel framework, called RAPIDNN, which processes all DNN operations within the memory to minimize the cost of data movement. To enable in-memory processing, RAPIDNN reinterprets a DNN model and maps it into a specialized accelerator, which is designed using non-volatile memory blocks that model four fundamental DNN operations: multiplication, addition, activation functions, and pooling. The framework extracts representative operands of a DNN model, e.g., weights and input values, using clustering methods to optimize the model for in-memory processing. It then maps the extracted operands and their precomputed results into the accelerator's memory blocks. At runtime, the accelerator identifies computation results through an efficient in-memory search capability, which also provides tunable approximation to further improve computation efficiency. Our evaluation shows that RAPIDNN achieves 382.6x and 13.4x energy improvement and 211.5x and 5.6x speedup compared to a GPU-based DNN implementation and a state-of-the-art DNN accelerator, respectively, while ensuring less than 0.3% quality loss.
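The lookup-based computation model the abstract describes can be illustrated with a short software analogue. The sketch below is a hypothetical NumPy rendering, not the paper's hardware design: it uses k-means as one plausible instance of the "clustering methods" mentioned above, and the helper names (`cluster_values`, `encode`), the cluster count of 16, and the random test data are all illustrative assumptions. Operands are clustered into a small set of representatives, all pairwise products are precomputed into a table, and a runtime "multiplication" becomes an index-pair lookup, mirroring how the accelerator retrieves precomputed results via in-memory search.

```python
import numpy as np

def cluster_values(values, k):
    """Pick k representative centroids with a simple k-means
    (an assumed stand-in for the paper's operand clustering step)."""
    centroids = np.random.default_rng(0).choice(values, k, replace=False)
    for _ in range(20):
        # Assign each value to its nearest centroid.
        idx = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            if np.any(idx == j):
                centroids[j] = values[idx == j].mean()
    return np.sort(centroids)

def encode(values, centroids):
    """Replace each value with the index of its nearest centroid."""
    return np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)

# Offline: extract representative weights/inputs and precompute products.
weights = np.random.randn(1000).astype(np.float32)
inputs  = np.random.randn(1000).astype(np.float32)
w_cent = cluster_values(weights, 16)   # 16 representative weight values
x_cent = cluster_values(inputs, 16)    # 16 representative input values
mul_table = np.outer(w_cent, x_cent)   # all 16x16 products, stored as a table

# Runtime: a "multiplication" is an index-pair lookup into the table,
# modeling the accelerator's in-memory search for precomputed results.
w_idx = encode(weights, w_cent)
x_idx = encode(inputs, x_cent)
approx_products = mul_table[w_idx, x_idx]

exact = weights * inputs
print("mean abs error:", np.abs(approx_products - exact).mean())
```

Shrinking the table (fewer clusters) trades accuracy for smaller memory blocks and cheaper lookups, which corresponds to the tunable approximation the abstract mentions.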

